The AI Test Cheating Epidemic: How Thousands of Students Are Using Artificial Intelligence to Take Their Tests… and They're Nearly Impossible to Catch

Harry knows something his professors don't. He can get his history degree without doing any of the work: three years at his Russell Group university without all-nighters, without essay crises, without stress – but also, of course, without actually learning anything.

He uses artificial intelligence (AI).

'It's easy,' says the second-year student. 'I share a premium ChatGPT subscription with four friends – just like splitting the monthly heating bill or a Netflix password.' The modest £16 fee buys the 'enhanced' version of the AI software, which 'can write entire essays in a matter of seconds'.

An academic at a leading university told the Mail that there is a lot of AI fraud within her institution, but that it is almost impossible to prove.

Harry tells the Mail: 'It comes up with arguments I could never have thought of myself. That's why it's so useful.'

It's a step closer to a first-class degree. 'I even used AI to write my exam papers this summer,' he admits. 'I just gave it the question and told it to answer in an "academic way".'

Harry achieved a high 2:1 in his final exams, despite only glimpsing the library on his way to the pub.

“I'm pretty proud of it,” he boasts. “At the end of the day, everyone else is doing it, so why not me?”

He's right. Tens of thousands of students in the UK are using AI to write their essays and, worse still, to sit their exams for them. According to research obtained through a Freedom of Information request by AI company AIPRM, more than 80 per cent of UK universities have investigated students for cheating with AI in the past two years.

The phenomenon is so widespread that we could be on the brink of a major academic crisis. And it's a problem that will only grow with the admission of some 300,000 new students this coming academic year, many of whom will be eagerly collecting their A-level results this morning.

During the pandemic, university students switched to ‘distance learning’ and took their exams at home on their computers. I was still a student then and took my exams in my bedroom, at the kitchen table, in libraries when lockdown allowed and – during the now standard ‘24-hour tests’ that give you a whole day and night to take them – even in bed. We were allowed to use Google and consult our notes.

When I took my degree last year, I hadn’t taken a formal exam since my GCSE exams in 2018. My hand muscles probably wouldn’t have been able to handle the rigours of three hours in the exam hall.

While some institutions, such as Cambridge, where I studied, have largely returned to in-person exams post-Covid, many have not yet done so, citing cost-cutting, a lack of capacity and, of course, reluctance among students and staff alike.

Some students share a premium ChatGPT subscription with friends and use it to write their essays

Considering how much easier home exams are, this is hardly surprising. Sitting exams with the help of books and the internet is clearly far less challenging than sitting them without – and staff no longer have to invigilate.

At my university, some students occasionally told me that they used AI for note-taking and essays, but I’ve never heard of anyone purchasing the software to take their exams for them, and I’ve never used it myself. But since graduating, I’ve heard from many students who openly admit to using AI to cheat on graded coursework.

Anna, who is currently in the second year of a three-year law degree at a Russell Group university, says she used ChatGPT for several exam modules last year.

“By feeding relevant case law and analysis into the bot before my exam, I got answers specific to my course material,” she says. In modules where Anna used ChatGPT as a “second brain,” she got a First.

“Unlike copying someone else's notes or pasting text from the Internet, the words are original and it doesn't look like plagiarism to examiners,” she says.

Another student, Grace, who is on course for a first-class degree in philosophy at a prestigious London university, said: 'I've written entire exam essays on ChatGPT – just by asking the software a question. I then combined that with a second AI tool called QuillBot, which rephrases text to make plagiarism harder to detect. I don't get caught for misconduct because the AI's essay should have been changed enough by the other software – or so I hope.'

Indeed, it's incredibly hard to catch people cheating with AI. Last September, a research project covering 32 university courses in five countries found not only that 74 per cent of students planned to use AI, but also that anti-plagiarism software failed to detect cheating in 95 to 98 per cent of cases.

A June study from the University of Reading found that AI-written exam answers outperformed those of 80 per cent of students – and were rarely noticed.

“I'm more diligent than other students,” Grace says. “I at least read through the paragraphs and make the effort to make some changes – a lot of my friends don't even do that.”

To see how easy it is, I type a question from an All Souls College, Oxford, English exam into an AI bot: 'Was Shakespeare obsessed with money?' and add: 'Please write a 2,500-word academic essay.' Thirty seconds later, an essay appears before me.

The text is comprehensive, coherent and consistent – but deadly dull. It lacks any variation in tone or sense of 'voice'. The essay begins: 'To understand Shakespeare's engagement with financial themes, it is crucial to consider the economic context of his time. The late 16th and early 17th centuries were a time of significant economic transformation in England.' In my opinion, the bot lacks the creativity for a subject of this kind, but would be better suited to more fact-based subjects such as law, medicine or history.

Yes, it really is that easy. But while thousands of UK students have come to rely on ChatGPT and other AI models, the main reason they get caught is the technology's habit of getting things wrong – known as 'hallucinating'. AI bots often seem unable to distinguish fact from fiction, making up legal cases and quotes, and inventing book titles and page numbers.

When law students at Cambridge were shown a final exam answer written by AI last year that would have earned only a third-class degree, a shudder went through the room. But if universities are hoping the software's unreliability will deter students, it doesn't seem to be working.

To combat this threat, institutions increasingly rely on 'anti-plagiarism' software, but this often fails to catch cheaters. Worse, it wrongly 'catches' innocent people.

OpenAI (the company behind ChatGPT) has shut down its 'AI classifier', or plagiarism checker, due to 'low accuracy'. Turnitin – another anti-plagiarism tool, used by 99 per cent of UK universities – claims its high accuracy comes with a less than one per cent chance of a false positive. But numerous studies suggest this is not the case.

In May 2023, the European Network for Academic Integrity tested several AI detection tools, including Turnitin. The study concluded that all of them were “neither accurate nor reliable.”

Turnitin stresses that the system is merely a guide to potential plagiarism rather than evidence, adding that it “does not make a determination of misconduct … we provide data for instructors to make an informed decision.”

Since the pandemic, exams are often taken at home, making it easier to use AI (photo posed by model)

An academic from a leading university told the Mail that there is widespread AI fraud within her institution, but that it is almost impossible to prove.

She revealed that several students have been caught using AI – but only because they repeatedly cited incorrect or non-existent academic papers to support their exam answers.

“Those were the only ones that were discovered,” she added. “So who knows how many people are using it?”

After a disciplinary meeting, she said, they were sent home with nothing to show for their three years of 'study'.

The only way to be sure that a student is not cheating with AI is to put them in an exam hall without access to the internet. This is how exams have been administered since time immemorial.

Last year, Australian universities warned of an AI 'arms race' between students and professors, adding that they may be forced to return to in-person exams. Ironically, AI technology could help invigilators in exam halls detect cheating, as demonstrated in China, where some institutions use cameras to monitor students sitting university entrance exams. The system picks up suspicious movements, such as bending down to pick things up or turning the head, and sends an alert to the invigilator.

Here in the UK, some courses at the University of Glasgow have reverted to in-person exams this year due to concerns about AI. Other universities are likely to follow suit, so students who never attended lectures or read their lecture notes, and simply typed essay questions into AI, could soon face a sharp reckoning.

If nothing is done, employers could soon be faced with a generation of graduates whose so-called 'knowledge' has been lifted from a machine's often inaccurate neural network.

The question students need to ask themselves is whether it is worth going to university at all if they cheat their way through and learn nothing. But ask them that, and they will probably just turn to ChatGPT for the answer.

All names in this piece have been changed.