
Professor flunks entire class based on ChatGPT’s false claims

While the ethics of using artificial intelligence to commit academic dishonesty have been a hot topic for several months, one professor has found himself in the hot seat for carelessly using ChatGPT against his own students.

An entire class at Texas A&M University–Commerce was accused of plagiarism and had their diplomas temporarily denied after a professor incorrectly used ChatGPT to test whether the students used AI to generate their final assignments, according to Rolling Stone.


The professor, Dr. Jared Mumm, a campus rodeo instructor who also oversees agricultural classes at the university, ran all of his students’ final papers through ChatGPT to test them for AI plagiarism, unaware that the chatbot does not work that way. ChatGPT in turn told Mumm that all of the papers had been written by ChatGPT.

Given a prompt and a string of text, ChatGPT will confidently claim that most original texts are its own work, even excerpts from famous novels. While the chatbot can be used to generate text such as collegiate-level essays, separate AI programs are needed to detect AI plagiarism. Some of these include Winston AI, Content at Scale, Writer AI, GPTZero, and Giant Language Model Test Room (GLTR). OpenAI, the company behind ChatGPT, even has its own plagiarism detection tool; however, it is not considered very accurate.

Reddit user DearKick, who claims to be the fiancé of one of the students in the class, told Rolling Stone that Mumm sent an email to the class explaining he had used “Chat GTP” (misspelling the chatbot’s name) to check the class’s last three essays for plagiarism, and that it determined all of them were AI-generated.

Failing every essay with an “X” grade, he offered the students the opportunity to submit a makeup assignment. The students’ alternative would be to potentially fail the class and not graduate.

Several students attempted to prove that their assignments were their own legitimate work by providing the timestamps on their Google Docs, to which Mumm responded within the school’s grading software system, “I don’t grade AI bullshit.”

However, at least one student has had their name cleared by providing Google Docs timestamps and received an apology from Mumm, according to Rolling Stone.

DearKick’s partner has taken their complaint to the school’s administration, first emailing the dean and CC’ing the president of the university with no immediate response. She had plans to meet with the administrators in person on Tuesday to discuss the matter.

While no one in the class has admitted to using ChatGPT on their final papers, at least two students did reveal their use of the chatbot earlier in the semester.

Texas A&M University confirmed to PC Magazine in a statement that it is aware of the situation, detailing that “no students failed the class or were barred from graduating as a result of this issue.”

Currently, the students’ diplomas are being held while individual investigations are underway.

Some schools in the U.S. quickly pushed to have ChatGPT blocked on campuses after the chatbot became an internet phenomenon upon its launch in November 2022. However, many schools internationally have not yet decided what to do about the AI chatbot. A student at Cardiff University in Wales confessed to BBC News that he compared ChatGPT-assisted essays against his own unaided work, and that he got the highest scores of his college career using the chatbot.

Another Cardiff University student told the publication he was glad that he was able to take advantage of ChatGPT in his final year of school, before the use of AI for plagiarism purposes could potentially affect the legitimacy of his degree in the future.

Fionna Agomuoh
Fionna Agomuoh is a technology journalist with over a decade of experience writing about various consumer electronics topics…