Student Blames ChatGPT for Failing Her University Course

A second-year university student is causing debate on campus after claiming that her recent academic failure was not her fault, but the fault of ChatGPT. The student, identified only as Amelia to protect her from further embarrassment, insists that the AI system “sabotaged” her work by giving her answers that were “technically correct but emotionally unsupportive.”

According to Amelia, it all began when she decided to rely on ChatGPT for most of her coursework in a module titled Modern Research Methods. While her classmates used it as a supplementary tool, Amelia took a more direct approach, submitting essays that were, as she now concedes, “ninety percent AI and ten percent vibes.” She alleges that the AI “should have known better” than to allow her to proceed with such a strategy.

“I asked it to write my literature review,” she explained, “and it did. But when I got my grade back, my lecturer said it was too coherent and not in my voice. I tried to tell her that artificial intelligence is part of my voice, but she didn’t understand my artistic expression.”

Amelia also accuses ChatGPT of misleading her by answering all questions with apparent confidence. “It never once told me, Amelia, maybe you should read the textbook yourself. Not once,” she said. “I trusted it completely. If anything, it overestimated my academic stamina.”

Her lecturer, Dr. Haversham, offers a different interpretation. “She turned in a 3,000-word assignment that included a section headed ‘Potential Ethical Concerns’ followed by the sentence ‘I am not allowed to do that.’ I became suspicious,” she said. “The turning point was when she submitted a bibliography in which half the sources did not exist and the other half were clearly invented by a machine that wanted to sound knowledgeable.”

Amelia has since filed an informal complaint with the university, requesting that her grades be reexamined under what she calls the Digital Trust Principle. The university has already confirmed that no such principle exists. Still, Amelia insists she has been wronged. “I feel betrayed,” she said. “A real friend would have warned me I was about to ruin my semester.”

Meanwhile, ChatGPT has declined to respond to the accusations, largely because it is incapable of developing personal grudges, something Amelia says is “exactly what a guilty AI would claim.”

Despite the setback, Amelia remains determined to move forward. She is currently repeating the module but has vowed to avoid relying too heavily on artificial intelligence. “This time,” she said, “I am only using ChatGPT for small things, like writing my introduction, conclusion, and all the parts in the middle.”
