The University of Minnesota expelled a third-year health economics Ph.D. student in November after faculty accused him of using artificial intelligence on an exam. He denies their claims and, this month, filed a lawsuit accusing the U of M of violating his due process rights. He has also filed a defamation suit against one of his professors.
In a federal lawsuit, Haishan Yang, 33, alleges a student conduct review panel unjustly found him guilty of academic dishonesty through a process riddled with “procedural flaws, reliance on altered evidence, and denial of adequate notice and opportunity to respond.”
The review was prompted by accusations that Yang used a large language model like ChatGPT on a written preliminary exam, which doctoral students must pass to start their dissertation.
Large language models are a type of artificial intelligence that use machine learning to generate human-like text. Products like ChatGPT, Claude, Gemini and others are marketed as helping with brainstorming, writing and more.
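As a rough illustration of what “generating human-like text” looks like in practice, the sketch below asks a small open-source model to continue a prompt one predicted token at a time. The GPT-2 model and the Hugging Face transformers library here are stand-ins chosen for the example; they are not tools involved in Yang’s case.

```python
# Minimal text-generation sketch using the open-source GPT-2 model.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model repeatedly predicts a likely next token, which is how
# chatbots such as ChatGPT produce fluent, human-sounding prose.
result = generator("Large language models are", max_new_tokens=30)
print(result[0]["generated_text"])
```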
According to university documents shared by Yang, all four faculty graders of his exam expressed “significant concerns” that it was not written in his voice. They noted answers that seemed irrelevant or involved subjects not covered in coursework. Two instructors then generated their own responses in ChatGPT to compare against his and submitted those as evidence against Yang. At the resulting disciplinary hearing, Yang says those professors also shared results from AI detection software.
But Yang denies using AI for this exam and says the professors have a flawed approach to determining whether AI was used. He said methods used to detect AI are known to be unreliable and biased, particularly against people whose first language isn’t English. Yang grew up speaking Southern Min, a Chinese dialect.
Yang appears to be the first Minnesota student to go public about being expelled over AI, a source of stress for students and instructors alike since ChatGPT became widely available in late 2022. Students are concerned about how a false accusation could upend their lives. Educators are seeking to curb cheating as the use of AI proliferates in academia.
In the 2023-24 school year, the University of Minnesota found 188 students responsible for scholastic dishonesty because of AI use, accounting for about half of all confirmed cases of dishonesty on the Twin Cities campus.
Yang said his student advocate at the U of M told him this is the first time an AI case has gone to a student conduct review committee at the Twin Cities campus.
The University of Minnesota would not comment on Yang’s expulsion because of federal and state data privacy laws. A U of M School of Public Health website listed Yang as a current Ph.D. student as of Thursday.
In court filings, Yang writes that the experience has caused emotional distress and professional setbacks, among other harms. An international student, he lost his visa status with the expulsion.
“In my case, it’s a death penalty,” Yang told MPR News.
Longtime professor vouches for Yang
Since subletting his off-campus apartment in July, Yang said he has been traveling in Africa as a tourist. Over several Zoom calls since November and through email, he shared hundreds of pages of documents related to his case.
Yang, who is from rural Fujian in southeastern China, says he was the first person from his village to get a scholarship to study in Europe and the United States. He said he got his bachelor’s degree in English Language and Literature from Nanjing Normal University before going abroad for a master’s in economics at Central European University.
In 2023, he earned a Ph.D. in economics from Utah State University. He decided to pursue another doctorate at the University of Minnesota so he could stay in academia and pursue research as a professor.
While Yang says he uses ChatGPT daily to find travel ideas, fix grammatical mistakes and help write code for research, he insists he did not use AI on his preliminary exam, nor in his preparation for it.
His academic advisor Bryan Dowd spoke in Yang’s defense at the November hearing, telling panelists that expulsion, effectively a deportation, was “an odd punishment for something that is as difficult to establish as a correspondence between ChatGPT and a student’s answer.”
Dowd is a professor in health policy and management with over 40 years of teaching at the U of M. He told MPR News he lets students in his courses use generative AI because, in his opinion, it’s impossible to prevent or detect AI use. Dowd himself has never used ChatGPT, but he relies on Microsoft Word’s auto-correction and search engines like Google Scholar and finds those comparable.
“I would be surprised if Haishan or any of our faculty didn’t use those tools,” he said.
Dowd said he was surprised at the suggestion that Yang would need AI to pass an exam.
“I think he’s quite an excellent student. He’s certainly, I think, one of the best-read students I’ve ever encountered,” said Dowd.
He described Yang, who was in three of his classes and has done research for him, as trustworthy. He enjoyed his office-hour chats with Yang on a range of subjects and noted that Yang has a paper on track to be published in a top urban economics journal, in addition to other works in progress.
In December, an academic publishing spokesperson confirmed Yang is the sole author of a paper under editorial evaluation at the Journal of Urban Economics.
“I’d say the evidence of his ability is quite good,” said Dowd.
What we know about the exam and conflict in question
Citing student data privacy laws, the U of M declined to confirm Yang’s expulsion or the authenticity of the documents Yang shared, but a spokesperson shared a short response over email last week.
“As in all student discipline cases, the University carefully followed its policies and procedures, and actions taken in this matter were appropriate,” reads a statement from Jake Ricker, senior director of public relations at the U of M. “The best source for the University of Minnesota’s perspective on this matter will be in our court filings.”
Yang sent MPR News a range of documents, including the initial complaint against him and the letter in which a U of M official wrote that a student conduct review panel unanimously agreed on expulsion based on “a preponderance of the evidence.”
The case began on Aug. 5 with an eight-hour preliminary exam that Yang took online. Instructions he shared show the exam was open-book, meaning test takers could use notes, papers and textbooks, but AI was explicitly prohibited.
Exam graders argued the signs of AI use were obvious. Yang disagrees.
Weeks after the exam, associate professor Ezra Golberstein submitted a complaint to the U of M saying the four faculty reviewers agreed that Yang’s exam was not in his voice and recommending he be dismissed from the program. Yang had been in at least one class with each of them, so they compared his responses against two other writing samples.
Golberstein’s complaint includes side-by-side comparisons of Yang’s answers and output from ChatGPT that allegedly highlight close similarities in language and structure, most visibly in the use of bullet points and subheadings. In one example, the faculty listed a range of potential answers and said Yang chose the same three as ChatGPT. In another, they said Yang used an acronym that is not standard in their field but that was produced by ChatGPT: “PCO.”
On a Zoom call, Yang pulled up a Google Scholar search showing thousands of results for “PCO,” short for primary care organization.
He shared lecture slides in which a U of M public health professor used language and formatting comparable to the ChatGPT-generated answers. He added that the professors selected one writing sample from many, choosing his handwritten responses to a considerably shorter, 1.5-hour exam from two years earlier.
“I don’t believe that ‘voice’ can be an objective or consistent measure to determine AI usage,” Yang told MPR News in an email. “I have delivered many presentations in different classes and at academic conferences. Depending on the audience, I sometimes make my content highly technical, while other times I simplify it.”
He said there is no standard way of answering questions. In this case, he felt bullet points allowed him to offer clearer, more concise responses than paragraphs would.
Yang also takes issue with the documentation used against him. In his federal lawsuit, Yang alleges the U of M hearing unfairly relied on “altered ChatGPT evidence,” claiming there are at least 10 instances of changes to the AI-generated responses used to compare against his. He alleges changes like “omission of critical content, such as summary paragraphs and headers” and “reduction of bold formatting in key sections of the output.”
From his perspective, the output is similar because humans and AI draw from the same sources. After the accusations, he ran the questions through ChatGPT himself and says the content and formatting differed from what the professors presented.
“ChatGPT will generate every answer,” he said.
Yang also objects to professors using AI detection software to make their case at the November hearing.
He shared the U of M’s presentation showing findings from running his writing through GPTZero, software that purports to determine the percentage of writing done by AI. The software was highly confident that a human wrote Yang’s writing sample from two years ago. It was uncertain about his exam responses from August, assigning an 89 percent probability that AI generated his answer to one question and a 19 percent probability for another.
“Imagine the AI detector can claim that their accuracy rate is 99%. What does it mean?” asked Yang, who argued that the error rate could unfairly tarnish a student who didn’t use AI to do the work.
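Yang’s question points to a well-known statistical problem: even a highly accurate detector will flag some honest work, and when most students are innocent, those false flags can make up a sizable share of all accusations. A minimal sketch of the arithmetic in Python, where the 99 percent figure comes from Yang’s hypothetical and the other numbers are assumptions for illustration only:

```python
# Base-rate arithmetic for an AI detector, via Bayes' rule.
# All numbers are illustrative assumptions, not figures from the case.

sensitivity = 0.99     # P(flagged | student used AI): the hypothetical "99% accuracy"
false_positive = 0.01  # P(flagged | student did not use AI), assumed for this sketch
prevalence = 0.05      # assumed share of submissions actually written with AI

# Total probability that a randomly chosen submission gets flagged
p_flagged = sensitivity * prevalence + false_positive * (1 - prevalence)

# Bayes' rule: probability a flagged submission really used AI
p_ai_given_flag = sensitivity * prevalence / p_flagged

print(f"Flagged work that really used AI: {p_ai_given_flag:.1%}")             # 83.9%
print(f"Flagged work that is a false accusation: {1 - p_ai_given_flag:.1%}")  # 16.1%
```

Under these assumed numbers, roughly one in six flagged submissions would come from a student who never used AI, which is the scenario Yang warns about.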
But the range of evidence was sufficient for the U of M. In the final ruling, the panel, made up of several professors and graduate students from other departments, said its members trusted the professors’ ability to identify AI-generated papers. They also criticized Yang for including few citations and for “inconsistencies” in his testimony.
These proceedings were the basis for Yang’s appeal, which the U of M denied earlier this month, and Yang’s subsequent lawsuit against the university in federal court. They are also at the heart of a defamation lawsuit he filed in December against one of the faculty who generated multiple ChatGPT responses in the case against him, associate professor Hannah Neprash.
Neprash declined through a U spokesperson to comment for this story; the spokesperson reiterated there is little that staff can legally share about students. Golberstein, who reportedly also testified against Yang at the November hearing, has yet to respond to an MPR News request for comment.
Yang is seeking $575,000 in damages in the federal lawsuit and $760,000 in the defamation case, in addition to a reversal of his expulsion and a public apology. The federal lawsuit includes a request for $200,000 from the U of M “to deter future procedural violations and uphold fairness in disciplinary proceedings.”
Yang suggests the U of M may have had an unjust motive to kick him out. When prompted, he shared documentation of at least three other instances of accusations raised against him that did not result in disciplinary action but that he thinks may have factored into his expulsion.
He does not include this concern in his lawsuits. These allegations are also not explicitly listed as factors in the complaint against him, nor in the letters explaining the decision to expel him or rejecting his appeal. But one incident was mentioned at his hearing: in October 2023, Yang had been suspected of using AI on a homework assignment for a graduate-level course.
In a written statement shared with panelists, associate professor Susan Mason said Yang had turned in an assignment where he wrote “re write it, make it more casual, like a foreign student write but no ai.” She recorded the Zoom meeting where she said Yang denied using AI and told her he uses ChatGPT to check his English.
She asked if he had a problem with people believing his writing was too formal and said he responded that he meant his answer was too long and he wanted ChatGPT to shorten it. “I did not find this explanation convincing,” she wrote.
Mason submitted a report to the U of M but ultimately decided not to pursue charges because she felt there was not “enough evidence to meet the standard of preponderance of the evidence in this case and the infraction seemed relatively minor.” She also noted that her syllabus did not explicitly prohibit the use of AI.
Instead, the Office of Community Standards sent Yang a letter stating that the case was dropped but could be taken into consideration in any future violations.
When it came to Yang’s expulsion in November, panelists “resonated with faculty statements that trust would be essential to continuing in the program” and agreed “it would be extremely difficult for that trust to be reestablished,” according to the decision letter Yang received.
The appeal officer who upheld the panel’s decision echoed those sentiments.
“PhD research is, by definition, exploring new ideas and often involves development of new methods. There are many opportunities for an individual to falsify data and/or analysis of data. Consequently, the academy has no tolerance for academic dishonesty in PhD programs or among faculty. A finding of dishonesty not only casts doubt on the veracity of everything that the individual has done or will do in the future, it also causes the broader community to distrust the discipline as a whole.”
AI experts say detection is tricky
Computer science researchers say detection software can have significant margins of error in identifying AI-generated text. OpenAI, the company behind ChatGPT, shut down its own detection tool last year, citing a “low rate of accuracy.” Reports suggest AI detectors have misclassified work by non-native English writers, neurodivergent students and people who use tools like Grammarly or Microsoft Editor to improve their writing.
“As an educator, one has to also think about the anxiety that students might develop,” said Manjeet Rege, a University of St. Thomas professor who has studied machine learning for more than two decades.
Rege is director of the Center for Applied Artificial Intelligence at St. Thomas. He notes a shift across higher education institutions from complete bans against AI to more open, evolving policies.
He said it’s important to strike a balance between academic integrity and embracing AI innovation. But rather than relying on AI detection software, he advocates evaluating students with assignments that are hard for AI to complete, such as personal reflections, project-based learning and oral presentations, or with assignments that integrate AI into the instructions.
“I think as educators, we have a responsibility to prepare students for the future of work,” said Rege.
Rege would not comment on specifics of Yang’s case, but said it highlights the need for transparent policies around AI detection methods and clear procedures for accusations and appeals.
But that is hard to do as AI software becomes more prevalent.
Stephen Kelly, project manager for the Minnesota State Colleges and Universities system, served on a team charged with creating AI guidance for its 33 public colleges and universities.
“Best practices can be a little tough to pin down right now in higher education, just because the technology itself is moving so quickly,” said Kelly. “It seems like one day when we think we have a good approach to grappling with artificial intelligence, you know, just a week later, something changes, and we’re having to reassess the things that we’re doing.”
Kelly said the Minnesota State system has observed more students using AI in coursework, with more instructor curiosity about its potential. He considers AI just the latest technology institutions are responding to, following advancements in online learning and virtual reality.
“A few things that everyone currently agrees on is that artificial intelligence is unlikely to disappear. Technology companies are likely to continue its development. And for the foreseeable future, at least, educators will need to grapple with the pressure AI places on traditional models for teaching, learning and assessment,” said Kelly.
AI at the U of M
Like many institutions of higher education, the U of M does not have a systemwide policy against AI use.
Instead, the U of M offers resources for faculty to determine whether and how to allow AI use in their classrooms. A U spokesperson shared a guiding document that encourages instructors to be clear about their expectations, revisit their assessment goals, and consider incorporating AI as part of class design as ways to address cheating in their courses.
The guidance says to use AI detection software “as an imperfect last resort.”
The U of M does not recommend that instructors use AI detection software because of its known issues, according to Rachel Croson, the U of M’s executive vice president and provost. Croson shared existing policy with a Board of Regents subgroup at an October meeting that focused in part on the evolution of AI in higher education but was not related to Yang’s case. Another school official said the U of M is doubling down on helping instructors lessen the need for detection tools.
“We have focused a lot of our energy in workshops and one-on-one consultations with faculty who want to take that proactive step rather than relying on the reactive, using detection tools or cross-referencing examples that might exist or be generated by the tools,” said Caroline Hilk, director of the Center for Education Innovation at the University of Minnesota.
At the meeting, Joscelyn Sturm, a fourth-year English major and the student representative to the Board of Regents, told the panel that she runs her papers through an AI detector and finds the software flags her work as AI-generated when she writes “straightforward” sentences with adjectives of more than five letters.
Sturm said she and many other students live in fear of AI detection software.
“AI and its lack of dependability for detection of itself could be the difference between a degree and going home,” she said.
What’s next
Months before filing his lawsuit, Yang approached MPR News saying he wanted to go public with his story out of concern for other students.
“The next student could be prosecuted by the same reason. ‘Oh, your answer is so similar to ChatGPT.’ And I think it’s a — we have a deteriorating impact on the learning environment at UMN,” he said.
The University of Minnesota will soon share its side of events through court filings, according to a spokesperson. Neprash is required to respond to Yang’s defamation lawsuit in January.
Yang had planned to return to the U.S. in early December, but federal regulations prohibit re-entry on a student visa when a student is suspended.
So far, Yang is representing himself and trying to find affordable legal support.
Prior to his expulsion, Yang was optimistic the student conduct panel would rule in his favor. After the decision, he found his life in disarray. He said he would lose access to datasets essential for his dissertation and other projects he was working on through his U of M account, and was forced to leave research responsibilities to others on short notice. He fears how this will affect his academic career, especially amid widespread concern that President-elect Donald Trump will curtail visa access for foreigners after his inauguration.
“Probably I should think to do something, selling potatoes on the streets or something else,” he said.