This fall was the first time in almost 20 years that I did not return to the classroom. I have spent most of my career teaching writing, literature, and language, primarily to college students. I left largely because of large language models (LLMs) like ChatGPT.
Virtually all experienced scholars agree that, as historian Lynn Hunt has argued, writing “does not transcribe thoughts already consciously present in [the writer’s] mind.” Rather, writing is a process closely tied to thinking. I know this from experience: in graduate school, I spent months assembling the pieces of my dissertation in my head, but I ultimately realized that I could only solve the puzzle by writing. Writing is hard work, and it is sometimes scary. Many, perhaps most, of my students succumbed to the temptation of AI and were no longer willing to push through that discomfort.
In my most recent job, I taught academic writing to doctoral students at a technical college. Many of my graduate students were computer scientists who understood the mechanics of generative AI better than I did. They recognized the LLM as an unreliable research tool that hallucinates and fabricates quotes. They acknowledged the environmental and ethical costs of the technology. They knew the model cannot generate new research, because it is trained only on existing data. But that knowledge didn’t stop them from relying heavily on generative AI. Several students admitted that they drafted their research in note form and asked ChatGPT to write the article.
As an experienced teacher familiar with pedagogical best practices, I adapted. I researched ways to incorporate generative AI into my lesson plans and designed activities that drew attention to its limitations. I told my students that ChatGPT can change the meaning of a text when asked to make corrections, that it can produce biased and inaccurate information, and that it does not produce stylistically strong prose. I reminded them that, for grade-oriented students, ChatGPT does not produce A-level work. It didn’t matter. Students used it anyway.
In one activity, students drafted a paragraph in class, fed it to ChatGPT with a revision prompt, and compared the output to their original text. This kind of comparative analysis failed, however, because most of my students had not developed enough as writers to analyze subtleties of meaning or evaluate style. When I pointed out weaknesses in the AI-revised text, one PhD student protested, “It makes my writing look fancy.”
My students also relied heavily on AI-powered paraphrasing tools such as Quillbot. Paraphrasing properly, like drafting one’s own research, is a process of developing understanding. Recent high-profile cases of “duplicative language” remind us that paraphrasing is difficult work. It’s no wonder, then, that many students are tempted by AI-powered paraphrasing tools. But these tools often produce inconsistent writing, do not necessarily help students avoid plagiarism, and can let writers disguise gaps in their understanding. Paraphrasing tools are useful only to students who already have a strong command of the craft of writing.
Students who delegate their writing to AI lose the opportunity to think more deeply about their research. In a recent article on art and generative AI, the author Ted Chiang observes that relying on text generators means “you can never improve your cognitive fitness that way.” Chiang also notes that the hundreds of small choices we make as writers are as important as the initial idea. Although Chiang is a fiction writer, his logic applies equally to academic writing. Decisions about syntax, vocabulary, and other elements of style give a text almost as much meaning as the underlying research.
Generative AI is, in one sense, a democratizing tool. Many of my students were non-native speakers of English, and their writing frequently contained grammatical errors. Generative AI is effective at correcting grammar. But the technology often alters vocabulary and shifts meaning even when the prompt is simply to fix the grammar. My students lacked the skills to identify and correct those subtle shifts. Nor could I convince them of the need for stylistic consistency, or of the value of developing their own voices as research writers.
The problem was not that I couldn’t recognize AI-generated or AI-modified text. At the beginning of each semester, I had my students write an essay in class. Using that baseline sample as a point of comparison, I could easily distinguish student writing from text generated by ChatGPT. I was also familiar with AI detectors that flag machine-generated text, but those detectors are flawed. AI-assisted writing is easy to identify yet difficult to prove.
As a result, I found myself spending hours grading writing that I knew was generated by AI. I noted where arguments were unsound. I pointed out weaknesses such as stylistic quirks I knew to be common in ChatGPT’s output (I noticed a sudden spike in words like “delve”). In other words, I was spending more time giving feedback to AI than to my students.
That’s why I quit.
Great educators will adapt to AI, and in some ways the changes will be positive. Teachers will need to move away from mechanical activities and simple summary assignments. They will find ways to encourage students to think critically and to learn that writing is a way to generate ideas, expose contradictions, and clarify methodologies.
These lessons, however, require students to sit with the temporary discomfort of not knowing. Students must learn to trust their own cognitive abilities as they chart, and then revise, a path through their ideas. With a few exceptions, my students were unwilling to enter that uncomfortable space, or to stay there long enough to discover the revelatory power of writing.