Are today’s college students turning in essays generated by ChatGPT? Are professors using AI to grade their students? Has artificial intelligence revealed a better way to learn? And is a university education at risk of becoming irrelevant?
Julie Schell, assistant vice president for Academic Technology and director of the Office of Academic Technology at the University of Texas, has been an expert on the use of technology in education since the 1990s, when PowerPoint was state of the art. Since then, she has held positions at Yale, Stanford, Columbia, and Harvard. From Harvard, she was recruited by then-Vice President Harrison Keller to serve as founding director of OnRamps, a University of Texas program that uses technology-enabled instruction to improve college readiness for Texas high school students. During the COVID-19 pandemic, she led the School of Arts’ online learning efforts as associate dean for continuing education and innovation and assistant professor of design and higher education leadership.
The current situation
The easiest question to answer is the second one, about grading students. “Faculty members should never be uploading student writing, getting feedback from an AI, and then providing that AI feedback to students. We take a very strong stance against that,” says Schell.
“AI is very attractive because it’s a great technology. It saves a lot of time, and the quality of its responses is amazing. But AI is also full of contradictions. It can teach you a lot, but it can also teach you wrong information, which leads to negative learning. It can help you become more creative, but by making your work more like everyone else’s, it can also weaken your creative voice. Teachers should not use AI as a substitute for their own feedback.”
“This whole world is hard to understand,” she admits.
Schell sees two types of AI use: transactional and transformative. In both cases, whether a given use is good or bad depends on the situation. “There are times when it’s OK to use AI as a transactional tool: ‘I need a list of ideas for planning dinner this week,’ ‘I need a list of ideas for my next meeting,’ ‘Help me brainstorm research ideas,’ and so on. These are low-risk transactions, and we need to help students understand when it’s OK to use it transactionally.”
Along those lines, she once experimented with using AI to write recommendation letters, a time-consuming task for anyone in academia. “When I read what was written, I felt it was unfair. It wasn’t me. It didn’t reflect my true thoughts about the students, and I felt it was unfair to the students to use that output,” she says. “That’s a moral bridge I can’t cross.” She adds, “It takes about 15 hours of use to realize that the AI is not as good as you. It’s good, it’s faster than you, it has a vast knowledge base, but it’s not as good as you because it doesn’t have your voice. It’s not you.” This applies to faculty and students alike. “I want to see the voice and identity of the students represented in their work,” she says.
And then there’s transformative use of AI. “Let’s say I type a prompt for a journalism class I’m teaching: ‘Help me write three learning outcomes for my lesson on lead sentences.’ And it spits out three learning outcomes. If I just copy those and teach to them, that’s transactional use. Transformative use is taking that output, looking at it, evaluating it, critiquing it, finding what doesn’t fit, editing it, and using it as scaffolding, transforming it until it integrates your own perspective.” In this example, the transactional use is bad, but the transformative use is good.
To ban or to teach?
When it comes to student use of AI, the issue is more nuanced: “Some educators strongly oppose [student] use of AI, and some institutions have banned its use.” The view of Schell and her colleagues in the provost’s office is that “the cost of policing students to never use this highly relevant, timely, and transformative tool is greater than the cost of creating a culture of academic integrity and honesty.”
When it comes to AI, the horse has already left the barn. Ignoring AI or banning its use won’t prepare students for today’s world, much less the world of the future. Students need and expect higher education institutions to help them engage ethically with AI, Schell says. “Simply telling them to never use AI doesn’t serve them well.”
Instead, she believes the effort should go into helping students become “architects of their own ethical frameworks.” “When they leave here, there will be no restrictions on these things. Critical thinking, decision-making, misinformation, bias: all of that comes bundled with AI tools. Our students will grow up in an environment where they’re prepared to deal with that ambiguity.”
However, using AI tools such as ChatGPT to generate essays and submit them as your own work is prohibited because it violates academic integrity: “Such activities are clearly prohibited, and that applies not only to writing but also to code generation and presentation preparation. But I don’t think academic integrity is a 1/0 decision, cheating or not cheating.”
Schell knows firsthand how challenging AI can be, because she brings it into her design pedagogy classes. She introduces it in stages: “First, I talk to my students about AI, make it clear that if they use AI they need to cite it, and show them how to do that. They need to document how they use AI.”
But she recalls one learning experience: “When we were creating user personas, a student submitted one with a really great graphic. I said, ‘Well done! The image really captures the user’s feelings. I feel a real connection.’ And the student said, ‘Thanks! We used AI!’ I was so surprised, because I’d made it very clear that that’s not how we use AI in class. But in that moment, I realized the student needed more help. It called for a conversation. It wasn’t a 1/0. It wasn’t ‘follow my rules.’ A UT-quality learning experience is one that empowers students to become the architects of their own frameworks and engage effectively with AI.”
For the second project, she actively encourages her students to use AI, but first introduces them to UT’s AI-centric framework. The framework names six concerns students should always weigh when using AI: 1. privacy and security; 2. hallucinations (when an AI states as fact things that are not); 3. inconsistencies (when a user instructs an AI application to produce a certain output, but the application produces unexpected or unwanted results); 4. bias; 5. ethical dilemmas; and 6. cognitive offloading. On the last item, she explains: “If we’re not careful, if we hand too much over to the AI, we can lose cognitive capabilities. So we have to be very careful and judicious about what we offload to the AI.”
The final project requires students to use AI. With this gradual approach, she hopes to equip them with both the skills to use AI and an understanding of its limitations.
Benefits: Introducing Sage
When asked about the benefits of AI in education, Schell says, “I get goosebumps talking about this. One of the things I’m most excited about is an AI tutor that we’re working on called Sage. It’s a custom GPT (generative pre-trained transformer), also known as a custom bot.”
Text-based AI tools like ChatGPT and Microsoft Copilot, which is in use on campus, are large language models: you ask them a question, and they find where that information exists and give you an answer. A custom GPT lets you layer your own constraints on top of that base model. “You can train it to ask the kinds of questions you want it to ask,” Schell says. “You can train it to respond from the resources you want it to use.”
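To make the pattern concrete, here is a minimal sketch of a custom-GPT-style tutor: a base model wrapped in instructions that constrain what it asks and which resources it answers from. It assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable; the model name, tutor instructions, and course notes are illustrative stand-ins, not Sage’s actual configuration.

```python
# Minimal sketch of a "custom GPT" style tutor: a base model plus
# instructions that constrain what it asks and which resources it uses.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable. The model name, prompt, and
# course notes below are illustrative, not Sage's actual configuration.
from openai import OpenAI

client = OpenAI()

COURSE_NOTES = """
Lead sentences (journalism): a strong lead answers who, what, when,
where, and why, ideally in roughly 25-35 words.
"""

TUTOR_INSTRUCTIONS = (
    "You are a course tutor. Answer only from the course notes provided. "
    "Rather than giving answers outright, ask one guiding question at a "
    "time, Socratic-style. If the notes don't cover a topic, say so and "
    "refer the student to the instructor.\n\nCourse notes:\n" + COURSE_NOTES
)

def ask_tutor(student_message: str) -> str:
    # The "custom" behavior lives entirely in the system message; the
    # underlying model is unchanged.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": TUTOR_INSTRUCTIONS},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_tutor("Can you just write my lead sentence for me?"))
```

The design point worth noticing is that the “training” Schell describes happens in the instructions and the attached resources, not in the model weights: the base model stays the same, and the custom layer decides what it asks and what it draws on.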