Rafael Moron and Lexi Modrono are used to professors at Florida International University downplaying policies regarding emerging uses of generative artificial intelligence, or avoiding discussions of AI altogether.
“It was only discussed in a small percentage of classes,” said Moron, who graduated from FIU in May. “In most cases, the policy was no AI, and if AI was used, it would simply be classified as plagiarism.”
FIU has a general AI policy that closely mirrors its plagiarism policy. That puts it ahead of many institutions: according to a survey of college presidents conducted by Inside Higher Ed earlier this year, the majority of universities don’t have AI policies in place at all.
So Moron and Modrono were surprised when they and 12 other FIU students were asked to create their own AI guidelines for their Rhetorical Theory and Practice class earlier this year.
“I was definitely a little surprised because I felt like professors have been very tough on AI since it became more accessible,” Modrono said, “so it was a surprise to find out that we had a say in what the policy was going to be.”
For two semesters, FIU associate professor Christine Martorana has been allowing students in multiple courses to create their own policies governing the use of AI.
“Trying to regulate the use of AI is counterproductive,” she said. “As a professor, that’s not a position I want to be in, and it’s not the relationship I want to have with my students. I was trying to create a policy, but there were a lot of different ways to do it. It ended up being, ‘Let’s share this with students and see what they come up with.'”
In the spring semester, students worked in small groups to come up with what they considered best practices, then presented them to the whole class and fine-tuned their ideas. In the shorter summer course, Martorana had students look at the spring policies, tweak them, and create their own.
“For me personally, I feel more valued as a student,” Modrono said. “I felt like our teachers recognized that we were responsible students and that we knew what we were doing.”
One point of consensus emerged: AI cannot be used to plagiarize. But opinions also diverged. Students in the spring course decided it was OK to use AI for brainstorming, while students in the summer section decided AI-assisted brainstorming should be allowed only when students were working alone, not alongside peers in the classroom. Spring students said generative AI could be used for structuring papers; summer students said the technology should not be used for outlining.
In both semesters, the policies addressed how students could use AI in the course and how they should cite that use in papers and other coursework.
Martorana recognized that AI will be an “integral part” of writing and communication in the future, and said policymaking is a useful way to prepare students for that future.
“I wanted them to buy in,” she said, “and I wanted them to understand (AI) first and then follow it, because it’s something they helped create.”
Brianna Dussault, provost and managing director of the Center for Reforming Public Education, said she hasn’t heard of other professors soliciting student input on AI policy specifically, but asking classes to help set general classroom policies is a tactic professors, and even elementary school teachers, already use.
“We’ll spend a year setting norms and making assumptions together,” she said. “This is new territory for AI, but I think an exercise like this makes sense to bring students on board and co-create the learning environment.”
Dussault’s center is currently studying how faculty are using AI, and she pointed to its research (and other studies) showing that professors’ adoption of AI generally lags behind students’.
“This is an example of professors taking on a role that the university as a whole is not yet ready for,” she said. “We’re still trying to get adults, let alone students, to understand.”
Both Dussault and Martorana said involving students in creating AI policy can increase AI literacy, noting the amount of research students have had to do about ethical (and unethical) uses of the technology. Martorana added that conversations around AI ethics extended into discussions throughout the semester, with students asking whether their use of AI fit into the policies they created.
“I’ve been teaching since 2008, and I’d never had a student ask me about ethics or academic integrity before,” she said. “To me, that seemed to suggest that students had been thinking about it throughout the semester and the conversation had become more open.”
Martorana plans to continue the approach this fall, expanding it from upper-level courses to first-year courses so those students can also write their own AI policies.
“Trying to regulate the use of AI is ultimately a losing battle,” she said. “As AI technology continues to advance, our policies need to take a more productive, forward-thinking approach, saying, ‘Here’s how you can use AI in the classroom,’ rather than, ‘Here’s what you shouldn’t do.'”