We have always considered how evolving technology fits into our lives, ethics, and education systems.
When I was a kid, calculators were popular – they had a very sleek and modern design at the time. Many parents bought them for use at home, and we knew that kids used them for simple arithmetic tasks, but the general consensus was that, of course, calculators couldn’t be used in the classroom, and definitely not on tests.
The idea was that children needed to solve math problems on their own, come up with the right answer, and understand all the steps that led to that answer, so “showing your calculations” was very important.
The introduction of new technology signaled to educators that they were being challenged to come up with new ways to teach math. Perhaps this situation gave rise to the “new math” so derided by parents and other adults who still clung to multiplication tables and the “mice in the house might eat ice cream” mnemonic. “Arithmetic” was the most basic of math instruction.
Resistance to the use of calculators in the classroom continued for several years, but eventually calculators not only became accepted in the classroom, but some states even required their use when taking state-mandated tests.
By 1994, the SAT also allowed calculators, resolving the question of whether or not calculators were “OK.”
Now it’s AI and we are once again tasked with deciding what is and isn’t permissible, and teachers must once again rethink how they teach.
Some schools have begun eliminating, or at least drastically reducing, homework for elementary school students. This is partly because homework is said to be a source of stress for students and their families—not surprising given the demands of activities and transportation that keep kids and their parents running around—and partly because teachers can no longer be sure whether a math problem was solved, or a paper or essay written, by the student or by an AI.
Some believe using AI in these cases amounts to “cheating.”
Others in the education sector say, “That’s not cheating, it’s just taking advantage of available technology.” Furthermore, they argue, all employers care about is results and whether the employee knows how to get those results. This is a variation on the theme, “Do the ends justify the means?”
In general, we would like to think that the ends do not justify the means.
Conventional wisdom preaches that it’s not about winning or losing but how you play the game, yet extreme examples always cloud the picture: if you believe you can prevent 10 million people from being killed in war, is it acceptable to drop an atomic bomb on an island nation knowing that 237,000 civilians will die?
Despite being the all-time home run leader and winning the Most Valuable Player award seven times, Barry Bonds has not been inducted into the Baseball Hall of Fame because voters believe he “cheated” by taking steroids later in his career.
So while I may think it’s unfortunate that the ability to write clear and engaging prose is being overlooked or discounted, the real question isn’t whether using AI is cheating; it’s what counts as cheating in the first place, and whether cheating is ever acceptable.
It is an issue that challenges our morality and humanity.