I’m old enough to remember when AI was just a sci-fi trope of the ’80s, and not to brag, but those memories have become little mental hideaways where I can retreat to escape the overwhelming onslaught of real-life AI we face every day.
From word processors to web browsers to the digital assistants on your favorite devices, AI is going mainstream. No product is being left untouched, as almost every company races to incorporate some kind of generative tool or chatbot into its products and services at an astonishing rate.
I know this because I’ve been immersed in most of it for the past year and a half. I’ve encountered everything from image generators and artificial girlfriends to voice clones and large language models that mimic dead loved ones. Needless to say, I’m no stranger to this super software, and I’m not some tinfoil-hatted technophobe who claims it’s all going to end badly. It’s just that most of it is… well, not what it’s cracked up to be. Most of it, anyway.
Once the new-car smell wears off some of these services, it’s hard not to feel like a technology full of potential is being aimed at the wrong problems; in fact, most of what’s offered to us as consumers is counterintuitive at best and pointless at worst.
It’s not all bad
We live in the age of AI, we’re told, and we need to take advantage of this new technology at every opportunity. AI is here to simplify our lives, make our jobs easier, and revolutionize the human-computer interface forever.
To be fair, none of this is impossible. I use generative AI every day, whether it’s talking to Meta AI through my Ray-Ban Meta Smart Glasses, turning to ChatGPT for recipe ideas and watchlist suggestions, or getting answers to the random “I should Google that” questions that pop up during the day.
Generative AI has given virtual assistants their most valuable upgrade yet, closing much of the gap to their Hollywood depictions in just a few years, and while it’s still a long way from being fully realized, it’s poised to change the way we all interact with our devices.
We tend to think of computers as boxes with screens. Some are small enough to fit in your pocket or on your wrist, but they all share roughly the same visual format. But the rise of AI-powered virtual assistants could usher in a sea change in the way we see and use our devices on a scale not seen since the invention of the mouse.
That’s very exciting to me, and I hope it is for others too, but after spending a fair bit of time with different AI models, tools, and services, I’m starting to get sick of AI, because so much of the other stuff that comes with it is just complete nonsense.
But a lot of it is
Upgrades to virtual assistants are just one small part of the broader generative AI pool. As the toolset expands to cover all sorts of creative tasks, the results become far more divisive.
Thanks to generative AI, we can make almost anything, and that’s kind of a problem: making things is a very human trait, and not one we should give up so easily, especially as concerns grow over disinformation, defamation, and the use of deepfakes to embarrass and humiliate people.
Can you be sure that article wasn’t written by ChatGPT? That the unflattering images you saw online are real? Or that a recording of a famous voice wasn’t generated by AI? As models become more and more sophisticated, it’s hard to know for certain, and even experts have a hard time spotting a fake. Just ask the judges at the Sony World Photography Awards, who were unaware that their winning photo was generated by AI.
When AI isn’t busy undermining our confidence in reality, it’s alienating us from human interaction altogether. I’m not sure who comes up with the ideas behind these corporate AI offerings, but who thought “let an AI have our conversations with other humans for us” was a good one?
Nothing is bleaker or more depressing than the realization that so much of generative AI exists as a buffer between us and the world around us, whether that’s summarizing emails or articles so we don’t have to engage directly with another person’s thoughts, or replying to texts from loved ones on our behalf.
For a peek into the generative AI dystopia, check out Google’s recent “Dear Sydney” ad, in which a father speaks passionately about his daughter’s love of running and her idol, American track and field star Sydney McLaughlin-Levrone.
The daughter wants to write Sydney a letter telling her how much of an inspiration she is, but, perhaps because she isn’t actually all that inspired, she hands the task to her father. He can’t spare even ten minutes to sit down with his daughter and think it through, so he outsources it to Google Gemini, which knows inspiration only as a dictionary definition.
It was meant to be an uplifting commercial, an aspirational one even, and if that doesn’t highlight the glaring disconnect between the people making these things and the audience they’re trying to market to, I don’t know what does.
Sydney never got to see the AI-generated letter, but in an ideal world it would have landed in a spam folder along with the dozens of other AI-generated emails that hit inboxes like mine every day.
I recently saw a post on X from March 29, 2024, by Joanna Maciejewska that hits the nail on the head: “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.”
What makes things even more confusing is that the people developing these tools don’t seem to be serious about how they’re used: they entice you to create text, images, videos, and music with them, while building other tools that detect when you’ve done so and scold you for taking them up on the offer.
When The Washington Post recently analyzed a dataset of 200,000 English-language conversations from two ChatGPT-like chatbots, homework help and creative writing topped the list of use cases. It’s no wonder OpenAI is hesitant to release its own AI-text detector, which could cause quite a bit of trouble for ChatGPT subscribers.
These companies are well aware that producing text and media that could be deemed fraudulent or plagiarized is one of the key selling points of their large language models: there’s a reason ChatGPT won’t write you sexy prose but has no problem churning out an entire paper on the mating habits of wood frogs.
Outlook
With a few exceptions, I don’t see the clear benefits of generative AI the way I once did (at least not in the way it’s currently being marketed to us as consumers). While generative AI can make virtual assistants more personal, it’s equally good at making the communications and contributions of real people more impersonal.
As things stand, we’d hardly be worse off if the amount of generative AI in our devices, systems, and platforms were scaled back considerably. And if I’m allowed a bit of hypocrisy: keep the digital assistants, but take back the rest.
On the one hand, we’re supposed to fully embrace everything that comes with this new wave of generative AI tools. On the other, we’re all but forbidden from using them. AI purveyors speak out of both sides of their mouths, touting the benefits of the technology while scolding us for its applications. Got ChatGPT to help with your homework? That’s plagiarism. Created AI-generated images? Welcome to the wonderful world of fabricating and spreading disinformation.
The only thing I know for sure at this point is that with each passing month, I look at my ChatGPT subscription and wonder whether this is truly the foundation of the next big thing in technology, or just a house of cards.