A popular saying goes, “To err is human, but to really foul things up you need a computer.”
The saying is older than you might think, but it doesn’t predate the concept of artificial intelligence (AI).
And now that we no longer have to wait for AI technology to become commonplace, this year has taught us that remarkable things can happen when humans and AI work together. But remarkable doesn’t necessarily mean positive.
Over the past year, there have been several incidents that have made many people more fearful of AI than ever before.
2024 began with a warning from the UK’s National Cyber Security Centre (NCSC) that AI was expected to increase the global ransomware threat.
Many of this year’s AI-related stories dealt with social media posts and other public sources being harvested to train AI models.
For example, X was accused of unlawfully using the personal data of more than 60 million users to train its AI called Grok. Feeding into that concern, a hoax spread on Instagram Stories claiming that copying and pasting a block of text could stop Meta from collecting your content.
Facebook was forced to admit that it scrapes public photos, posts, and other data from the accounts of Australian adult users to train its AI models. There is no doubt that this admission contributed to Australia’s decision to ban social media for children under 16.
As with many technological developments, the race to stay ahead sometimes matters more than security. This was best demonstrated when an AI companion site called Muah.ai was compromised and the details of all its users’ fantasies were stolen. The hacker described the platform as “a few open source projects duct-taped together.”
We also saw how the AI supply chain can be compromised, when a chatbot provider exposed 346,000 customer files, including ID documents, resumes, and medical records.
And where those incidents didn’t scare people away, outright scams targeted those eager to try popular AI applications. One supposedly free AI editor lured victims into installing an information stealer, available in both Windows and macOS versions.
We also saw further refinement of an ongoing form of AI-assisted fraud: deepfakes. Deepfakes are realistic, AI-generated media designed to convince people that the events shown in a video or image actually happened. They can be used both for fraud and for disinformation campaigns.
A deepfake of Elon Musk was named the internet’s biggest scammer after tricking an 82-year-old man into paying out $690,000 through a series of transactions. And AI-generated deepfake images of celebrities, including Taylor Swift, have led to calls for laws that would make it illegal to create such images.
Videos aside, we reported on scammers who use AI to fake the voice of a loved one, calling victims to claim they’ve been in an accident. Reportedly, thanks to advances in the technology, producing a convincing deepfake recording now requires only one to two minutes of audio, which can be obtained from social media or other online sources.
However, it doesn’t always work the other way around: some AI models have difficulty understanding spoken words. McDonald’s ended its AI drive-thru order-taking experiment with IBM after one mishap too many, including a customer whose order ballooned to 260 Chicken McNuggets and another who got bacon added to their ice cream.
For a more positive example, UK mobile network operator O2 is using AI in the fight against phone scammers. AI granny Daisy uses multiple AI models working together: they listen to what a scammer says and respond in a believable manner, giving the scammer the impression they’ve found an “easy” target. Exploiting scammers’ prejudice against the elderly, Daisy plays a talkative grandma, wasting time the scammer could otherwise have spent on real victims.
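The description of Daisy as multiple models working together suggests a familiar pipeline: speech-to-text to hear the scammer, a language model prompted to stay in the grandma persona, and text-to-speech to voice the replies. Purely as an illustration, here is a minimal sketch of such a loop; the three stub functions are hypothetical stand-ins, not Daisy’s actual components.

```python
# Hypothetical sketch of a scam-baiting voice pipeline (not Daisy's real code).
# A real system would chain three models: speech-to-text (STT), a persona-driven
# language model (LLM), and text-to-speech (TTS). Stubs stand in for each stage.

import random
import time

def transcribe(audio_chunk: bytes) -> str:
    """Stand-in for an STT model; pretend the audio was already transcribed."""
    return audio_chunk.decode("utf-8")

def grandma_reply(scammer_text: str) -> str:
    """Stand-in for an LLM prompted to play a talkative, easily distracted grandma."""
    ramblings = [
        "Ooh, hold on dear, the kettle's boiling. Now, what were you saying?",
        "My grandson set up this computer for me, he's very clever you know.",
        "Bank details? Let me find my glasses first, they're around here somewhere...",
    ]
    return random.choice(ramblings)

def speak(text: str) -> None:
    """Stand-in for a TTS model with an elderly-sounding voice."""
    print(f"[Daisy] {text}")

def bait_loop(scammer_audio_stream) -> None:
    """Listen, respond in character, and above all: waste the scammer's time."""
    start = time.time()
    for chunk in scammer_audio_stream:
        heard = transcribe(chunk)
        print(f"[Scammer] {heard}")
        speak(grandma_reply(heard))
    print(f"Time wasted: {time.time() - start:.1f} seconds")

# Demo with canned "audio" chunks standing in for a live call.
bait_loop([b"Your account has been compromised, madam.",
           b"I need you to read me the code we just sent you."])
```

The design goal is simply to maximize the scammer’s wasted time, so the persona’s replies are deliberately meandering and never actually answer the question.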
What do you think? Do the negatives outweigh the positives when it comes to AI, or vice versa? Let us know in the comments.