
Wherever there is conflict, propaganda has never been far away. Go back to 515 BC and read the Behistun Inscription, an autobiography of King Darius of Persia that chronicles his rise to power. More recently, look at how different newspapers report on wars, where, as the saying goes, "the first casualty is the truth."
These forms of communication can shape people's beliefs, but they also face limits on their scalability: messages and propaganda tend to lose their power once they travel a certain distance. In today's world of social media and online content, almost nothing is out of reach, except where someone's internet access is restricted. Add the rise of AI, and scalability becomes virtually unlimited.
In this article, we explore what this means for societies and organizations facing AI-powered information manipulation and deception.
The rise of the echo chamber
About one in five Americans gets their news from social media, according to the Pew Research Center. In Europe, use of social media platforms to access news has risen by 11%. AI algorithms are at the heart of this behavioral shift. Unlike journalists, however, they are not trained, or required by media regulators, to present both sides of a story. With fewer restrictions, social media platforms can focus on serving up the content that users like, want and respond to.
This focus on keeping eyeballs on screens can lead to digital echo chambers and potentially biased perspectives. Users may block content they disagree with, while the algorithms automatically tune their feeds, even monitoring scrolling speed, to maximize consumption. When people see only content they agree with, they reach a consensus with what the AI is showing them, but not with the wider world. A simplified illustration of this feedback loop appears below.
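To make the dynamic concrete, here is a minimal sketch in Python of an engagement-only ranking loop. It is a toy model with invented topics and numbers, not any platform's actual recommendation system; the point is only that ranking by predicted engagement, with consumption feeding back into preferences, collapses a feed toward a single viewpoint.

```python
# Toy model of engagement-only feed ranking (illustrative; not any real platform).
import random
from collections import Counter

TOPICS = ["politics_left", "politics_right", "sports", "science", "celebrity"]

def predicted_engagement(prefs, topic):
    # Stand-in for a learned engagement model: affinity for the post's topic.
    return prefs[topic]

def build_feed(prefs, candidates, k=5):
    # Rank purely by predicted engagement; no diversity or balance constraint.
    return sorted(candidates, key=lambda t: predicted_engagement(prefs, t), reverse=True)[:k]

def session(prefs, n_candidates=50, learning_rate=0.1):
    candidates = [random.choice(TOPICS) for _ in range(n_candidates)]
    feed = build_feed(prefs, candidates)
    for topic in feed:
        prefs[topic] += learning_rate  # feedback loop: consumption boosts future ranking
    return feed

prefs = {topic: 1.0 for topic in TOPICS}
prefs["politics_left"] = 1.2  # a mild initial lean

feed = []
for _ in range(30):
    feed = session(prefs)

# After 30 sessions, the feed has typically collapsed onto the initial lean.
print(Counter(feed))
```

A real recommender is vastly more sophisticated, but the underlying incentive, optimizing for engagement rather than balance, is the same one described above.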
Moreover, much of that content is now generated synthetically using AI tools. This includes more than 1,150 unreliable AI-generated news websites recently identified by NewsGuard, a company specializing in information reliability. With few limits on what AI can produce, long-established political processes are already feeling the effects.
How AI is deployed for deception
It's safe to say that we humans are unpredictable. Our many biases and countless contradictions play out constantly in each of our brains, where billions of neurons form the new connections that shape our sense of reality. When malicious actors add AI to this potent mix, it leads to events like these:
Deepfake videos spread during US elections: AI tools allow cybercriminals to create fake footage with ease and speed; no technical expertise is required to produce realistic AI-powered video. This democratization of the technology threatens democratic processes, as shown in the run-up to the recent US election, where Microsoft highlighted activity from China and Russia that integrated generative AI into efforts to influence the vote.

Voice cloning puts words in politicians' mouths: Attackers can now use AI to clone someone's voice from just a few seconds of speech. That is what happened to a Slovak politician in 2023: a fake audio recording spread online in which Michal Šimečka appeared to discuss with a journalist how to rig the upcoming election. The recording was quickly exposed as fake, but it all happened just days before the vote, and some voters may have cast their ballots believing the AI audio was authentic.

LLMs faking public sentiment: Adversaries can now communicate in as many languages as their chosen LLM supports, and at any scale. In 2020, researchers had GPT-3, an early LLM, write thousands of emails to US state legislators, advocating on a mix of issues from the left and right of the political spectrum. Around 35,000 emails were sent, a mixture of human-written and AI-written. Legislators' response rates were "statistically indistinguishable" on three of the issues raised; a toy illustration of that comparison follows this list.
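The "statistically indistinguishable" finding is the kind of conclusion a simple contingency-table test supports. Below is a minimal sketch in Python; the reply counts are invented for illustration and are not the study's actual data.

```python
# Hypothetical reply counts for illustration only -- NOT the study's real data.
from scipy.stats import chi2_contingency

#                      replied  no reply
human_written_emails = [240,    1760]
ai_written_emails    = [230,    1770]

chi2, p_value, dof, expected = chi2_contingency([human_written_emails, ai_written_emails])
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}")

# A large p-value (conventionally > 0.05) means the difference in response
# rates could easily be chance: the two senders are indistinguishable.
```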
The impact of AI on democratic processes
It is still possible to spot many AI-driven deceptions, whether by a glitched frame in a video or an oddly pronounced word in a speech. But as the technology advances, separating fact from fiction will become increasingly difficult, and eventually impossible.
Fact-checkers may be able to attach follow-ups to fake social media posts, and websites such as Snopes can continue to debunk conspiracy theories. However, there is no way to guarantee that everyone who saw the original post will also see the correction. And given the number of distribution channels available, it is almost impossible to trace fake material back to its original source.
The pace of evolution
Seeing (or hearing) is believing. I'll believe it when I see it. Show me, don't tell me. All of these phrases rest on humanity's evolved understanding of the world: we have learned to trust our own eyes and ears.
Those senses evolved over hundreds of thousands, even millions, of years. ChatGPT, by contrast, was released in November 2022, and our brains cannot adapt at AI speed. So if people can no longer trust what is right in front of them, it is time to educate everyone's eyes, ears and minds.
Otherwise, organizations are left wide open to attack. After all, work is often where people spend most of their screen time. That means equipping the workforce with the awareness, knowledge and skepticism to face content designed to provoke an action, whether it carries a political message at election time or asks an employee to bypass a process and pay money into an unverified bank account.
It means making society aware of the many ways in which malicious actors play on our natural biases, emotions and instincts to make us believe what someone is telling us. These levers are at work in many social engineering attacks, including phishing, the "number one internet crime type" according to the FBI.
And it means helping individuals know when to pause, reflect on and challenge what they see online. One way is to simulate an AI-powered attack so that people experience firsthand how it feels and learn what to watch out for; a sketch of such a simulation follows below. People need help to shape society and to protect themselves, their organizations and their communities against AI-driven deception.
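As a sketch of what such a simulation might look like in practice, here is a minimal Python harness. Everything in it (the lure templates, the deliver_and_observe hook, the result categories) is hypothetical; a real program would plug into a mail gateway and a report-phishing button, with lures pre-approved by HR and legal.

```python
# Minimal sketch of an internal phishing-simulation harness.
# All names and behaviors here are hypothetical placeholders.
import random
from dataclasses import dataclass, field

# Pre-approved lure templates, each exploiting a familiar instinct.
LURES = [
    "Urgent: the CEO needs gift cards before 5pm",         # authority + urgency
    "Payroll update required - verify your bank details",  # process bypass
    "You appear in this video - click to view",            # curiosity
]

@dataclass
class SimulationResult:
    clicked: list = field(default_factory=list)
    reported: list = field(default_factory=list)
    ignored: list = field(default_factory=list)

def deliver_and_observe(employee: str, lure: str) -> str:
    # Stub standing in for real mail delivery and telemetry; replace with
    # your mail gateway and report-button integration.
    return random.choice(["clicked", "reported", "ignored"])

def run_simulation(employees: list[str]) -> SimulationResult:
    result = SimulationResult()
    for employee in employees:
        lure = random.choice(LURES)
        outcome = deliver_and_observe(employee, lure)
        getattr(result, outcome).append((employee, lure))
    return result

result = run_simulation(["alice", "bob", "carol", "dana"])
print(f"clicked: {len(result.clicked)}, reported: {len(result.reported)}, "
      f"ignored: {len(result.ignored)}")
```

The metric that matters for training is not just who clicked, but who paused and reported: exactly the reflex this section argues we need to build.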