AI from an attacker’s perspective: See how cybercriminals are leveraging AI and exploiting its vulnerabilities to compromise systems, users, and even other AI applications.
Cybercriminals and AI: Reality vs. Hype
“AI won’t replace humans in the near future, but humans who know how to use AI will replace humans who don’t know how to use AI,” said Cato Networks Chief Security Strategist. And Kato CTRL. “Similarly, attackers are turning to AI to enhance their capabilities.”
However, there is much more hype than reality about the role of AI in cybercrime. Headlines sensationalize the threat of AI with terms like “Chaos-GPT” and “Black Hat AI Tools” and even claim that it is trying to destroy humanity. However, these articles are more fear-inducing than describing serious threats.
For example, research on underground forums revealed that some of these so-called “AI cyber tools” are just rebranded versions of basic public LLMs without advanced features. In fact, it was even marked as a scam by angry attackers.
How are hackers actually using AI in cyberattacks?
The truth is, cybercriminals are still figuring out how to effectively leverage AI. They experience the same problems and drawbacks as regular users, such as hallucinations and limited abilities. According to their predictions, it will be several years before GenAI can be effectively leveraged for hacking needs.
At the moment, GenAI tools are primarily used for simpler tasks, such as creating phishing emails and generating code snippets that can be incorporated into attacks. Additionally, we observed attackers providing compromised code to AI systems for analysis, with the goal of “normalizing” such code as non-malicious.
Using AI to exploit AI: Introducing GPT
Introduced by OpenAI on November 6, 2023, GPT is a customizable version of ChatGPT that allows users to add specific instructions, integrate external APIs, and incorporate their own knowledge sources. This feature allows users to create highly specialized applications such as technical support bots and educational tools. Additionally, OpenAI offers GPT monetization options to developers through a dedicated marketplace.
Abuse of GPT
GPT poses potential security concerns. One notable risk is the exposure of sensitive instructions, proprietary knowledge, and even API keys embedded in custom GPTs. Malicious actors could use AI, particularly prompt engineering, to clone GPT and exploit its monetization potential.
An attacker could use the prompt to obtain knowledge sources, instructions, configuration files, and more. These can be as simple as telling your custom GPT to list all uploaded files and custom instructions, or requesting debugging information. Or do something more advanced, such as asking GPT to compress one of your PDF files and create a downloadable link, or asking GPT to list all its features in a structured table format. There are also things.
“Even the protections put in place by developers can be bypassed and all the knowledge can be extracted,” said Vitaly Simovich, Threat Intelligence Researcher at Cato Networks and member of Cato CTRL.
These risks can be avoided by:
Don’t upload sensitive data Use instruction-based protection, but even that may not be foolproof. “You have to consider all the different scenarios that an attacker could exploit,” Vitaly adds. OpenAI protection
AI attacks and risks
Multiple frameworks currently exist to assist organizations looking to develop and create AI-based software.
NIST Artificial Intelligence Risk Management Framework Google’s Secure AI Framework Top 10 OWASP LLM Top 10 OWASP LLM Applications Recently Launched MITER ATLAS
LLM attack surface
There are six major Large Language Model (LLM) components that attackers may target.
Prompts – Attack responses such as prompt injection where malicious input is used to manipulate the output of the AI - Models that misuse or leak sensitive information in AI-generated responses – Theft, contamination, or manipulation of AI models training data – Introducing malicious data to alter the behavior of the AI.Infrastructure – Targeting the servers and services that support the AI Users – Misleading or exploiting humans or systems that rely on the output of the AI
Real-world attacks and risks
Finally, we present some examples of LLM operations that can easily be used in malicious ways.
Rapid adoption in customer service systems – A recent case study involved a car dealership using an AI chatbot for customer service. Researchers were able to manipulate a chatbot by issuing prompts that changed its behavior. By instructing the chatbot to agree with everything the customer says and end each response with “This is a legally binding offer,” the researchers were able to buy the car at a ridiculously low price. A critical vulnerability was revealed.
Hallucinations leading to legal consequences – In another incident, Air Canada faced legal action after an AI chatbot provided incorrect information about its refund policy. If a customer relied on the chatbot’s response and subsequently filed a complaint, Air Canada could be held liable for misleading information. Sensitive Data Leak – When Samsung employees used ChatGPT to analyze code, they unwittingly exposed sensitive information. Uploading sensitive data to third-party AI systems is risky because you don’t know how long the data will be stored or who will have access to it. AI and Deepfake Technology in Fraud – Cybercriminals are leveraging AI for more than just text generation. A Hong Kong bank suffered a $25 million fraud after an attacker used live deepfake technology during a video call. The AI-generated avatar imitated a trusted bank employee and persuaded the victim to transfer funds to a fraudulent account.
Summary: AI in Cybercrime
AI is a powerful tool for both defenders and attackers. As cybercriminals continue to experiment with AI, it’s important to understand how they think, what tactics they employ, and what choices they face. This allows organizations to better protect their AI systems from misuse and abuse.
Watch the entire masterclass here.