Developments in artificial intelligence (AI) continue to dominate technology conversations.
While business use cases for generative AI are emerging, the technology is also a double-edged sword that raises serious cybersecurity concerns.
InfoSec covers the impact of GenAI on cybersecurity, from how models are compromised to what attackers use AI for and how enterprises can safely and reliably introduce AI into business workflows.
Here are the top 10 AI cybersecurity stories of 2024.
NSA releases guidance for safe AI deployment
The National Security Agency, in collaboration with six government agencies from the United States and other Five Eyes countries, has released new guidance on how to safely deploy AI systems. This guidance provides a list of best practices for three key steps in AI adoption.
White House issues AI national security memo
In October, the White House issued a National Security Memorandum (NSM) on AI, setting out key federal actions to advance the development of safe, secure, and trustworthy AI as it relates to U.S. national security. The NSM included steps to track and counter adversaries' development of AI.
Understanding NullBulge, the new “hacktivist” group fighting AI
A new threat actor called NullBulge emerged in spring 2024, claiming to target AI-centric games and applications. In July, the little-known group claimed to have stolen and leaked more than a terabyte of data from Disney’s internal Slack channels, saying its motive was to protect artists around the world from AI. Although the group insisted it was not financially motivated, some threat analysts said they had observed malicious activity suggesting otherwise.
UK signs Council of Europe AI Treaty
The Council of Europe AI Treaty, the first legally binding international agreement on AI, was formally adopted by the Council of Europe's 46 member states in May 2024 and signed by the UK on 5 September 2024. The document outlines a joint effort to oversee AI development and protect the public from potential harm caused by the use or misuse of AI models and AI-powered tools.
Microsoft and OpenAI confirm nation-states are weaponizing generative AI in cyberattacks
Research by Microsoft and OpenAI confirms that nation-state threat actors are using large language models (LLMs) like ChatGPT. The study notes that threat groups from Russia, China, North Korea, and Iran are leveraging generative AI to support social engineering campaigns and to research unsecured devices and accounts. However, these tools have not yet been used to develop novel attack and exploitation techniques.
Man charged with AI-based music fraud on Spotify and Apple Music
In September, a North Carolina man was charged with using AI to generate fake songs and fake listeners on streaming platforms in order to steal royalties, in what is believed to be the first criminal case involving AI-generated music. He is accused of creating hundreds of thousands of songs with AI, publishing them on multiple streaming platforms, and streaming them illegally using automated accounts, commonly known as bots.
Google Cloud warns that AI threats will intensify in 2025
While AI threats did not prove especially devastating in 2024, researchers at Google Cloud believe they will worsen in 2025. Cybercriminals will continue to use AI and LLMs to develop and scale sophisticated social engineering schemes, including phishing campaigns. Google Cloud researchers also expect cyber espionage actors and cybercriminals to keep using deepfakes for identity theft and fraud.
Cybersecurity teams are largely ignored in AI policy development
Of the 1,800 cybersecurity professionals surveyed by ISACA, only 35% said they were involved in developing policies governing the use of AI within their companies. In 2024, governments around the world grappled with the governance and regulation of AI, but cybersecurity was not necessarily baked into those governance frameworks.
AI chatbots are highly vulnerable to jailbreaks, UK researchers discover
Researchers at the UK AI Safety Institute (AISI) found that four of the most widely used generative AI chatbots are highly vulnerable to basic jailbreak attempts. In this context, jailbreaking means bypassing a model's built-in safeguards to make it produce responses it is designed to refuse. When the researchers performed the same attack pattern five times in a row, the models tested gave harmful responses in 90% to 100% of cases.
AI Seoul Summit: 16 AI companies sign Frontier AI Safety Commitments
At the virtual AI Seoul Summit, the second global event on AI safety, co-hosted by the UK and South Korea on May 21 and 22, 16 global AI companies signed a pledge to develop AI models safely. Signatories of the Frontier AI Safety Commitments include Amazon, Anthropic, Google, IBM, Microsoft, and OpenAI.