Microsoft has announced that it is taking legal action against a “foreign-based threat actor group” that operated hacking-as-a-service infrastructure to intentionally circumvent the safety controls of its generative artificial intelligence (AI) services and produce harmful content.
The tech giant’s Digital Crimes Unit (DCU) said the threat actors “developed sophisticated software that exploits exposed customer credentials harvested from public websites” in an attempt to “identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services.”
The attackers then used these services, such as Azure OpenAI Service, and monetized that access by selling other malicious actors detailed instructions on how to use the custom tools to generate harmful content. Microsoft said it discovered the activity in July 2024.
The Windows maker said it has since revoked the threat actor group’s access, put new countermeasures in place, and strengthened its safeguards to prevent such activity from recurring. It also said it obtained a court order to seize a website (aitism(.)net) that was central to the group’s criminal operations.
The popularity of AI tools like OpenAI’s ChatGPT has also led to their exploitation by threat actors for malicious purposes, ranging from producing prohibited content to malware development. Microsoft and OpenAI have repeatedly revealed that nation-state groups from China, Iran, North Korea, and Russia use their services for reconnaissance, translation, and disinformation campaigns.
According to court documents, at least three unidentified individuals were behind the operation, using stolen Azure API keys and customer Entra ID credentials to break into Microsoft systems and create harmful images with DALL-E in violation of its acceptable use policy. Seven other parties are believed to have used the services and tools they provided for similar purposes.
It is currently unclear how the API keys were harvested, but Microsoft said the defendants engaged in “systematic API key theft” from multiple customers, including several U.S. companies, some of which are based in Pennsylvania and New Jersey.
“Using stolen Microsoft API keys that belonged to U.S.-based Microsoft customers, defendants created a hacking-as-a-service scheme, accessible via infrastructure like the ‘aitism.net’ domain, that was specifically designed to abuse Microsoft’s Azure infrastructure and software,” the company said in its filing.
A now-deleted GitHub repository described de3u as a “DALL-E 3 frontend with reverse proxy support.” The GitHub account in question was created on November 8, 2023.
Following the seizure of aitism(.)net, the threat actors are said to have taken steps to “cover their tracks, including attempting to delete certain Rentry.org pages, the GitHub repository for the de3u tool, and portions of the reverse proxy infrastructure.”
Microsoft noted that the attackers used de3u and a custom-built reverse proxy service known as the oai reverse proxy to make Azure OpenAI Service API calls with the stolen API keys and illegally generate thousands of harmful images using text prompts. It is unclear what kind of offensive images were created.
The oai reverse proxy service, running on a server, is designed to funnel communications from de3u user computers through a Cloudflare tunnel into the Azure OpenAI Service and relay the responses back to the user’s device.
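The relay design described here, accepting a client’s request, injecting a credential, forwarding it upstream, and passing the response back, is an ordinary reverse-proxy pattern. A minimal Python sketch of that pattern follows; this is an illustration only, not the actual de3u/oai code, and the upstream endpoint and key are placeholders (though `api-key` is the header Azure OpenAI genuinely uses for authentication):

```python
# Minimal sketch of a credential-injecting reverse proxy (illustrative only;
# endpoint and key below are hypothetical placeholders).
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

UPSTREAM = "https://example-resource.openai.azure.com"  # hypothetical upstream
API_KEY = "REDACTED-PLACEHOLDER-KEY"                    # placeholder credential

# Headers that should not be forwarded verbatim to the upstream service.
HOP_BY_HOP = {"connection", "keep-alive", "transfer-encoding", "host"}

def build_forward_headers(client_headers: dict, api_key: str) -> dict:
    """Drop hop-by-hop headers and inject the upstream API key."""
    headers = {k: v for k, v in client_headers.items()
               if k.lower() not in HOP_BY_HOP}
    headers["api-key"] = api_key  # Azure OpenAI authenticates via this header
    return headers

class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the client's request body and replay it upstream.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        req = urllib.request.Request(
            UPSTREAM + self.path,
            data=body,
            headers=build_forward_headers(dict(self.headers), API_KEY),
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:  # forward to upstream
            self.send_response(resp.status)
            self.end_headers()
            self.wfile.write(resp.read())          # relay response to client
```

The same structure underlies legitimate API gateways; what made this operation unlawful was the use of stolen keys and the deliberate evasion of the service's safety controls.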
“The de3u software allows users to issue Microsoft API calls to generate images using the DALL-E model through a simple user interface that leverages the Azure APIs to access the Azure OpenAI Service,” Redmond explained.
“Defendants’ de3u application communicates with Azure computers using undocumented Microsoft network APIs, sending requests designed to mimic legitimate Azure OpenAI Service API requests. These requests are authenticated using stolen API keys and other credentials.”
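For context on what a “legitimate Azure OpenAI Service API request” looks like, the image-generation endpoint has roughly the following shape. The resource name, deployment name, and API version below are assumptions for illustration; the path structure and `api-key` header reflect Azure OpenAI’s documented REST interface:

```python
# Shape of an Azure OpenAI DALL-E image-generation request (illustrative;
# resource/deployment names and api-version are placeholder assumptions).
import json

RESOURCE = "example-resource"  # hypothetical Azure OpenAI resource name
DEPLOYMENT = "dall-e-3"        # hypothetical deployment name

def build_image_request(prompt: str, api_key: str):
    """Assemble the URL, headers, and JSON body for an image-generation call."""
    url = (f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
           f"{DEPLOYMENT}/images/generations?api-version=2024-02-01")
    headers = {"Content-Type": "application/json", "api-key": api_key}
    body = json.dumps({"prompt": prompt, "n": 1, "size": "1024x1024"})
    return url, headers, body
```

Requests that “mimic” this traffic would carry the same path and headers, which is why, from the service’s perspective, abuse with a valid stolen key can be hard to distinguish from the key owner’s own usage.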
It’s worth pointing out that the use of proxy services to illegally access LLM services was highlighted by Sysdig in May 2024 in connection with an LLMjacking attack campaign targeting AI offerings from Anthropic, AWS Bedrock, Google Cloud Vertex AI, Microsoft Azure, Mistral, and OpenAI, in which stolen cloud credentials were used to sell access to other attackers.
“Defendants conducted their Azure Abuse Enterprise operations through a systematic and sustained pattern of illegal conduct to accomplish a common illegal purpose,” Microsoft said.
“Defendants’ pattern of illegal activity is not limited to attacks on Microsoft. Evidence Microsoft has uncovered to date indicates that the Azure Abuse Enterprise has targeted and harmed other AI service providers.”