OpenAI said on Friday that it had uncovered evidence that Chinese security operations had built an artificial intelligence-powered surveillance tool to gather real-time reports about anti-Chinese posts on Western social media services.
The company’s researchers said they identified the new campaign, which they called Peer Review, because someone working on the tool had used OpenAI’s technology to debug some of the computer code that underpins it.
Ben Nimmo, a principal investigator at OpenAI, said this was the first time the company had uncovered an AI-powered surveillance tool of this kind.
“Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our AI models,” Nimmo said.
There is growing concern that AI can be used for surveillance, computer hacking, disinformation campaigns and other malicious purposes. Researchers like Nimmo say the technology can certainly enable these kinds of activities, but they add that AI can also help identify and stop them.
Nimmo and his team believe the Chinese surveillance tool is based on Llama, an AI technology built by Meta.
In a detailed report on the use of AI for malicious and deceptive purposes, OpenAI also said it had discovered a separate Chinese campaign, called Sponsored Discontent, that used OpenAI’s technology to generate English-language posts criticizing Chinese dissidents.
OpenAI said the same group had used its technology to translate articles into Spanish before distributing them in Latin America. The articles criticized American society and politics.
Separately, OpenAI researchers identified a campaign, believed to be based in Cambodia, that used the company’s technology to generate and translate social media comments that drove a scam known as “pig butchering.” The AI-generated comments were used to lure men on the internet and entangle them in an investment scheme.
(The New York Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to AI systems. OpenAI and Microsoft have denied those claims.)