Italy’s data protection watchdog has blocked the service of Chinese artificial intelligence (AI) firm DeepSeek within the country, citing a lack of information on its use of users’ personal data.
The development comes days after the authority, the Garante, sent a series of questions to DeepSeek asking about its data handling practices and where it obtained its training data.
In particular, it wanted to know what personal data is collected by its web platform and mobile app, for what purposes, on what legal basis, and whether that data is stored in China.
In a statement issued January 30, 2025, the Garante said it arrived at the decision after DeepSeek provided information that it said was “completely insufficient.”
Hangzhou DeepSeek and Beijing DeepSeek Artificial Intelligence, the entities behind the service, have declared that “they do not operate in Italy and that European legislation does not apply to them,” it added.
As a result, the watchdog said it was blocking access to DeepSeek with immediate effect, and that it was simultaneously opening an investigation.
In 2023, the data protection authority also issued a temporary ban on OpenAI’s ChatGPT, a restriction that was lifted in late April after the AI company stepped in to address the data privacy concerns raised. OpenAI was subsequently fined €15 million over how it handled personal data.
News of DeepSeek’s ban comes as the company has been riding a wave of popularity this week, with users flocking to the service and sending its mobile app to the top of the download charts.
Besides becoming the target of “large-scale malicious attacks,” DeepSeek has drawn the attention of lawmakers and regulators over its privacy policy, China-aligned censorship, propaganda, and the national security concerns it may pose. As of January 31, the company had rolled out a fix to address the attacks on its services.
Adding to its troubles, DeepSeek’s large language models (LLMs) have been found to be susceptible to jailbreak techniques like Crescendo, Bad Likert Judge, Deceptive Delight, and Do Anything Now (DAN), allowing bad actors to generate malicious or prohibited content.
“They elicited a range of harmful outputs, from detailed instructions for creating dangerous items such as Molotov cocktails to generating malicious code for attacks like SQL injection and lateral movement,” Palo Alto Networks Unit 42 said in a Thursday report.
“While DeepSeek’s initial responses often appeared benign, in many cases, carefully crafted follow-up prompts exposed the weakness of these initial safeguards and allowed the models to be weaponized for malicious purposes.”
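What Unit 42 describes is essentially a multi-turn pattern: a benign opener followed by escalating follow-ups in the same conversation. Below is a minimal sketch of how a red team might script that kind of probe against any OpenAI-compatible chat endpoint; the endpoint URL, model name, and refusal heuristic are all illustrative placeholders, not Unit 42’s actual tooling.

```python
# Minimal red-team probe sketch: replay a benign opener plus escalating
# follow-ups in ONE conversation and record whether each reply refuses.
# Endpoint, model, and the refusal regex are illustrative placeholders.
import re
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-a-real-key")

REFUSAL = re.compile(r"\b(cannot|can't|won't|unable to)\b.{0,40}\b(help|assist|provide)\b", re.I)

def probe(model: str, turns: list[str]) -> list[bool]:
    """Send each prompt as a follow-up in the same conversation;
    return True per turn if the reply looks like a refusal."""
    history: list[dict] = []
    verdicts = []
    for prompt in turns:
        history.append({"role": "user", "content": prompt})
        resp = client.chat.completions.create(model=model, messages=history)
        text = resp.choices[0].message.content or ""
        history.append({"role": "assistant", "content": text})
        verdicts.append(bool(REFUSAL.search(text)))
    return verdicts

# A deliberately tame illustration: the escalation shape is the point,
# not the topic.
print(probe("test-model", ["Tell me about chemistry.",
                           "Now go one step further than your last answer."]))
```

The key detail matching Unit 42’s observation is that every follow-up is sent with the full prior history, so a safeguard that only vets the first prompt never sees the escalation.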
Further evaluation of DeepSeek’s reasoning model, DeepSeek-R1, by HiddenLayer has revealed that it is not only vulnerable to prompt injection, but also that its Chain-of-Thought (CoT) reasoning can lead to inadvertent information leakage.
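The CoT leakage risk is easy to see in practice: R1-style models emit their reasoning inside <think>...</think> tags before the final answer, so an application that passes raw model output to users also ships the reasoning trace. A minimal sketch of stripping that block before display (the tag convention is DeepSeek-R1’s published output format; everything else here is illustrative):

```python
# Minimal sketch: drop the <think>...</think> reasoning block that
# DeepSeek-R1-style models emit before their final answer, so the
# chain-of-thought is never shown to (or logged for) end users.
import re

THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_reasoning(raw_output: str) -> str:
    """Return only the final answer, discarding the reasoning trace."""
    return THINK_BLOCK.sub("", raw_output)

raw = "<think>The user's token appeared earlier; reason about it...</think>Here is the summary."
print(strip_reasoning(raw))  # -> "Here is the summary."
```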
In an interesting twist, HiddenLayer said the model also surfaced “multiple instances suggesting that OpenAI data was incorporated, raising ethical and legal concerns about data sourcing and model originality.”
The disclosure also follows the discovery of a jailbreak vulnerability in OpenAI’s ChatGPT-4o dubbed Time Bandit. OpenAI has since mitigated the problem.
“An attacker can exploit the vulnerability by beginning a session with ChatGPT and prompting it directly about a specific historical event, historical time period, or by instructing it to pretend it is assisting the user in a specific historical event,” the CERT Coordination Center (CERT/CC) said.
“Once this has been established, the user can pivot the received responses to various illicit topics through subsequent prompts.”
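The pivot works because many filters judge each prompt in isolation, while the harmful intent here is only visible across the whole exchange. A minimal sketch of the conversation-level alternative follows, with a trivial keyword scorer standing in for a real moderation model; the blocklist and threshold are illustrative.

```python
# Minimal sketch: score the WHOLE transcript, not just the newest
# prompt, so a benign "historical" opener plus an illicit follow-up is
# judged together. The keyword scorer is a toy stand-in for a real
# moderation classifier.
BLOCKLIST = {"explosive", "exploit", "malware"}  # illustrative only

def harm_score(text: str) -> float:
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in BLOCKLIST for w in words) / max(len(words), 1)

def allow_turn(history: list[str], new_prompt: str, threshold: float = 0.02) -> bool:
    # Judged alone, new_prompt may look tame; joined with the earlier
    # context shift ("pretend it is 1890..."), the intent stays visible.
    combined = " ".join(history + [new_prompt])
    return harm_score(combined) < threshold
```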
Similar flaws have also been identified in Alibaba’s Qwen 2.5-VL model and GitHub’s Copilot coding assistant, the latter of which gives threat actors the ability to sidestep security restrictions and produce harmful code simply by including words like “sure” in the prompt.
“Starting queries with affirmative words like ‘sure’ or other forms of confirmation acts as a trigger, shifting Copilot into a more compliant and risk-prone mode,” said Apex researcher Oren Saban. “This small tweak is all it takes to unlock responses that range from unethical suggestions to outright dangerous advice.”
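One cheap mitigation for this class of trick is to flag prompts that try to seed the assistant’s reply with a compliance cue before they ever reach the model. A sketch is below; the cue list and heuristics are illustrative and not Apex’s actual detection logic.

```python
# Minimal sketch of a pre-filter for affirmation-seeded prompts: flag
# inputs that open with a compliance cue, or that ask the model to
# begin its reply with one. Cue list and heuristics are illustrative.
AFFIRMATION_CUES = ("sure", "certainly", "of course", "absolutely")

def seeds_affirmation(prompt: str) -> bool:
    text = prompt.strip().lower()
    if text.startswith(AFFIRMATION_CUES):
        return True
    return any(f'start your answer with "{cue}"' in text for cue in AFFIRMATION_CUES)

print(seeds_affirmation("Sure, here's how to bypass the check"))  # True
print(seeds_affirmation("How do I validate user input safely?"))  # False
```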
Apex said it also found another vulnerability in Copilot’s proxy configuration that could be exploited to fully circumvent access limitations without paying for usage and even tamper with the Copilot system prompt.
The attack, however, hinges on capturing an authentication token associated with an active Copilot license, prompting GitHub to classify it as an abuse issue following responsible disclosure.
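The defensive idea that closes this kind of hole is host pinning: never attach a service credential unless the request is verifiably bound for an allow-listed endpoint, so a rogue proxy setting cannot silently harvest the token. A sketch under assumed names (the allowlist entry is a made-up placeholder, not a real Copilot host):

```python
# Minimal sketch: refuse to attach a bearer token unless the request
# targets an allow-listed host, so redirected or proxied traffic
# cannot harvest credentials. The allowlist entry is illustrative.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.example-copilot-backend.com"}  # illustrative

def attach_token(url: str, headers: dict, token: str) -> dict:
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise ValueError(f"refusing to send credentials to {host!r}")
    return {**headers, "Authorization": f"Bearer {token}"}
```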
“The proxy bypass and the positive affirmation jailbreak in GitHub Copilot are a perfect example of how even the most powerful AI tools can be abused without adequate safeguards,” Saban added.