Omer Taha Cetin | Anadolu | Getty Images
DeepSeek’s powerful new artificial intelligence model isn’t just a win for China. It is also a victory for open-source versions of the technology from the likes of Meta, Databricks, Mistral and Hugging Face, according to industry experts who spoke to CNBC.
Last month, DeepSeek released R1, an open-source reasoning model that reportedly relied on cheaper, less energy-intensive processes while performing comparably to OpenAI’s o1 model.
The development stoked fears that spending on high-performance computing infrastructure could fall, hitting the market value of Nvidia and other chipmakers.
DeepSeek is a Chinese AI lab focused on developing large language models with the ultimate goal of achieving artificial general intelligence (AGI). It was founded in 2023 by Liang Wenfeng, a co-founder of the AI-focused hedge fund High-Flyer.
AGI loosely refers to AI that matches or exceeds human intelligence across a wide range of tasks.
What is open-source AI?
Since OpenAI’s ChatGPT burst onto the scene in November 2022, AI researchers have been working hard to understand and improve the underlying large language model technology that powers it.
One area of focus for many labs has been open-source AI. Open source refers to software whose source code is made freely available on the open web.
Many companies, from tech giants like Meta to scrappy startups such as Mistral and Hugging Face, have bet on open source as a way to improve the technology, increasingly sharing important developments with the wider research community.
How DeepSeek gave open-source AI a boost
According to several tech executives, DeepSeek’s technical breakthrough has only strengthened the case for open-source AI models.
Sewa Rejal, chief commercial officer at AI startup NetMind, told CNBC the Chinese firm’s success shows that “open-source AI is no longer just a non-commercial research initiative but a viable, scalable alternative” to closed models like OpenAI’s GPT.
“DeepSeek R1 has demonstrated that open-source models can achieve state-of-the-art performance rivaling OpenAI’s,” Rejal told CNBC. “This challenges the belief that only closed-source models can dominate innovation in this field.”
Rejal isn’t alone in that view. Meta’s chief AI scientist, Yann LeCun, said DeepSeek’s success is a victory for open-source AI models, not necessarily a win for China over the United States. Meta is behind the popular open-source AI model called Llama.
“To people who see the performance of DeepSeek and think China is surpassing the U.S. in AI: you are reading this wrong. The correct reading is that open-source models are surpassing proprietary ones,” he said in a LinkedIn post.
“DeepSeek has profited from open research and open source (e.g., PyTorch and Llama from Meta),” he added. “They came up with new ideas and built them on top of other people’s work. That is the power of open research and open source.”
Open-source AI goes global
Cut off by Washington from the advanced chips needed to train and run AI models, many Chinese companies, DeepSeek included, have leaned on open-source technologies to boost the appeal of their AI models and to spur innovation and adoption.
But it isn’t just China looking to open-source technology for success in AI. In Europe, academics, companies and data centers have partnered on the development of high-performing, multilingual large language models under a project called OpenEuroLLM.
The alliance is led by Jan Hajič, a renowned computational linguist at Charles University in the Czech Republic, and Peter Sarlin, co-founder of Silo AI, an AI lab bought last year by U.S. chipmaker AMD.
The initiative forms part of a broader push for AI sovereignty, which encourages governments to invest in homegrown AI labs and data centers to reduce their reliance on Silicon Valley.
What is the catch?
Open-source AI comes with drawbacks, however. Experts warn that while open-source technology is good for innovation, it is also more prone to cyber exploitation, since anyone can repackage and modify it.
Cybersecurity firms have already found vulnerabilities in DeepSeek’s AI models. Research published last week revealed that R1 contains serious safety flaws.
Using “algorithmic jailbreaking techniques,” Cisco’s AI safety research team said it got R1 to provide affirmative responses to a series of harmful prompts from the popular HarmBench benchmark “with a 100% attack success rate.”
“DeepSeek R1 was purportedly trained with a fraction of the budgets that other frontier model providers spend on developing their models. However, it comes at a different cost: safety and security,” Kassianik and Amin Karbasi wrote.
There are also data risks. Data processed by DeepSeek’s R1 model via its website or app is sent directly to China. Chinese tech firms have long been dogged by allegations that Beijing uses their systems to spy on Western entities and individuals.
“DeepSeek, like other generative AI platforms, presents a double-edged sword for businesses and individuals alike,” said Matt Cooke, cybersecurity strategist EMEA at Proofpoint. “The potential for innovation is undeniable, but the risk of data leakage is a serious concern.”
“DeepSeek is relatively new, and it will take time to learn about the technology. However, what we do know is that feeding sensitive company data or personal information into these systems is like handing over a loaded weapon,” Cooke said.
NetMind’s Rejal told CNBC that open-source AI models introduce cybersecurity risks that businesses need to consider, such as software supply chain attacks, prompt jailbreaking and so-called “data poisoning” incidents.
WATCH: Why China’s DeepSeek puts America’s AI lead at risk