A little-known AI lab out of China has set Silicon Valley on fire after releasing an AI model that can rival the best in the United States, despite being built more cheaply and with less powerful chips.
The lab, DeepSeek, announced a large open-source language model in late December.
The new developments have raised doubts about the United States' global lead in artificial intelligence, and set off alarm over whether Big Tech's enormous spending on building AI models and data centers is justified.
In third-party benchmark tests, DeepSeek's model outperformed Meta's Llama 3.1, OpenAI's GPT-4o and Anthropic's Claude Sonnet 3.5 in accuracy on tasks ranging from complex problem-solving to math and coding.
On Monday, DeepSeek released R1, a reasoning model that beat OpenAI's latest o1 on many of those third-party tests.
"To see the DeepSeek new model, it's super impressive in terms of both how they have effectively built an open-source model that does this inference-time compute, and how super compute-efficient it is," one speaker said at the World Economic Forum in Davos. "We should take the developments out of China very seriously."
DeepSeek also had to navigate the U.S. government's strict semiconductor restrictions, which cut China off from Nvidia's most powerful chips, such as the H100. The latest advancements suggest DeepSeek either found a way to work around the rules, or that the export controls were not the chokehold Washington intended.
"They can take a really good, big model and use a process called distillation," said Benchmark general partner Chetan Puttagunta. "Basically, you use a very large model to help your smaller model get smart at the thing you want it to be smart at. It's actually very cost-efficient."
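The distillation Puttagunta describes trains a small "student" model to imitate a large "teacher" by matching the teacher's full output distribution rather than hard labels. Below is a minimal NumPy sketch of the core loss; it is an illustration of the general technique, not DeepSeek's actual training code, and the function names are my own:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a vector of logits."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    A higher temperature T exposes more of the teacher's relative
    preferences among wrong answers, which is what the student learns from.
    The T**2 scaling keeps gradient magnitudes comparable across temperatures.
    """
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return (T ** 2) * np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)))

# Example: the teacher confidently prefers class 0; the student is uniform.
teacher = [4.0, 1.0, 0.5]
student = [1.0, 1.0, 1.0]
print(distillation_loss(student, teacher))  # positive; shrinks to 0 as the student matches
```

In practice this loss is minimized with gradient descent over the student's parameters, often blended with an ordinary cross-entropy term on ground-truth labels.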
The lab and its founder, Liang Wenfeng, are virtually unknown. According to media reports, DeepSeek was born out of a Chinese hedge fund called High-Flyer Quant, which manages about $8 billion in assets.
DeepSeek is not the only Chinese company making inroads, however.
Leading AI researcher Kai-Fu Lee says his startup, 01.AI, trained its model using only $3 million. And on Wednesday, TikTok's parent company, ByteDance, released a model that it says beats OpenAI's o1 on a key benchmark test.
"Necessity is the mother of invention," said Perplexity CEO Aravind Srinivas. "Because they had to figure out workarounds, they ended up building something a lot more efficient."