Last week, semiconductor stocks like NVIDIA (NVDA 4.29%), Advanced Micro Devices (AMD -7.35%), and Micron Technology (MU 1.65%) tumbled on news that a Chinese start-up called DeepSeek had found a way to train artificial intelligence (AI) models at a fraction of the cost incurred by its American counterparts.
Investors were concerned that DeepSeek's innovative approach could sap demand for graphics processing units (GPUs) and other data center components. However, these concerns may be exaggerated.
Meta Platforms (META 0.04%) is a large-scale buyer of NVIDIA and AMD AI chips. On January 29, CEO Mark Zuckerberg made a series of comments that should be music to the ears of investors who own AI hardware stocks.
DeepSeek's background
DeepSeek was founded in 2023 as a separate entity by a successful Chinese hedge fund that had been using AI in its operations for years, and it built on the published work of other AI research companies.
Last week's stock market panic was triggered by DeepSeek's V3 large language model (LLM), which matches the performance of the latest GPT-4o model from U.S.-based OpenAI on some benchmarks. That alone might not have rattled markets, but DeepSeek claims it spent just $5.6 million training V3, whereas OpenAI has burned through more than $20 billion since 2015 to reach its current stage.
Even more striking, DeepSeek cannot access NVIDIA's latest data center GPUs because the U.S. government has banned their sale to Chinese companies. In other words, the start-up had to rely on older generations of chips like the H100 and the deliberately underpowered H800, suggesting it's possible to train leading AI models without the best hardware.
DeepSeek innovated on the software side instead, developing more efficient algorithms and data input methods to offset its shortfall in computing power. It also adopted a technique called distillation, which involves training a new model using the outputs of an already-successful model. This dramatically accelerates the training process and requires far less computing capacity.
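The article doesn't detail DeepSeek's exact recipe, but the core idea of distillation can be sketched in a few lines of Python: a small "student" model is trained to reproduce the softened output distribution of a larger "teacher" model, rather than learning from raw data alone. Everything below (the logits, the temperature value) is purely illustrative, not DeepSeek's actual method.

```python
import math

def softmax(logits, temperature=1.0):
    # Convert raw model scores (logits) to probabilities; a higher
    # temperature "softens" the distribution, exposing more of the
    # teacher's knowledge about near-miss answers.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy between the teacher's softened distribution and the
    # student's: the training signal a student minimizes in distillation.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# Toy check: a student whose outputs track the teacher's incurs a
# lower loss than one that disagrees, so gradient descent pushes the
# student toward the teacher's behavior.
teacher = [4.0, 1.0, 0.2]
good_student = [3.8, 1.1, 0.1]
bad_student = [0.2, 1.0, 4.0]
print(distillation_loss(teacher, good_student) <
      distillation_loss(teacher, bad_student))
```

Because the teacher has already done the expensive work of learning from raw data, the student converges with far fewer training steps, which is why distillation cuts compute costs so sharply.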
Investors worry that if other AI companies adopt DeepSeek's approach, they won't need to buy as many GPUs from NVIDIA or AMD. That would also crush demand for Micron's industry-leading data center memory solutions.
NVIDIA, AMD, and Micron power the AI revolution
NVIDIA's GPUs are the world's most popular for developing AI models. The company's fiscal 2025 just ended on January 31, and based on management's guidance, revenue may have more than doubled to a record $128.6 billion (official results will be announced on February 26). If recent quarters are any guide, most of that revenue will have come from the data center segment, thanks to GPU sales.
That incredible growth is why NVIDIA has added $2.5 trillion to its market capitalization over the past two years. If demand for its chips slows, much of that value could evaporate.
AMD has become a worthy competitor to NVIDIA in the data center. The company will release its new MI350 GPU in the second half of this year, which it says will be comparable to NVIDIA's latest Blackwell chips, the gold standard for processing AI workloads.
However, AMD is also a leading supplier of AI chips for personal computers, which could become a major growth segment in the future. LLMs may eventually run on smaller chips inside computers and devices, reducing their dependence on external data centers.
Finally, Micron is often overlooked as an AI chip company, but it plays an important role in the industry. Its HBM3E (high-bandwidth memory) for data centers is best in class for capacity and energy efficiency, which is why NVIDIA uses it in its latest Blackwell GPUs. Memory stores information in a ready state so the GPU can access it instantly as needed. Because AI workloads are so data-intensive, memory is a critical piece of the hardware puzzle.
Mark Zuckerberg may have just eased those recent concerns
Meta Platforms spent $39.2 billion on chips and data center infrastructure in 2024 and plans to spend up to $65 billion this year. These investments help the company advance its Llama LLMs, the world's most popular open-source models with 600 million downloads. Llama 4 is scheduled for release this year, and CEO Mark Zuckerberg believes it could become the most advanced in the industry, surpassing even the best closed-source models.
On January 29, Meta held its fourth-quarter 2024 earnings call with analysts. When Zuckerberg was asked about DeepSeek's potential impact, he said it was probably too early to judge what it means for capital investment in chips and data centers. However, he added that even if AI training workloads require less computing power, that doesn't mean companies will need fewer chips.
Instead, he believes capacity can shift from training to inference, the process by which an AI model takes input from users and generates a response. Many developers are moving away from training models on ever-larger amounts of data and focusing instead on "reasoning" capabilities. This approach, called test-time scaling, produces higher-quality responses by letting the model spend extra time "thinking" before rendering its output.
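In its simplest form, test-time scaling means spending more compute at inference rather than in training: sample several candidate answers and keep the one a scoring function likes best. The sketch below is a toy illustration of that trade-off; the candidate generator and scorer are stand-ins, not any real model's API.

```python
import random

def generate_candidate(prompt, rng):
    # Stand-in for sampling one response from an LLM. Here each
    # "response" is just a random number whose quality we can score.
    return rng.random()

def score(candidate):
    # Stand-in for a verifier or reward model rating a response.
    return candidate

def best_of_n(prompt, n, seed=0):
    # More inference-time compute (a larger n) buys a better expected
    # answer, with no additional training at all.
    rng = random.Random(seed)
    candidates = [generate_candidate(prompt, rng) for _ in range(n)]
    return max(candidates, key=score)

# Sampling 16 candidates can never do worse than sampling 1 from the
# same stream, which is why extra "thinking" at inference helps.
print(best_of_n("question", 16) >= best_of_n("question", 1))
```

The point for chip demand: every extra candidate sampled is extra GPU work at serving time, so shifting quality gains from training to inference moves the compute bill rather than eliminating it.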
For inference, Zuckerberg says companies will still need best-in-class data center infrastructure to maintain an edge over the competition. In addition, most AI software products have not yet achieved mainstream adoption, and Zuckerberg acknowledges that serving large numbers of users will require additional data center capacity over time.
So while it's difficult to put precise numbers on how DeepSeek's innovations will change chip demand, Zuckerberg's comments suggest that investors in NVIDIA, AMD, and Micron stock may have panicked prematurely. In fact, the long-term bull case for these stocks appears intact.