Nvidia plans to report its fourth-quarter financial results on Wednesday after the bell.
The report is expected to put the finishing touches on one of the most remarkable years from any large company. Analysts polled by FactSet expect revenue of $38 billion for the quarter ended in January, a 72% increase year over year.
The January quarter caps the second consecutive fiscal year in which Nvidia more than doubled its sales. It’s a breathtaking streak, powered by the fact that Nvidia’s data center graphics processing units (GPUs) are essential hardware for building and deploying artificial intelligence services such as OpenAI’s ChatGPT. Over the past two years, Nvidia’s stock has risen 478%, at times making it the most valuable U.S. company, with a market capitalization of over $3 trillion.
But Nvidia’s stock has slowed in recent months as investors question where the chipmaker goes from here.
It is trading at the same price it did last October, and investors are wary of signs that Nvidia’s most important customers may be tightening their belts after years of heavy capital spending. Those concerns have grown in the wake of recent AI breakthroughs out of China.
Much of Nvidia’s sales go to a handful of companies that build massive server farms, usually to rent out to other businesses. These cloud companies are typically called “hyperscalers.” Last February, Nvidia said a single customer accounted for 19% of its total revenue in fiscal 2024.
Morgan Stanley analysts estimated this month that Microsoft will account for nearly 35% of 2025 capital spending on Nvidia’s latest AI chip, Blackwell. Google is at 32.2%, Oracle at 7.4% and Amazon at 6.2%.
That’s why Nvidia’s stock can be rattled by any sign that Microsoft or its rivals might pull back on their spending plans.
Last week, analysts at TD Cowen said they had learned that Microsoft had canceled leases with private data center operators, slowed its process of negotiating new leases, and adjusted plans to spend on international data centers in favor of U.S. facilities.
The report sparked fears about the sustainability of AI infrastructure growth, which could mean less demand for Nvidia’s chips. TD Cowen’s Michael Elias said his team’s findings point to “a potential oversupply position” for Microsoft. Nvidia’s shares fell 4% on Friday.
Microsoft pushed back on Monday, saying it still plans to spend $80 billion on infrastructure in 2025.
“While we may strategically pace or adjust our infrastructure in some areas, we will continue to grow strongly in all regions. This allows us to invest and allocate resources to growth areas for our future,” a spokesperson told CNBC.
Last month, most of Nvidia’s key customers touted large investments. Alphabet is targeting $75 billion in spending this year, Meta up to $65 billion, and Amazon as much as $100 billion.
Analysts say about half of AI infrastructure capital spending ends up with Nvidia. Many hyperscalers have dabbled in AMD’s GPUs and are developing their own AI chips to reduce their dependence on Nvidia, but the company still holds the bulk of the market for cutting-edge AI chips.
So far, those chips have been used primarily to train new AI models, a process that can cost hundreds of millions of dollars. Once the AI is developed by companies such as OpenAI, Google and Anthropic, warehouses full of Nvidia GPUs are then needed to serve those models to customers. That is why Nvidia expects its revenue to keep growing.
Another challenge for Nvidia was the emergence of Chinese startup DeepSeek, which released an efficient, “distilled” AI model last month. Its strong performance suggested that billions of dollars of Nvidia GPUs may not be needed to train and use cutting-edge AI. The news temporarily sank Nvidia’s stock, wiping out almost $600 billion of the company’s market capitalization.
Nvidia CEO Jensen Huang will have the opportunity to explain Wednesday why AI continues to need even more GPU capacity even after last year’s massive buildout.
Recently, Huang has spoken about “scaling laws,” an observation from OpenAI in 2020 that AI models get better the more data and computing power are used in creating them.
Huang has said DeepSeek’s R1 model points to a new wrinkle, which Nvidia calls “test-time scaling.” He argues that the next major path to AI improvement is applying more GPUs to the process of deploying AI, or inference. That allows chatbots to “reason,” generating a great deal of data in the process of thinking through a problem.
AI models are trained only a handful of times as they are created and fine-tuned. But AI models can be called millions of times per month, so using more computing power for inference will require more Nvidia chips to be deployed to customers.
“The market responded to R1 as in, ‘oh my gosh, AI is finished,’ that AI doesn’t need to do any more computing,” Huang said in a pretaped interview last week. “It’s exactly the opposite.”