Microsoft bought twice as many of Nvidia’s flagship chips this year as any of its biggest rivals in the U.S. and China, as OpenAI’s largest investor accelerated its investment in artificial intelligence infrastructure.
Analysts at technology consultancy Omdia estimate that Microsoft purchased 485,000 Nvidia Hopper chips this year. That puts Microsoft well ahead of Nvidia’s next-largest U.S. customer, Meta, which bought 224,000 Hopper chips, as well as its cloud computing rivals Amazon and Google.
For most of the past two years, demand for Nvidia’s cutting-edge graphics processing units has outstripped supply, and Microsoft’s stockpiling has given it an edge in the race to build next-generation AI systems.
This year, Big Tech companies have spent tens of billions of dollars on data centers running Nvidia’s latest chips, which have been Silicon Valley’s hottest commodity since ChatGPT’s debut two years ago set off an unprecedented surge in AI investment.
Microsoft’s Azure cloud infrastructure was used to train OpenAI’s latest o1 model, as OpenAI competes for next-generation computing supremacy with a resurgent Google, startups such as Anthropic and Elon Musk’s xAI, and Chinese rivals.
ByteDance and Tencent each ordered about 230,000 Nvidia chips this year, according to Omdia estimates. Those orders include the H20, a lower-performance version of Hopper modified to comply with U.S. export controls on sales to Chinese customers.
Amazon and Google, which along with Meta are increasingly deploying their own custom AI chips as alternatives to Nvidia’s, bought 196,000 and 169,000 Hopper chips, respectively, the analysts said.
Omdia calculates its estimates by analyzing companies’ publicly disclosed capital expenditures, server shipments, and supply chain intelligence.
Nvidia, which has begun rolling out Blackwell, Hopper’s successor, has seen its market value climb past $3 trillion this year as Big Tech companies rush to assemble ever-larger GPU clusters.
But the stock’s runaway rally has cooled in recent months amid concerns about slowing growth, competition from Big Tech companies’ own custom AI chips, and potential disruption to its China business under Donald Trump’s incoming administration.
ByteDance and Tencent have emerged as two of Nvidia’s biggest customers this year, despite the U.S. government restricting the capabilities of American AI chips that can be sold in China.
Microsoft, which has invested $13 billion in OpenAI, has been the most aggressive of the U.S. Big Tech companies in building out data center infrastructure, both to run its own AI services, such as its Copilot assistant, and to rent to customers through its Azure division.
Microsoft’s Nvidia chip orders this year are more than triple the number of the same generation of Nvidia AI processors it bought in 2023, when Nvidia was ramping up production of Hopper following ChatGPT’s breakout success.
“Good data center infrastructure projects are very complex and capital-intensive,” Alistair Speirs, Microsoft’s senior director of Azure global infrastructure, told the Financial Times. “They require multi-year planning, so it’s important to give yourself a little bit of wiggle room and anticipate what the growth will be.”
According to Omdia, tech companies around the world will spend an estimated $229 billion on servers in 2024, including Microsoft’s $31 billion and Amazon’s $26 billion in capital spending. The top 10 buyers of data center infrastructure, which now include relative newcomers xAI and CoreWeave, account for 60% of global investment in computing power.
Vlad Galabov, director of cloud and data center research at Omdia, said about 43% of server spending went to Nvidia in 2024.
“Nvidia GPUs represent a very high share of server capital expenditures,” he said. “We are nearing the top.”
While Nvidia still dominates the AI chip market, Silicon Valley rival AMD is also making inroads. According to Omdia, Meta has purchased 173,000 AMD MI300 chips this year, while Microsoft has purchased 96,000.
Big Tech companies are also ramping up their use of their own AI chips this year to reduce their dependence on Nvidia. Google, which has been developing its “tensor processing units” (TPUs) for a decade, and Meta, which launched the first generation of its Meta Training and Inference Accelerator chips last year, each deployed about 1.5 million of their own chips.
Amazon, which has invested heavily in its Trainium and Inferentia chips for cloud computing customers, deployed about 1.3 million of those chips this year. It announced this month that it plans to build a new cluster using hundreds of thousands of its latest Trainium chips for Anthropic, an OpenAI rival in which Amazon has invested $8 billion, to train the next generation of AI models.
Microsoft, by contrast, is at a much earlier stage in building an AI accelerator to rival Nvidia’s, and has installed only about 200,000 of its Maia chips this year.
Speirs said that, as well as using Nvidia’s chips, Microsoft has had to invest heavily in its own technology to offer “unique” services to customers.
“In our experience, building AI infrastructure requires more than just having the best chips; it also takes the right storage components, the right infrastructure, the right software layers, the right host management layers, and error correction. It’s important to have all the components to build that system,” he said.