To hear Jensen Huang tell it, Nvidia and artificial intelligence (AI) are just getting started. The CEO of the world's most valuable company wasn't resting on his laurels after posting 94% year-over-year revenue growth for the third quarter, and he faced questions about his company's future with a similarly bullish outlook for AI growth through the rest of the decade.
"Just like any other factory, many of our AI services are running 24/7," Huang told the earnings call audience. "This new type of system is coming online. I call it [the company's data center] an AI factory. It's really close to what it is. It's different from the data centers of the past.

"And these fundamental trends have really just begun. We expect this growth, this modernization, and the creation of new industries to continue for several years."
Huang and CFO Colette Kress clearly believe the company's best days are yet to come, even as analysts questioned whether Nvidia can keep up the pace in areas such as large language model (LLM) development and the scale of AI usage after two years of tremendous revenue growth.
Their reasons for optimism range from consumer adoption rates, to the coming explosion of enterprise and industrial AI, to a long list of companies that rely on Nvidia's data centers and chips (manufacturing is outsourced) for their own applications.
By way of background, an AI data center is a specialized facility designed to handle the heavy computational demands of AI workloads, essentially by processing large amounts of data on high-performance servers. It provides the infrastructure needed to train and deploy complex machine learning models and algorithms, with specialized hardware accelerators and advanced networking all optimized for AI operations. Simply put, it's a data center purpose-built to run AI applications at scale.
If there was a theme to the earnings call, it was the laundry list of companies that depend on Nvidia, from Alphabet to Meta to Microsoft to Oracle to Volvo. But when that list wasn't running, Huang and Kress faced some tough questions from analysts, ranging from the scaling of LLM development to potential controversy over reported overheating issues in the systems built around the company's Blackwell GPUs. The company beat its third-quarter outlook without needing to ship the newly designed Blackwell chips, which are now being added, and demand for Blackwell, according to Kress, is "astounding."
Despite some concerns about a potential slowdown in LLM scaling, Huang argued that there is still plenty of room for growth. He cited continued advances in post-training scaling and inference scaling, and emphasized that scaling of foundation models remains intact.
Post-training scaling originally meant reinforcement learning from human feedback, but it has evolved to incorporate AI feedback and synthetic data generation. Inference-time scaling, meanwhile, demonstrated by OpenAI's o1 model in ChatGPT, improves answer quality by spending more processing time on each response.
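To make the inference-time-scaling idea concrete, here is a minimal toy sketch (hypothetical illustration only, not code from Nvidia or OpenAI): best-of-n sampling trades extra compute per query for a better answer by drawing several candidates and keeping the highest-scoring one.

```python
import random

def generate_candidate(rng):
    # Stand-in for a language model call; here a candidate answer is
    # represented only by a random quality score in [0, 1).
    return rng.random()

def best_of_n(n, seed=0):
    # Spend n model calls (more inference-time compute) and keep the
    # best-scoring candidate.
    rng = random.Random(seed)
    return max(generate_candidate(rng) for _ in range(n))

# With the same seed, sampling 16 candidates can never score worse than
# sampling just 1, since the single sample is among the 16 considered.
assert best_of_n(16) >= best_of_n(1)
```

The point of the sketch is only the trade-off: answer quality is monotone in the number of candidates considered, at the cost of proportionally more compute per query.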
Huang expressed optimism about continued growth in the AI market, driven by ongoing data center modernization and the emergence of generative AI applications. He described the shift from traditional coding to machine learning as a fundamental change, one that will require enterprises to upgrade their infrastructure to support AI workloads.
Huang also highlighted the emergence of generative AI, likening it to the advent of the iPhone: an entirely new capability that creates new market segments and opportunities. He cited examples such as OpenAI, Runway, and Harvey, which provide foundational intelligence, digital-artist intelligence, and legal intelligence, respectively.
Nvidia's Blackwell architecture is designed to meet the demands of this evolving AI environment. The company has developed seven custom chips for the Blackwell system, which can be configured for air- or liquid-cooled data centers and support a variety of NVLink and CPU options.
Huang acknowledged the engineering challenges involved in integrating these systems into different data center architectures, but he remained confident in Nvidia's ability to execute, citing successful collaborations with Dell and with major cloud service providers (CSPs) such as CoreWeave, Oracle, Microsoft, and Google.
Nvidia is also seeing significant growth in enterprise and industrial AI. The company's Nvidia AI Enterprise platform is used by industry leaders to build copilots and agents.
In the industrial AI space, Nvidia's Omniverse platform enables the development and operation of industrial AI and robotics applications. Leading manufacturers like Foxconn rely on Omniverse to accelerate their businesses, automate workflows, and improve operational efficiency.
"The first transformational event is moving from coding that runs on CPUs to machine learning that creates neural networks running on GPUs," Huang said. "The second part of it is generative AI, and we are now creating new types of capabilities the world has never known, a new market segment the world has never experienced before."