NVIDIA’s GB300 AI server is expected to be announced at this year’s GTC 2025 conference, marking the migration to fully liquid-cooled AI clusters.
NVIDIA’s Blackwell Ultra GB300 AI server has a significantly larger TDP than the GB200 and requires top-notch cooling performance.
Team Green is gearing up for the next revolution in the AI industry at GTC 2025, this time featuring the highly anticipated “Blackwell Ultra” lineup, a much more refined version of the existing Blackwell architecture. Although details on NVIDIA’s upcoming AI products, especially the GB300 lineup, are slim, Blackwell Ultra’s performance is said to be “too hot to handle”: compared to existing GB200 clusters, it is reportedly set to beef up the AI server’s cooling mechanisms by up to four times.
A report from Taiwan’s Economic Daily claims that the GB300 AI server is “fully liquid-cooled,” removing elements of the air-cooled setup. In anticipation, demand for liquid-cooling components has surged, given that NVIDIA plans to ramp up Blackwell Ultra production faster than it did with the original Blackwell. The GB300’s thermal dissipation is claimed to be much higher than its predecessor’s, so optimal cooling is required.
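To illustrate why air cooling becomes impractical at this scale, here is a minimal back-of-the-envelope sketch in Python, assuming the roughly 1,400W per-GPU TDP discussed later in this article and a 72-GPU NVL72-style rack layout; the per-rack GPU count for GB300 and the non-GPU power share are illustrative assumptions, not reported specifications.

```python
# Rough rack-level heat load estimate, assuming ~1,400 W per Blackwell Ultra GPU
# (as rumored) and 72 GPUs per NVL72-style rack. The overhead fraction for CPUs,
# NVLink switches, NICs, and fans is an illustrative guess.

GPU_TDP_W = 1400          # rumored per-GPU TDP for Blackwell Ultra
GPUS_PER_RACK = 72        # NVL72-style rack layout (assumed carried over to GB300)
OVERHEAD_FRACTION = 0.25  # assumed extra power for CPUs, switches, NICs, fans

gpu_power_kw = GPU_TDP_W * GPUS_PER_RACK / 1000
total_power_kw = gpu_power_kw * (1 + OVERHEAD_FRACTION)

print(f"GPU heat load per rack:   {gpu_power_kw:.0f} kW")    # ~101 kW
print(f"Estimated total per rack: {total_power_kw:.0f} kW")  # ~126 kW
```

Heat loads in this range are well beyond what conventional air cooling handles comfortably, which is consistent with the report’s claim of a fully liquid-cooled design.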

Using liquid cooling is expected to significantly increase the price of the GB300 AI server. Given that the current price of a GB200 NVL72 server is around $3 million, the GB300’s top configuration will cost considerably more, which will ultimately boost NVIDIA’s revenue. However, given the yield issues surrounding Blackwell, it will be interesting to see whether customers opt for the GB300 in large volumes. Still, given NVIDIA’s dominant presence in the market, strong uptake certainly appears likely.
For a quick overview of the Blackwell Ultra “B300” series, NVIDIA is rumored to be pushing power figures higher this time, with the GB300 AI server said to carry a TDP of up to 1,400W. Thanks to an architecture upgrade, FP4 performance is expected to be around 1.4 times higher than the previous generation, while memory capacity jumps from 192 GB to 288 GB per GPU by utilizing HBM3E in 12-Hi stacks.
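To put the rumored memory jump in perspective, here is a minimal Python sketch of the arithmetic, assuming the commonly reported configuration of eight HBM3E stacks per GPU built from 24 Gb (3 GB) DRAM dies; the stack count and die density are assumptions drawn from public reporting, not confirmed specifications.

```python
# Back-of-the-envelope arithmetic for the rumored GB300 memory capacity jump.
# Assumptions (not confirmed by NVIDIA): 8 HBM3E stacks per GPU and 24 Gbit
# (3 GB) DRAM dies, with the GB200-class part using 8-Hi stacks and
# Blackwell Ultra moving to 12-Hi stacks.

DIE_CAPACITY_GB = 24 / 8   # 24 Gbit die -> 3 GB
STACKS_PER_GPU = 8

def hbm_capacity_gb(stack_height: int) -> float:
    """Total HBM capacity per GPU for a given stack height (dies per stack)."""
    return STACKS_PER_GPU * stack_height * DIE_CAPACITY_GB

gb200_capacity = hbm_capacity_gb(stack_height=8)    # ~192 GB
gb300_capacity = hbm_capacity_gb(stack_height=12)   # ~288 GB

print(f"GB200-class HBM3E: {gb200_capacity:.0f} GB")
print(f"GB300-class HBM3E: {gb300_capacity:.0f} GB")
print(f"Capacity increase: {gb300_capacity / gb200_capacity:.2f}x")  # 1.50x
```

Under these assumed figures, simply moving from 8-Hi to 12-Hi stacks accounts for the full 192 GB to 288 GB increase, a 1.5x jump in capacity per GPU.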
This year’s GTC won’t showcase just a single lineup; NVIDIA also plans to announce its Vera Rubin lineup and outline its status, though the supply chain does not currently expect it to launch yet.