Blackwell Ultra Emphasizes AI Inference
The NVIDIA B300 GPU boosts AI inference with 50% more FP4 compute and 288 GB of HBM3e memory, launching in H2 2025 in the HGX B300 NVL16 and GB300 NVL72 platforms.

NVIDIA’s next-generation data center accelerator, the B300, uses an evolved variant of the Blackwell architecture focused on inference performance, with greater FP4 compute capability and larger HBM3e memory capacity than the standard Blackwell GPU. CEO Jensen Huang offered few details about the architectural enhancements, which he said would together give the B300 about 50% more AI performance in dense FP4 compute than the original Blackwell. The B300 Blackwell Ultra GPU will integrate 288 GB of HBM3e memory, 50% more than the foundational Blackwell device, the B200. The B300 GPU and the HGX B300 NVL16 and GB300 NVL72 platforms that incorporate it will be available in the second half of 2025.
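As a rough illustration, the quoted uplifts can be worked backward from the B200. The 192 GB B200 capacity follows from the article's figures (288 GB is 50% more); the B200 dense-FP4 baseline below is a placeholder assumption for illustration, not a confirmed specification.

```python
# Illustrative arithmetic for the claimed B300 (Blackwell Ultra) uplifts.
# The 50% uplift and 288 GB figures come from the announcement; the B200
# dense-FP4 baseline is an assumed placeholder, not a confirmed spec.

B200_HBM3E_GB = 192.0          # implied: 288 GB is "50% more" than the B200
B200_DENSE_FP4_PFLOPS = 9.0    # assumption for illustration only

UPLIFT = 1.5                   # "about 50% more"

b300_hbm_gb = B200_HBM3E_GB * UPLIFT
b300_fp4_pflops = B200_DENSE_FP4_PFLOPS * UPLIFT

print(f"B300 HBM3e capacity: {b300_hbm_gb:.0f} GB")        # 288 GB, matching the article
print(f"B300 dense FP4 (from assumed baseline): {b300_fp4_pflops:.1f} PFLOPS")
```

The memory result reproduces the article's 288 GB figure; the compute number is only as good as the assumed baseline.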
