Blackwell Ultra Emphasizes AI Inference
NVIDIA B300 GPU boosts AI inference with 50% more FP4 compute and 288GB HBM3e memory. Launching H2 2025 in HGX B300 NVL16 & GB300 NVL72 platforms.
NVIDIA’s next-generation data center accelerator, the B300, uses an evolved variant of the Blackwell architecture focused on inference performance, with greater FP4 compute throughput and larger HBM3e memory capacity than the standard Blackwell GPU. CEO Jensen Huang offered few architectural details, but said the enhancements as a whole give the B300 roughly 50% more AI performance in dense FP4 compute than the original Blackwell. The B300 Blackwell Ultra GPU will integrate 288 GB of HBM3e memory, 50% more than the foundational Blackwell device, the B200. The B300 GPU, along with the HGX B300 NVL16 and GB300 NVL72 platforms that incorporate it, will be available in the second half of 2025.