High-Bandwidth Memory: The Real Winner in AI

Author: Dick James

Explore how high-bandwidth memory (HBM) is revolutionizing graphics, HPC, and AI but facing production challenges that drive up costs and delay AI accelerator shipments.

In recent years, high-bandwidth memory (HBM) has become the universal memory solution for the high-end graphics, high-performance computing (HPC), and AI markets. Demand for HBM and the 2.5D silicon interposers it requires is currently challenging both the memory and 2.5D packaging industry sectors to supply enough volume production. Shipments of AI accelerators, notably NVIDIA's Blackwell, are delayed in part by the HBM production bottleneck, with memory manufacturers reporting HBM3 capacity "sold out" for the coming year. The high cost of HBM chips, exacerbated by production shortages, is driving the cost of leading-edge AI accelerators to extraordinary levels. For these high-end accelerators, HBM is a bigger cost driver than even the GPU chip itself.
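To put the "high-bandwidth" label in context, a rough per-stack figure can be computed from the published JEDEC HBM3 interface parameters (a 1024-bit bus at up to 6.4 Gb/s per pin). The sketch below is illustrative background, not data from the report; the 8-stack configuration is an assumption for the example, not a specific product spec.

```python
# Back-of-the-envelope HBM3 bandwidth, using JEDEC HBM3 interface figures.
# Illustrative only; real products may run pins below the maximum rate.
bus_width_bits = 1024   # HBM3 interface width per stack
pin_rate_gbps = 6.4     # maximum data rate per pin, Gb/s

stack_bw_gbs = bus_width_bits * pin_rate_gbps / 8  # GB/s per stack
print(f"Per-stack bandwidth: {stack_bw_gbs:.1f} GB/s")  # 819.2 GB/s

# A hypothetical accelerator with 8 stacks (an assumption for scale):
total_tbs = 8 * stack_bw_gbs / 1000
print(f"8-stack total: {total_tbs:.2f} TB/s")  # 6.55 TB/s
```

Multi-terabyte-per-second aggregate bandwidth of this order is what makes HBM, and the 2.5D interposers that carry its thousands of signal traces, so hard to replace in AI accelerators.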

View the Analysis

Unlock the full edition of the Microprocessor Report for exclusive insights and in-depth analysis.