High-Bandwidth Memory: The Real Winner in AI
Author: Dick James
Explore how high-bandwidth memory (HBM) is revolutionizing graphics, HPC, and AI but facing production challenges that drive up costs and delay AI accelerator shipments.
In recent years, high-bandwidth memory (HBM) has become the universal memory solution for the high-end graphics, high-performance computing (HPC), and AI markets. Demand for HBM, and for the 2.5D silicon interposers it requires, is currently challenging both the memory and 2.5D packaging industry sectors to deliver sufficient production volume. Shipments of AI accelerators, notably NVIDIA's Blackwell, are delayed in part by this HBM production bottleneck, with memory manufacturers reporting HBM3 capacity "sold out" for the coming year. The high cost of HBM chips, exacerbated by production shortages, is driving the cost of leading-edge AI accelerators to extraordinary levels. For these high-end accelerators, HBM is a much bigger cost driver than even the GPU chip itself.