The Chip Insider® – IEDM 2025. Nvidia versus Google
Author: G. Dan Hutcheson
6 Min Read December 23, 2025

This year’s IEEE International Electron Devices Meeting was a celebration of the 100th anniversary of the invention of the transistor by Julius Lilienfeld. If you’re one of those who believes it was Bell Labs that did this in 1947, then you’re a victim of the golden law of marketing: those with the richest marketing budget get the recognition. Moreover…

What I’ve found to be so great about IEDM is that it’s one of the best ways to see technologies that will be monetizable in 5 to 10 years. This year included four focus sessions on next-generation research:

- Efficient AI Solutions: Silicon Photonics for Energy Efficient AI Computing
- Beyond Von Neumann, from Physics-inspired Solvers to Quantum Hardware Implementation
- The Invisible Revolution in Thin-Film Transistors
- Efficient AI Solutions: Advances in Architecture, Circuit, Device & 3D Integration

Papers from the Heavy Hitters: TSMC’s … SRAM … the world’s smallest 6T SRAM bit-cell… Kioxia’s … 3D DRAM … Intel’s GaN-Chiplet Integration … Samsung’s 0.7µm-Pitch Dual Photodiode Pixel… Sony’s Ge-on-Si single-photon avalanche diode (SPAD) sensor with SWIR detection wavelength…
Nvidia versus Google: Dan, what do you think of all the talk about Google and its TPU being a strategic threat to Nvidia and its GPU? … First, Alphabet still needs plenty of the open-market xPU chips that AMD and Nvidia supply. As for the general-purpose-versus-custom AI chip battle: the market seems to equate AI chips with GPUs and to think TPU (Tensor Processing Unit) chips are Google’s alone. Today’s general-purpose AI chips are often composites of different xPU cores, including CPU, GPU, and TPU cores, hence the growing use of ‘xPU.’ Nvidia’s Blackwell is a far cry from its mainstream GPUs. Remember, the big reason GPU architectures won out…
Switching dimensions, one way to think of the Nvidia-versus-Google battle is as being similar to the old TSMC-foundry-versus-Intel-IDM battle. You’re probably thinking… Huh? Bear with me … Hence, both have similar, but not identical, grand strategies.
Another angle for Google is its linking of hardware and software, versus LLM software-centric companies such as OpenAI. The latter are more like Microsoft before the WinTel PC disruption… Google’s risk matrix depends on … This is no different than for AWS... So, the hyperscalers are making their own chips targeted at specific workloads. However, these chips are like F1 cars, able to run well only on those workloads. Expand the workload variance and you’re back to needing a general-purpose AI chip. It’s not a question of one-or-the-other substitutability… It’s one of complementarity.
“I've always been more interested in the future than in the past.” — Grace Hopper