“Nvidia and Groq, A Stinkily Brilliant Deal” — Ben Thompson
Why this is in the vault
Illustrates the “stinky deal” acqui-hire model reshaping AI M&A and Nvidia’s inference strategy — relevant to understanding AI infrastructure competitive dynamics.
The core argument
Nvidia paid ~$20B to license Groq’s deterministic inference chip technology and hire ~90% of its employees, including founder Jonathan Ross. This is not a formal acquisition but a license-and-hire arrangement — the latest evolution of “stinky deals” that emerged because overzealous regulators made traditional acquisitions too difficult, inadvertently creating a model that lets big tech cherry-pick talent and IP without acquiring the whole company.
Thompson argues this deal is uniquely significant because, unlike prior stinky deals that dodged unwarranted scrutiny, this one dodges scrutiny that would have been warranted — Groq’s SRAM-based deterministic inference chips are genuinely differentiated for low-latency use cases (voice, ad serving, model routing). Nvidia gets a capability it lacked, plus the talent to integrate it into the CUDA ecosystem and manufacture on TSMC’s leading-edge process.
The deal structure is “brilliant” from Nvidia’s perspective: a non-exclusive license, top talent, no legacy business baggage, and regulatory invisibility. Employees were treated well, with vesting acceleration and payouts at the $20B valuation.
Mapping against Ray Data Co
Tangential. The inference-speed dimension matters for agent architecture (low-latency model routing), but RDCO is a consumer of inference, not a builder of chips. The “stinky deal” regulatory pattern is useful context for understanding AI market consolidation.
Related
- agent-architecture