06-reference

stratechery tsmc earnings n3 fabs nvidia ramp

Apr 19, 2026 · reference · source: Stratechery · by Ben Thompson

“TSMC Earnings, New N3 Fabs, The Nvidia Ramp” — @benthompson

Why this is in the vault

TSMC is now publicly behaving as if agentic AI is a real, multi-year demand wave: it is adding new N3 fabs and revising capex upward, the supply-side counterpart to the agent-bubble debate Ray Data Co keeps revisiting. This piece updates the prior “TSMC Risk” thesis Ben filed.

The core argument

Three updates, one recantation:

  1. TSMC’s tone shifted from skeptical to bought-in. CEO C.C. Wei used unusually specific language — “the shift from generative AI and the query mode to agentic AI and the command and action mode is leading to another step up in the amount of tokens being consumed.” Thompson notes that TSMC executives historically refuse to differentiate end-applications; this level of specificity suggests Wei has internalized the agentic-compute thesis (probably via Jensen Huang). 2026 revenue guidance moved from “below 30%” to “above 30%” growth; capex is trending toward the upper end of the $52-56B range.

  2. TSMC is breaking its own playbook by adding N3 capacity to a node already past its targeted ramp. New 3nm fabs are coming in Tainan, Arizona (volume production in H2 2027), and Japan (2028). Historically, TSMC never expands capacity once a node hits its target; it lets the node depreciate and milks it. This is “Intel-like” behavior driven by AI demand TSMC can’t otherwise meet.

  3. These new fabs are functionally Rubin fabs. Nvidia has historically been late to new process nodes because its dies are reticle-sized: a single defect kills one entire huge die, whereas the same defect kills only one of roughly nine small Apple A-series chips occupying the same area (see the yield sketch after this list). So Nvidia waits for the process to mature and inherits depreciated Apple capacity. But Rubin’s compute needs are now too large for the inherited-Apple-capacity model to suffice. The new N3 fabs (likely built copy-exact from proven lines, so fast to ramp) will effectively be Nvidia capacity.

  4. Self-correction on Mythos. Thompson previously suggested Anthropic’s Mythos was trained on Blackwell; updated reporting now points to TPUs. He apologizes for “passing along rumor as fact.”
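
To make the yield argument in item 3 concrete, here is a minimal sketch using the standard Poisson yield model, Y = exp(-D * A). The defect densities and die areas below are illustrative assumptions, not figures from the article:

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_mm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-D * A)."""
    area_cm2 = die_area_mm2 / 100.0  # convert mm^2 to cm^2
    return math.exp(-defect_density_per_cm2 * area_cm2)

# Illustrative (assumed) die sizes: a reticle-sized Nvidia-class die vs. a
# small Apple A-series-class die, roughly 1/9 of a reticle.
RETICLE_DIE_MM2 = 800   # near the ~858 mm^2 reticle limit
SMALL_DIE_MM2 = 90      # roughly A-series sized

# Assumed defect densities: early-ramp node vs. matured node.
for d in (0.5, 0.1):    # defects per cm^2
    big = poisson_yield(d, RETICLE_DIE_MM2)
    small = poisson_yield(d, SMALL_DIE_MM2)
    print(f"D={d:.1f}/cm^2  reticle die yield={big:.0%}  small die yield={small:.0%}")
```

At early-ramp defect densities the reticle-sized die yields around 2% while the small die still yields around 64%; once the node matures the big die climbs to roughly 45%. That asymmetry is why Nvidia historically trails Apple onto each node, and why dedicated new N3 fabs for Rubin are such a departure.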

Mapping against Ray Data Co