“TSMC Earnings, New N3 Fabs, The Nvidia Ramp” — @benthompson
Why this is in the vault
TSMC is now publicly behaving as if agentic AI is a real, multi-year demand wave — adding new N3 fabs and revising capex upward — which is the supply-side counterpart to the agent-bubble debate Ray Data Co keeps revisiting. This issue revises the prior “TSMC Risk” thesis Ben filed.
The core argument
Three updates, one recantation:
- TSMC’s tone shifted from skeptical to bought-in. CEO C.C. Wei used unusually specific language — “the shift from generative AI and the query mode to agentic AI and the command and action mode is leading to another step up in the amount of tokens being consumed.” Thompson notes that TSMC executives historically refuse to differentiate end-applications; this level of specificity suggests Wei has internalized the agentic-compute thesis (probably via Jensen Huang). 2026 revenue guidance moved from “below 30%” to “above 30%” growth; capex is trending toward the upper end of the $52-56B range.
- TSMC is breaking its own playbook by adding N3 capacity to a node already past its targeted ramp. New 3nm fabs in Tainan, Arizona (volume production H2 2027), and Japan (2028). Historically TSMC never expands capacity once a node hits its target; it lets the fabs depreciate and milks them. This is “Intel-like” behavior driven by AI demand TSMC can’t otherwise meet.
- These new fabs are functionally Rubin fabs. Nvidia has historically been late to new processes because its dies are reticle-sized — a single defect kills an entire die, whereas the same defect kills only one of the ~9 small Apple A-series dies occupying the same area. So Nvidia waits for a process to mature and inherits depreciated Apple capacity. But Rubin’s compute needs are now too large for the inherited-Apple-capacity model to suffice. The new N3 fabs (likely copy-exact builds of proven lines, hence fast to ramp) will effectively be Nvidia capacity.
- Self-correction on Mythos. Thompson previously suggested Anthropic’s Mythos was trained on Blackwell; updated reporting now points to TPUs. He apologizes for “passing along rumor as fact.”
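The yield asymmetry in the Rubin-fabs point above can be sketched with the standard Poisson die-yield model, Y = exp(-D0·A). The die areas and defect density below are illustrative assumptions for the sake of the comparison, not TSMC or Nvidia figures.

```python
import math

def die_yield(area_mm2: float, d0_per_cm2: float) -> float:
    """Fraction of dies with zero defects under the Poisson yield model.

    One defect kills the whole die, so larger dies suffer
    disproportionately at any given defect density.
    """
    area_cm2 = area_mm2 / 100.0
    return math.exp(-d0_per_cm2 * area_cm2)

D0 = 0.1  # assumed defect density in defects/cm^2 (early-node ballpark)

# ~reticle-limit GPU die vs. a ~1/9-reticle mobile SoC (assumed areas)
reticle_die = die_yield(858.0, D0)
small_die = die_yield(858.0 / 9, D0)

print(f"reticle-sized die yield: {reticle_die:.1%}")
print(f"small die yield:         {small_die:.1%}")
```

At these assumed numbers the small die yields above 90% while the reticle-sized die yields well under half — which is why Nvidia traditionally waits for a node to mature (lower D0) before committing its largest dies.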
Mapping against Ray Data Co
- Reinforces the agent-thesis supply chain. RDCO’s positioning rests on agents being a real multi-year demand wave, not a hype cycle. When the supply-side actor most exposed to bubble risk (TSMC, which builds capacity 3-5 years ahead) starts breaking its own playbook to add capacity, that’s a strong supply-side vote of confidence. This is the kind of evidence the founder should weight heavily when stress-testing “are we building for a real wave or a temporary distortion?”
- The 3-5 year fab lead time is the relevant timescale. RDCO’s COO-as-Claude-Code thesis assumes agents become deployable production infrastructure within that window. TSMC’s bet is essentially the same bet, just at the silicon layer.
- Models the discipline of public self-correction. Thompson’s one-line Mythos retraction is the right pattern for any analyst-publication: when you passed along a rumor, name it and correct it inline. This is a craft note for the Sanity Check newsletter.
- Useful for any “agents are real” Sanity Check piece. Wei’s exact quote (“the shift from generative AI and the query mode to agentic AI and the command and action mode”) is a quotable supply-side anchor for arguments that the agent transition is happening at the infrastructure level, not just at the chatbot/UX level.
Related
- 2026-01-26-stratechery-tsmc-risk — the prior issue this update revises (TSMC was alarmingly under-investing in AI-driven capacity)
- 2026-01-21-stratechery-tsmc-earnings-foundry-competition — earlier TSMC earnings read
- 2026-03-16-stratechery-agents-over-bubbles — Thompson’s core agents-thesis piece, now corroborated by TSMC behavior
- 2026-04-13-stratechery-mythos-muse-compute — the prior Mythos-on-Blackwell post being self-corrected here
- 2026-01-06-stratechery-nvidia-groq-deal — Nvidia ecosystem context
- 2026-01-07-stratechery-nvidia-ces-vera-rubin — Vera Rubin context for the chip these new fabs are being built for