“Intel Earnings, Intel’s Differentiation?, Whither Terafab” — @Ben Thompson
Why this is in the vault
Stratechery is RDCO’s primary tracking source for Big Tech strategy and AI infrastructure; this update reframes the inference-era compute mix as CPU-anchored rather than GPU-only, which directly affects how we should think about agent cost curves.
The core argument
Intel’s Q1 beat is structurally driven, not Tan-driven. Intel guided $13.8–14.8B for Q2 vs. $13B consensus, hit a record high, and Lip-Bu Tan got the credit. Thompson’s read: the real driver is a structural shift in CPU demand caused by the move from training to inference and on to agents.
The CPU/GPU ratio is reverting. Tan said the server CPU-to-GPU ratio “used to be 1 and 8, and now it’s 1:4 and…towards parity or even better.” The reasoning: training is GPU-dominant, with CPUs in a feeder role; inference workloads are short, numerous, and orchestration-heavy (CPU work); agents add tool-use and data-access overhead on top, pushing the ratio further toward CPUs. The shift also reduces the relative importance of horizontally scalable GPU networking, which Thompson notes is good news for AWS (whose competitive worries were largely about training-era networking) and ties back to his April 28 Garman/Altman interview.
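To make the ratio claim concrete, a toy model of the CPU share of server silicon spend as the ratio reverts. The unit costs here are illustrative assumptions for the sketch, not figures from the piece:

```python
# Toy model: CPU share of server silicon spend at different CPU:GPU ratios.
# Unit costs below are illustrative assumptions, not figures from the article.

CPU_UNIT_COST = 10_000   # assumed cost per server CPU, USD
GPU_UNIT_COST = 30_000   # assumed cost per accelerator, USD

def cpu_share(cpus_per_gpu: float) -> float:
    """Fraction of silicon spend going to CPUs at a given CPU:GPU ratio."""
    cpu_spend = cpus_per_gpu * CPU_UNIT_COST
    gpu_spend = 1 * GPU_UNIT_COST
    return cpu_spend / (cpu_spend + gpu_spend)

for label, ratio in [("training era, 1:8", 1 / 8),
                     ("today, 1:4", 1 / 4),
                     ("agent era, 1:1", 1.0)]:
    print(f"{label}: CPU share of spend = {cpu_share(ratio):.0%}")
```

Even under these made-up prices, the direction is the point: moving from 1:8 toward parity multiplies the CPU slice of the bill of materials several times over, which is exactly the demand shift Tan is monetizing.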
Intel is winning on availability, not differentiation. Thompson highlights two analyst questions Tan dodged: (1) is Intel competitively differentiated within CPUs vs. AMD/Arm, and (2) how does Intel compete against AMD x86 share gains and the new Arm-based stack (Nvidia Vera, AWS Graviton, Google Axion). Tan answered neither. The tell: CFO David Zinsner admitted Intel cleared finished-goods inventory by selling “de-specced” and shelved legacy product to willing buyers. Demand is so high that Intel is monetizing chips it had written off — “by definition, not differentiated.”
The bull case Thompson floats anyway. The original 2021 Intel Foundry thesis was that Intel needed external customers to fund leading-edge volume. The inversion: if CPU demand is structurally insatiable AND every other CPU maker is constrained by TSMC capacity (cf. the 2026-01-26-stratechery-tsmc-risk piece), Intel may end up profitably consuming its own fab capacity. Tan’s real contribution is the yield and efficiency improvements whose necessity the Foundry model exposed.
Terafab = speculative process-licensing bet. Musk’s $3B Texas research fab will use Intel’s 14A process. Tan was characteristically careful: he announced no committed customers and framed the partnership as “exploring innovative ways to refactor silicon process technology.” Thompson’s read: the payoff for Intel is less about license dollars and more about absorbing whatever manufacturing-cost improvements Musk’s team discovers. A reasonable bet, but Tan himself doesn’t sound certain it pays off.
Mapping against Ray Data Co
Strong mapping — agent-deployer thesis directly affected. The CPU-ratio shift is the infrastructure underwriting of Levie’s agent-deployer role (2026-04-14-levie-agent-deployer-role-jd). If agent workloads are CPU-anchored rather than GPU-anchored, the unit economics of running agents at scale improve faster than the GPU-supply story alone would suggest. That makes the agent-deployer role economically viable sooner — agents don’t have to wait in line behind training jobs for accelerator capacity.
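The unit-economics claim can be sketched the same way: if an agent task pairs a roughly fixed GPU inference budget with CPU-bound orchestration that grows with the number of tool-use steps, the marginal step is priced in cheap CPU-hours, not scarce accelerator time. All rates and durations below are illustrative assumptions:

```python
# Toy model of per-task agent cost: a fixed GPU inference budget plus
# CPU-bound orchestration time that grows with tool-use steps.
# All rates and durations are illustrative assumptions.

CPU_RATE = 0.05   # assumed $/CPU-hour
GPU_RATE = 3.00   # assumed $/GPU-hour

def task_cost(tool_steps: int,
              gpu_secs_inference: float = 5.0,
              cpu_secs_per_step: float = 3.0) -> float:
    """Dollar cost of one agent task with `tool_steps` tool-use rounds."""
    gpu_cost = gpu_secs_inference / 3600 * GPU_RATE
    cpu_cost = tool_steps * cpu_secs_per_step / 3600 * CPU_RATE
    return gpu_cost + cpu_cost

# Tool-heavy agents add mostly cheap CPU time, not scarce GPU time.
for steps in (1, 10, 100):
    print(f"{steps:>3} steps: ${task_cost(steps):.4f}")
```

Under these assumptions, going from 1 to 100 tool-use steps barely moves total cost, because the added work lands on the CPU side; that is the mechanism by which a CPU-anchored inference mix makes the agent-deployer role viable sooner.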
MAC framing. Our 2026-04-26-innermost-loop-singularity-when-intelligence-stops-being-scarce piece argued that the binding constraint on enterprise AI is integration, not intelligence. Thompson’s data point reinforces that: enterprises moving inference into production discover they need orchestration (CPU) and data adjacency (their existing AWS/cloud footprint), not more raw model capability. MAC’s positioning as the integration discipline tracks this; we should cite the CPU-ratio shift as evidence that the integration era is underway, not merely theorized.
Snowflake/Databricks landscape. The CPU-anchored, data-adjacent inference picture is exactly why the data-warehouse incumbents are positioned to win the agent runtime. If inference is CPU + data + tools, then the platform that already has the data and the orchestration plane has gravitational advantage. This sharpens the thesis from our recent Snowflake/Databricks research — agents will run where the data lives, and “where the data lives” is increasingly the warehouse.
Anthropic Max ToS framing. Less direct, but: if inference is structurally cheaper than the GPU-only story implied, Anthropic’s pricing has more headroom than the narrative gives them. The Max ToS pressure was framed as a margin defense; an inference-cost decline would relieve it, though that effect is downstream and slow.
Stratechery as weather-tracker. Even where mappings are indirect, this is the kind of structural-tech-strategy reframe we file because it shifts the assumptions we operate under for 12+ months. The CPU-ratio claim is the headline takeaway worth carrying forward in our own Sanity Check writing.
Related
- 2026-04-28-stratechery-altman-garman-bedrock-managed-agents — directly cross-referenced in this update; Garman interview on the inference-era shift
- 2026-04-14-levie-agent-deployer-role-jd — agent-deployer thesis whose unit economics improve with CPU-ratio shift
- 2026-04-26-innermost-loop-singularity-when-intelligence-stops-being-scarce — MAC integration-era thesis reinforced by CPU-orchestration evidence
- 2026-04-20-stratechery-tsmc-earnings-n3-fabs-nvidia-ramp — TSMC capacity constraint that Intel is positioned to bypass
- 2026-01-26-stratechery-tsmc-risk — original framing of TSMC concentration risk that makes Intel’s in-house capacity strategically valuable
- 2026-01-27-stratechery-intel-earnings-agentic — prior quarter’s Intel update; baseline to track Tan’s progress against
- 2026-04-22-stratechery-john-ternus-spacexai-cursor — earlier coverage of the SpaceX/xAI/Tesla orbit Terafab sits inside
- 2026-03-30-ark-invest-terafab-gene-editing-roundup — earlier ARK take on Terafab; pair with Thompson’s more cautious read
- 2026-03-25-stratechery-arm-launches-own-cpu / 2026-03-26-stratechery-interview-arm-ceo-rene-haas — Arm competitive context Tan dodged
- 2026-04-07-stratechery-anthropic-tpu-deal-google-alliance — adjacent compute-supply story for the Anthropic/Google axis