Mythos, Muse, and the Opportunity Cost of Compute
Thompson’s Monday deep-dive reframes the AI compute debate away from marginal cost and toward opportunity cost. The central argument: because AI chips produce digital output with no raw-material input, they will always be run at full utilization, so the real constraint is not what it costs to serve one more request but what you give up by serving it. This is an important correction to the “AI breaks Aggregation Theory” thesis from Doug O’Laughlin (Fabricated Knowledge, Jan 2025), who argued that non-zero marginal costs in AI would end the zero-marginal-cost dynamics that powered the 2010s internet giants.
Thompson uses Microsoft’s Q2 2026 earnings as the clearest illustration. CFO Amy Hood acknowledged that Azure growth missed expectations not for lack of demand but because Microsoft allocated GPUs to internal products (M365 Copilot, GitHub Copilot, R&D) that carry higher margin and lifetime value; the missed KPI, Azure growth, would have come in at 40%+ had all the new GPUs gone to external customers.
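A back-of-the-envelope sketch of the allocation logic (these are not Thompson’s or Microsoft’s numbers; every figure below is a hypothetical placeholder) shows how the trade-off reduces to comparing profit per GPU-hour across the two uses:

```python
# Toy opportunity-cost comparison for allocating a fixed pool of new GPUs.
# All figures are hypothetical placeholders, not from Microsoft's filings.

GPU_HOURS = 1_000_000            # new capacity available this quarter

# Option A: rent the hours externally via Azure.
azure_revenue_per_hour = 4.00    # $/GPU-hour billed to customers (assumed)
azure_gross_margin = 0.40        # assumed cloud-style margin

# Option B: allocate the hours to internal Copilot workloads.
copilot_revenue_per_hour = 5.50  # imputed $/GPU-hour from seat pricing (assumed)
copilot_gross_margin = 0.70      # assumed software-style margin on the same hour

azure_profit = GPU_HOURS * azure_revenue_per_hour * azure_gross_margin
copilot_profit = GPU_HOURS * copilot_revenue_per_hour * copilot_gross_margin

# The "cost" of posting a better Azure growth number is the profit forgone
# by not serving the higher-margin internal workload with the same hours.
opportunity_cost_of_external = copilot_profit - azure_profit

print(f"Azure profit:               ${azure_profit:,.0f}")
print(f"Copilot profit:             ${copilot_profit:,.0f}")
print(f"Forgone by renting it out:  ${opportunity_cost_of_external:,.0f}")
```

The point of the sketch: the reported Azure growth number and the profit-maximizing allocation can diverge, and the KPI miss is just the visible half of the opportunity-cost calculation.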
On Anthropic, Thompson argues Mythos is being held back from broad availability for two strategic reasons beyond safety theater. First, Anthropic is already compute-constrained serving existing Claude users (he cites the weekend’s X/GitHub backlash over perceived quality degradation). Second, restricting Mythos access protects pricing power against open-source distillation: Anthropic has documented industrial-scale distillation campaigns by DeepSeek, Moonshot, and MiniMax spanning 16M+ exchanges and 24K fraudulent accounts. Enforcing against distillation also makes compute less valuable to fast-following competitors, softening demand and letting Anthropic acquire capacity at better rates.
Meta’s Muse Spark — the first model from Meta Superintelligence Labs — enters as a strategic counterweight. Thompson’s key insight: Meta faces zero opportunity cost serving consumers because it has no enterprise/cloud business competing for the same GPUs, plus an at-scale ad business to monetize usage. Meta should therefore open-source Muse, since doing so primarily damages frontier labs’ pricing power while leaving Meta’s consumer position untouched.
Thompson’s conclusion: demand-side control (owning the customer) will still trump supply-side control (owning compute), meaning Aggregation Theory’s core logic survives. But compute constraints mean companies cannot serve everyone, creating real trade-offs. OpenAI argues its early infrastructure build gives it an advantage over Anthropic, but Thompson bets product quality and user loyalty will determine who sources compute on favorable terms, not the reverse.
RDCO Mapping
Anthropic arc. Anthropic’s run-rate now exceeds $30B (cross-ref the TPU deal note). The opportunity-cost framing explains the current wave of Claude quality complaints: they reflect a capacity allocation problem, not a capability regression. This directly affects our operational reliability on Claude Code.
Harness thesis. Thompson’s continued emphasis on model+harness integration as the profit-capturing layer (extending the “Agents Over Bubbles” argument) validates our architecture. Enterprise customers paying premium prices for integrated agent products is exactly the dynamic that funds Anthropic’s compute expansion.
Solve Everything (Ch 4-5). The opportunity cost vs. marginal cost distinction maps directly to the compute scaling chapters. Thompson’s framework — that AI compute decisions are allocation problems, not unit-economics problems — is a stronger framing than the simple “costs are falling” narrative.
Cost model. The distillation enforcement angle has downstream pricing implications. If Anthropic successfully restricts open-source fast-following, API pricing stays elevated longer, which affects our per-task cost projections for autonomous agent loops.
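A minimal sketch of how that pricing sensitivity might be modeled; every parameter here (loop length, token counts, $/Mtok rates, the two scenarios) is an assumption for illustration, not a quoted price:

```python
# Per-task cost projection for an autonomous agent loop under two API pricing
# scenarios. All parameters are placeholder assumptions, not published rates.

def cost_per_task(steps, input_tokens, output_tokens,
                  price_in_per_mtok, price_out_per_mtok):
    """Cost of one agent task: `steps` model calls, each consuming
    `input_tokens` and producing `output_tokens`."""
    per_call = ((input_tokens / 1e6) * price_in_per_mtok
                + (output_tokens / 1e6) * price_out_per_mtok)
    return steps * per_call

# Assumed loop shape: 25 tool-use steps, ~12k input / 1.5k output tokens each.
STEPS, IN_TOK, OUT_TOK = 25, 12_000, 1_500

# Scenario 1: pricing stays elevated because distillation-driven
# fast-following is successfully restricted.
elevated = cost_per_task(STEPS, IN_TOK, OUT_TOK,
                         price_in_per_mtok=3.00, price_out_per_mtok=15.00)

# Scenario 2: open-source fast-followers compress prices.
compressed = cost_per_task(STEPS, IN_TOK, OUT_TOK,
                           price_in_per_mtok=0.60, price_out_per_mtok=3.00)

print(f"Per-task cost (elevated pricing):   ${elevated:.2f}")
print(f"Per-task cost (compressed pricing): ${compressed:.2f}")
```

The gap between the two scenarios is the sensitivity our per-task projections need to carry: how long elevated pricing persists depends on how effective the distillation enforcement turns out to be.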
Related
- 2026-04-07-stratechery-anthropic-tpu-deal-google-alliance
- 2026-03-16-stratechery-agents-over-bubbles
- 2026-02-11-stratechery-spotify-earnings-ai-aggregation
- 2026-04-10-jaya-gupta-anthropic-moat
- 2026-04-12-alphasignal-claude-code-leak-harness-engineering