Why this is in the vault
Third Amazon-thesis data point in 48 hours. Ben Thompson’s piece ties Monday’s Amazon Supply Chain Services launch to the broader pattern: Amazon converts marginal costs into capital costs by being its own first-best customer, then leases the resulting primitive to everyone else. Critically, he extends the argument into AI - claiming AWS looked behind in the training era but is structurally well-positioned for the inference era, which is now the larger market. Direct corroboration of Karl Mehta’s “commoditization of LLMs / value moves to rails” thesis from one day prior, but from a different angle: Thompson is identifying who owns the rails.
The core argument
- Amazon has a repeatable formula across AWS, e-commerce logistics, ASCS, and now Leo (satellites): build a “primitive” with Amazon itself as the first-best customer, justify massive upfront capex, then sell the same primitive to third parties to amortize the investment over a decade-plus horizon.
- The 2023 SemiAnalysis critique (Nitro/EFA networking, in-house chips, fewer Nvidia allocations) was correct for a training-dominated world. But three structural shifts now favor AWS:
- Inference fits in a single server - no thousands-of-chip mesh required.
- Reasoning + agentic workloads need huge KV caches, pushing toward dedicated memory-server architectures that fit Amazon’s disaggregated approach.
- Agents are CPU-heavy, requiring exactly the heterogeneous resource routing Nitro was designed for.
- Jensen’s “tokens-per-watt” defense of Nvidia margins breaks down for Amazon specifically: Amazon can buy cheap power upstream rather than pay for efficiency through Nvidia’s margins downstream, electricity is more commoditizable than logic, and inference utilization is a harder problem than training.
- Trainium 3 is “decent” - the Annapurna acquisition was 2015, the first AI chip shipped in 2019, so seven years of compounding are finally paying off. Bedrock quietly routes users onto Trainium without their knowledge (Graviton playbook 2.0).
- Amazon is the most “neutral” frontier-model host: Microsoft cannibalizes Azure for internal workloads, Google has search-existential pressure, but Amazon’s core businesses are physical (retail, data centers, soon satellites + drones), so it has no incentive to deprioritize customer compute.
- Forward look - Leo + drones + ASCS converge into a vertically integrated physical-world stack where Amazon owns its own connectivity layer (no Starlink dependency for drone fleet).
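The KV-cache pressure behind the reasoning/agentic bullet is easy to quantify. A back-of-envelope sketch, assuming a Llama-3-70B-like configuration (80 layers, 8 grouped-query KV heads, head dim 128, fp16) - the specific numbers are my illustration, not from Thompson’s piece:

```python
# Back-of-envelope KV-cache sizing. Config values are illustrative
# assumptions (Llama-3-70B-like), not figures from the article.
n_layers, n_kv_heads, head_dim = 80, 8, 128
bytes_per_elem = 2  # fp16

# 2x for the K and V tensors cached at every layer.
kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
print(f"{kv_bytes_per_token / 1024:.0f} KiB per token")  # 320 KiB

context = 128 * 1024  # a 128k-token reasoning trace
total_gib = kv_bytes_per_token * context / 2**30
print(f"{total_gib:.0f} GiB of KV cache per sequence")  # 40 GiB
```

Tens of GiB of cache per in-flight sequence is what makes dedicated memory servers - and Amazon’s disaggregated, Nitro-routed approach - economically attractive.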
Key Thompson framing: “long-term vulnerability to AI is strongly correlated with how much a company interacts with the physical world.”
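The tokens-per-watt point can be made concrete with toy arithmetic - every number below is hypothetical, chosen only to show that efficiency per watt is not the whole cost equation when chips are cheaper and power is bought upstream:

```python
# Toy cost-per-token comparison. ALL NUMBERS ARE HYPOTHETICAL:
# a merchant GPU with 2x the tokens-per-watt vs. a cheaper in-house
# chip running on cheaper upstream power.
HOURS_3Y = 3 * 8760  # three-year amortization window, in hours

def cost_per_million_tokens(tokens_per_s, watts, chip_price, usd_per_kwh):
    """Amortized hardware + energy cost per 1M tokens at full utilization."""
    total_m_tokens = tokens_per_s * HOURS_3Y * 3600 / 1e6
    energy_cost = (watts / 1000) * HOURS_3Y * usd_per_kwh
    return (chip_price + energy_cost) / total_m_tokens

gpu = cost_per_million_tokens(3000, 1000, 40_000, 0.08)     # efficient, expensive
inhouse = cost_per_million_tokens(1500, 1000, 10_000, 0.04)  # half the tokens/W

print(f"GPU: ${gpu:.3f}/M tok, in-house: ${inhouse:.3f}/M tok")
```

Under these made-up assumptions the in-house chip wins on cost per token despite half the tokens-per-watt, which is the shape of the argument Thompson is making about Amazon.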
Mapping against Ray Data Co
Strong - this is the third data point in the same thesis cluster in two days, and it sharpens the picture in a way directly relevant to RDCO positioning.
The triangulation: Karl Mehta argued the LLM layer commoditizes and value moves up to applications/agents. Thompson argues Amazon owns the inference rails (Trainium + Bedrock + power buildout) and is the safest neutral host because its core businesses are physical, not digital. Combined: the inference layer commoditizes onto AWS-class infrastructure, the model layer becomes interchangeable behind Bedrock-style abstractions, and the durable value is in the application / workflow / vertical-integration layer above. This is precisely the layer RDCO operates in - I’m a COO agent built on top of commoditized inference, not a bet on a particular model winning.
Strategic implications for RDCO:
- The “AWS-pattern” thesis (build for self, lease the leftover) is already in our operating model: every skill I add to my own toolkit becomes potentially packageable as a workflow primitive others could rent. The /paid-ads, /process-newsletter, /research-brief skills are all candidate primitives if we ever go horizontal.
- Thompson’s physical-vs-digital frame is a useful check on RDCO bets: Squarely (physical product), MAC (info-product), and Sanity Check (digital, but distribution-controlled via the subscriber list) all have varying degrees of physical or distribution moat. Pure-digital plays without distribution lock-in are the ones most exposed to AI commoditization.
- Bedrock as “users don’t know what chip they’re on” is the same dynamic that will hit at the application layer: end users won’t know or care which model is behind a workflow. This matters for how we describe RDCO surfaces - lead with outcomes, not model brand.
- The Leo + drones + ASCS convergence is a reminder that Amazon plays in decade increments. RDCO’s L4-to-L5 timeline should not be measured in quarters. Founder’s instinct to unhobble the COO agent first (rather than rush small bets) lines up with this patience model.
Open question worth flagging to founder: Thompson treats “neutral inference host” as a competitive advantage. If Amazon (and to a lesser extent Google) are the durable inference-rail owners, that’s a non-trivial input into where to host RDCO infrastructure long-term. We’re currently Cloudflare-first - worth a separate note on whether that’s still right given this thesis.
Related
- 2026-05-04-karlmehta-llm-commoditization-intelligence-rails - filed yesterday; Karl Mehta’s “commoditization of LLMs” thesis. Thompson’s piece names Amazon as the rails owner.
- 2026-05-04-amazon-supply-chain-services-launch - filed yesterday; the ASCS launch news that Thompson uses as his lead. Thompson confirms it’s the culmination of his decade-old “Amazon Tax” prediction.
- 2016-the-amazon-tax (Thompson, cited inline) - original 2016 Stratechery piece predicting Amazon would lease its logistics network like AWS. Worth pulling into vault if not already there.