06-reference

stratechery altman garman bedrock managed agents

Mon Apr 27 2026, 20:00 EDT · reference · source: Stratechery · by Ben Thompson (interview with Sam Altman, OpenAI CEO + Matt Garman, AWS CEO)
amazon-bedrock · openai-microsoft-deal · agent-substrate · managed-agents · harness-thesis

“An Interview with OpenAI CEO Sam Altman and AWS CEO Matt Garman About Bedrock Managed Agents” — @benthompson

Why this is in the vault

Two CEOs articulate the substrate-layer fight in primary-source language: AWS + OpenAI announce a co-built “Bedrock Managed Agents” product the same week Microsoft surrenders Azure exclusivity. This is a direct stress test of the harness-thesis cluster and the most explicit confirmation yet that the substrate layer (model + harness + identity + state + permissions, packaged for enterprise deployment) is now the contested ground. RDCO’s agent-deployer thesis sits one layer above this fight, which is exactly why we file it as competitive intel, not as a competitor.

Episode summary (the deal context)

Thompson’s framing precedes the interview: he conducted the conversation Friday; on Monday, Microsoft and OpenAI amended their agreement so OpenAI can serve products on any cloud. Under the amended terms, Azure remains “primary” but the license is non-exclusive through 2032; Microsoft no longer pays revenue share to OpenAI; OpenAI continues paying revenue share to Microsoft through 2030, with a cap; and Microsoft is released from the AGI clause. Thompson reads it as Microsoft tending its OpenAI investment, since Azure exclusivity was actively hurting OpenAI vs. Anthropic in the enterprise. The interview itself launches Bedrock Managed Agents: OpenAI frontier models packaged inside an AWS-native agent runtime (identity, permissions, state, logging, governance, deployment). The product is exclusive to AWS, customer data stays in the customer’s VPC, and it runs on a Trainium/GPU mix.

Key claims (numbered, attributed)

  1. (Altman) The harness is no longer separable from the model. “I no longer think of the harness and the model as these entirely separable things.” When you fire something at Codex, Altman himself doesn’t know how much credit goes to the model vs the harness.
  2. (Altman) Tool-calling is the precedent: things that looked separable get baked into training over time. He expects model + harness to converge further, and pre-training + post-training to converge as well.
  3. (Altman) The activation-energy analogy: Bedrock Managed Agents is to agents what early AWS was to cloud. Pre-AWS you could rent colo space and stand up servers yourself; pre-Bedrock-Managed-Agents you can stitch together OpenAI + AWS yourself. The new offering compresses that activation energy and unlocks workflows you literally cannot reliably get to work today.
  4. (Garman) The product is built on top of AgentCore primitives. Bedrock Managed Agents = AgentCore components (memory, safe execution, permissioning) + OpenAI models, co-built. AgentCore continues as the DIY path for builders who want to assemble it themselves.
  5. (Altman) Exclusivity is real, scoped to this product. “We’re doing this exclusively with Amazon.” Distinct from OpenAI API access on other clouds. Spiritually a joint company effort, not just an API surface.
  6. (Garman) Customer data stays in the customer’s VPC; OpenAI does not see it. Frontline support is AWS; AWS escalates to OpenAI for model-level bugs.
  7. (Garman + Altman) Trainium will run an increasing share over time, mixed with GPUs initially. Garman: “Some of it’s timing and capabilities… over time, more and more of it will be on Trainium.”
  8. (Altman) Per-token pricing is the wrong unit. GPT-5.5 has higher per-token cost than 5.4 but uses far fewer tokens to reach the same answer; customers want “best unit of intelligence at the lowest price.” He floats “intelligence factory” over “token factory.”
  9. (Altman) The architecture problem is identity for agents. Open question: should an agent log in as the employee, with an “agent flag,” or have its own account? “We don’t even have a primitive to think about that.”
  10. (Altman, asked about Thompson’s middleware-layer theory) Yes, customers are converging on a consistent ask. Large enterprises want an agent runtime, a management layer to connect data + monitor token spend, and an end-user workspace (he hopes Codex). “That package of what people are asking for is getting remarkably consistent.”
  11. (Garman) AWS’s neutrality is intentional, not accidental. Contrast with Google’s full-stack integration story (Kurian week prior). AWS view: “the best products win,” partners + first-party can coexist, S3 is the only S3 but the upper layers are open.
  12. (Altman) The most important reframe a year out won’t be “OpenAI on AWS” — it’ll be “we didn’t realize how important this new product was.” Both CEOs frame the managed-agent stack as a new computing paradigm, not just a distribution deal.
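Claim 8’s pricing logic can be made concrete with a toy calculation. All prices and token counts below are hypothetical, invented for this sketch (the interview gives no actual numbers): a model with a higher per-token price can still be cheaper per answer if it needs far fewer tokens.

```python
# Toy illustration of claim 8: per-token price is the wrong unit.
# All numbers here are hypothetical, not from the interview.

def cost_per_answer(price_per_1k_tokens: float, tokens_per_answer: int) -> float:
    """Cost to produce one complete answer -- the unit customers actually buy."""
    return price_per_1k_tokens * tokens_per_answer / 1000

# Hypothetical: the newer model charges 2x per token but needs 4x fewer tokens.
old_model = cost_per_answer(price_per_1k_tokens=0.010, tokens_per_answer=8000)  # $0.08
new_model = cost_per_answer(price_per_1k_tokens=0.020, tokens_per_answer=2000)  # $0.04

assert new_model < old_model  # "best unit of intelligence at the lowest price"
```

Under these made-up numbers the per-token price doubles while the per-answer cost halves, which is the shape of Altman’s “intelligence factory” framing.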
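Claim 9’s identity question can be sketched as a data-model choice. Everything below is hypothetical (Altman’s point is precisely that no such primitive exists yet); the types just name the design space he describes: impersonate the employee, impersonate with an agent flag, or give the agent its own first-class account.

```python
# Hypothetical sketch of the agent-identity options raised in claim 9.
# No real product defines these types; they only name the design space.

from dataclasses import dataclass
from enum import Enum, auto

class IdentityModel(Enum):
    IMPERSONATE_EMPLOYEE = auto()       # agent logs in as the human it works for
    EMPLOYEE_WITH_AGENT_FLAG = auto()   # same credentials, requests marked agent-originated
    FIRST_CLASS_AGENT_ACCOUNT = auto()  # agent has its own principal, permissions, audit trail

@dataclass
class AgentPrincipal:
    agent_id: str
    acting_for: str                 # the employee who delegated the work
    identity_model: IdentityModel
    scopes: tuple[str, ...]         # explicit, auditable per-agent permission grants

ticket_bot = AgentPrincipal(
    agent_id="agent-0042",
    acting_for="alice@example.com",
    identity_model=IdentityModel.FIRST_CLASS_AGENT_ACCOUNT,
    scopes=("tickets:read", "tickets:comment"),
)
```

The first-class-account option is the only one that gives auditors a clean per-agent trail, which is why the substrate vendors (who own identity and permissions in this stack) care about the answer.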

Mapping against Ray Data Co

This is load-bearing for RDCO positioning. Today we established that RDCO’s agent-deployer thesis sits one layer above the substrate fight: substrates compete to be the runtime that enterprises deploy agents into; RDCO deploys agents into client operating workflows regardless of which substrate the client is on. The Stratechery interview is the cleanest primary-source articulation yet of that substrate fight, with concrete implications:

Implication for Sanity Check: today’s interview deserves its own original re-frame, not a derivative summary. Candidate angle: “Sam Altman just admitted the harness is the moat. The substrate vendors are building it for you. Should you let them?” — sets up state-ownership-vs-substrate-lock-in as the operator’s decision, not a vendor pitch.

Implication for the agent-deployer positioning doc: the substrate-layer competitive map should be added as a section. Today: Anthropic (model+harness, vertical), OpenAI-on-Bedrock (model from one, substrate from another, co-built), Google (full-stack vertical per Kurian), DIY (LangChain/agent-frameworks/in-house). RDCO sits above all four.

Sponsorship

sponsored: false. Stratechery is subscription-funded, with no third-party sponsors in its interview format. Worth flagging that Thompson is openly bullish on this kind of integration (he repeatedly tees up Altman to confirm his harness-as-moat theory), but the bias is intellectual-thesis alignment, not financial sponsorship.

All claims paraphrased from the email body received 2026-04-28. Direct quotes kept to ≤15 words each, in quotation marks (e.g., claim 1: Altman on harness/model inseparability). Source: Stratechery Interview, 2026-04-28.