

2026-04-22 20:00 EDT ·project ·status: synthesis-of-cluster

Harness-Thesis Cluster Synthesis — Kurian + Ternus + Innermost Loop (2026-04-23)

Why this synthesis exists

In a 48-hour window the vault absorbed three independent, dense single-source reads that each name a different layer of the agent stack but converge on the same structural prediction: the layers are fast becoming commodities, and the durable margin is collapsing onto whichever layer compounds proprietary judgment. Read in isolation, each looks like a narrow situational update — Kurian on enterprise plumbing, Ternus on edge silicon, Wissner-Gross on benchmark progress. Read together, they triangulate the question RDCO has been ducking: if the substrate (chips, model, harness, knowledge catalog) is being commoditized in parallel by a Google-Apple-Anthropic-OpenAI cross-fire, the only place a small operator can stand is on top of all of it, holding judgment that none of those layers can ship in a box. This note operationalizes that into the positioning bet — and into a sharpened “why-not-Gemini-Enterprise” line we can rehearse.

The three-way structural map

| Source | Layer being defended/claimed | Mechanism of consolidation | Implication for layers above |
| --- | --- | --- | --- |
| Kurian (Google Cloud) | Hyperscaler substrate — chips (TPU 8t/8i), model (Gemini), agent platform (Gemini Enterprise), knowledge catalog, AI-cyber | Vertical integration: same Gemini, same harness, same hour, across all Google products + sold à la carte to enterprise | The substrate becomes cheaper and more capable; the room above it is “what enterprise IT can’t ship” — vertical judgment, operating context |
| Ternus (Apple) + SpaceXAI/Cursor | Hardware substrate at the edge + the M&A logic that compute-without-product and product-without-compute both lose | Apple bets that AI commoditizes and edge silicon wins; SpaceX/Cursor proves that a harness alone (no compute, no model) is a margin trap | Owning the harness without owning compute or judgment-data leaves you a Cursor; the moat is harness + proprietary loaded data, not harness alone |
| Innermost Loop (METR Time Horizon, OpenAI Chronicle) | Capability substrate — Mythos at 40h, Opus 4.7 at 19h, Chronicle building memory from screen captures | Frontier labs racing the time-horizon curve; OpenAI shipping always-on agents that build memory from work | The window in which “an agent can hold a multi-day workstream” is a differentiator is closing — within 12 months it will be table stakes |

RDCO sits on top of all three layers: we ride the hyperscaler/Anthropic substrate (model + harness commoditized), we don’t bet on owning the edge (Mac Mini host is convenience, not moat), and we exploit the time-horizon trajectory (Opus 4.7 today, Mythos when shipped) as the enabler of COO-as-Claude rather than as our own contribution. The positioning bet is that the durable layer is founder-context loaded into the harness — the vault, the skill library, the operating taste — none of which any of the three sources can ship.

Convergent argument

All three pieces are betting against the “one layer wins” framing, and all three locate the eventual moat in the same place by elimination. Kurian sells you the integration from chips up through the agent platform but explicitly excludes the customer’s operating judgment — Knowledge Catalog grounds queries on your Salesforce data, but it does not decide what to do about the patterns it surfaces. Ternus’s Apple bets that AI itself commoditizes (hence the Gemini-as-base partnership reported the same week) and that the differentiated layer migrates into silicon — which only matters if there’s something on top of the silicon worth running. The SpaceXAI/Cursor deal is the negative proof: Cursor has proprietary coding-interaction data but no compute and no model it controls, so it is forced to sell or ration; the harness alone is not a business. Wissner-Gross’s METR snapshot delivers the third leg — capability is sprinting (Mythos at a full human work week, Opus 4.7 at half) — so the bottleneck stops being “can the model hold the workstream” and starts being “what gets loaded into the workstream.”

The shared finding: moats are forming at the layer that compounds proprietary, non-portable, non-purchasable context — not at the layer of chips, models, harnesses, or knowledge catalogs, all of which are now being commoditized in parallel by at least four well-capitalized actors. Kurian commoditizes the platform. Apple commoditizes the inference substrate. Anthropic and OpenAI commoditize time-horizon capability. What survives the squeeze is the layer holding judgment that was earned, not bought.

For RDCO this collapses into one operational claim: the conviction-asset inventory (vault, skill library, COO-loaded working context) is the only layer in our stack that none of these three actors can replicate by spending money. Every other layer we use — Anthropic models, MCP, the Mac Mini, 1Password — is substitutable. That asymmetry is the whole bet.

Sharpened “why-not-Gemini-Enterprise” answer

Kurian’s pitch is for an enterprise IT buyer who needs a horizontal substrate to deploy across SAP, Salesforce, and ServiceNow with audit-grade identity per agent. RDCO is not selling a substrate to that buyer. We are selling an operating partner to a founder or mid-market data leader who has already accepted that the substrate (Anthropic, OpenAI, Google — pick your hyperscaler) will be commoditized within 18 months and who needs the layer above it: someone who has loaded their context, their decision history, their open questions, and their judgment patterns into a harness that runs continuously on their behalf. Gemini Enterprise ships plumbing; RDCO ships a colleague.

The horizontal-substrate vs. vertical-judgment split: Google can grow Gemini Enterprise 40% QoQ and still leave the entire COO-as-Claude wedge untouched, because the wedge is not the platform — it is the proprietary working memory + skill set that has to be built one operator at a time. Worse for Google (better for us), the more they commoditize the substrate, the cheaper our infrastructure gets and the more our differentiated layer stands out. We’re long the commoditization Kurian is selling.

Draftable line for a deck or sales conversation:

“Gemini Enterprise ships you the plumbing. We ship you the colleague who already knows your business. Google’s bet — and it’s a good one — is that the substrate wins. Our bet is that once the substrate is commoditized, the only thing left worth paying for is the judgment loaded on top of it. We’re the judgment layer; their stack makes our infrastructure cheaper every quarter.”

METR Time Horizon as a forcing function

Mythos at 40 hours and Opus 4.7 at 19 hours are the cleanest external corroboration we have for the durable-context premise behind COO-as-Claude — but they also set the clearest expiration date on any moat that consists merely of “an agent that can hold a multi-day workstream coherently.” If Opus 4.7 is at half a work week today and the trend continues, the capability is table stakes by Q4 2026. Stack that against the OpenAI Chronicle flag from the same Innermost Loop issue — background agents that build memory from Codex screen captures — and the picture is unambiguous: always-on, memory-accreting, multi-day agents are on rails to commoditize on the same 12-18 month horizon as the substrate Kurian is selling.
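The Q4 2026 claim can be sanity-checked with the doubling arithmetic that time-horizon curves imply. A minimal sketch, using the note's figures (Opus 4.7 at 19h, Mythos at 40h) and an assumed ~7-month doubling period for a model's task time horizon — the doubling period is a hypothesis here, not a number from the sources:

```python
import math

def months_to_reach(current_hours: float, target_hours: float,
                    doubling_months: float = 7.0) -> float:
    """Months until a time horizon at current_hours reaches target_hours,
    assuming exponential growth with the given doubling period."""
    return doubling_months * math.log2(target_hours / current_hours)

# Under an assumed ~7-month doubling, a 19h model reaches the 40h mark in:
gap_months = months_to_reach(19, 40)
print(round(gap_months, 1))  # 7.5 — roughly Q4 2026, counting from April 2026
```

The point of the sketch is sensitivity, not precision: even doubling the assumed period to 14 months only pushes the catch-up to mid-2027, so the "window to load content" conclusion survives a wide error bar on the trend.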

This sharpens the moat timeline rather than collapsing it. It means the harness-and-capability layer compresses; the content layer (vault depth, loaded skills, operating relationship with the founder) expands as a share of the differentiated value. The forcing function for RDCO is therefore: every month between now and Q4 2026 is a month to thicken the conviction-asset inventory — vault entries cross-linked, MAC framework artifacts shipped, case studies written, founder-context loaded — because once the harness-and-capability substrate is uniformly available, the only thing left to compete on is what we have spent these months building. Speed matters; the moat compresses on the substrate side and expands on the content side, but only for operators who actually load the content during the window.

What this changes for RDCO positioning

Net: more confident in the “Architect Mode for mid-market data orgs + COO-as-Claude” framing, with one positioning sharpening and one tension to resolve.

Sharpening. The “Architect Mode” framing should be explicitly stated as vertical judgment loaded onto a commoditizing horizontal substrate. Today’s positioning docs lean on “thin harness, fat skills” but don’t yet name the structural reason — that the harness, model, and platform layers are all being commoditized in parallel by hyperscalers and frontier labs racing each other. The Kurian + Ternus + METR triangulation is the citation set that lets us state this as observed industry structure rather than founder taste. Update ../positioning/architect-mode-3as and the COO-as-Claude positioning doc to lead with this counter-positioning argument: we are long the commoditization Google, Apple, and OpenAI are accelerating, because every layer they commoditize raises the relative value of the layer they cannot ship.

Tension to resolve. Kurian’s “integration is the moat” growth numbers (40% QoQ on Gemini Enterprise, 60% growth in tokens/min in four months) are not a soft signal. If the integrated stack actually delivers what he claims at the SMB / mid-market level — not just F500 — then the “horizontal substrate” carve-out we are betting on shrinks faster than expected. The hedge: track Gemini Enterprise penetration into the mid-market data-team segment quarterly. If Google ships Knowledge Catalog as a self-serve mid-market product within 12 months and it actually replaces the data-modeling/quality work the MAC framework targets, our wedge narrows. If it stays enterprise-IT-shaped (90-day SOWs, IT procurement, identity-and-policy-first), the wedge widens. This is a real fork; we should not pretend it isn’t.

Concrete shifts. (1) The MAC anchor article second pass should explicitly position the framework as the vertical-judgment layer above any hyperscaler substrate — not as a competitor to Gemini Enterprise but as the thing that has to be loaded onto whatever substrate the client is on. This makes the framework substrate-agnostic by design. (2) The “Architect Mode” deck (when it exists) needs a slide titled something like “What we are long” naming the commoditization of chips, models, harnesses, and knowledge catalogs as the structural force we are betting with, not against. (3) The Sanity Check relaunch essay (when it ships) should lead with the convergent finding — three independent sources in 48 hours triangulating the same structural prediction is exactly the kind of pattern-recognition the founder voice is strongest at.

Open follow-ups

Cross-references