“The Half-life of a Moat (Part 2): Dig Faster” — Jonathan Natkins
Why this is in the vault
Part 2 of Natkins’s promised Seven-Powers-through-agents series (Part 1: 2026-04-14-semistructured-half-life-of-a-moat-part-1). Part 1 asked how moats dissolve. Part 2 answers “what do you build instead?” and converges on a framework — perpendicular vs. parallel companies — that is directly load-bearing for RDCO positioning. Natkins also folds in recent live-fire events (Claude Design shipping against Figma on Apr 14; Anthropic hiring Workday’s CTO; Anthropic posting vertical EM roles in financial services, legal, healthcare, life sciences) as evidence that frontier labs are now walking into application categories simultaneously, not just racing each other.
Disclosure inside the piece: Natkins explicitly names ClickHouse as his employer when using it as a “perpendicular” example. He also cites a public exchange with Fergal Reid (Intercom AI lead) that he argues through in the essay. No external sponsor or “brought to you by” block — the only ask is a Substack subscribe CTA.
The core argument
Claim. The durable question for any AI builder is: “If Anthropic’s next model is 10x better, does my company become more or less valuable?” That question sorts companies into two buckets:
- Parallel companies run in the same direction as model improvement. Every frontier release is an existential threat dressed up as a product launch: coding agents, search wrappers, content generation, generic chatbot UIs. Their value proposition shrinks as the frontier advances.
- Perpendicular companies run orthogonal to model improvement. Their products get more valuable as models get better, because better models generate more demand for the thing they sell. His canonical examples are infrastructure beneath the model: telemetry storage (ClickHouse), observability and evals (Langfuse), the data layer underneath agents.
Why the four moats people bet on don’t hold.
- The data flywheel (Intercom’s Apex, Cursor’s Composer 2, Decagon Labs). McCabe argues completion/resolution data is a durable signal; Natkins argues frontier labs — with direct access to vastly more completions across every domain — are accumulating the relevant data faster and can bake it into the base model, not just a fine-tune. And with recursive self-improvement on the table, Anthropic/OpenAI have strategic incentive to dominate coding far beyond the revenue opportunity. “That’s a really fucking rough place to be if you’re betting your moat on a fine-tuned model.”
- Fine-tuned vertical models. You fork an open-weight model and marry yourself to its architecture. When the base upgrades, you don’t benefit — you retrain and hope your pipeline transfers. Fergal Reid’s counter (the durable asset is the post-training pipeline, not any specific Apex build) is steelmanned, but Natkins closes it with a dilemma: either the task is not intelligence-saturated (the frontier will eventually beat your fine-tune), or it is (many models clear the bar, so the model isn’t the moat — the product is). Either way, the model is not your moat. Where fine-tuning does work: use cases narrow enough that the frontier has no economic incentive to optimize for them (rare-dialect translation for government intel, niche science).
- Speed to market / viral B2B adoption. Cursor $2B, Replit projecting $1B by EOY 2026, Lovable $100M in eight months. “B2B companies with B2C adoption curves.” But the growth engine is structurally fragile: you are a tenant on the inference of the company now shipping your competitor. Users adopted you virally and can abandon you virally. This is why buyers are being pushed into multi-year commitments — it buys the vendor time to excavate a moat or at least reach the next round.
- AI-native UX. Genuinely better than sparkle-emoji retrofits — but if agents increasingly become the primary interface, the UX layer goes vestigial regardless of how well it was designed. “World’s best buggy whip.”
The meta-moat is execution velocity. Not a moat in the traditional sense. “You don’t need to outrun the bear. You just need to outrun the other hikers.” Process power applied at a tempo that matches the rate of change in AI.
Live-fire evidence. Anthropic’s mid-April burst (Mythos private preview, Opus 4.7, Claude Design) sorted winners from losers in six trading hours: Figma -7%, Adobe dipped quietly, Jasper’s story “already told.” Meanwhile ClickHouse and Langfuse read the same week as tailwind.
Uncomfortable coda for equity-holders. Three questions for anyone holding AI-company equity: (1) perpendicular or parallel? (2) is the team executing faster than the moat is decaying? (3) is your exit timeline shorter than the moat’s half-life? “Nobody at the company is going to run that math for you.”
Mapping against Ray Data Co
This is the synthesis move Part 1 promised, and it lands squarely on the RDCO thesis. Natkins’s perpendicular/parallel test is the same question our ../04-tooling/rdco-state-ownership-architecture doc has been asking in different words. Strong mapping — this is a tracked-author win.
1. Direct reinforcement of the state-ownership architecture. RDCO’s bet is that the durable layer is client-owned state + stateless reasoning. Natkins’s perpendicular examples — infrastructure that sits underneath the agent (telemetry, observability, evals, data layer) — are structurally the same bet. More agents in production means more data, means more need for the layer that stores and queries that data. The state RDCO helps clients accumulate and own is perpendicular to model improvement in exactly Natkins’s sense: better models make the client’s accumulated state more valuable, not less. Connects to:
- concepts/externalized-cost (CA-017) — the wrapper companies Natkins calls “parallel” are systematically externalizing the cost of model improvement back onto their users/investors in the form of half-life risk. Client-owned state is the internalized-cost architecture; vendor-mediated state is the externalized-cost architecture.
- synthesis-harness-thesis-dissent-2026-04-12 — Natkins’s “execution velocity as the meta-moat” is the same observation as the thin-harness-fat-skills camp: the durable asset is the operating muscle of composing small advantages fast, not any individual component.
- 2026-04-11-garry-tan-thin-harness-fat-skills — Tan says skills are the fat part because they encode the proprietary expertise the founder/team actually owns. Natkins’s “cornered resource + process power” is the framework-level version of that claim.
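The state-ownership bet above can be made concrete with a minimal sketch. This is an illustration only, assuming a simple split between a client-owned store and a swappable stateless model; every name here (`ClientStateStore`, `run_task`) is hypothetical, not RDCO’s or Natkins’s actual design.

```python
# Minimal sketch of the client-owned-state / stateless-reasoning split.
# All names are hypothetical illustrations, not an actual RDCO API.
import json
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class ClientStateStore:
    """State the client owns outright: durable, model-agnostic records."""
    records: list = field(default_factory=list)

    def append(self, record: dict) -> None:
        self.records.append(record)

    def export(self) -> str:
        # Plain JSON the client can take anywhere; no vendor-mediated state.
        return json.dumps(self.records)


def run_task(store: ClientStateStore, model: Callable[[str], str], query: str) -> str:
    """Stateless reasoning: the model sees exported context, keeps nothing."""
    context = store.export()
    answer = model(f"context={context} query={query}")
    store.append({"query": query, "answer": answer})
    return answer


# Swapping in a 10x-better model leaves the accumulated state untouched:
store = ClientStateStore()
run_task(store, lambda p: "v1-answer", "q1")
run_task(store, lambda p: "v2-answer", "q2")  # different "model", same store
assert len(store.records) == 2
```

The point of the sketch is the direction of the dependency: the model is a replaceable argument, while the store compounds across model generations, which is the perpendicular bet in one function signature.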
2. The consulting-brand test Part 1 raised gets sharper in Part 2. Part 1 worried that “nobody gets fired for buying McKinsey” is on a half-life as procurement becomes algorithmic. Part 2 refines the test: RDCO is brand-exposed (parallel-ish) if what we sell is the consulting deliverable, and infrastructure-exposed (perpendicular) if what we sell is the client-owned operating architecture. The founder’s repeated framing — “the data layer does the work” (Natkins Mar 31), stewardship retainer not framework sale — already routes us to the perpendicular side. Part 2 is confirmation, not correction.
3. Direct tension with the “entangled system” pricing story. Natkins is explicit that fine-tunes, datasets, and viral B2B adoption all have half-lives shorter than founders want to believe. This sharpens the Part 1 caveat about Moura’s entanglement thesis: RDCO engagements priced as “the system compounds in value” are implicitly betting the entanglement half-life is long. Natkins would say: price and structure as ongoing maintenance of a depreciating asset, with the perpendicular layer (client-owned data, client-owned eval harnesses) as the durable piece that survives model upgrades. See 2026-04-13-moura-entangled-software-agent-harnesses-dead for the Moura side of this tension.
4. “Perpendicular vs. parallel” as a Sanity Check content hook. This is the cleanest one-frame synthesis we’ve seen from the moat-debate cohort (Gupta, Moura, Tan, Thompson, Natkins Part 1, CommonCog series). Natkins gives us the frame + the live-fire case study (Claude Design vs. Figma, Apr 14) + the three equity-holder questions as a closing provocation. This is ready-to-ship research-brief material for an issue on “which AI moats are real” — or a standalone issue on the perpendicular/parallel test as the one question that matters.
5. Concept-page trigger. The vault has concepts/externalized-cost but no concept page for perpendicular-vs-parallel or moat-half-life as a durable frame. Natkins has now delivered it across two posts with a named framework and live case studies. This meets the threshold for a new concept article in 06-reference/concepts/ — the founder’s call on whether to promote.
Where Natkins is strongest and weakest in Part 2
Strongest: the perpendicular/parallel reframe is genuinely clarifying, and the Claude Design / Figma case study (six trading hours, -7%) is a better evidence unit than any argument about what moats might dissolve in the future. The Fergal Reid exchange is also intellectually honest — he steelmans the pipeline-as-moat argument before closing the dilemma.
Weakest: “execution velocity as the meta-moat” is close to a tautology — “whoever runs fastest wins” is always true in a commoditizing market. It doesn’t answer the Part 1 question of what asset execution actually compounds. Natkins implicitly answers “nothing permanent — keep running,” which is consistent with his half-life framing but leaves unresolved what durable capital the fast runner accumulates. Ray Data Co’s answer (client-owned state + harness muscle) is a more specific claim about what survives.
Also worth flagging: Natkins is ClickHouse-employed and names it as a perpendicular example. This is disclosed inline, not hidden, but readers should note the frame was developed by someone whose employer sits neatly on the perpendicular side of it.
Related
- 2026-04-14-semistructured-half-life-of-a-moat-part-1 — Part 1 (how moats dissolve); Part 2 directly answers its cliffhanger
- 2026-03-31-semistructured-data-layer-does-the-work — Natkins’s earlier “data layer does the work” piece; Part 2 is the framework-level elaboration of the same bet
- 2026-04-13-jaya-gupta-ai-lock-in-state-moat — Gupta’s four-layer state typology; the operational map beneath Natkins’s perpendicular/parallel frame
- 2026-04-13-moura-entangled-software-agent-harnesses-dead — Moura’s entanglement thesis; in tension with Natkins’s “viral adoption is structurally fragile” argument
- 2026-04-14-stratechery-openai-memos-anthropic — Thompson on the Dresser memo / Frontier Alliance; the primary source showing vendors racing against exactly the half-life Natkins describes
- 2026-04-11-garry-tan-thin-harness-fat-skills — Tan’s thin-harness-fat-skills; the skills layer is Natkins’s “cornered resource + process power”
- synthesis-harness-thesis-dissent-2026-04-12 — the running dissent synthesis this should fold into
- concepts/externalized-cost (CA-017) — parallel companies externalize model-improvement cost onto users; perpendicular companies internalize the benefit
- ../04-tooling/rdco-state-ownership-architecture — RDCO’s architecture doc; Natkins’s perpendicular/parallel frame is direct confirmation of the bet
- 2026-04-15-dwarkesh-jensen-huang-nvidia-moat — Huang on NVIDIA’s moat; another perpendicular example (infrastructure beneath the models)