06-reference

semistructured half life of a moat part 1

Mon Apr 13, 2026 · reference · source: Semi-Structured (Substack) · by Jonathan Natkins (Natty)

“The Half-Life of a Moat (Part 1)” — Jonathan Natkins

Why this is in the vault

Part 1 of a moat series from Natkins, who already filed one of the strongest “data is the moat” arguments in the vault (2026-03-31-semistructured-data-layer-does-the-work). This piece is the synthesis move: he takes Hamilton Helmer’s Seven Powers framework and runs each power against the reality of AI agents, asking which moats survive and which are being drained. It is effectively the framework-level version of the moat debate we have been tracking across Gupta, Moura, Tan, and Thompson — and it arrives the same day as Thompson’s OpenAI/Frontier piece, which describes vendors racing to build moats in real time.

Core argument

Natkins opens on the Windsurf episode: Google paid $2.4B for a non-exclusive license to Windsurf's technology and for its founders, then launched Antigravity aimed at Windsurf's own user base four months later. That bet only makes sense if there is nothing holding those users in place: no switching costs, no lock-in, no moat. He generalizes from there, arguing this is the defining strategic reality for a generation of AI companies.

He walks Helmer’s seven powers (scale economies, network effects, counter-positioning, switching costs, branding, cornered resource, process power) and sorts them by what agents do to each:

Powers being drained by agents: switching costs, network effects (including the data-network-effect variant), and branding.

Powers that persist or strengthen: scale economies, counter-positioning, process power, and genuinely cornered resources.

The Power Progression inversion. Helmer’s model says scale/network/switching build during Takeoff and branding/process come later, during Stability. Natkins argues AI-native companies invert this: they arrive with process power and brand from day one, but the Takeoff-phase powers never form because agents dissolve them as fast as they are built. He calls this “unprecedented” and explicitly does not know what it means yet — Part 2 is promised.

Mapping against Ray Data Co

Natkins reinforces the state-ownership architecture, with a caveat.

Our ../04-tooling/rdco-state-ownership-architecture doc argues the durable position is client-owned state + stateless reasoning engine — the Databricks counterproposal Gupta names. Natkins’s framework-level analysis supports this from a different angle: of the Seven Powers, the ones being drained are the ones vendors are trying to manufacture inside their closed APIs (switching costs via memory lock-in, branding via institutional trust narratives, data network effects via completion datasets). The ones that persist — scale, counter-positioning, process power, genuinely cornered resources — are orthogonal to the state-capture play Thompson describes OpenAI running with Frontier.

Where Natkins pressures our positioning. His "branding is dead because agents don't care about brand" argument cuts against the consulting-brand play for RDCO. If enterprise procurement becomes algorithmic, the "nobody gets fired for buying MG / BCG / McKinsey" effect Thompson flagged in the Frontier Alliance piece (2026-04-14-stratechery-openai-memos-anthropic) is itself on a half-life. That argues for RDCO to compete on cornered resource (the founder's specific domain expertise, voice, and accumulated vault) plus process power (the compressed-loop operating muscle) rather than on brand alone.

Does Natkins reinforce or contradict Databricks-style state-ownership?

Reinforces — but with a sharper frame than Gupta or Moura provided. Gupta named four layers of state and said the moat forms in those layers. Moura said software and customer become entangled over time. Natkins takes one step back: he accepts that switching costs, data moats, and network effects are exactly what vendors try to build, and argues agents systematically drain them. State ownership under client control is precisely the architecture that does NOT try to manufacture those draining moats — it bets instead on cornered resource (your proprietary data, which you own) and process power (how your team operates), both of which Natkins flags as resilient. So client-owned-state is consistent with the powers that survive and orthogonal to the ones dying.

Caveat and contradiction with Moura. Moura’s “entangled software” thesis implies that accumulated customer-product intertwinement becomes its own moat. Natkins is more skeptical of this framing — his “data network effect” critique applies directly: the flywheel requires manual cranking each turn, and every frontier release can devalue what was accumulated. This is a real tension with the RDCO consulting pitch that “the entangled system grows in value over time.” If Natkins is right, entanglement is real but its half-life is shorter than Moura implies, which means stewardship engagements need to be priced and structured as ongoing maintenance of a depreciating asset, not as a one-time flywheel install.

Cornered resource, sharpened. Natkins’s clearest test — “is this resource genuinely unavailable to competitors?” — is the bar we should hold RDCO’s vault against. The vault’s 1,400+ documents are cornered only so long as the specific founder-curated assessments and cross-references cannot be reproduced from public sources by a competitor with the same model access. That is a real moat for THIS founder and THIS business context, but it is not transferable — which aligns with the “stewardship retainer, not framework sale” engagement model.

Content implication. This is strong raw material for a Sanity Check issue on “which AI moats are real.” Natkins provides the framework (Seven Powers filtered through agents), Gupta provides the taxonomy (four layers of state), Thompson provides the primary source (Dresser memo showing vendors racing to build moats in real time), and Moura provides the philosophical framing (entangled software). A synthesis piece from those four, with Part 2 as the follow-up hook when it drops, is a near-complete arc.

Where Natkins’s argument could be weakest

  1. “Agents don’t care about brand” may underweight human-in-the-loop procurement. Enterprise buying is not yet algorithmic. Procurement, legal, security review, and renewal decisions are still human. Brand still lubricates those, even if the agent executing the work is brand-agnostic. Full algorithmic procurement is a horizon event, not a current state.

  2. The switching-cost argument assumes agent-to-agent feature parity. When he switched coding agents and “lost nothing,” that presumes the agents were substitutable. For deeper integrations — where an agent has accumulated memory, is wired into tickets, calendar, CRM — the lift to switch is still real. His framing is true for thin-shell tools and understates the cost for entangled ones, which is Moura’s whole argument.

  3. Part 1 promises Part 2 but does not deliver the answer. The essay ends on “so what do you build?” as a cliffhanger. That is fine as a newsletter device but means we should not over-weight Part 1 as a complete argument — hold open a Part 2 follow-up assessment.