

2026-04-12 (Sun, 20:00 EDT) · architecture-decision · status: active

RDCO State-Ownership Architecture

The architectural principle that makes RDCO durable against model switches, harness upgrades, and vendor consolidation — and the positioning that differentiates RDCO consulting engagements from “sign up for a vendor” alternatives.

The Principle

We keep the state. We keep the data. We use the harness and model to update our state. Vendors may see a slice of our state temporarily during inference, but they cannot and should not own it.

This is the Salesforce model applied to AI operating systems. Salesforce helped companies track their relationships but never tried to own those relationships. Customer data lived in Salesforce tables the customer controlled. That distinction made the license defensible without being predatory.

AI vendors are trying something different: own both the execution environment AND the state that accumulates inside it, behind APIs that make ownership ambiguous. Anthropic’s Cowork, OpenAI’s Enterprise, Microsoft Copilot, Google Gemini for Workspace — all are designed so that the valuable state (memory, organizational context, human-AI patterns) accumulates in vendor-controlled systems.

We reject that. Our architecture enforces customer ownership at every state layer.

The Four Layers of State (per Gupta)

We design around all four layers of state explicitly, and we own all four:

1. Behavioral State

What it is: How our production systems are calibrated to model behavior (prompts, parsers, evaluation logic, orchestration patterns).
Where it lives: ~/.claude/skills/ and ~/.claude/scripts/ — our own codebase.
Why this matters: We can swap models without rewriting this layer. The skills are written to interfaces, not to Claude-specific quirks.
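What "written to interfaces" means in practice can be sketched in a few lines. The names below are illustrative, not our actual skill code: the point is only that a skill depends on a narrow interface, so any backend that satisfies it can be swapped in without touching the skill.

```python
from typing import Protocol


class ChatModel(Protocol):
    """The narrow interface our skills are written against."""
    def complete(self, prompt: str) -> str: ...


class StubModel:
    """A vendor backend only has to satisfy the interface."""
    def __init__(self, name: str):
        self.name = name

    def complete(self, prompt: str) -> str:
        return f"{self.name}: {prompt}"


def summarize_skill(model: ChatModel, text: str) -> str:
    """A 'skill': calibrated to the interface, not one vendor's quirks."""
    return model.complete(f"Summarize: {text}")


# Swapping vendors is a one-line change at the call site.
print(summarize_skill(StubModel("claude"), "q3 notes"))  # → claude: Summarize: q3 notes
print(summarize_skill(StubModel("gpt"), "q3 notes"))     # → gpt: Summarize: q3 notes
```

Nothing about the skill changes when the backend does; that is the behavioral-state portability claim in miniature.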

2. Memory State

What it is: What the agent has done, learned, been told — episodic, semantic, procedural memory across sessions.
Where it lives: ~/rdco-vault/ (1,419 documents), ~/.claude/state/working-context.md (durable scratchpad), ~/.claude/projects/*/memory/ (auto-memory).
Why this matters: All of this is plain markdown we can read, edit, version-control, and migrate. QMD is a local search index we control.
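Because the memory layer is plain markdown, nothing proprietary is needed to read or migrate it. A naive full-text scan makes the point (illustrative only; QMD is the real index, and its API is not described here — this demo uses a throwaway directory rather than the actual vault paths):

```python
from pathlib import Path
import tempfile


def search_vault(root: Path, term: str) -> list[Path]:
    """Naive full-text search over a vault of plain markdown files.
    Shows that the format needs no proprietary tooling to read."""
    term = term.lower()
    return sorted(
        p for p in root.rglob("*.md")
        if term in p.read_text().lower()
    )


# Demo against a throwaway vault; the real state lives in ~/rdco-vault/ etc.
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "note-a.md").write_text("decision: keep the state")
    (root / "note-b.md").write_text("meeting notes")
    hits = search_vault(root, "state")
    print([p.name for p in hits])  # → ['note-a.md']
```

Any tool that can read files can read this layer, which is exactly what makes it migratable.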

3. Organizational Context State

What it is: The accumulated understanding of how Ray Data Co operates — decisions, project history, relationships, patterns.
Where it lives: Notion task board (our own Notion workspace), vault 01-projects/, 05-meetings/, 03-contacts/.
Why this matters: Notion API is portable. The board schema is documented. We could migrate to Linear or Airtable in days if needed.
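The migration claim is testable in miniature. Notion's query endpoints use cursor pagination (`start_cursor` / `next_cursor`); the sketch below abstracts the HTTP call into a hypothetical fetch_page function, so the drain loop — the part that makes a full export routine — is explicit and vendor-neutral:

```python
import json
from typing import Callable, Optional

# fetch_page(cursor) -> (rows, next_cursor); next_cursor is None on the last page.
FetchPage = Callable[[Optional[str]], tuple[list, Optional[str]]]


def export_board(fetch_page: FetchPage) -> list:
    """Drain a cursor-paginated API (Notion-style) into one local list."""
    rows, cursor = [], None
    while True:
        page, cursor = fetch_page(cursor)
        rows.extend(page)
        if cursor is None:
            return rows


# Fake three-page API standing in for the real HTTP calls.
pages = {None: (["t1", "t2"], "c1"), "c1": (["t3"], "c2"), "c2": (["t4"], None)}
tasks = export_board(lambda cursor: pages[cursor])
print(json.dumps(tasks))  # the export is just data we can load anywhere
```

The same loop, pointed at a different paginated API, re-imports the board into Linear or Airtable; that is what "portable" means here.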

4. Human-AI State

What it is: The model of how Ben thinks, reasons, and works — built through hundreds of hours of interaction.
Where it lives: SOUL.md, auto-memory files, working-context, cross-referenced in every vault entry.
Why this matters: This is the hardest to replicate but also the most personal. It lives in files we own, not in a vendor’s fine-tune.

The Inference Contract

When Claude (or any model) reads our state to do work, we treat it as borrowed access, not ownership transfer.

This is explicitly the Databricks counterproposal from Gupta’s framework: “Episodic memory, semantic memory, and organizational context live in enterprise-governed infrastructure, with the LLM treated as a stateless reasoning engine that reads from persistent state at inference time.”
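The contract can be sketched minimally. In this illustration, call_model is a stub standing in for any vendor inference API, and the file names are hypothetical; the shape is what matters: the model reads owned state at inference time, and the output lands back in files we govern, never in vendor memory.

```python
from pathlib import Path
import tempfile


def call_model(prompt: str) -> str:
    """Stand-in for any vendor inference API: context in, text out, nothing retained."""
    return f"done ({len(prompt)} chars of borrowed context)"


def run_task(task: str, state_files: list[Path], log: Path) -> str:
    # 1. Read persistent state from files we own.
    context = "\n\n".join(p.read_text() for p in state_files if p.exists())
    # 2. The model borrows that slice for a single inference call.
    result = call_model(f"{context}\n\nTask: {task}")
    # 3. The output lands back in state we govern.
    with log.open("a") as f:
        f.write(result + "\n")
    return result


# Demo with throwaway files; the real scratchpad is ~/.claude/state/working-context.md.
with tempfile.TemporaryDirectory() as d:
    ctx = Path(d) / "working-context.md"
    ctx.write_text("project: rdco")
    out = run_task("summarize", [ctx], Path(d) / "log.md")
    print(out)
```

Swap the stub for a real API client and nothing else changes: the persistence boundary stays on our side of the call.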

We arrived at this architecture by instinct (Ben’s data engineering discipline) before we had the strategic framework (Gupta’s state typology). The framework confirms the choice was right.

What this rules IN

Plain-text state in files we own and version-control, portable APIs with documented schemas we can migrate away from, local search indexes under our control, and models used as stateless reasoning engines that read our state at inference time.

What this rules OUT

Vendor-hosted memory features, fine-tunes that absorb our organizational context, and any harness capability whose accumulated state we cannot inspect, export, or migrate.

Consulting Implication

This is RDCO’s differentiated positioning when we help clients build their own agentic systems.

The wrong engagement: “We’ll get you set up on [vendor’s enterprise AI product]. They handle the memory, the skills, the orchestration. You just use it.”

This engagement fails the state ownership test. The client’s organizational state accumulates in a system they cannot inspect, export, or govern. When the vendor changes pricing, gets acquired, or changes direction, the client has no leverage.

The RDCO engagement: “We’ll help you build your own AI operating system. You’ll own every layer of state. Models become a commodity choice. Harnesses become replaceable infrastructure. The state we help you accumulate is yours — permanently, exportably, inspectably.”

The pricing model that fits: a long-term stewardship retainer, not a one-time build. The value grows as state accumulates, and the client is unlikely to leave because they would lose the entangled system — but they could leave at any time, because the state is theirs.

The Salesforce Parallel (Full)

Salesforce built a $200B+ company on a specific contract with customers:

  1. Customer owns the CRM data
  2. Salesforce provides the execution environment
  3. Customer can export at any time (though few do, due to switching costs)
  4. Salesforce gets paid for the environment, not the data

This contract is what made Salesforce durable. Competitors offering cheaper CRM couldn’t beat it because customers didn’t actually want to leave — but they could if they needed to. The optionality is what made the relationship trustworthy.

Anthropic and OpenAI are currently trying to own both the environment AND the state. That’s a stronger short-term position but a weaker long-term one — because customers eventually notice, and regulators eventually intervene. RDCO bets on the Salesforce-style durable contract.