“Your Harness, Your Memory” — Harrison Chase
Why this is in the vault
The LangChain CEO’s stake-in-the-ground on harness permanence and memory lock-in. Chase makes two arguments that matter for RDCO: (1) harnesses are not a temporary scaffolding phase — they are permanent and growing, and (2) memory is inseparable from the harness, which means closed harnesses create dangerous vendor lock-in. This is both a genuine architectural argument and a competitive positioning move for LangChain’s open-source Deep Agents product.
Core arguments
1. Harnesses are permanent
Chase traces the evolution: simple RAG chains (LangChain) → complex flows (LangGraph) → agent harnesses (Claude Code, Deep Agents, Codex, etc.). He directly rebuts the “models will absorb the scaffolding” argument:
- The 2023-era scaffolding did become unnecessary, but it was replaced by new scaffolding rather than eliminated.
- Evidence cited: Claude Code’s leaked source was 512k lines of code. “Even the makers of the best model in the world are investing heavily in harnesses.”
- Web search “built into” model APIs is not part of the model — it’s a lightweight harness behind the API orchestrating tool calls.
2. Memory is the harness, not a plugin
Citing Sarah Wooders (CTO, Letta): “Asking to plug memory into an agent harness is like asking to plug driving into a car.” The harness is responsible for:
- Short-term memory (conversation messages, tool results)
- Long-term memory (cross-session learning, user preferences)
- Context management decisions: what loads into CLAUDE.md, what survives compaction, how skill metadata is presented, how filesystem state is exposed
Memory is still in its infancy — no common abstractions exist yet. This means the harness and memory are tightly coupled by necessity.
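Chase's coupling claim can be made concrete with a minimal sketch (all names here are hypothetical; this is not LangChain or Claude Code API): a harness object that owns the short-term message buffer, the long-term preference store, and the compaction policy, so none of the three can be detached from it.

```python
from dataclasses import dataclass, field

@dataclass
class HarnessMemory:
    """Hypothetical sketch: the harness owns every memory tier Chase lists."""
    messages: list = field(default_factory=list)     # short-term: conversation + tool results
    preferences: dict = field(default_factory=dict)  # long-term: cross-session learning
    max_messages: int = 6                            # compaction threshold (arbitrary)

    def record(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})
        if len(self.messages) > self.max_messages:
            self.compact()

    def compact(self) -> None:
        # What survives compaction is a harness policy decision,
        # not a model capability: here, a summary stub plus the last 2 turns.
        summary = f"[summary of {len(self.messages) - 2} earlier messages]"
        self.messages = [{"role": "system", "content": summary}] + self.messages[-2:]

    def learn(self, key: str, value: str) -> None:
        # Long-term memory: in a real harness this would persist across sessions.
        self.preferences[key] = value

mem = HarnessMemory()
for i in range(8):
    mem.record("user", f"msg {i}")
mem.learn("tone", "terse")
```

The point of the sketch is that "plugging in" a different memory would mean replacing `record`, `compact`, and `learn` wholesale, i.e. replacing the harness.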
3. Closed harnesses create memory lock-in
Chase identifies three levels of increasing danger:
- Mild: Stateful APIs (OpenAI Responses API, Anthropic server-side compaction) store state on provider servers. Switching models means losing conversation threads.
- Bad: Closed harnesses (Claude Agent SDK / Claude Code) interact with memory in unknown ways. Artifacts may be created client-side, but their shape and usage patterns are opaque and non-transferable.
- Worst: Entire harness + long-term memory behind an API (e.g., Anthropic’s Claude Managed Agents). Zero ownership or visibility into memory. The provider controls what’s exposed.
Chase argues model providers are incentivized to move more behind APIs because memory creates lock-in that the model alone does not. Example: Codex generates encrypted compaction summaries unusable outside the OpenAI ecosystem.
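The Codex detail illustrates the mechanism: whether a compaction summary is portable depends entirely on its encoding. A toy contrast (not OpenAI's actual scheme; the XOR "encryption" is a deliberately trivial stand-in for provider-held keys):

```python
import base64

def compact_open(messages: list[str]) -> str:
    # Portable: any harness can read and reuse this summary.
    return "summary: " + " | ".join(messages)

def compact_closed(messages: list[str], provider_key: int = 0x5A) -> str:
    # Opaque: without the provider's key, the blob is unusable elsewhere.
    raw = compact_open(messages).encode()
    return base64.b64encode(bytes(b ^ provider_key for b in raw)).decode()

msgs = ["fix bug", "run tests"]
open_blob = compact_open(msgs)      # readable, model-switchable
closed_blob = compact_closed(msgs)  # ecosystem-locked
```

Both blobs carry the same information; only the open one survives a move to another harness or model, which is exactly the lock-in asymmetry Chase describes.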
4. The pitch: Open Memory, Open Harnesses
Chase’s prescription — and LangChain’s product play:
- Memory should be owned by whoever develops the agentic experience
- Harnesses should be model-agnostic and separate from model providers
- LangChain’s Deep Agents is positioned as the answer: open source, model agnostic, uses open standards (agents.md, skills), pluggable storage (Mongo, Postgres, Redis), deployable via LangSmith or any web hosting
Assessment
Strengths:
- The memory-lock-in argument is concrete and well-evidenced. The anecdote about his email agent getting deleted and having to reteach preferences is effective.
- The three-tier lock-in taxonomy (mild/bad/worst) is useful framing for Sanity Check content.
- Correctly identifies that memory is the moat, not the model.
Bias flags:
- High commercial interest. Chase is CEO of LangChain and is launching Deep Agents as a direct competitor to Claude Code, Codex, and Managed Agents. The “open vs closed” framing conveniently positions his product as the virtuous choice. The architectural argument is sound, but the prescription is self-serving.
- Credits Sarah Wooders (Letta), Sydney Runkle, Viv Trivedy, Nuno Campos — all in the LangChain/adjacent ecosystem.
What he doesn’t say: Open-source harnesses still need hosted infrastructure, and LangSmith (LangChain’s deployment platform) is itself a commercial product. “Open harness” ≠ “free harness.”
RDCO mapping
- Sanity Check angle: The memory-as-moat argument is strong newsletter material. Frame as: “The real AI lock-in isn’t the model — it’s your agent’s memory.”
- RDCO architecture validation: Our vault + skills + thin-harness approach is exactly what Chase advocates. We own our memory (local vault, QMD index), use an open harness pattern, and are model-switchable in principle.
- Tension with Garry Tan: Tan’s “Thin Harness, Fat Skills” and Chase’s “Your Harness, Your Memory” agree on architecture but differ in emphasis: Chase stresses the political economy of who controls the harness, while Tan stresses the engineering discipline of keeping it thin.
- Data Dots candidate: The encrypted-Codex-compaction detail is a perfect Data Dot.
Related
- 2026-04-11-garry-tan-thin-harness-fat-skills
- 2026-04-12-cobus-greyling-harness-era-language-shift
- paper-arxiv-2604-08224-agent-harness-study-2026-04-12
- synthesis-harness-thesis-dissent-2026-04-12
- 2026-04-10-akshay-pachaar-agent-harness-anatomy
- cross-check-agent-architecture