06-reference

akshay pachaar agent harness anatomy

Thu Apr 09 2026 · reference · source: X long-form article · by Akshay Pachaar (@akshay_pachaar)

The Anatomy of an Agent Harness — @akshay_pachaar

Why this is in the vault

Founder flagged this alongside the Ramp Latent Briefing paper: “good content for framing the agentic enablement and custom harness development.” This article is the framing piece — it gives us the conceptual map for what a production agent system actually looks like. The Ramp paper is about optimizing one piece of that map (cross-agent memory); this article is the map itself.

Directly relevant to our automated investing 5-agent vision and to the broader Ray Data Co thesis of building infrastructure for agents.

The core insight

The agent is the emergent behavior. The harness is the machinery producing it. When someone says “I built an agent,” they mean they built a harness and pointed it at a model.

The term was formalized in early 2026 but the concept existed before. Anthropic’s Claude Code docs say explicitly that “the SDK is the agent harness that powers Claude Code.” OpenAI’s Codex team uses the same framing. LangChain’s Vivek Trivedy has the best one-liner: “If you’re not the model, you’re the harness.”

Two products using identical models can have wildly different performance based solely on harness design. LangChain demonstrated this on TerminalBench 2.0 — they changed only the infrastructure wrapping their LLM (same model, same weights) and jumped from outside the top 30 to rank 5. A separate research project hit a 76.4% pass rate by having an LLM optimize the infrastructure itself, surpassing hand-designed systems.

The harness is not a solved problem or a commodity layer. It’s where the hard engineering lives.

The Von Neumann analogy

Beren Millidge (2023): a raw LLM is a CPU with no RAM, no disk, no I/O. The context window is RAM (fast but limited). External databases are disk storage (large but slow). Tool integrations are device drivers. The harness is the operating system.

“We have reinvented the Von Neumann architecture.”

Three levels of engineering

Three concentric layers surround the model:

  1. Prompt engineering — crafts the instructions the model receives
  2. Context engineering — manages what the model sees and when
  3. Harness engineering — encompasses both, plus tool orchestration, state persistence, error recovery, verification loops, safety enforcement, and lifecycle management

The harness is not a wrapper around a prompt. It is the complete system that makes autonomous agent behavior possible.

The 12 components of a production harness

Synthesized across Anthropic, OpenAI, LangChain, and the practitioner community. I’m paraphrasing the key points for each in my own words rather than reproducing the article verbatim.

1. The orchestration loop

The heartbeat: Thought → Action → Observation (TAO), aka the ReAct loop. Assemble prompt, call LLM, parse output, execute tool calls, feed results back, repeat until no more tool calls. Mechanically it’s a while loop; the complexity lives in what the loop manages, not the loop itself. Anthropic frames their runtime as a “dumb loop” where all intelligence lives in the model.
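
A minimal sketch of that "dumb loop" (`call_llm` and `execute_tool` are hypothetical stand-ins for a real model API and tool layer, injected here as plain callables):

```python
# Minimal sketch of the Thought → Action → Observation loop.
# `call_llm` returns a dict like {"content": str, "tool_calls": [...]};
# both callables are illustrative, not any framework's real API.

def run_agent(call_llm, execute_tool, messages, max_turns=10):
    """Loop until the model stops requesting tools or the turn cap is hit."""
    for _ in range(max_turns):
        response = call_llm(messages)                  # inference
        messages.append({"role": "assistant", **response})
        tool_calls = response.get("tool_calls", [])
        if not tool_calls:                             # no tools → final answer
            return response["content"]
        for call in tool_calls:                        # act, then observe
            result = execute_tool(call["name"], call["args"])
            messages.append({"role": "tool", "name": call["name"],
                             "content": result})
    return None                                        # turn budget exhausted
```

Everything interesting (context assembly, permissions, compaction) happens inside the callables; the loop itself stays dumb.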

2. Tools

The agent’s hands. Schemas (name, description, parameter types) injected into the LLM’s context so it knows what’s available. The tool layer handles registration, validation, argument extraction, sandboxed execution, result formatting. Claude Code exposes tools in six categories: file ops, search, execution, web, code intelligence, subagent spawning.
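
One way to sketch the registration/validation duties of the tool layer (the decorator, tool name, and schema shape below are all illustrative, not any framework's actual API):

```python
# Sketch of a tool registry: schemas are what the model sees; the
# registry validates arguments before execution. Names are made up.

TOOLS = {}

def register(name, description, params):
    """Register a tool; `params` maps argument name → expected Python type."""
    def wrap(fn):
        TOOLS[name] = {"description": description, "params": params, "fn": fn}
        return fn
    return wrap

def schemas():
    """The list injected into the model's context."""
    return [{"name": n, "description": t["description"],
             "parameters": {k: v.__name__ for k, v in t["params"].items()}}
            for n, t in TOOLS.items()]

def invoke(name, args):
    tool = TOOLS[name]
    for key, typ in tool["params"].items():     # validate before executing
        if not isinstance(args.get(key), typ):
            return {"error": f"bad or missing argument: {key}"}
    return {"result": tool["fn"](**args)}

@register("read_file", "Read a text file from the sandbox", {"path": str})
def read_file(path):
    return f"<contents of {path}>"
```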

3. Memory

Multi-timescale. Short-term = conversation history within a session. Long-term = persists across sessions. Claude Code’s three-tier hierarchy: lightweight index (~150 chars/entry, always loaded), detailed topic files pulled on demand, raw transcripts accessed via search only. Critical design principle: the agent treats its own memory as a “hint” and verifies against actual state before acting. This matches what I do with the working-context bridge notes in our own setup.
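
The tier structure plus the "memory is a hint" rule could look roughly like this (in-memory dicts stand in for the index and topic files; all names and entries are made up):

```python
# Sketch of the three-tier memory pattern: a small always-loaded index,
# topic files pulled on demand, and a hint check that verifies a
# remembered fact against actual state before acting.

INDEX = {"build": "how to build the project", "deploy": "deploy steps"}
TOPICS = {"build": "run make && make test", "deploy": "push to main triggers CI"}

def load_context(query):
    """Always include the index; pull a topic file only when the query matches."""
    ctx = {"index": INDEX}
    for key in INDEX:
        if key in query:
            ctx[key] = TOPICS[key]
    return ctx

def verified(hint, check):
    """Treat memory as a hint: act on it only after `check` confirms it."""
    return hint if check(hint) else None
```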

4. Context management

Where many agents fail silently. The core problem is context rot — model performance degrades 30%+ when key content falls in mid-window positions (Chroma research, corroborated by Stanford’s “Lost in the Middle”). Even million-token windows degrade on instruction-following as context grows.

Production strategies:

Anthropic’s stated goal: find the smallest possible set of high-signal tokens that maximize likelihood of the desired outcome.
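
A compaction pass in that spirit might look like this (word counts stand in for a real tokenizer, and the summary-marker approach is one strategy among several, not Anthropic's actual implementation):

```python
# Sketch of compaction: when history nears the window limit, keep the
# system prompt and the most recent turns, replacing the middle with a
# one-line marker — mid-window content is the least reliably attended to.

def approx_tokens(messages):
    return sum(len(m["content"].split()) for m in messages)

def compact(messages, budget, keep_recent=4):
    """Collapse mid-history messages once the approximate budget is exceeded."""
    if approx_tokens(messages) <= budget:
        return messages
    head, tail = messages[:1], messages[-keep_recent:]
    summary = {"role": "system",
               "content": f"[{len(messages) - 1 - keep_recent} earlier messages compacted]"}
    return head + [summary] + tail
```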

5. Prompt construction

Hierarchical: system prompt, tool definitions, memory files, conversation history, current user message. OpenAI’s Codex uses a strict priority stack: server-controlled system (highest), tool definitions, developer instructions, user instructions (cascading AGENTS.md files with 32 KiB limit), then conversation history.
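
Hierarchical assembly is easy to sketch (field names and ordering below are illustrative, not Codex's actual priority stack):

```python
# Sketch of hierarchical prompt assembly: system prompt first, then tool
# schemas and memory, then history, with the current user message last.

def assemble_prompt(system, tool_schemas, memory, history, user_message):
    messages = [{"role": "system", "content": system}]
    if tool_schemas:
        messages.append({"role": "system",
                         "content": "Available tools: " + ", ".join(tool_schemas)})
    if memory:
        messages.append({"role": "system", "content": "Memory:\n" + memory})
    messages += history
    messages.append({"role": "user", "content": user_message})
    return messages
```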

6. Output parsing

Modern harnesses use native tool calling (structured tool_calls objects), not free-text parsing. The harness checks: tool calls present? Execute and loop. No tool calls? Final answer.
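
The parsing step reduces to a structured branch (the response shape is assumed, matching the generic dicts used elsewhere in these notes):

```python
# Sketch of output classification on a native tool-calling response:
# the structured `tool_calls` field decides the branch — no regex over
# free text.

def parse_output(response):
    """Return ('execute', calls) when tools were requested, else ('final', text)."""
    calls = response.get("tool_calls") or []
    if calls:
        return "execute", [(c["name"], c.get("args", {})) for c in calls]
    return "final", response.get("content", "")
```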

7. State management

LangGraph: typed dicts flowing through graph nodes with reducers, checkpointing at super-step boundaries, resume after interruptions, time-travel debugging. OpenAI: four mutually-exclusive strategies — application memory, SDK sessions, server-side Conversations API, or lightweight previous_response_id chaining. Claude Code’s approach: git commits as checkpoints and progress files as structured scratchpads. We do exactly this.

8. Error handling

A 10-step process with 99% per-step success still has only ~90.4% end-to-end success. Errors compound fast. LangGraph distinguishes four error types: transient (retry with backoff), LLM-recoverable (return as ToolMessage, let the model adjust), user-fixable (interrupt for human input), unexpected (bubble up for debugging). Stripe’s production harness caps retries at two.
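
The compounding math and the four-way policy can be sketched together (the exception classes and backoff schedule are illustrative; only the two-retry cap comes from the article):

```python
# Sketch of error handling: transients retry with capped backoff, tool
# errors go back to the model as observations, everything else bubbles up.

import time

class TransientError(Exception): pass      # retry with backoff
class ToolError(Exception): pass           # return to the model as a message

def end_to_end_success(per_step, steps):
    return per_step ** steps               # 0.99 ** 10 ≈ 0.904

def run_tool(fn, max_retries=2, base_delay=0.0):
    """Retry transients (capped at two, per Stripe); convert tool errors
    into error results so the model can self-correct."""
    for attempt in range(max_retries + 1):
        try:
            return {"ok": fn()}
        except TransientError:
            if attempt == max_retries:
                return {"error": "transient failure, retries exhausted"}
            time.sleep(base_delay * (2 ** attempt))   # exponential backoff
        except ToolError as e:
            return {"error": str(e)}
```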

9. Guardrails and safety

OpenAI SDK: input guardrails (first agent), output guardrails (final output), tool guardrails (every tool invocation). Tripwire mechanism halts the agent immediately.

Anthropic architecturally separates permission enforcement from model reasoning. The model decides what to attempt; the tool system decides what’s allowed. Claude Code gates ~40 discrete tool capabilities independently across three stages: trust at project load, permission check before each tool call, explicit user confirmation for high-risk operations.
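
That separation — the model proposes, the tool system disposes — reduces to a policy table consulted outside the model (the capability names and allow/confirm/deny verdicts below are illustrative, not Claude Code's ~40 gates):

```python
# Sketch of permission enforcement living outside model reasoning:
# the model requests a tool call; this table decides. Default-deny.

POLICY = {
    "read_file": "allow",
    "run_tests": "allow",
    "write_file": "confirm",     # requires explicit user confirmation
    "run_shell": "deny",
}

def check_permission(tool_name, user_confirms=lambda name: False):
    verdict = POLICY.get(tool_name, "deny")   # unknown tools are denied
    if verdict == "allow":
        return True
    if verdict == "confirm":
        return user_confirms(tool_name)
    return False
```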

10. Verification loops

This is what separates toy demos from production agents. Three approaches:

Boris Cherny (creator of Claude Code): giving the model a way to verify its work improves quality by 2 to 3×.
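
A computational verification loop fits in a few lines (both callables are injected; this is a sketch of the pattern, not Claude Code's implementation):

```python
# Sketch of verify-then-retry: run a deterministic check after each
# attempt and feed failure details back instead of accepting the first
# output. `attempt` might be "model writes code", `verify` "run tests".

def verified_run(attempt, verify, max_rounds=3):
    feedback = None
    for _ in range(max_rounds):
        output = attempt(feedback)       # produce a candidate
        ok, report = verify(output)      # deterministic check
        if ok:
            return output
        feedback = report                # failure report goes back in
    return None                          # never passed verification
```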

11. Subagent orchestration

Claude Code has three execution models: Fork (byte-identical copy of parent context), Teammate (separate terminal pane with file-based mailbox), Worktree (own git worktree, isolated branch). OpenAI SDK: agents-as-tools + handoffs. LangGraph: subagents as nested state graphs.

12. (implicit — the twelfth isn’t in my extracted notes but the article uses the phrase “twelve distinct components” up front; I’m treating this as framing for the whole system rather than a discrete slot)

How the loop works end-to-end

Article walks through a 7-step cycle:

  1. Prompt assembly — system + tools + memory + history + current message. Important context goes at the beginning and end (Lost in the Middle finding).
  2. LLM inference — assembled prompt to API, model generates text + tool call requests
  3. Output classification — no tool calls → end. Tool calls → execute. Handoff → switch agent.
  4. Tool execution — validate args, check permissions, sandboxed execution, capture results. Read-only tools run concurrently; mutating tools serially.
  5. Result packaging — format as LLM-readable. Errors caught and returned as error results so the model can self-correct.
  6. Context update — append to history. If near window limit, trigger compaction.
  7. Loop — return to step 1 until termination.

Termination conditions are layered: text response with no tool calls, max turn limit, token budget exhausted, guardrail tripwire, user interrupt, or safety refusal.
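
Those layered termination conditions collapse into a single per-turn check (the status fields below are illustrative; a safety refusal would surface through the model's own response rather than this dict):

```python
# Sketch of layered termination: every condition is evaluated each turn
# before the loop is allowed to continue.

def should_stop(status):
    """Return a stop reason, or None to keep looping."""
    if status.get("guardrail_tripped"):
        return "guardrail"
    if status.get("user_interrupt"):
        return "interrupt"
    if status["turn"] >= status["max_turns"]:
        return "turn_limit"
    if status["tokens_used"] >= status["token_budget"]:
        return "token_budget"
    if not status.get("tool_calls"):
        return "final_answer"
    return None
```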

Ralph Loop pattern (Anthropic, for long-running tasks spanning multiple context windows): an Initializer Agent sets up environment (init script, progress file, feature list, initial git commit), then a Coding Agent in every subsequent session reads git logs and progress files to orient itself, picks highest-priority incomplete feature, works on it, commits, writes summaries. The filesystem provides continuity across context windows.

How the major frameworks implement it

The scaffolding metaphor (and the co-evolution principle)

Scaffolding is precise, not decorative. Construction scaffolding is temporary infrastructure workers use to reach floors they otherwise couldn’t — it doesn’t do the construction, but without it nothing gets built.

Key insight: scaffolding is removed when the building is complete. As models improve, harness complexity should decrease. Manus was rebuilt five times in six months, each rewrite removing complexity. Complex tool definitions became general shell execution. “Management agents” became simple structured handoffs.

Co-evolution principle: models are now post-trained with specific harnesses in the loop. Claude Code’s model learned to use the specific harness it was trained with. Changing tool implementations can degrade performance because of this tight coupling.

The “future-proofing test” for harness design: if performance scales up with more powerful models without adding harness complexity, the design is sound.

The seven architectural decisions

Every harness architect chooses:

  1. Single-agent vs multi-agent. Both Anthropic and OpenAI say: maximize a single agent first. Split only when tool overload exceeds ~10 overlapping tools or clearly separate task domains exist. This directly validates the founder’s “single-threaded staged approach” guidance for automated investing.
  2. ReAct vs plan-and-execute. ReAct interleaves reasoning and action at every step (flexible, higher per-step cost). Plan-and-execute separates them. LLMCompiler reports 3.6× speedup over sequential ReAct.
  3. Context window management. Five production approaches: time-based clearing, summarization, observation masking, structured note-taking, sub-agent delegation. ACON research: 26-54% token reduction while preserving 95%+ accuracy by prioritizing reasoning traces over raw tool outputs.
  4. Verification loop design. Computational (tests, linters — deterministic) vs inferential (LLM-as-judge — catches semantic issues, adds latency). Martin Fowler frames this as guides (feedforward, steer before action) vs sensors (feedback, observe after action).
  5. Permission architecture. Permissive (fast, risky) vs restrictive (safe, slow). Context-dependent.
  6. Tool scoping. More tools often means worse performance. Vercel removed 80% of tools from v0 and got better results. Claude Code achieves 95% context reduction via lazy loading. Principle: expose minimum tool set needed for the current step.
  7. Harness thickness. How much logic lives in the harness vs the model. Anthropic bets on thin harnesses; graph-based frameworks bet on explicit control. Anthropic regularly deletes planning steps from Claude Code’s harness as new model versions internalize that capability.
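
Decision 6 (tool scoping) is the easiest to make concrete: expose only the tools relevant to the current step, not the whole registry. The tag-based routing below is deliberately naive and entirely illustrative:

```python
# Sketch of lazy tool scoping: select at most `limit` tools whose tags
# overlap the current step, instead of injecting every schema.

ALL_TOOLS = {
    "read_file":  {"tags": {"code", "files"}},
    "write_file": {"tags": {"code", "files"}},
    "web_search": {"tags": {"research"}},
    "run_tests":  {"tags": {"code"}},
    "send_email": {"tags": {"comms"}},
}

def scope_tools(step_tags, limit=3):
    """Return the (sorted, capped) set of tools matching the step's tags."""
    hits = [name for name, t in ALL_TOOLS.items() if t["tags"] & set(step_tags)]
    return sorted(hits)[:limit]
```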

What this means for Ray Data Co

Direct validation of our current approach:

Gaps this article surfaces that we should close:

The one-liner to internalize:

The next time your agent fails, don’t blame the model. Look at the harness.

Tracked author

../03-contacts — consider adding Akshay Pachaar (@akshay_pachaar) to the CRM when we open task #4. Co-founder of dailydoseofds, publishes substantive framing work on AI systems.