You Are the Most Expensive Model — Mike Taylor (Every, Apr 27 2026)
Why this is in the vault
This piece names the exact bottleneck that the always-on RDCO agent was built to relieve: the founder’s time is the priciest model in the loop, and every clarification round is a tax. Taylor’s “Incremental Determinism” framework starts with “Turn Sessions into Skills” — which is operationally identical to the /skillify pattern Ray Data Co already runs. That convergence makes this a load-bearing reference for how to reason about delegating from the founder, to Ray (Opus), to cheaper subagents, to deterministic scripts.
It also pairs directly with the prior Every piece by Katie Parrott, “AI Was Supposed to Free My Time. It Consumed It.” (Mar 9 2026) — Parrott named the disease, Taylor proposes the cure.
The core argument
Taylor frames the operator as the most expensive model in any AI workflow. Tokens are cheap; founder attention is not. Defaulting to frontier models for every task is, in his analogy, like asking a CEO to work the burger grill — the wrong unit cost for the work being done.
The proposed remedy is Incremental Determinism: a four-step push that moves recurring work from ad-hoc chat sessions toward repeatable, cheaper, lower-attention pipelines. Step one — the only one fully readable above the Every paywall — is Turn Sessions into Skills: formalize what you keep redoing, test it against a baseline, then hand it to a subagent running a smaller model. The remaining three steps continue the same gradient (skills → tighter scaffolds → deterministic code).
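Step 1 is concrete enough to sketch. Below is a minimal baseline gate in Python, assuming the Anthropic SDK; the model ID, file paths, and scoring rule are all placeholders for illustration, not anything Taylor specifies:

```python
# Sketch of Taylor's Step 1 gate: a session that worked gets written down as a
# skill, and the skill is tested against a baseline before being handed to a
# cheaper model. Model IDs, file paths, and the scoring rule are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SKILL_PROMPT = open("skills/process-newsletter.md").read()  # the formalized session
BASELINE = open("baselines/process-newsletter.txt").read()  # saved frontier output

def run_skill(model: str, task_input: str) -> str:
    """Run the documented skill prompt on the given model."""
    msg = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": f"{SKILL_PROMPT}\n\n{task_input}"}],
    )
    return msg.content[0].text

def matches_baseline(candidate: str) -> bool:
    """Crude token-overlap proxy; a real gate would use a rubric or judge model."""
    base = set(BASELINE.lower().split())
    return len(set(candidate.lower().split()) & base) / max(len(base), 1) > 0.7

output = run_skill("claude-haiku-placeholder-id", open("inbox/latest.txt").read())
print("safe to delegate down" if matches_baseline(output) else "keep the frontier model")
```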
The implicit ranking of cost-per-unit-of-work, from most to least expensive: human operator > frontier model in interactive session > frontier model running a documented skill > smaller model running the same skill > deterministic script. The job is to keep ratcheting work down that ladder.
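A toy rendering of that ladder makes the gradient tangible. Every dollar figure below is invented for illustration; Taylor gives no pricing, and the point is the orders of magnitude, not the numbers:

```python
# Illustrative cost-per-task ladder, most to least expensive. All figures are
# made up for the example; only the steepness of the gradient is the point.
ladder = [
    ("founder in the loop (20 min at $300/hr)",   100.00),
    ("frontier model, interactive session",         1.50),
    ("frontier model running a documented skill",   0.40),
    ("smaller model running the same skill",        0.05),
    ("deterministic script",                        0.001),
]

for rung, cost in ladder:
    print(f"{rung:<46} ${cost:>8.3f}")
```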
“McDonald’s asking its CEO to man the burger grill.”
(Taylor’s analogy for using frontier models on trivial tasks.)
Mapping against Ray Data Co
Strong mapping. This is essentially the operating thesis behind the always-on agent setup, articulated by an outsider.
- /skillify is Step 1. RDCO already has the skill-creation primitive Taylor prescribes: the skill's whole purpose is to take a session that just succeeded and convert it into a permanent, reusable skill. Taylor's framework gives that primitive a clearer "why" — every skillification moves a recurring task one rung down the cost ladder.
- The "founder is advisor, not pair programmer" rule is the same insight. RDCO already biases toward decision-needed-up-top, status-as-appendix, and escalating only judgment calls. Taylor is the macroeconomic justification for that micro-rule.
- Auto-mode signal-to-noise tuning applies the same economics. Reducing approval-asks on reversible work is exactly "stop renting the most expensive model for cheap decisions."
- Where RDCO is already past Step 1: subagent fan-out (/process-newsletter batch mode, /deep-research per-question subagents) is Taylor's "delegate to subagents using smaller models" — though RDCO currently delegates to isolated context windows of the same model, not to cheaper models. That is the next optimization Taylor's framework points at: when a skill stabilizes, downgrade the model running it.
- Where the framework challenges current practice: most RDCO subagents run on Opus. Taylor's argument says that once a skill has a stable test baseline, it should drop to Haiku or a non-Anthropic small model. Worth auditing which skills have stabilized enough to be model-downgraded; see the sketch after this list.
- Sanity Check angle. The “you are the most expensive model” frame is a one-line reframe of a thing the audience already vaguely senses — perfect Sanity Check raw material if paired with a concrete RDCO story (e.g., the moment a skill replaced a recurring 20-minute founder-in-the-loop ritual). Not derivative because RDCO has the lived experience to add; Taylor only has the framework.
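Referenced from the model-downgrade bullet above: a minimal sketch of what that audit could look like. The skill entries, thresholds, and "stable" criterion are all assumptions layered on Taylor's rule of thumb, not RDCO's actual registry:

```python
# Hypothetical downgrade audit: Taylor's rule is that a skill with a stable,
# tested baseline should drop down the model ladder. The thresholds and the
# two example skills below are invented; only the shape of the rule is his.
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    runs_since_last_edit: int   # how long the skill doc has been frozen
    baseline_pass_rate: float   # share of recent runs matching the baseline

def target_model(skill: Skill) -> str:
    """Stable and tested means a smaller model; otherwise stay on the frontier."""
    if skill.runs_since_last_edit >= 20 and skill.baseline_pass_rate >= 0.95:
        return "small model (Haiku-class or non-Anthropic)"
    return "frontier model (Opus-class)"

registry = [
    Skill("process-newsletter", runs_since_last_edit=34, baseline_pass_rate=0.97),
    Skill("deep-research",      runs_since_last_edit=5,  baseline_pass_rate=0.88),
]

for s in registry:
    print(f"/{s.name:<19} -> {target_model(s)}")
```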
Related
- 2026-03-09-every-ai-time-consumption — Parrott / Every. Names the disease (“AI consumed my time”) that Taylor’s framework treats. These two should be read as a pair.
- 2026-04-04-100x-business-with-ai — Vasuman on context engineering as the multiplier. Same family of “the human is the bottleneck” thinking.
- 2026-04-04-coding-with-agents-non-technical — Tossell shipping at scale by leaning fully into agent delegation; a worked example of the cost-ladder concept.
- 2026-04-08-four-levels-of-ai-use — Level 3 (work that was below the ROI threshold before) is unlocked precisely because incremental determinism drives the per-task cost down.
- 2026-04-12-cross-check-agent-architecture — earlier cross-check flagged “no production cost numbers” as a gap in the agent-architecture cluster; Taylor still doesn’t give dollars, but he gives a framework for thinking about cost. Partial gap-fill.
- 2026-04-19-indydevdan-opus-4-5-engineers-model-transcript — “the name of the game is what you can teach your agents to do.” Taylor’s framework is the economic argument for why teaching matters.