06-reference / concepts

unhobbling

2026-04-22 · concept · status: canonical-for-rdco · source: terminology-canonization

Unhobbling — RDCO canonical term

The release of latent model capability that was always present in the weights but blocked by interface, scaffold, RLHF, or tooling — and the consequent evaporation of any product whose value proposition was compensating for that block.

Why this note exists

The term originated in Aschenbrenner’s Situational Awareness (2024) as a name for the gap between raw model capability and what users could actually invoke. It has recurred on the Moonshots panel for a year — Alex Wissner-Gross used it on Ep #240 (Mar 21 2026) to describe NemoClaw, and again on Ep #249 (Apr 23 2026) to describe Anthropic’s release of Claude Design on Opus 4.7 — same model, new interface, Adobe -2% / Figma -10% on the day. Diamandis and Dave reinforced the framing.

RDCO is canonicalizing the term now because the cluster of evidence supporting it has crossed a threshold: harness-thesis convergence (10+ sources), live SaaS price action (Figma -10%), and a working operational thesis (MAC as the verification layer that survives unhobbling). The goal is one vocabulary across vault, Sanity Check, X, and client-facing decks: use Unhobbling, not “scaffold release,” not “capability surfacing,” not “AI eating SaaS.”

What “unhobbling” actually means

The model wasn’t ever incapable of doing X. It was hobbled — by RLHF guardrails, by the chat-only interface, by the absence of tool calls, by the lack of vision input, by the missing memory, by the EU regulatory gate, by the compute envelope at inference time. Each new release “unhobbles” some prior limitation. Whichever vertical-SaaS layer existed to compensate for that limitation evaporates.

The pattern recurs in the current examples above (NemoClaw, Claude Design), and the mechanism is consistent every time: the SaaS company’s value was the workaround for a model limitation. Remove the limitation and the workaround becomes dead weight.

The harness-thesis connection

This is the load-bearing crossover. Unhobbling and Garry Tan’s thin harness + fat skills are the same phenomenon viewed from two sides — Tan from the engineering side (how to build on top of a model that’s about to be unhobbled), Diamandis/AWG from the consumer/business side (which products evaporate when it happens).

Tan’s prescription is the operational form. You don’t write more code into the harness. You write more skills — markdown procedures encoding judgment — that the now-unhobbled model can apply. When Anthropic ships the next unhobbling, every skill automatically improves; the deterministic layer stays reliable; the harness stays thin. Tan’s lesson, paraphrased: push intelligence up, push execution down, keep the middle thin.
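The split above can be sketched in a few lines of Python. Everything here is a hypothetical illustration of the shape, not any real harness API: the names (load_skill, call_model, TOOLS) and the skills/ directory layout are invented for the sketch.

```python
from pathlib import Path

# Execution pushed down: deterministic tools the model never reimplements.
TOOLS = {
    "run_sql": lambda q: f"[deterministic result of: {q}]",
}

def load_skill(name: str) -> str:
    """A 'skill' is a markdown procedure encoding judgment, not harness code."""
    return (Path("skills") / f"{name}.md").read_text()  # hypothetical layout

def call_model(prompt: str) -> str:
    """Intelligence pushed up: stand-in for whatever model client you use."""
    raise NotImplementedError("wire up your model client here")

def run(task: str, skill_name: str) -> str:
    """The middle stays thin: compose skill + task, dispatch, execute."""
    plan = call_model(f"{load_skill(skill_name)}\n\nTask: {task}")
    return TOOLS["run_sql"](plan)
```

The point of the shape: when the next unhobbling ships, load_skill and TOOLS do not change; only the quality of what call_model returns improves.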

Cobus Greyling’s “weights → context → harness” arc names the same trajectory. So does Akshay Pachaar’s harness anatomy, Paddy Srinivasan’s Agentic Cloud, Harrison Chase’s memory-as-moat. Ten-plus independent sources now describe one shape (see the Apr 12 cross-check). Unhobbling is the cause; thin-harness/fat-skills is the response.

What survives unhobbling — and why MAC matters

Most vertical SaaS evaporates as the model gets unhobbled. What does NOT evaporate is the verification layer. Reliability still has to be proven. The model’s output still has to be checked against a set of acceptance criteria that name what “correct” means for your specific data, your specific business, your specific risk tolerance.

The Scope × Basis test plan that the MAC framework formalizes is platform-agnostic. Whether the engine is Claude 4.7, Claude Mythos, or some descendant in 2027, the question — did the model do what we said it would, across all relevant scopes, on the relevant bases? — does not go away. If anything, it gets sharper, because the consequences of an unverified model failure compound as the model takes on more work.
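A minimal sketch of what “platform-agnostic” means operationally, assuming a Scope × Basis plan is literally the cross product of data slices and ground-truth bases. The scope and basis names below are invented for illustration, not the MAC spec:

```python
from itertools import product

# Hypothetical scopes (slices of the data) and bases (sources of ground truth).
SCOPES = ["all_rows", "last_30_days", "high_risk_accounts"]
BASES = ["schema_invariants", "labeled_regression_set", "deterministic_sql"]

def run_check(scope: str, basis: str) -> bool:
    """Stand-in: did the model do what we said, on this scope, by this basis?"""
    return True  # replace with a real, human-written acceptance check

def test_plan() -> dict:
    # One verdict per (scope, basis) cell; swapping the engine changes nothing.
    return {(s, b): run_check(s, b) for s, b in product(SCOPES, BASES)}

results = test_plan()
assert all(results.values()), "unverified model failure"
```

Nothing in the plan references the model: the same matrix runs unchanged against Claude 4.7, Claude Mythos, or any 2027 descendant.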

Position MAC explicitly as “the unhobbling-resistant layer.” Every skill, every sub-property bet, every service offering should be tested against this question: if the model gets unhobbled twice more in the next 12 months, does this still have a job to do? MAC does. A markdown spec describing acceptance criteria does. A vault of cross-linked, bias-flagged source material does. A workflow app whose only value was decomposing a prompt into seven API calls does not.

This is also a direct answer to the dissent synthesis’s strongest objection — Kingsbury’s “the verification layer is itself LLM-contaminated.” The MAC framework’s response is: yes, which is why the acceptance criteria must be written by humans against ground-truth tools (deterministic SQL, schema invariants, regression tests on labeled data) — not generated by another model. MAC is the layer where the contamination stops because the criteria are externalized and auditable.
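A sketch of what an externalized, human-written criterion looks like under that response: a deterministic SQL invariant with no model anywhere in the loop. The table and column names are hypothetical:

```python
import sqlite3

def revenue_is_nonnegative(conn: sqlite3.Connection) -> bool:
    """Ground-truth check: deterministic SQL, auditable, not LLM-generated."""
    (bad,) = conn.execute(
        "SELECT COUNT(*) FROM invoices WHERE amount < 0"
    ).fetchone()
    return bad == 0

# The criterion runs against the data itself, in memory here for the sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (amount REAL)")
conn.executemany("INSERT INTO invoices VALUES (?)", [(10.0,), (42.5,)])
assert revenue_is_nonnegative(conn)  # the contamination stops here
```

Because the check is plain SQL against a schema, a human can read, audit, and version it; no model output is trusted to define what “correct” means.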

Connection to Solve Everything

Unhobbling names what gets cheaper: the underlying model capability, released from whatever interface / scaffold / tooling / gate was hobbling it. Solve Everything names what then matters more: the targeting system — the aim, the verification, the acceptance criteria that tell you whether the now-cheap capability was pointed at the right thing and got there. The two frames compound: unhobbling is the supply-side shift; targeting systems are the demand-side correction.

Chapter 6 formalizes this as one of the ten engine-gears — blinded evaluation harnesses, Decision Records for AI Systems (DR-AIS), red teaming, Return on Cognitive Spend (RoCS), Spec-to-Artifact Score. Each of these is a specific instance of the targeting-system mechanism. The RDCO canonical term for the generalized concept is now locked in at Targeting System — unhobbling is a specific surface of the same mechanism (what happens to vertical SaaS when capability gets cheap), and MAC is the specific instance that survives the unhobbling.

Practical read: when pitching MAC externally, cite the umbrella frame (targeting system) before the specific instance (MAC) — it positions the work inside a larger conceptual vocabulary that AWG/Diamandis/Solve-Everything readers already know, rather than asking them to learn a net-new acronym cold.

What unhobbling does NOT mean

Three guardrails against misreading:

RDCO implications

Practical, in priority order:

  1. Sanity Check positioning angle. “What survives unhobbling” is a recurring theme worth 2–3 articles in the next quarter. Lead with the Figma -10% / Adobe -2% data point as the concrete inflection. Pair with the vertical software selloff piece as cited prior-art.
  2. MAC framework repositioning. Pivot the elevator pitch from “data quality discipline” to “the verification layer that survives unhobbling.” Same artifact, sharper frame, better wedge into “why this matters now.” The MAC pack should ship as mac.md (single-file agent spec) following the pattern documented in GEO.
  3. Sub-property bet evaluation gate. Each sub-property (Squarely, Data Dots, MAC pack, future) should be scored against: what survives if the model gets unhobbled twice more in the next 12 months? Anything whose value is “we wrap the API better” gets discounted. Anything whose value is “we own the verification surface, the data, or the relationship” gets weighted up.
  4. Service offering alignment. “MAC + Client Reporting” already aligns — reporting IS the verification layer for AI-generated insight, which IS the unhobbled work. The pitch writes itself.
  5. Vault as compounding asset. The vault is unhobbling-resistant for the same reason data-as-moat beats architecture-as-moat: the model can’t absorb what it hasn’t seen. Every cross-linked, bias-flagged, decision-cited entry is an asset that survives the next release.

Cross-references