Targeting System — RDCO canonical term
The mechanism that converts intelligence into direction — what to aim at, and how to verify you hit it — once intelligence itself stops being the scarce resource.
Why this note exists
Surfaced by the founder 2026-04-24 as the missing connecting frame between the other canonicalized RDCO terms — unhobbling, Three Decision Algorithms, and the MAC framework. Each of those is a specific instance of the same underlying mechanism, and the vault has been referring to the mechanism implicitly without promoting it to a first-class term.
The umbrella term already exists in the corpus. Solve Everything positions targeting systems as one of the ten engine-gears in Chapter 6, and names the shift from “intelligence is scarce” to “aiming is scarce” as the book’s closing thesis. RDCO has been underweighting this reference — citing the individual gears (blinded evaluation, DR-AIS, red teaming, RoCS) without naming the generalized concept they instantiate.
Canonicalizing it now locks the frame. Use Targeting System — not “eval layer,” not “scorecard,” not “verification apparatus” — when referring to the generalized mechanism. Use the specific terms (MAC, DR-AIS, RoCS, Spec-to-Artifact Score) when pointing at specific instances.
What a Targeting System actually is
From Solve Everything ch 6: the mechanism that converts intelligence into direction. Not just evals. The whole apparatus — acceptance criteria, test harness, decision record, red-team loop, and the judgment that says this was the right thing to aim at in the first place — that answers two questions: what should we aim at? and how do we verify we hit it?
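The two-question shape (what to aim at, how to verify the hit) can be sketched as a minimal data structure. This is an illustrative sketch only; the class and field names are invented here, not drawn from Solve Everything or MAC.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# Hypothetical sketch: a targeting system pairs an aim (what we decided
# to hit) with verification (acceptance criteria a test harness grades
# an artifact against). All names are illustrative.

@dataclass
class TargetingSystem:
    aim: str
    criteria: dict[str, Callable[[Any], bool]] = field(default_factory=dict)

    def verify(self, artifact: Any) -> dict[str, bool]:
        """Grade an artifact against every acceptance criterion."""
        return {name: check(artifact) for name, check in self.criteria.items()}

    def hit(self, artifact: Any) -> bool:
        """Did we hit what we aimed at? All criteria must pass."""
        return all(self.verify(artifact).values())

# Usage: a toy report artifact graded against two written-down criteria.
ts = TargetingSystem(
    aim="monthly revenue report clients can trust",
    criteria={
        "has_totals": lambda a: "total" in a,
        "fresh": lambda a: a.get("age_days", 99) <= 1,
    },
)
print(ts.hit({"total": 120_000, "age_days": 0}))  # True
```

The design point the sketch makes: the implicit system keeps `criteria` in someone's head; the agentic system requires writing them down as callable checks.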
The founder’s 2026-04-24 synthesis locks in a dual framing that the vault now treats as canonical:
- Implicit targeting system. Taste, experience, accumulated priors. Humans, decades of carrying analogies forward. Operates through “I’ve seen this before, it’ll work” and “good enough, proceed.” No measurable outcome required because the operator’s track record is the targeting signal. Most pre-agent senior decision-making runs on this.
- Agentic targeting system. Benchmarks, evals, test harnesses, scorecards, replayable verification. Operates through measurable outcomes. Works cleanly when outcomes are well-defined; degrades fast in fuzzy or un-instrumented domains where “correct” can’t be written down in advance.
The two aren’t substitutes so much as phase-matched to different domains. A domain with well-defined outcomes is one where the agentic targeting system can do most of the work. A domain with fuzzy outcomes still runs on the implicit system, because no one has written the criteria that would let an agent grade itself.
Tesla FSD as the fleet-learning analog
The most vivid evidence that an agentic targeting system can overtake the implicit one in a well-instrumented domain: Tesla's fleet.
An individual human driver accumulates at most 1–2 million driving miles over a full career. Tesla’s fleet logs on the order of a billion miles per month. The implicit targeting system each human carries (“I know what a dicey left turn feels like”) is capped by individual lifetime exposure. The agentic targeting system Tesla is building is capped only by how fast the fleet can log reps and how well the evals convert those reps into learning. The founder’s bet, verbatim: “no individual can drive as many miles as the Tesla fleet hive mind that trains FSD.”
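A quick back-of-envelope makes the gap concrete. The figures below are the order-of-magnitude numbers quoted above, not measurements, and the arithmetic is a rough sketch.

```python
# Order-of-magnitude figures from the note: a human driver logs roughly
# 1-2 million miles over a career; Tesla's fleet logs on the order of
# a billion miles per month.

career_miles = 2_000_000              # upper end for one human career
fleet_miles_per_month = 1_000_000_000

hours_per_month = 30 * 24
fleet_miles_per_hour = fleet_miles_per_month / hours_per_month

# How many full human driving careers the fleet replays per hour.
careers_per_hour = fleet_miles_per_hour / career_miles
print(round(careers_per_hour, 2))  # 0.69
```

At roughly 0.7 careers per hour, the fleet logs a full human career's worth of exposure about every hour and a half.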
The generalization the founder offers: the same bet is available in any field where enough reps can be logged and graded. AI gets far more turns at the plate to build up experience and taste than any individual can take. In any domain that permits instrumentation, agentic targeting systems will eventually compound past the best human's implicit one.
The critical qualifier: “in any domain that permits instrumentation.” That qualifier is load-bearing for the next section.
The conviction-in-uncertainty gap
Load-bearing open question, explicitly unsolved: the agentic targeting system works well when outcomes are well-defined. In fuzzy, un-instrumented situations, decisions still require conviction — someone who has earned the right to say “good enough, proceed” in the absence of a clean eval. The founder’s question, surfaced in the 2026-04-24 synthesis, is whether an agentic system can ever provide that conviction to a human decision-maker — or whether conviction is the one thing the implicit system keeps.
This matters because it’s where MAC’s positioning earns its keep. MAC’s job is to push as much of the decision surface as possible from implicit into agentic — make fuzzy things measurable, convert taste into acceptance criteria, convert “good enough” into a spec a test can grade against. That’s most of the work.
But the residual fuzz — the calls that genuinely can’t be reduced to a spec in advance, the decisions made under scarce ground truth — is where conviction still lives. It’s where the founder / senior-operator / earned-track-record side of the implicit system retains value even as everything around it cheapens. Whether an agent can provide that conviction (rather than merely inform it) is an open research question for RDCO, not a solved problem.
Frame this honestly in any external material. MAC narrows the fuzz; it doesn’t eliminate it. The honest positioning is “we convert as much of the implicit targeting system into the agentic one as your domain allows,” not “we replace conviction with evals.”
The three RDCO concept docs as specific targeting-system instances
The other three canonicalized RDCO terms are now best understood as specific surfaces of the targeting-system mechanism:
- Unhobbling describes what happens when the underlying model capability gets cheaper. Targeting systems become more valuable in an unhobbled world because the capability on its own doesn’t know what to aim at — you need the targeting apparatus to point the now-cheaper capability at something worth doing and to verify it got there.
- Three Decision Algorithms (3DA) describes what happens to the cognitive side — the cost advantages that held up the implicit targeting system (carried analogies, career-protected conservatism, slow reversal) all collapse. The pressure to build explicit / agentic targeting systems goes up because the implicit one can no longer coast on its old cost structure.
- MAC framework is the canonical RDCO targeting system for data modeling specifically. The founder’s verbatim positioning (2026-04-24): “MAC is the targeting system for effective data modeling. Thorough evals to know the model can answer the business questions it is intended to and to build trust in the data through a rigorous test suite.” Scope × Basis IS the targeting system; acceptance criteria ARE the aim; the test suite IS the verification.
One mechanism, three surfaces. Unhobbling is why targeting matters more (the generator cheapens). 3DA is why the old targeting system fails (implicit-layer costs collapse). MAC is what a working targeting system looks like in the data-modeling domain.
Sanity Check as a public targeting system
Per the Solve Everything master synthesis §5, Sanity Check should function as a public targeting system for the data industry. The editorial thesis, promoted to canonical:
Each issue defines a measurable claim, stress-tests it against evidence, and gives the reader a diagnostic they can apply to their own work. The Ch 3 principle — automate evaluation before work — becomes the writing discipline: before the issue recommends adopting a tool, technique, or framework, it hands the reader the scorecard that would tell them whether it worked.
Practically this means every issue closes with what Ch 9 calls a “Before Monday Noon” action: a concrete verb and a target. No “stay curious.” The target is the reader’s own targeting-system contribution — the thing they can measure on Monday that they couldn’t measure on Friday.
RDCO implications
- MAC pack positioning language. Swap in “targeting system for effective data modeling” everywhere MAC is introduced in external material. It’s cleaner than “data quality acceptance framework,” it cross-references the umbrella frame the founder now uses internally, and it positions MAC inside a larger conceptual vocabulary (Solve Everything, unhobbling, 3DA) that the vault already invests in. Pitch: MAC is the targeting system that survives unhobbling and the Three Decision Algorithms collapse.
- Client Reporting productization angle. Position Client Reporting as building the client’s implicit-to-agentic targeting bridge. Start from their taste and experience about which reports matter, which anomalies warrant escalation, which numbers they trust — then convert that tacit targeting apparatus into eval-backed dashboards and acceptance criteria. This is a sharper offering than “we build dashboards.”
- Sanity Check editorial thesis update. Every issue is a targeting-system exercise: define the claim, pressure-test, give the reader the diagnostic. Adopt this as the format constraint, not just the occasional structure.
- Fleet-learning bet on the vault itself. The vault IS an implicit-to-agentic targeting apparatus for the founder’s own decision-making. Every processed newsletter, every cross-check, every YouTube assessment is a rep that accrues to the pattern-recognition surface. The vault is RDCO’s fleet. Worth calling out internally — it’s the analog to the Tesla FSD bet, pointed at RDCO’s own decisions rather than at a client’s.
- GEO as a distribution targeting system. Worth noting the parallel: GEO is the targeting system for AI-mediated content distribution. Named frameworks + owned terms + topical concentration + cross-domain mentions are the aim-and-verify apparatus for who cites your content when an AI is asked. Same shape, different surface.
Cross-references
- Solve Everything master synthesis — umbrella frame; the founder’s 2026-04-24 callout is that RDCO has been underweighting this reference
- Solve Everything ch 6 — The Engine — targeting-system definition; 10-gear engine including blinded eval / DR-AIS / red teaming / RoCS / Spec-to-Artifact Score
- Solve Everything ch 3 — The Mechanics — “automate evaluation before work” principle; nine-layer Industrial Intelligence Stack (targeting system as layer 4)
- Unhobbling concept doc — sibling concept; targeting system explains why unhobbling makes verification more valuable, not less
- Three Decision Algorithms concept doc — sibling concept; the 3DA cost-collapse is the mechanism pushing the implicit-to-agentic targeting-system shift
- Jaya Gupta: Experience is now a tax — source for 3DA; names the implicit-targeting-system components that just cheapened
- Tan: thin harness, fat skills — harness-thesis core; harness engineering IS targeting-system construction at the project level
- Tan: build the car (Jepsen response) — extends the harness thesis directly into verification
- Harness-thesis dissent synthesis — counter-arguments including the “verification layer is LLM-contaminated” objection the targeting-system frame has to answer
- Ayman: Architect Mode — the decision-posture that matches a world where targeting systems replace pre-commitment as the source of trust
- GEO concept doc — GEO is the targeting system for AI-mediated content distribution; same shape, different surface