
targeting system

2026-04-24 ·concept ·status: canonical-for-rdco ·source: terminology-canonization

Targeting System — RDCO canonical term

The mechanism that converts intelligence into direction — what to aim at, and how to verify you hit it — once intelligence itself stops being the scarce resource.

Why this note exists

Surfaced by the founder on 2026-04-24 as the missing frame connecting the other canonicalized RDCO terms — unhobbling, Three Decision Algorithms, and the MAC framework. Each of those is a specific instance of the same underlying mechanism, and the vault has been referring to the mechanism implicitly without promoting it to a first-class term.

The umbrella term already exists in the corpus. Solve Everything positions targeting systems as one of the ten engine-gears in Chapter 6, and names the shift from “intelligence is scarce” to “aiming is scarce” as the book’s closing thesis. RDCO has been underweighting this reference — citing the individual gears (blinded evaluation, DR-AIS, red teaming, RoCS) without naming the generalized concept they instantiate.

Canonicalizing it now locks the frame. Use Targeting System — not “eval layer,” not “scorecard,” not “verification apparatus” — when referring to the generalized mechanism. Use the specific terms (MAC, DR-AIS, RoCS, Spec-to-Artifact Score) when pointing at specific instances.

What a Targeting System actually is

From Solve Everything ch 6: the mechanism that converts intelligence into direction. Not just evals. The whole apparatus — acceptance criteria, test harness, decision record, red-team loop, and the judgment that says this was the right thing to aim at in the first place — that answers two questions: what should we aim at? and how do we verify we hit it?
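As a minimal sketch of the two questions — not an RDCO artifact; every name here is hypothetical — the core of an agentic targeting system is a spec (what to aim at) plus explicit checks an agent or harness can grade against (how to verify the hit), emitting a decision record:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch, not an RDCO API: a spec answers "what should we
# aim at?" and its checks answer "how do we verify we hit it?"

@dataclass
class Spec:
    goal: str                                            # what we are aiming at
    checks: dict[str, Callable[[str], bool]] = field(default_factory=dict)

    def grade(self, artifact: str) -> dict:
        """Run every acceptance check and return a decision record."""
        results = {name: check(artifact) for name, check in self.checks.items()}
        return {
            "goal": self.goal,
            "results": results,
            "score": sum(results.values()) / len(results),  # crude spec-to-artifact score
            "verdict": all(results.values()),
        }

spec = Spec(
    goal="release note under 50 words that names the breaking change",
    checks={
        "length": lambda a: len(a.split()) <= 50,
        "names_break": lambda a: "breaking" in a.lower(),
    },
)
record = spec.grade("Breaking change: the v2 API drops the legacy auth header.")
print(record["verdict"], round(record["score"], 2))  # → True 1.0
```

The point of the sketch is the shape, not the checks: once criteria are written down like this, an agent can grade its own output — which is exactly the line the note draws between well-defined and fuzzy domains.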

The founder’s 2026-04-24 synthesis locks in a dual framing that the vault now treats as canonical: the implicit targeting system (taste, judgment, and earned conviction carried in a person’s head) and the agentic targeting system (explicit acceptance criteria and evals an agent can grade itself against).

The two aren’t substitutes so much as phase-matched to different domains. A domain with well-defined outcomes is one where the agentic targeting system can do most of the work. A domain with fuzzy outcomes still runs on the implicit system, because no one has written the criteria that would let an agent grade itself.

Tesla FSD as the fleet-learning analog

The most vivid case that the agentic targeting system can overtake the implicit one in a well-instrumented domain: Tesla’s fleet.

An individual human driver accumulates at most 1–2 million driving miles over a full career. Tesla’s fleet logs on the order of a billion miles per month. The implicit targeting system each human carries (“I know what a dicey left turn feels like”) is capped by individual lifetime exposure. The agentic targeting system Tesla is building is capped only by how fast the fleet can log reps and how well the evals convert those reps into learning. The founder’s bet, verbatim: “no individual can drive as many miles as the Tesla fleet hive mind that trains FSD.”
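A back-of-envelope check makes the exposure gap concrete. Both figures come from the paragraph above (a 2-million-mile career cap, roughly a billion fleet miles per month); the arithmetic is illustrative only:

```python
# Back-of-envelope check of the exposure gap in the founder's bet.
# Both inputs are the note's own figures; the arithmetic is illustrative.

career_miles = 2_000_000                # upper bound for one human driver
fleet_miles_per_month = 1_000_000_000   # order-of-magnitude fleet logging rate

seconds_per_month = 30 * 24 * 3600
fleet_miles_per_second = fleet_miles_per_month / seconds_per_month  # ~386 miles/s

# How long the fleet takes to log one full human career of miles
career_equiv_minutes = career_miles / fleet_miles_per_second / 60
print(f"{career_equiv_minutes:.0f} minutes")  # → 86 minutes
```

At these rates the fleet logs an entire human driving career roughly every hour and a half, which is the quantitative content of “capped by individual lifetime exposure” versus “capped only by how fast the fleet can log reps.”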

The generalization the founder offers: the same bet is available in any field where enough reps can be logged and graded. AI gets a lot more turns at the plate to build up experience/taste than any individual can. In any domain that permits instrumentation, agentic targeting systems will eventually compound past the best human implicit one.

The critical qualifier: “in any domain that permits instrumentation.” That qualifier is load-bearing for the next section.

The conviction-in-uncertainty gap

Load-bearing open question, explicitly unsolved: the agentic targeting system works well when outcomes are well-defined. In fuzzy, un-instrumented situations, decisions still require conviction — someone who has earned the right to say “good enough, proceed” in the absence of a clean eval. The founder’s question, surfaced in the 2026-04-24 synthesis, is whether an agentic system can ever provide that conviction to a human decision-maker — or whether conviction is the one thing the implicit system keeps.

This matters because it’s where MAC’s positioning earns its keep. MAC’s job is to push as much of the decision surface as possible from implicit into agentic — make fuzzy things measurable, convert taste into acceptance criteria, convert “good enough” into a spec a test can grade against. That’s most of the work.

But the residual fuzz — the calls that genuinely can’t be reduced to a spec in advance, the decisions made under scarce ground truth — is where conviction still lives. It’s where the founder / senior-operator / earned-track-record side of the implicit system retains value even as everything around it cheapens. Whether an agent can provide that conviction (rather than merely inform it) is an open research question for RDCO, not a solved problem.

Frame this honestly in any external material. MAC narrows the fuzz; it doesn’t eliminate it. The honest positioning is “we convert as much of the implicit targeting system into the agentic one as your domain allows,” not “we replace conviction with evals.”

The three RDCO concept docs as specific targeting-system instances

The other three canonicalized RDCO terms are now best understood as specific surfaces of the targeting-system mechanism:

One mechanism, three surfaces. Unhobbling is why targeting matters more (the generator cheapens). The Three Decision Algorithms (3DA) are why the old targeting system fails (implicit-layer costs collapse). MAC is what a working targeting system looks like in the data-modeling domain.

Sanity Check as a public targeting system

Per the Solve Everything master synthesis §5, Sanity Check should function as a public targeting system for the data industry. The editorial thesis, promoted to canonical:

Each issue defines a measurable claim, stress-tests it against evidence, and gives the reader a diagnostic they can apply to their own work. The Ch 3 principle — automate evaluation before work — becomes the writing discipline: before the issue recommends adopting a tool, technique, or framework, it hands the reader the scorecard that would tell them whether it worked.

Practically this means every issue closes with what Ch 9 calls a “Before Monday Noon” action: a concrete verb and a target. No “stay curious.” The target is the reader’s own targeting-system contribution — the thing they can measure on Monday that they couldn’t measure on Friday.

RDCO implications

Cross-references