
2026-04-18 · backlog · status: active

Concept-Page Candidates Backlog

A capture surface for cross-domain patterns and synthesis-worthy clusters that surface from /check-board cycles, /process-newsletter, /process-youtube, /cross-check, and /deep-research subagents. The default failure mode without this file: ideas surface in subagent reports, get relayed to the founder via Discord, then evaporate.

Lifecycle: Subagent surfaces a candidate → I append here → founder reviews periodically → ripe candidates get promoted to actual concept pages at ~/rdco-vault/06-reference/concepts/<topic>.md → entry marked status: written and the page wikilinked.

Promotion bar: A candidate is “ripe” when it has 3+ supporting sources from independent communities/voices in the vault. 1-2 sources is a hunch; 3+ is a pattern; 5+ is canon.

Replacement plan: This file is interim. The “B” build is a Notion “Concept Pages Backlog” DB with the same lifecycle but proper triage UX, auto-promotion to the /improve cron, and an Approved → drafted-by-Mon-cron pipeline. See entry MA-001 below.


Active candidates (newest first)

CA-026 — Representation-is-not-the-object (topological-space framing for AI embeddings)

Surfaced: 2026-04-20 cycle 48 (3Blue1Brown backfill — This open problem taught me what topology is)

Synthesis: Sanderson closes the inscribed-rectangle proof with a pedagogically load-bearing admission: a Möbius strip is not a shape — it is a topological space, an infinite family of shapes connected by equivalence under continuous deformation. The familiar half-twist paper strip, Dan Asimov’s snail-shell embedding with a planar circular edge, and the abstract “unordered pairs of points on a closed loop” are all the same Möbius strip. Direct category-level map to AI embeddings: a token’s vector representation is not the concept — it’s one point in an infinite family of equivalent vectors (equivalent under orthogonal rotation of the full embedding space, equivalent under compositional context, equivalent under paraphrase-preserving fine-tune). RDCO AI writing has historically defaulted to “the embedding IS the concept” language, which is the same category error as “the half-twist paper strip IS the Möbius strip.” The pattern generalizes: the rhombic dodecahedron in the Five-Puzzles video is a projection of a 4D hypercube (also not one shape but a projection of something higher); diffusion models operate over an image manifold that’s not a single image but an equivalence class; LLM embeddings are defined up to rotation. The fix: when writing about AI representations, always surface the equivalence class, not the single realization.

Sources (2 in-vault, 3+ when ripe): 2026-04-20-3blue1brown-topology-open-problem (canonical — Möbius strip as infinite family of shapes connected by continuous equivalence), 2026-04-20-3blue1brown-five-puzzles-thinking-outside-the-box (rhombic dodecahedron = one realization of 4D hypercube projection; hexagonal rhombus-tiling = one realization of cube-stack projection) — pending: an LM-embedding rotation-invariance source (e.g., word2vec / BERT embedding space is only defined up to orthogonal transformation), a diffusion-manifold source making the equivalence-class-of-images explicit

Status: Inbox — 2 sources, below the 3+ ripeness bar; flagged because the category error in RDCO AI writing is immediately fixable once the vocabulary exists

Why founder cares: Direct editorial upgrade for every Sanity Check piece touching embeddings or representations. Replaces “the embedding represents the concept” with “the concept is a topological equivalence class; the embedding is one point you happened to land on.” Also pairs with CA-014 (high-dim surface concentration) and CA-022 (binary-decision-around-continuous-probability) as a trilogy on what it actually means to work in high-dim AI representation space.
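The rotation-equivalence claim is directly checkable. A minimal numpy sketch (toy random embeddings, not a real model): pairwise cosine geometry survives an arbitrary orthogonal rotation of the whole space, so the “concept” is the orbit under rotation, not the particular point you landed on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "embedding table": 5 tokens in an 8-dim space (invented numbers).
E = rng.normal(size=(5, 8))

# A random orthogonal matrix Q, via QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))

def cosine_matrix(M):
    """Pairwise cosine similarities between the rows of M."""
    n = M / np.linalg.norm(M, axis=1, keepdims=True)
    return n @ n.T

# Rotating every embedding by Q gives different coordinates in weight space
# but identical pairwise geometry: E and E @ Q realize the same "concept"
# structure, just as two deformations realize the same Möbius strip.
assert np.allclose(cosine_matrix(E), cosine_matrix(E @ Q))
```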

CA-025 — Emergent macrostate from local microrules

Surfaced: 2026-04-20 cycle 48 (3Blue1Brown backfill — Simulating and understanding phase change, guest lecture by Vilas Winstein)

Synthesis: A minimal local rule (each cell prefers to have neighbors; a single scalar temperature modulates how much it cares) plus a correct sampling algorithm (Kawasaki Dynamics / MCMC) plus a second scalar parameter (chemical potential / external field) produces a full 2D phase diagram that matches real H2O qualitatively — including a supercritical-fluid region where behavior varies smoothly and a phase-transition line where it varies discontinuously. Vilas Winstein makes the principle of universality explicit: “most specific details of a model shouldn’t actually be too important — there are usually only a few fundamental microscopic rules that you need in order to see the same macroscopic behavior, at least qualitatively.” The Ising-model identification (same simulation, different physical interpretation — up/down magnets instead of molecule/empty) IS this principle in operational form. The pattern generalizes across AI: diffusion-model sampling IS structurally Kawasaki Dynamics; neural-network training emerges from local weight-update rules; multi-agent aggregate behavior is a macrostate over per-agent microrules. Three operationally concrete corollaries for RDCO: (a) LM sampling temperature is thermodynamic temperature — at T→0 the model minimizes cross-entropy (liquid, crystalline, deterministic); at T→∞ it maximizes entropy (gas, noise); real values sit in the supercritical-fluid region. (b) Metastability is the failure mode for long-running cron loops — a system can stay in the wrong macrostate indefinitely without an external kick; periodic /improve / self-review / audit runs are the kicks. (c) The critical-brain hypothesis — brains may operate near criticality with fractal self-similar structure; well-architected agent systems should sit near criticality, with enough local ordering that coherent reasoning emerges and enough stochastic variation that they don’t freeze into a single-pattern loop. The three points collectively ground the harness thesis in a rigorous statistical-mechanics substrate.

Sources (3 in-vault, 4+ when ripe): 2026-04-20-3blue1brown-simulating-phase-change-vilas-winstein (canonical physics exemplar — Boltzmann / Kawasaki / universality / metastability / criticality stack in one 41-min lecture), 2026-04-20-3blue1brown-but-how-do-ai-images-and-videos-actually-work (diffusion as Markov-chain sampling from an intractable distribution — structural twin of Kawasaki Dynamics), 2026-04-20-3blue1brown-but-what-is-a-neural-network (emergent network behavior from local weight-update microrules) — pending: an Ising-in-AI source (Poole et al. on signal propagation, or Saxe et al. on deep learning criticality), or an IndyDevDan multi-agent-emergence source

Status: Inbox — 3 sources (canon-tier minimum), very likely ready to promote after one additional source lands; currently pending because the three in hand are all from the 3B1B cluster, which is the same confidence cap flagged in CA-014 (a mild caveat, since the AI objects come from Winstein vs Welch vs Sanderson). Strong enough to flag publicly in the cycle-48 report even at 3-source inbox.

Why founder cares: Operational vocabulary for three concrete RDCO upgrades — (a) the LM temperature mental model upgraded from “creativity knob” to “thermodynamic temperature on the phase diagram”; (b) cron-loop stability framed as metastability avoidance with periodic kicks (pairs with CA-019 design-for-controlled-decay); (c) agent-system architecture framed as criticality tuning (pairs with CA-007 state-as-path-dependent and the harness-thesis cluster). Also a very strong standalone Sanity Check angle: “Why Your Data Pipeline Has a Phase Diagram” — the 1,600-word piece that teaches senior data-engineering intuition from the liquid-vapor simulation.
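Corollary (a) in executable form — a hedged sketch with invented logits: softmax-with-temperature is a Boltzmann distribution, and sweeping T walks the phase diagram from frozen argmax to uniform noise.

```python
import numpy as np

def softmax_T(logits, T):
    """Boltzmann / softmax distribution over logits at temperature T."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()            # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = np.array([2.0, 1.0, 0.5, -1.0])   # hypothetical next-token logits

cold = softmax_T(logits, T=0.01)   # ~deterministic: nearly all mass on argmax
warm = softmax_T(logits, T=1.0)    # the "supercritical" working region
hot  = softmax_T(logits, T=100.0)  # ~uniform: maximum entropy, pure noise

assert np.argmax(cold) == np.argmax(logits) and cold.max() > 0.999
assert np.allclose(hot, 0.25, atol=0.01)
```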

CA-024 — Verifier-as-epistemology

Surfaced: 2026-04-20 cycle 44 (3Blue1Brown Euclid backfill — subagent ad429b191b7512512)

Synthesis: Three independent traditions (ancient mathematics / modern AI research / RDCO production system) converge on the same architecture: a creative proposer paired with a deterministic verifier, with the epistemic load carried by the verifier, not the proposer. Euclid’s Elements: every proposition is a finite ruler-and-compass construction; the antagonistic Greek skeptic with his own kit is the verifier, and the verifier is the epistemology (not the axioms). The collapsible-compass discipline forbids smuggling ungrounded length across a lift. AlphaGeometry: DD+AR alone solves 14/30 IMO geometry problems; +75 heuristics → 18/30; +LM → 25/30 (silver). The LM proposes auxiliary constructions; the symbolic engine mechanically verifies. Strip the verifier → the LM hallucinates plausible-looking wrong proofs. audit-newsletter-outputs.py: 13 invariants, pure stdlib + pyyaml, ZERO LLM calls — the deterministic verification layer for the RDCO newsletter pipeline; its failure modes are mechanical, not mimetic. The thesis: knowledge is what a cheap, mechanical verifier — one whose failure modes are independent of the asserter’s — can replay and confirm. Operational frame for Kingsbury’s “verification-layer LLM contamination” critique: the fix isn’t “write better prompts,” it’s “ensure at least one layer in the stack has no LLM in its causal chain.” Three implementation tests for any RDCO skill: (1) Does it have a deterministic verifier? (2) Was the verifier written without LLM assistance? (3) If the LLM is wrong about everything, can the verifier still catch a wrong output? Three yeses passes; anything else is a layered-defense gap.

Sources (3, canon-tier promotion-bar met): 2026-04-20-3blue1brown-what-was-euclid-really-doing (2,300-year-old historical exemplar; Syversen/Blåsjö construction-as-subroutine reframe; Lean as modern continuation), 2026-04-20-3blue1brown-imo-geometry-alphageometry-aleph0 (AlphaGeometry DD+AR as the modern neuro-symbolic LM-proposes-verifier-disposes pattern), ~/.claude/scripts/audit-newsletter-outputs.py (RDCO’s own production deterministic-audit layer; the concrete answer to Kingsbury). Pending fourth from formal-methods literature (Lean, Coq, TLA+) to harden the historical-continuity thread.

Status: Drafted 2026-04-20 — see verifier-as-epistemology. The most editorially loaded concept of the day; the philosophical underwriting of the entire harness thesis. Pairs with layered-defense-architecture (CA-016) — verifier-as-epistemology is the why, layered-defense is the how.

Why founder cares: Operational gate criterion for every new skill — the three implementation tests should be checkpoint questions in the skill-creator template. Also lands the Kingsbury rebuttal as an architectural commitment rather than a one-off script. Strong Sanity Check angle (“Every AI Tool That Claims to Reason Needs a Ruler and Compass”) leveraging the 4th-century-BC Greek skeptic + Lean + AlphaGeometry rhetorical spine.
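The three-test shape reduces to a few lines. A toy sketch (hypothetical factorization task with a stand-in proposer; not AlphaGeometry and not the audit script): the proposer can be arbitrarily wrong, and the stdlib-only verifier still returns a correct pass/fail because its failure modes are independent of the proposer’s.

```python
def untrusted_propose_factors(n):
    """Stand-in for any creative proposer (LLM, heuristic, human).
    Deliberately unreliable: returns a wrong factorization for most inputs."""
    return [3, 7] if n == 21 else [n, 2]

def verify_factorization(n, factors):
    """Deterministic verifier: pure arithmetic, no LLM in the causal chain.
    Replays the claim mechanically and confirms or rejects it."""
    product = 1
    for f in factors:
        if f < 2:               # reject trivial "factors" like 1
            return False
        product *= f
    return product == n

# The epistemic load sits on the verifier: a correct proposal passes,
# a wrong proposal is caught mechanically, with no judgment involved.
assert verify_factorization(21, untrusted_propose_factors(21))
assert not verify_factorization(15, untrusted_propose_factors(15))
```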

CA-024b — Indirect inference as the discipline of measurement (“never look at x, look at y and how x impacts y”)

Surfaced: 2026-04-20 cycle 64 (3Blue1Brown backfill — the Tao explainer pair: cosmic-distance-ladder + cosmological-measurements); strengthened cycle 70 with Practical Engineering Water Recycling — the engineering-domain third source that breaks the single-cluster (3B1B) caveat and meets the canon-tier promotion bar. Tao states the principle explicitly at [01:02] of the part-1 video: “If you want to measure the distance to x, you can never just look at x. You have to look at y and how x impacts y.” The two Tao videos collectively walk eleven rungs of the same indirect-inference move (Eratosthenes’ shadow, Aristarchus’ eclipse-geometry for the Moon, Aristarchus’ phases-of-the-Moon angle for the Sun, Kepler’s 687-day Mars-jigsaw for orbit shapes, Venus-transit duration for the AU, Roemer’s Io-eclipse offset for the speed of light, parallax for nearby stars, Hertzsprung-Russell color-to-absolute-brightness for distant stars, Cepheid period-luminosity standard-candles for other galaxies, Hubble redshift for cosmological distances, gravitational-wave standard sirens for the Hubble cross-check). The water-recycling video extends the pattern into a fundamentally different domain: water-quality measurement is entirely indirect. 3 sources, canon-tier promotion bar met.

Synthesis: The discipline is measurement-by-proxy with explicit chains of inference. Each rung uses the previous as the reference frame, exploiting some periodicity, regularity, or physical invariant in the proxy quantity to back out the target quantity. The Aristarchus failure-mode (right method, wrong scale: he was off by an order of magnitude on the Sun’s distance because he couldn’t clock the half-Moon to the half-hour) is the canonical “good methodology + bad calibration = wrong answer” case study. The Hubble-tension closing (gravitational-wave standard sirens disagree with redshift Hubble by ~10%, and nobody knows why) is the canonical “two well-justified independent methods disagree, and the disagreement is the most interesting datum” case study. The water-recycling extension lands the same pattern in a domain readers can taste (literally): you can’t directly observe whether reused water is “safe to drink” — you measure proxy quantities (turbidity, organic carbon, specific assays) to back out the target quantity. The contaminants-of-emerging-concern problem is the Aristarchus failure-mode in 2025 production — right metric (regulated contaminants), wrong distribution (actual hazards shifted to unregulated trace pharmaceuticals/PFAS), wrong by an order of magnitude in the closed-loop accumulation case. The “environmental buffer” concept names the proxy-method of measurement-by-natural-test-instrument (river dilution + sunlight disinfection + bacteria as a free continuous bioassay). All three case studies map cleanly onto modern AI eval discipline: AI evals are indirect inference (we instrument behavior because we can’t directly observe the model’s reasoning), they often suffer from Aristarchus-scale failure (right metric, wrong distribution = wrong by an order of magnitude in production), and when independent eval methods disagree by 5-10%, that disagreement is the signal, not the noise. The Cepheid period-luminosity law is the canonical “find an invariant in your calibrated range, then trust it past the calibration boundary” move — directly applicable to AI eval design.

Sources (3, canon-tier promotion-bar met): 2026-04-20-3blue1brown-tao-cosmic-distance-ladder (lower rungs — Eratosthenes through Kepler; the load-bearing principle stated explicitly), 2026-04-20-3blue1brown-tao-cosmological-measurements (upper rungs — Venus transit through Hubble + GW standard sirens; the modern-astronomy continuation, plus the live Hubble-tension anomaly), 2026-04-20-practical-engineering-how-water-recycling-works (engineering-domain third source — water-quality-by-proxy, contaminants-of-emerging-concern as the Aristarchus-failure-mode in production, environmental-buffer as natural-test-instrument). Pending fourth could come from AI-eval literature (LM-as-judge vs human eval triangulation, calibration-curve cross-checks) or telemetry/observability literature (OpenTelemetry RED, dbt observability).

Status: Ripe (3 sources, canon-tier promotion-bar met) — ready to draft as ~/rdco-vault/06-reference/concepts/indirect-inference.md. Cluster-source caveat now lifted (2-of-3 in 3B1B, 1-of-3 in PE — different domain + different author). Strong newsletter-angle candidates: “Henrietta Swan Leavitt’s AI Eval Trick” (the Cepheid law as the canonical example of finding an invariant in your calibrated range and using it to extrapolate); or “Your Production AI Has a Pharmaceuticals-in-the-Water Problem” (water recycling as the visceral analogy for closed-loop training-data accumulation in LLMs). Adjacent to CA-022 (binary-decision-around-continuous-probability) — the Hubble tension’s “don’t collapse two disagreeing methods to one number” maps directly to CA-022’s “don’t collapse a probability distribution to a binary.”

Why founder cares: Direct line to RDCO eval-design discipline. The Aristarchus failure-mode is the most memorable historical anchor for the “right metric, wrong distribution” production-eval failure mode. The Hubble tension is the most memorable historical anchor for the “expose the disagreement, don’t average it” cross-eval discipline. The Cepheid period-luminosity law is the cleanest historical anchor for the “find a calibrated invariant, extrapolate past the boundary” eval-design move. All three are usable in newsletter copy without the reader needing to follow the math — the historical narrative carries the lesson.
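The Cepheid move (“calibrate an invariant in range, trust it past the boundary”) as a toy numpy sketch. All numbers are invented: a synthetic period-luminosity law and inverse-square dimming stand in for the real astronomy, but the shape of the inference is the same — never look at distance, look at (period, apparent flux) and how distance impacts flux.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "period-luminosity law": intrinsic brightness L = a*log(P) + b.
# (Toy coefficients; the real Cepheid calibration is more involved.)
a_true, b_true = 2.5, 1.0
log_P_near = rng.uniform(0.5, 2.0, size=20)
L_near = a_true * log_P_near + b_true + rng.normal(0, 0.05, size=20)

# Calibrate the invariant on the rung where distance is already known
# (the "parallax" regime), via a least-squares line fit.
a_fit, b_fit = np.polyfit(log_P_near, L_near, deg=1)

# A distant star: period is directly observable; distance is not.
log_P_far, distance_true = 1.7, 3000.0
L_far = a_true * log_P_far + b_true            # true intrinsic brightness
apparent = L_far / distance_true**2            # inverse-square dimming

# Extrapolate the calibrated invariant past the calibration boundary
# and back out the unobservable quantity from the proxy.
L_inferred = a_fit * log_P_far + b_fit
distance_est = np.sqrt(L_inferred / apparent)
assert abs(distance_est - distance_true) / distance_true < 0.05
```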

CA-023 — Coordinate change as the core problem-solving move (pick the geometry where the problem is trivial)

Surfaced: 2026-04-20 cycle 41 (3Blue1Brown backfill — Why colliding blocks compute pi + How (and why) to take a logarithm of an image); promoted to canon-tier cycle 44 (Five puzzles for thinking outside the box — three fresh 2D→3D lifts + explicit meta-thesis); strengthened cycle 48 with a fourth source — the topology-open-problem video extends the pattern one level up from “coordinate change” to “representation change” (lift into a topological space where self-intersection = solution); strengthened cycle 51 with two more sources — the Laplace-series chapters 2 and 3 (j0wJBEZdwLs + FE-hM1kRK4Y) — adding the explicit change-of-basis-into-exponential-pieces and change-of-domain-from-time-to-s-plane exemplars; strengthened cycle 64 with the Laplace-series chapter 1 prerequisite (-j8PzkZ70Lg, “The Physics of Euler’s Formula”) — the prerequisite chapter that justifies why exponentials are the natural basis (the “atoms of calculus” frame), via the geometric x’ = i*x → unit-circle derivation of Euler’s formula and the damped-harmonic-oscillator dumb-trick that motivates the Laplace transform. 7 sources total, well past canon-tier + the “very strong promotion bar” (5+ sources).

Synthesis: The meta-move: the problem is hostile in the coordinates you arrived with; pick coordinates where the symmetry becomes manifest and the answer falls out. Colliding blocks: rescale the (v1, v2) axes by sqrt(mass), the energy ellipse becomes a circle, and the inscribed-angle theorem + small-angle approximation trivializes the collision-count question. Logarithm of an image: apply the complex logarithm to Escher’s Print Gallery, and the impossible self-referential distortion unwraps to a flat strip with a fixed point at the blank spot. Five puzzles (cycle 44): three 2D→3D lifts in one video — (a) rhombus tilings of a hexagon = projections of cube-stacks, max-moves answer is n³; (b) Tarski-plank strips on a disk = hemispherical caps with area = π × width via Archimedes’ cylinder projection, minimum sum = 2; (c) Monge’s three-circle collinearity = cones with apex-at-center-of-similarity and tangent plane; plus a 3D→4D lift where rhombic dodecahedron = projection of hypercube along (1,1,1,1), max-moves = n⁴. Sanderson’s own meta-framing (“step into a higher dimension”) plus the closing analysis-vs-intuition caveat (we can do 2D→3D because we have 3D intuition as creatures; 4D+ loses the intuition shortcut and becomes pure analysis). Grant’s explicit colliding-blocks closing: “distilling a problem into its core essence can expose hidden connections.” Map to Ray Data Co: log-transform a skewed price series (multiplicative → additive), SVD on correlated features (axis-align the variance), embed tokens (sparse one-hot → dense near-orthogonal), change-of-basis in any dashboard (revenue → revenue-per-cohort). The failure mode for data teams is staying in the original coordinates because the business question was phrased there — treating the coordinate system as given rather than chosen. The newsletter angle: “The hardest problems in your data aren’t problems, they’re bad coordinate systems.” The analysis-vs-intuition caveat is the honest cap on the pattern: some coordinate changes lose intuitive guidance (thousand-dimensional embeddings) and become daunting.

Sources (7, canon-tier + 5+ promotion bar): 2026-04-20-3blue1brown-why-colliding-blocks-compute-pi (state-space + sqrt-mass rescaling turns energy ellipse into circle — physics→geometry), 2026-04-20-3blue1brown-how-and-why-to-take-a-logarithm-of-an-image (complex log unwraps Escher’s self-referential distortion — pure-math unwrap), 2026-04-20-3blue1brown-five-puzzles-thinking-outside-the-box (three 2D→3D lifts in one video + meta-thesis + analysis-vs-intuition caveat — canonical pattern-explicit source), 2026-04-20-3blue1brown-topology-open-problem (inscribed-rectangle proof via 3D surface whose self-intersections ARE the rectangles + Möbius strip lift + Klein-bottle impossibility — generalizes the pattern from “coordinate change” to “representation change into a topological space”), 2026-04-20-3blue1brown-but-what-is-a-laplace-transform (Laplace transform as change-of-basis into the exponential-function basis; poles expose the basis coefficients — the deep-structure exemplar of “change of basis is the whole game”), 2026-04-20-3blue1brown-why-laplace-transforms-are-so-useful (driven damped oscillator solved by change-of-domain from t-domain to s-plane — differential expressions become polynomial algebra; the most operationally explicit statement of CA-023 in the corpus: “differential expressions turn into polynomials, and polynomials are algebra”), 2026-04-20-3blue1brown-physics-eulers-formula-laplace-prelude (chapter-1 prerequisite — the “atoms of calculus” frame and the geometric x’ = i*x → unit-circle derivation of Euler’s formula; supplies the why-exponentials-are-natural-coordinates justification that the chapter-2/chapter-3 Laplace pair builds on). Adjacent but distinct from CA-012 (Notation is the conceptual move) — that one is about naming, this one is about geometry / representation.

Status: Drafted 2026-04-20 — see coordinate-change-as-core-move. 7 sources, all 3B1B (honest caveat: the same single-author-cluster caveat as CA-014 / verifier-as-epistemology — the pattern is canonical in the math + CS literature but the vault evidence is single-author). The cycle-44 Five Puzzles is the pattern-explicit anchor; the cycle-51 Laplace pair is the operational anchor (inner = change of basis, outer = change of domain). Next strengthening: Polya’s How to Solve It, a multivariable change-of-variables textbook, Fourier/Laplace outside 3B1B.

Why founder cares: Direct rhetorical on-ramp for feature-engineering and representation-learning pieces. The “Escher distortion unwraps to a strip” mental image is more memorable than any “log transforms multiplicative to additive” explanation, and it pre-sells why embedding geometry matters for the data-engineering audience that defaults to raw-feature reasoning. The Five-Puzzles analysis-vs-intuition caveat is the honest frame for how this pattern scales (or doesn’t) into LM embedding-space territory.
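The first Ray Data Co row of the map (log-transform a skewed price series) as a minimal sketch, with an invented 4%-compounding series: the growth rate is hostile to a straight-line fit in raw coordinates, and is literally the slope once the multiplicative structure is made additive.

```python
import numpy as np

rng = np.random.default_rng(2)

# A multiplicative process: 4% compounding growth with multiplicative noise
# (hypothetical price series, 200 time steps).
t = np.arange(200)
price = 100 * 1.04**t * np.exp(rng.normal(0, 0.01, size=200))

# Chosen coordinates: the log turns multiplicative structure into additive
# structure, so the exponential curve becomes a straight line and the
# growth rate falls out of a plain least-squares slope.
slope, intercept = np.polyfit(t, np.log(price), deg=1)
growth_rate = np.exp(slope) - 1
assert abs(growth_rate - 0.04) < 0.005
```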

CA-022 — Binary-decision-around-continuous-probability anti-pattern

Surfaced: 2026-04-20 cycle 27 (Practical Engineering backfill — An Engineer’s Perspective on the Texas Floods); strengthened cycle 30 with two AI-domain canonical sources (LLM logit-to-argmax, diffusion mean-collapse); strengthened cycle 73 with Hurricane vs Tiny Houses (Cox’s “5x return-period ≠ 5x elevation” misperception observation — the cleanest operational counter-rule for the anti-pattern, with the visceral two-houses-3-ft-apart visual)

Synthesis: Floodplain maps draw crisp binary lines (in / out) around an inherently continuous and uncertain probability gradient. Property owners spend significant resources to shift the line slightly because crossing it changes regulatory burden — yet the underlying risk profile a foot inside vs a foot outside is essentially identical. Grady’s load-bearing question: “What’s the difference in risk profile between just-inside-the-line and just-outside? Is it enough to have a sharp line between them? And if not — if the true situation is more nebulous — is the map doing a good job of communicating risk to the public?” Same shape at the agent-system layer: rate-limit boundaries, retry/no-retry decisions, escalate/don’t-escalate triage, mark-task-done thresholds — all are hard-coded thresholds that act as binary lines around what’s fundamentally a probability gradient. Cycle 30 AI-domain extensions: (a) LLM logit-to-argmax collapse — Grant Sanderson explicitly notes “instead of predicting one word with certainty, [an LLM] assigns a probability to all possible next words”; every chatbot UI in production then collapses the probability distribution to a sampled or argmax single token, discarding the calibration signal. (b) Diffusion mean-collapse — Welch Labs demonstrates that removing the random-noise step from DDPM image generation collapses every generated point to the mean of the training distribution (the “tiny sad blurry tree” exemplar). The model learned the mean of a Gaussian; sampling from the Gaussian preserves the distribution; collapsing to the mean destroys it. Same anti-pattern, three independent canonical exemplars now (engineering, autoregressive AI, generative AI). Concrete RDCO heuristic: when a decision is fundamentally probabilistic, expose the probability alongside the decision. Every /check-board triage decision could carry a confidence score; every /audit-newsletter-outputs flag could carry a probability rather than a yes/no.

Sources (3, promotion-bar met): 2026-04-20-practical-engineering-an-engineers-perspective-on-the-texas-floods (canonical case — NFIP floodplain maps as binary lines around continuous probability), 2026-04-20-3blue1brown-large-language-models-explained-briefly (LLM logit distribution → argmax/sampled-token UI collapse — autoregressive-AI canonical exemplar), 2026-04-20-3blue1brown-but-how-do-ai-images-and-videos-actually-work (diffusion mean-collapse → blurry-tree exemplar; sampling preserves distribution, mean-collapse destroys it — generative-AI canonical exemplar)

Status: Drafted 2026-04-20 — see binary-decision-around-continuous-probability. 3 sources (canon-tier minimum, just barely promoted). Two-of-three in the 3B1B cluster, noted as the honest confidence cap; a Brier-score / calibration-literature source flagged as the explicit fourth to chase. Lands the operational heuristic (expose calibration, don’t collapse) and maps to the cycle-27 probabilistic-/check-board /improve proposal as the concrete first RDCO fix.

Why founder cares: Direct heuristic for skill design — replace hard-threshold boolean returns with calibrated-probability returns, surfacing the underlying uncertainty to downstream consumers. Also a strong Sanity Check angle (“The 100-Year Anything Is a Lie” — or, with the AI extensions: “Whenever You Collapse a Probability Gradient to a Binary Decision, You’re Throwing Away the Most Expensive Signal the Model Produced”) that lands hard with the data-engineering audience that defaults to crisp uptime / latency / MTBF claims.
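The “expose the probability alongside the decision” heuristic, sketched as a return-type change (hypothetical triage function and threshold; not RDCO’s actual /check-board code): the binary line is still drawn, but the continuous signal travels with it instead of being discarded.

```python
from dataclasses import dataclass

@dataclass
class Triage:
    """A decision that keeps its calibration signal instead of discarding it."""
    escalate: bool
    p_incident: float   # the continuous signal the threshold was applied to

def triage(p_incident: float, threshold: float = 0.5) -> Triage:
    # Downstream consumers still get a decision, but just-inside-the-line
    # and just-outside-the-line remain distinguishable.
    return Triage(escalate=p_incident >= threshold, p_incident=p_incident)

a, b = triage(0.49), triage(0.51)
# The bare booleans claim these two cases are maximally different...
assert a.escalate != b.escalate
# ...while the preserved probabilities show they are nearly identical.
assert abs(a.p_incident - b.p_incident) < 0.05
```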

CA-021 — Premature commitment (build-the-cheap-reversible-piece-first)

Surfaced: 2026-04-20 cycle 27 (Practical Engineering backfill — California’s Tallest Bridge Has Nothing Underneath)

Synthesis: Foresthill Bridge is the canonical exemplar — the 700-ft cantilever bridge built in 1973 as a necessary precursor to the Auburn Dam, which then died over the next two decades (Oroville earthquake → Teton Dam collapse → cost balloon → environmental opposition → 2008 permit revocation). The dam was abandoned; the bridge survives as California’s perpetual maintenance liability (2010s seismic retrofit, T1 steel weld inspection program). The Bureau of Reclamation built Foresthill because it was the inevitable, future-proof choice — they were optimizing for “build it right the first time” before the project’s load-bearing assumption (the dam is feasible) had been validated. The inverse rule: when load-bearing assumptions haven’t been validated by real-world stress tests, build the cheapest reversible piece first, not the cheapest irreversible piece. RDCO equivalent: don’t build the LaunchAgent for a skill until the skill has run successfully end-to-end as a one-shot; don’t add a Notion DB until the surface has been used by a manual workflow for a week; don’t migrate to a new MCP server until the existing one has demonstrably broken. Add to skill-creator template: “What load-bearing assumption are you betting on, and what’s the smallest reversible test of it?”

Sources (1 in-vault, 3+ when ripe): 2026-04-20-practical-engineering-californias-tallest-bridge-has-nothing-underneath (canonical — Foresthill Bridge as monument to abandoned Auburn Dam) — pending: any post-mortem of an MVP-architecture-that-became-the-permanent-architecture, the “we built the auth system first, then the product never shipped” pattern, the dbt-project-with-no-customers pattern, or any IndyDevDan/Cobus piece on premature-platform commitment

Status: Inbox — 1 source, below the 3+ ripeness bar; flagged because the discipline (build cheap reversible before cheap irreversible) is immediately actionable for /build-skill and /build-project gate criteria

Why founder cares: Direct gate criterion for new skill / new infrastructure decisions. Adds a “What load-bearing assumption are you betting on, and what’s the smallest reversible test of it?” question to the build-skill template. Also a strong Sanity Check angle (“Bridges Without Dams”) for the data-engineering audience that has lived through monolithic data warehouses, abandoned dbt projects, and unused Airflow DAGs that became perpetual maintenance liabilities for capabilities never built.

CA-020 — Pure-agentic application as a distinct architectural pattern

Surfaced: 2026-04-20 cycle 25 (IndyDevDan backfill — The Library Meta-Skill); refined cycle 27 with mac-mini-agent (the OS-primitive vs behavior-layer partition)

Synthesis: A pure-agentic application has its behavior layer in markdown — SKILL.md describing intent + a YAML/markdown reference data structure + a cookbook directory of per-command markdown that the LLM agent reads and executes directly. Cycle 27 refinement (mac-mini-agent): Pure-agentic does NOT mean zero compiled code — OS-primitive access (GUI control via macOS Accessibility, terminal control via tmux, network listening) is irreducibly compiled because the OS doesn’t expose a markdown API. The right partition is: OS primitives = compiled binary; agent behavior = SKILL.md. Dan’s mac-mini-agent is the cleanest exemplar — 4 CLIs (compiled / scripted access to OS primitives) + 2 SKILL.md files (behavior layer). Dan’s library skill is the fully pure-agentic exemplar at the distribution layer: 6 commands (add, use, push, list, search, sync) implemented as agent reasoning over markdown plus git operations. The architectural implication: the bar for “this should be a Python script” vs “this should be a SKILL.md” is about whether the work is behavior (always SKILL.md) or primitive access (compiled if no other API). Compare to the /process-newsletter skill which has both an editorial markdown layer AND a Python audit script — the future-form would push more of the audit behavior into agent reasoning over JSON contracts while keeping the JSON read/write as compiled code.

Sources (3 in-vault): 2026-04-20-indydevdan-library-meta-skill (canonical fully-pure-agentic example — library skill as pure-agentic distribution layer), 2026-04-20-indydevdan-agent-experts-self-improving (expertise.yml + agent reasoning, no compiled code, pure-agentic at the expertise layer), 2026-04-20-indy-dev-dan-mac-mini-agents-openclaw-nightmare-skills-instead (the OS-primitive vs behavior-layer partition — 4 CLIs + 2 SKILL.md as the right division of labor for device-control agents) — pending: a future IndyDevDan or Cobus Greyling piece on prompt-as-application, or an Anthropic blog on skill-only Claude Code patterns

Status: Ripe (3 sources, promotion-bar met); ready to draft as ~/rdco-vault/06-reference/concepts/pure-agentic-application.md. Mac-mini-agent source clarifies the OS-primitive partition that was ambiguous in earlier sources.

Why founder cares: Direct heuristic for new-skill design: every new tool should fail a “does this need code?” test before the first line of Python — and when the answer is “yes, for OS primitives,” the compiled code should be a thin wrapper exposing primitives to a SKILL.md behavior layer. Reduces dependency surface, makes skills agent-harness-portable (Claude Code / PI / Cursor / Cline interchangeable), enables the library-as-distribution pattern (CA-013-adjacent). Also a strong Sanity Check angle for the data-engineering audience that defaults to scripts even when SKILL.md + agent reasoning would do the job.
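For concreteness, a hypothetical sketch of the behavior-layer shape being described (file names and command wiring invented to mirror the library skill; not Dan’s actual files):

```markdown
# SKILL.md — library (hypothetical sketch of the pattern, not Dan’s actual file)

Intent: distribute and manage a shared library of skills. No compiled code in
the behavior layer; the agent reads this file and executes cookbook entries.

Reference data: skills.yml — one entry per skill (name, path, version).

Commands (each a markdown file the agent reads and follows):
- add    → cookbook/add.md    (copy a skill dir into the library, git commit)
- use    → cookbook/use.md    (link a library skill into the current project)
- push   → cookbook/push.md   (git push to the library remote)
- list   → cookbook/list.md   (enumerate skills.yml)
- search → cookbook/search.md (grep skill names and descriptions)
- sync   → cookbook/sync.md   (git pull, reconcile skills.yml)
```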

CA-019 — Design-for-controlled-decay (when prevention is impossible, schedule the cut)

Surfaced: 2026-04-20 cycle 25 (Practical Engineering backfill — Sawing a Dam in Half); strengthened cycle 70 with Practical Engineering Floating Bridges — DOT wind-threshold-driven bridge closure as the schedule-around-environmental-forcing variant of the same discipline. Strengthened cycle 73 with Practical Engineering “Concrete’s Greatest Weakness is Time” — the chemistry-as-cycle-time variant: concrete cure cannot be hurried without paying in heat-induced cracking, rebar corrosion (calcium chloride), or shortened workability. Broadens the cluster from structures-only to materials-also; the failure mode is in the material itself, not the structure. Skyline Plaza 1973 (14 dead) is the canonical “skip-the-cure-because-of-schedule-pressure” case — directly maps to RDCO skip-the-canary-cycle anti-pattern under cron pressure. Synthesis: Distinct from CA-016 (layered defense) and CA-018 (emergent correlated failure). CA-019 is about what to do when prevention isn’t an option — the failure mode is already baked in (Fontana’s reactive aggregates were cast into 2.1M m³ of concrete in 1944; floating bridges are constantly coupled to wind/wave forcing that can’t be engineered away) and the choice is binary: live with degradation, or schedule periodic controlled mitigation. Three variants now visible: (a) TVA Fontana slot-cutting — cut a half-inch slot every ~5 years, monitor with hundreds of instruments, recalibrate via FEA, repeat. Operating principle: “disturb the structure as little as possible per cycle.” (b) Spillway fuse-plug / fuse-gate engineered controlled failure — design a sacrificial layer that fails in a known way before the main structure is at risk. (c) DOT wind-threshold-driven bridge closure on Washington floating bridges — when wind exceeds threshold, the bridge closes to traffic, even if structurally sound. In extreme weather, the bridge itself becomes part of the storm. 
The third variant is the cleanest “you can’t engineer this layer; you have to schedule around it” exemplar — recognition that some failure modes are not engineering problems at all, they’re operational-discipline problems. Map directly to tech-debt management: most teams treat tech debt as binary (fix it / live with it). The TVA + fuse-plug + wind-threshold trio is a richer third option: schedule a small periodic correction (slot-cut), build sacrificial layers (fuse-plug), or schedule around uncontrollable forcing (close the bridge when wind exceeds threshold). Concrete RDCO applications: monthly /compile-vault runs to catch link rot before it requires /vault-health-tier intervention (slot-cut); skill design with sacrificial verifier layers (fuse-plug); rate-limit-aware scheduling that pauses cron skills during Anthropic API congestion windows (wind-threshold). Also pairs with the “/improve should make smallest reasonable correction” guardrail. Sources (3, canon-tier promotion-bar met): 2026-04-20-practical-engineering-sawing-a-dam-in-half (canonical case — Fontana ASR slot-cutting every 5 years; the periodic-mitigation variant), 2026-04-20-practical-engineering-spillway-failed-on-purpose (fuse-plug spillways as engineered-controlled-failure variant), 2026-04-20-practical-engineering-hidden-engineering-floating-bridges (DOT wind-threshold-driven bridge closure as schedule-around-environmental-forcing variant — recognition that some failure modes are operational-discipline problems, not engineering problems) — pending: Erlang’s “let it crash” supervisor pattern, hot-swappable database migrations, blue-green deployments as cut-and-monitor cycles Status: Ripe (3 sources, canon-tier promotion-bar met) — ready to draft as ~/rdco-vault/06-reference/concepts/design-for-controlled-decay.md. 
Cluster-source caveat acknowledged (3-of-3 from Practical Engineering); software-domain fourth source (Erlang let-it-crash, blue-green deploys) flagged as the explicit cross-cluster strengthening to chase. Why founder cares: Direct operating principle for tech-debt and vault-hygiene management. Stops the binary “fix it now or never” pattern. Also feeds a candidate skill /slot-cut that runs a small periodic correction against a designated area (vault, CLAUDE.md, a target skill) on a 30-day schedule. The wind-threshold variant adds the rate-limit-aware-scheduling discipline as a separate feedline into the autonomous-loop architecture.
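The three variants collapse into one dispatch rule. A hedged sketch of that rule only; the precedence order (environmental forcing first, then the sacrificial layer, then the scheduled cut), the thresholds, and all names are assumptions layered onto the TVA / fuse-plug / wind-threshold trio above:

```python
# Illustrative thresholds and names throughout: a sketch of the
# decision rule, not a real scheduler.
def controlled_decay_action(cycles_since_slot_cut: int,
                            load: float,      # 0..1, stress on the sacrificial layer
                            forcing: float,   # 0..1, uncontrollable environmental signal
                            slot_cut_every: int = 5,
                            fuse_trip_load: float = 0.9,
                            forcing_threshold: float = 0.7) -> str:
    # (c) schedule around uncontrollable forcing: close the bridge
    if forcing >= forcing_threshold:
        return "close"
    # (b) sacrificial layer fails in a known way before the main structure is at risk
    if load >= fuse_trip_load:
        return "trip-fuse"
    # (a) smallest reasonable periodic correction, on a fixed cadence
    if cycles_since_slot_cut >= slot_cut_every:
        return "slot-cut"
    return "run"
```

A candidate `/slot-cut` skill would only need the (a) branch; the point of the sketch is that all three mitigations share one control surface.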

CA-018 — Emergent correlated failure from individually-correct local optimizations

Surfaced: 2026-04-20 cycle 23 (Practical Engineering backfill — Do Retention Ponds Actually Work? + cross-link to Spillway Failed On Purpose); strengthened cycle 27 with temporal-correlation sub-pattern (Foresthill / Auburn Dam case) Synthesis: Distinct from CA-016 (layered defense). CA-016 is about defending against failure; this candidate is about not creating new failure modes via well-meaning local optimizations. The canonical civil-engineering case: many on-site detention basins designed with similar outlet controls can synchronize their attenuated peak discharges at downstream confluences and produce a worse aggregate flood than no detention at all. Each basin is locally correct (meets its peak-rate ordinance) and globally harmful (synchronized peaks > unsynchronized natural peaks). Pair with the Asheville bypass-line case from the spillway video (the redundant transmission line and the original line shared a downstream channel as a single failure mode — the surge that took out one took out both). Same shape at the algorithmic-systems layer: microservice retry storms from synchronized exponential backoff, algorithmic flash crashes from correlated trading bots, AWS region-wide outages amplified by retry storms. Cycle 27 sub-pattern: temporal correlation. Auburn Dam’s three project-killers (Oroville earthquake Aug 1975, Teton Dam collapse 1976, Engineering Geologists report April 1976) all hit within a 12-month window. Each shock was independent; the correlated arrival killed institutional momentum. Same shape at the agentic-systems layer: when 3+ unrelated failures hit within a single cron cycle, the issue isn’t any single failure — it’s that the operating environment changed and we’re seeing the lagging signals. The temporal-correlation discipline: when you see clustered unrelated failures, treat the cluster itself as the signal, not the individual incidents.
The fix is regional/coordinated infrastructure (regional detention pond, shared utilities, central coordination layer) for spatial; cluster-detection in the cron-skill audit pipeline for temporal. Sources (3): 2026-04-20-practical-engineering-do-retention-ponds-actually-work (synchronized-attenuated-peaks → worse downstream flood — spatial correlation), 2026-04-20-practical-engineering-spillway-failed-on-purpose (Asheville bypass-line + transmission line correlated knockout via shared downstream channel — spatial correlation), 2026-04-20-practical-engineering-californias-tallest-bridge-has-nothing-underneath (1975-1976 cluster of Oroville earthquake + Teton Dam collapse + AEG report killed Auburn Dam momentum — temporal correlation) Status: Ripe (3 sources, promotion-bar met); ready to draft as ~/rdco-vault/06-reference/concepts/emergent-correlated-failure.md with two distinct sub-patterns (spatial: synchronized-local-actions; temporal: clustered-unrelated-shocks). Why founder cares: Direct audit target for RDCO cron schedule (are jobs firing on the same hour boundary, producing synchronized load spikes? Are 3+ unrelated failures clustering temporally, signaling environment change?) and for skill design (do skills share retry/backoff windows that could synchronize? Does the audit pipeline detect temporal failure clusters?). Also a Sanity Check angle that lands hard with the data-engineering audience because they recognize the pattern from production incidents.
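The temporal-correlation discipline reduces to a sliding-window check that an audit pipeline could run over failure timestamps. A minimal sketch; the one-hour window and the k = 3 threshold are illustrative defaults, not values from the sources:

```python
def failure_cluster(timestamps: list[float], window: float = 3600.0,
                    k: int = 3) -> bool:
    """True when k or more failures land inside one sliding window.
    The cluster itself -- not any individual incident -- is the signal
    that the operating environment changed. Window and k are
    illustrative defaults, not values from the sources."""
    ts = sorted(timestamps)
    return any(ts[i + k - 1] - ts[i] <= window
               for i in range(len(ts) - k + 1))
```

The spatial half of the pattern has an equally small fix: add random jitter to retry/backoff schedules so that individually-correct local timers cannot synchronize.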

CA-017 — Externalized cost as the real engineering metric

Surfaced: 2026-04-20 cycle 23 (Practical Engineering backfill — The Los Angeles Aqueduct is Wild + cross-link to Spillway and Niagara videos); strengthened cycle 25 (Sawing a Dam in Half — ASR as multi-decade externalized cost of locally-sourced aggregates without long-term reactivity testing); strengthened cycle 27 (Foresthill Bridge as the externalized cost made physical — 700-ft bridge built as cheap-precursor for an abandoned dam, now perpetual maintenance liability); strengthened cycle 73 with PE “Concrete’s Greatest Weakness is Time” (Skyline Plaza 1973 collapse — deferred cure-time treated as zero cost on the Gantt chart, externality paid in 14 lives when shoring removed too early on cold-weather-delayed concrete; canonical “deferred-wait-as-invisible-cost” exemplar at the project-schedule layer) Synthesis: An engineering project’s true success metric is total-cost-including-decades-of-externalities, not delivery cost or initial benefit. The LA Aqueduct case is the canonical exemplar: from a narrow engineering perspective the project was an early-20th-century triumph (300-mile pure-gravity conveyance, hundreds of MW incidental hydropower, transformed LA into a world city). The unmetered cost showed up as a >$1B Owens Lake dust-mitigation bill, decades of California Water Wars litigation, broken Owens Valley communities, native displacement, Mono Lake collapse, and a foundational thesis (reliable Sierra snowpack) that climate change is now eroding. Grady’s load-bearing line: “What we sometimes dismiss as red tape around major infrastructure is often completely justified due diligence.” Direct map to AI infrastructure: model-training compute energy, copyright lawsuits, data-labeling labor conditions, single-vendor lock-in risk, environmental load — all paid later in the same legal/political/restoration form.
Pair with the Asheville bypass-line case (correlated-failure cost was unmetered until Helene tipped the spillway), Fontana Dam ASR (cost of sourcing local aggregates without reactivity testing surfaces in 1972 and is paid every 5 years thereafter via slot-cutting in perpetuity), arch-dam abutment-thrust (load-bearing geology must be inspected and rock-bolt-maintained for the dam’s entire life — cost not on initial budget), Foresthill Bridge (the externalized cost made physical — built as cheap precursor for the never-completed Auburn Dam, now California’s perpetual maintenance liability via 2010s seismic retrofit and T1-steel weld inspection program), and Niagara as the positive exemplar (deliberate underuse extends the asset’s life from 12,000 years of recession to multiples of that — externality discipline as a moat). The thesis: every engineering decision should carry a “shadow cost ledger” alongside the balance sheet — costs that are predictable but not on the P&L should be made visible at design time.
Sources (7): 2026-04-20-practical-engineering-los-angeles-aqueduct-is-wild (canonical case — Owens Valley, Owens Lake dust, Mono Basin extension as second-bite trap), 2026-04-20-practical-engineering-spillway-failed-on-purpose (Asheville bypass-line correlated-failure cost was unmetered until Helene), 2026-04-20-practical-engineering-niagara-falls-hidden-engineering (positive pole — deliberate underuse extends asset life by 3x, externality discipline as moat), 2026-04-20-practical-engineering-sawing-a-dam-in-half (ASR as multi-decade externalized cost of local-aggregate sourcing without reactivity testing — paid via periodic slot-cutting), 2026-04-20-practical-engineering-why-no-short-arch-dams (arch-dam abutment-thrust loads as hidden ongoing cost — geology is load-bearing for the dam’s life), 2026-04-20-practical-engineering-californias-tallest-bridge-has-nothing-underneath (Foresthill Bridge as the externalized cost made physical — cheap precursor for an abandoned dam becomes perpetual maintenance liability), 2026-04-20-practical-engineering-how-water-recycling-works (closed-loop accumulation sub-pattern: contaminants of emerging concern — pharmaceuticals, PFAS, personal-care products — as the canonical case of cost externalized by every individual user accumulating to system-level toxicity when the loop closes; direct analog for synthetic-data accumulation in LLM training) Status: Drafted 2026-04-20 — see externalized-cost. 7 sources (canon tier, expanded from 6 in cycle 70). Commits to the civil-engineering → AI-infrastructure isomorphism with explicit confidence caveat (6 of 7 sources from Practical Engineering; AI half interpretive). Anchors on Grady’s “justified due diligence” quote; uses Foresthill as the visceral physical exemplar and Niagara as the positive pole. Cycle 70 addition: closed-loop accumulation sub-pattern.
The water-recycling video adds the canonical case where externalized-cost is invisible per-individual-actor (one pharmaceutical flush is harmless) and catastrophic at system scale via accumulation in a closed loop (parts-per-billion contaminants accumulate to regulatory-threshold concentrations when water is reused enough times). This is the cleanest historical anchor for synthetic-data accumulation in LLM training — every individual AI-generated artifact is harmless, the accumulation across the training-loop is the externalized cost nobody pays for upfront. Paired with CA-021 as the Foresthill-question discipline. More editorial weight than CA-014 or CA-016 — this is the stance, not the tool. Why founder cares: Top-tier Sanity Check angle (LA aqueduct as spine, Foresthill as visceral image, AI-infrastructure analog as body, Grady’s due-diligence quote as closer — 1500-word essay candidate). Also operationally relevant: every RDCO skill design should ask “what shadow costs am I creating that aren’t on the balance sheet?” — analog to the fuse-plug-vs-gated and gravity-test design-doc questions. Foresthill specifically prompts the “bridges-without-dams” audit (which RDCO infrastructure was built as precursor for capabilities we no longer plan to ship?).

CA-016 — Layered-defense architecture for autonomous agent systems

Surfaced: 2026-04-20 cycle (Practical Engineering backfill — The Hidden Engineering of Runways); strengthened cycle 25 with two PE dam pieces adding the structure-class-determines-mitigations and design-for-controlled-decay sub-patterns; strengthened cycle 73 with PE “Hurricane vs Tiny Houses” — adds the emergent-layer-from-graceful-degradation sub-pattern (orange house’s destroyed first floor improvised a new stilts level that bought time) plus the scale-fidelity finding (1/6-scale model showed smooth progressive collapse; 1/3-scale showed fits-and-starts) — directly maps to dry-run-vs-real-cron-cycle testing discipline Synthesis: Aviation engineering treats runway pavement as a stack of independent layers (subgrade → drainage → subbase → base course → surface course), each optimizing one failure mode, plus an EMAS arrestor bed (engineered materials arresting system) at the runway end as the last-resort layer that catches the catastrophe that bypasses every other defense. Three Sept 2025 overruns ended without fatalities solely because the EMAS layer existed. Map this directly onto autonomous agent ops: every skill should be a stack of layered defenses (retry policies, validation hooks, fallback models, graceful-degradation EMAS layers) where each layer targets one failure mode and is cheap relative to the catastrophe it prevents. Today’s en-fr 429 mid-cycle is exactly the failure an EMAS layer would have absorbed (Gemini Flash transcription as the second-layer fallback when YouTube’s translation endpoint 429s). The thesis: ad hoc fallbacks accumulate as technical debt; explicitly designed layered defenses scale.
Cycle 25 sub-patterns: (a) design-for-controlled-decay — when prevention is impossible, schedule a periodic small mitigation cycle (TVA’s Fontana slot-cutting every 5 years vs Hoover’s bigger-one-time fix); (b) structure-class determines available mitigations — gravity dams are slot-cuttable in-flight because vertical slices are independently stable; arch dams require wholesale replacement because the structure is tightly coupled. Same logic applies to skill design: modular SKILL.md files can be edited in-flight; tightly coupled monoliths must be rewritten. Sources (9): 2026-04-20-practical-engineering-hidden-engineering-runways, 2026-04-20-practical-engineering-spillway-failed-on-purpose (engineered-failure-mode hydraulics — fuse plug as 2nd-layer EMAS-equivalent + Asheville bypass-line correlated-redundancy disaster as the load-bearing failure case), 2026-04-20-practical-engineering-niagara-falls-hidden-engineering (4-layer water control: international control dam → diversion tunnels → pumped storage → coffer dam — each layer independently failable, geology as bottom defense), 2026-04-20-practical-engineering-sawing-a-dam-in-half (design-for-controlled-decay sub-pattern: TVA slot-cutting as scheduled small mitigation when prevention is impossible), 2026-04-20-practical-engineering-why-no-short-arch-dams (structure-class-determines-mitigations sub-pattern: gravity-class slot-cuttable, arch-class wholesale-replace), 2026-04-20-practical-engineering-how-water-recycling-works (Wichita Falls 3-plant treatment-chain as canonical layered-defense at the water-treatment level + environmental-buffer-as-implicit-defense-layer sub-pattern: the layer you didn’t know you had until you removed it), 2026-04-20-practical-engineering-hidden-engineering-floating-bridges (the canonical defeat-one-layer-stack-collapses exemplar — 1979 Hood Canal open-hatch sinking + 1990 Lacey V. Murrow removed-watertight-doors re-sinking; deliberate-defeat sub-pattern from the 1990 case + environmental-coupling as third structure-class addition to the arch-vs-gravity sub-pattern), 2026-04-20-indydevdan-pi-agent-teams-harness-engineering (multi-team agents as redundancy + model rotation as failover layer), 2026-04-15-thariq-claude-code-session-management-1m-context (auto-compact + ADWs as Anthropic’s own multi-layer defense) Status: Drafted 2026-04-20 — see layered-defense-architecture. 9 sources (canon tier, expanded from 7 in cycle 70 with two more PE pieces). Wove three sub-patterns: independence-of-failure-modes (Asheville), design-for-controlled-decay (Fontana slot-cutting), structure-class-determines-mitigations (arch-vs-gravity, expanded with environmental-coupling as third class via floating bridges); added temporal correlation (Foresthill/Auburn cluster) as a fourth and deliberate-defeat (1990 Lacey V. Murrow doors-removed-for-good-reason case) as a fifth — operationally most-instructive failure mode because it captures the “but we needed to disable that for just one batch” anti-pattern that recurs in skill-iteration cycles. Concept page should be updated to add the no-temporary-defeat discipline rule (layered defenses must not be temporarily disabled — if you need to defeat one layer, you must temporarily reinforce another) and the environmental-buffer-as-implicit-layer insight from the water-recycling source. Commits to the civil-engineering → AI-agent isomorphism claim with confidence caveat (7 of 9 sources from Practical Engineering cluster). Why founder cares: Operational and immediately actionable for the channels-agent + autonomous-loop architecture. Every existing skill could be retrofitted with explicit layer-by-layer defense documentation. Also a strong Sanity Check angle for data engineering audience — same pattern applies to data pipelines (raw → staged → cleaned → marts → dashboards, with monitoring as the “EMAS” layer).
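The layer-stack maps onto a small runner in which each layer targets one failure mode and the final layer is the arrestor bed that absorbs whatever bypassed the rest. A sketch under assumed names; this illustrates the architecture, not RDCO's implementation:

```python
from typing import Any, Callable

def run_with_layers(task: Callable[[], Any],
                    layers: list[Callable[[Exception], Any]]) -> Any:
    """Try the primary path; on failure, hand the exception down a
    stack of independent defense layers (retry, fallback model, ...).
    Assumes at least one layer; the last one is the arrestor bed and
    must absorb anything, e.g. by degrading gracefully."""
    try:
        return task()
    except Exception as exc:
        for layer in layers[:-1]:
            try:
                return layer(exc)
            except Exception as next_exc:
                exc = next_exc
        return layers[-1](exc)  # last-resort layer: not expected to raise
```

For the en-fr 429 case, `task` would be the YouTube translation call, the middle layers a retry and the Gemini Flash transcription fallback, and the last layer a graceful defer-to-next-cycle degradation.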

CA-015 — Process power in audience-first creator businesses

Surfaced: 2026-04-20 cycle (3Blue1Brown backfill — The Most Beautiful Formula lecture; cross-cluster with Acquired and Practical Engineering) Synthesis: Three independently-built audience-first technical-content businesses — 3Blue1Brown (Manim animation library, 10-year visual-language moat), Acquired (deep-research interview format + research process, 8-year moat), Practical Engineering (field-trip + animation visual style + civil-engineering domain authority, 8-year moat) — express the same Helmer “process power” pattern. Tacit knowledge accumulated over 8-10 years that no amount of capital or talent can replicate at speed. Same shape as the TSMC moat (40 years of semiconductor manufacturing tacit knowledge), scaled down to creator-business size. The thesis: process power is the durable form of moat for audience-first content businesses, and it compounds invisibly because the leading creator’s “look” / “format” / “research depth” gets re-experienced by audiences as inseparable from the brand. Implications for Sanity Check: the moat compounds in the editorial style + research discipline + recurring formats, not in any single piece. Worth investing in the durable craft layer over short-term content output. Sources (3): 2026-04-20-3blue1brown-volume-higher-dim-spheres-most-beautiful-formula, 2026-04-19-acquired-tsmc-remastered (process-power frame applied at industrial scale; same shape at creator scale), 2026-04-20-practical-engineering-hidden-engineering-runways — also adjacent to 2026-04-19-acquired-coca-cola (brand-as-process-power across a century) Status: Inbox — 3 sources, promotion-bar met; could be drafted as ~/rdco-vault/06-reference/concepts/process-power-creator-businesses.md Why founder cares: Direct positioning for Sanity Check at the 5-10 year horizon. The investment thesis for editorial style discipline, recurring-format development, and research-process compounding is exactly this concept. 
Validates time spent on style-guide, structural template, and visual-language work as the moat-building activity.

CA-014 — High-dimensional surface concentration as the load-bearing geometric intuition for ML

Drafted 2026-04-20 — see high-dim-surface-concentration. Entry retained as backlog anchor for future source additions. Surfaced: 2026-04-20 cycle (3Blue1Brown backfill — The Most Beautiful Formula lecture); strengthened cycle 30 with the actual neural-network and LLM and diffusion videos providing the AI objects that live in the geometry; strengthened cycle 33 with the Essence of Linear Algebra Ch 1–3 videos providing the low-dim pedagogical scaffolding the high-dim story generalizes from; strengthened cycle 37 with the Essence of Calculus Ch 1 opener (integral/derivative/FTC prerequisite layer beneath the gradient-descent and attention math the AI trilogy assumes) and the Hairy Ball Theorem (complementary topology result — even spheres have topological constraints on fields that can live on them, pairs well with the measure-concentration story for a Sanity Check piece on high-dim geometry) Synthesis: In high dimensions, almost all of a unit ball’s volume sits in a thin shell near its surface — V_n(0.99)/V_n(1.0) = 0.99^n, which goes to 0 as n grows. By n=100 less than 37% of the ball’s volume is within 0.99r of the center; by n=1000, essentially 0%. This is the geometric fact behind the curse of dimensionality in nearest-neighbor search, the concentration of measure in statistical learning, and the “lonely points” intuition for embeddings. Pair with the related fact that V_n peaks at n≈5.26 and goes to zero as n→∞ (a 100-dim unit ball has essentially zero volume relative to its bounding cube). Together these two facts collapse most lay intuition about high-dim spaces. 
Cycle 30 strengthening: the neural-network video gives the parameter-space object (13,000-dim weight space for a toy MNIST classifier; modern LLMs are at 100B+ dims), the LLM video gives the embedding-space object (512–4096-dim token vectors that attention operates on via cosine-similarity, which only works because of near-orthogonality in high-dim space — a direct surface-concentration consequence), and the diffusion video gives the failure-boundary object (Welch Labs explicit at one point: “in the high dimensional space of images, it appears that our image generation process doesn’t quite make it to the manifold of realistic images, resulting in a blurry non-realistic image” — surface concentration manifesting as the manifold being thin in the embedding space). Cycle 33 strengthening: the Essence of Linear Algebra Chapter 1–3 videos provide the low-dim pedagogical scaffolding that everything above generalizes from — vectors as arrows ↔ coordinate tuples (Ch 1), linear combinations / span / basis (Ch 2), and matrices-as-transformations (Ch 3). The “your audience doesn’t have high-dim intuition” failure mode is usually actually “your audience doesn’t have the low-dim linear algebra foundation to generalize from”; the surface-concentration story only lands for readers who have internalized vectors-as-lists, span, and matrices-as-transformations. A Sanity Check essay on CA-014 should lean on the Ch 1–3 trio as the prerequisite layer and CA-014 itself as the surprising high-dim payoff. Most ML/AI explainers discuss the curse of dimensionality without giving readers the geometric picture that makes it intuitive; the surface-concentration result is the cleanest one-page demonstration. The thesis: this single intuition is the highest-leverage geometric fact for any data-engineering or ML audience and should be the canonical visualization in any vault piece touching embeddings, vector search, or high-dim distance metrics. 
Sources (7 strong, all 3B1B-cluster): 2026-04-20-3blue1brown-volume-higher-dim-spheres-most-beautiful-formula (canonical geometry source), 2026-04-20-3blue1brown-but-what-is-a-neural-network (parameter-space object — 13K-dim loss landscape inherits high-dim geometry), 2026-04-20-3blue1brown-large-language-models-explained-briefly (embedding-space object — token vectors where attention via cosine similarity exploits high-dim near-orthogonality), 2026-04-20-3blue1brown-but-how-do-ai-images-and-videos-actually-work (failure-boundary object — Welch makes manifold-thinness in image-space explicit at the production-system limit), 2026-04-20-3blue1brown-vectors-chapter-1 (pedagogical prerequisite — arrow ↔ list translation), 2026-04-20-3blue1brown-linear-combinations-span-basis-chapter-2 (pedagogical prerequisite — span / linear-combination vocabulary), 2026-04-20-3blue1brown-linear-transformations-matrices-chapter-3 (pedagogical prerequisite — matrices-as-transformations geometric view) Status: Drafted 2026-04-20 — see high-dim-surface-concentration. 7 strong sources, all from 3B1B cluster but covering 4 distinct AI objects + 3 pedagogical prerequisites. Cluster-source caveat (all from 3B1B) is mild — the AI objects each come from a different specialist within the cluster (Sanderson on geometry/NN/LLM/LA series, Welch on diffusion). The Ch 1–3 addition directly addresses the “reader needs foundation” failure mode: the essay references the exact prerequisite videos to link rather than hand-waving. Could be paired with CA-022 in a single Sanity Check piece on “Why High-Dimensional Intuition Is Wrong (And What That Costs You)” — the geometry plus the operational anti-pattern in one essay. Why founder cares: Highest-leverage single visualization for any Sanity Check piece on embeddings, retrieval, or vector search. The data-engineering audience uses these tools daily without the geometric intuition. 
A “your intuition about embeddings is wrong, here’s the geometry” piece would land with that audience, and the Ch 1–3 prerequisite layer is now explicit so the essay can pitch at exactly the level the audience needs without losing readers who came in without the math foundation.
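Both headline facts are a few lines to verify numerically. A sketch that uses only the radial scaling V_n(r) = r^n · V_n(1) from the source, plus a Monte Carlo check of the near-orthogonality claim; sample sizes and the seed are arbitrary:

```python
import math
import random

def shell_fraction(n: int, inner: float = 0.99) -> float:
    """Fraction of an n-ball's volume OUTSIDE radius inner*r, using
    only the fact that V_n(r) scales as r**n: 1 - inner**n."""
    return 1.0 - inner ** n

def mean_abs_cosine(n: int, trials: int = 200, seed: int = 0) -> float:
    """Average |cos angle| between pairs of random Gaussian vectors;
    shrinks roughly like 1/sqrt(n) -- the near-orthogonality that
    cosine-similarity retrieval quietly relies on."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        u = [rng.gauss(0.0, 1.0) for _ in range(n)]
        v = [rng.gauss(0.0, 1.0) for _ in range(n)]
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        total += abs(dot) / norm
    return total / trials
```

`shell_fraction(100)` reproduces the "less than 37% within 0.99r" claim, and `mean_abs_cosine` at embedding-scale n sits near zero while the low-dim value does not, which is the whole near-orthogonality story in two calls.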

CA-013 — R&D framework: Reduce and Delegate as the only context-management discipline

Surfaced: 2026-04-20 cycle (IndyDevDan backfill — Claude Code 2.0 Agentic Coding); strengthened cycle 25 with library-meta-skill as a file-system-layer reduction mechanism Synthesis: Three independent voices now describe the same operational answer to context-window management — and each names it differently, which is itself a sign the pattern is real and converging. IndyDevDan: “There are only two ways to manage your context window: Reduce and Delegate” (R&D framework, named explicitly across 2026-04-20-indydevdan-claude-code-2-0-agentic-coding and 2026-04-20-indydevdan-one-agent-to-rule-them-all). Thariq (Anthropic): “Context isn’t free — performance degrades as context grows” (2026-04-15-thariq-claude-code-session-management-1m-context) — same problem, no operational name. The agent-experts pattern (2026-04-20-indydevdan-agent-experts-self-improving) is R&D applied at the skill-knowledge layer (just-in-time expertise files vs always-on memory). The library-meta-skill (2026-04-20-indydevdan-library-meta-skill) is R&D at the file-system layer — different device profiles activate different subsets of the global skill universe, so each device’s CLAUDE.md and skill catalog only carries what’s relevant to its role (Mac Mini = autonomous loop skills; founder laptop = human-in-loop skills). Concrete operational rule: every skill should declare what it REDUCES from the parent’s context (e.g., “scout removes file-discovery from planner”) and what it DELEGATES to sub-agents (e.g., “scout fans out to 4 sub-agents in parallel”). The thesis: R&D is not optional for multi-agent systems — it is the structural answer to context rot, and naming it gives engineers a vocabulary for the trade they are already making implicitly. 
Sources (4): 2026-04-20-indydevdan-claude-code-2-0-agentic-coding, 2026-04-20-indydevdan-one-agent-to-rule-them-all, 2026-04-15-thariq-claude-code-session-management-1m-context, 2026-04-20-indydevdan-library-meta-skill (R&D at the file-system layer — device-profile activation as a reduce mechanism) — also adjacent to 2026-04-20-indydevdan-agent-experts-self-improving (R&D applied to expertise-knowledge layer) and 2026-04-11-garry-tan-thin-harness-fat-skills (thin-harness-fat-skills is R&D at the harness layer) Status: Ripe (4 sources, promotion-bar exceeded); could be drafted as ~/rdco-vault/06-reference/concepts/r-and-d-context-discipline.md. The library-meta-skill source adds the file-system-layer reduction sub-pattern explicitly. Why founder cares: Operational rule that should land in every SKILL.md template. Currently RDCO’s skill ecology has implicit context discipline; making R&D explicit gives a checklist item for every new skill (what does this reduce? what does this delegate?). Also a strong Sanity Check angle for the data-engineering audience that conflates “more context = better” — the named alternative makes the trade-off visible.
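The checklist item can be enforced at skill-creation time. A hedged sketch: the `reduces` / `delegates` frontmatter fields are a hypothetical contract for SKILL.md templates, not an existing spec:

```python
# Hypothetical frontmatter contract -- the field names are
# illustrative, not an existing SKILL.md spec.
REQUIRED_RD_FIELDS = ("reduces", "delegates")

def check_rd_declaration(frontmatter: dict) -> list[str]:
    """Return the missing or empty R&D fields. An empty result means
    the skill states what it removes from the parent's context
    (reduce) and what it fans out to sub-agents (delegate)."""
    return [field for field in REQUIRED_RD_FIELDS
            if not frontmatter.get(field)]
```

A new-skill linter that refuses to register a skill until this returns an empty list would turn the implicit context discipline into a hard gate.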

CA-012 — Notation is the conceptual move (not bookkeeping)

Surfaced: 2026-04-20 cycle (3Blue1Brown backfill — Exploration & Epiphany by Paul Dancstep) Synthesis: The canonical mathematical example: Sol LeWitt’s switch from corner-labels to numbered-edges on incomplete open cubes unlocked the complementary-pair concept (4-part ↔ 8-part), cutting his search effort in half. Same data, different label, different concepts became thinkable. Dancstep’s explainer names this explicitly: “Finding the right labeling system can be an invaluable step in making sense of a problem.” The vault pattern is identical: the Apr 19 harness-thesis cluster recognition only became thinkable because every relevant file carried disciplined tags: [..., harness, ...] frontmatter; the Apr 16 graph-reingest exposed author-authority only because frontmatter author: was disciplined; the 3blue1brown Grover’s-algorithm follow-up succeeds because Sanderson swaps a misleading example notation (check input == 12) for a load-bearing one (SHA-256 verifier). This is one principle operating across artistic enumeration, group-theoretic counting, vault metadata discipline, and pedagogical framing. The thesis: disciplined notation is not bureaucracy — it is what determines which cross-cutting patterns become noticeable at all. Concrete implication: the tag / frontmatter / filename-slug regime is earning its keep every time a cluster becomes recognizable; the cost of loose notation is invisible until you need the pattern it would have exposed. 
Sources (4): 2026-04-20-3blue1brown-exploration-epiphany-paul-dancstep, 2026-04-20-3blue1brown-grovers-algorithm-clarification (example-as-notation angle), 2026-04-12-harness-thesis-dissent (the Apr 19 cluster recognition that only worked because of tag discipline), 2026-04-20-practical-engineering-ancient-pump-no-moving-parts (terminology-drift in engineering literature as the same pattern — pulser pump / hydraulic air compressor / unnamed-by-historians; lack of canonical terminology is itself why the technology stayed lost) — pairs with CA-004 (harness-era convergence vocabulary) and CA-011 (thread-based engineering vocabulary) as adjacent “naming-is-doing” candidates
Status: Ripe — 4 sources, promotion-bar exceeded; ready to draft as ~/rdco-vault/06-reference/concepts/notation-is-the-conceptual-move.md. The pulser-pump source adds the engineering-domain twin of the vault tag-discipline argument: terminology drift causes lost discoverability in the literature exactly as tag drift causes it in the vault.
Why founder cares: Direct defense of the vault-metadata investment. Every time the founder has wondered whether the frontmatter / tag / filename-slug discipline is worth the overhead, the answer is the LeWitt cubes: you cannot see the complementary-pair pattern (or the harness-thesis cluster) without the notation that exposes it. Also useful for Sanity Check as a standalone argument: most data-quality writing is about correctness; this is about representation fitness — the under-discussed half of the data-work skill set.


CA-011 — Thread-based engineering vocabulary (named patterns for multi-agent work)

Surfaced: 2026-04-20 cycle (IndyDevDan AGENT THREADS — Boris Cherny + BIG 3 SUPER AGENT + One Agent to RULE them ALL trio)
Synthesis: Dan’s six-thread vocabulary (Base / P / C / F / B / L + hidden Z) names patterns RDCO is already executing implicitly across the autonomous loop — the YouTube backfill is a P-thread, /process-newsletter batch is a B-thread, /deep-research is an F-thread, /check-board with /loop is closest to an L-thread, and cron-driven /sync-contacts and /finance-pulse approach Z-threads. Naming them explicitly upgrades planning fidelity from “spawn some sub-agents” to “this work calls for a 3-way F-thread fused at the parent.” Pairs with the four improvement metrics (more / longer / thicker / fewer-checkpoints), which are directly measurable as vault-health KPIs (sub-agent spawn count, average runtime, average tool calls per agent, founder-review pauses per cycle). The orchestrator-must-not-stream-child-logs principle from the One Agent video and the closed-loop validation principle from BIG 3 are the design constraints that make these threads actually work — they’re the “physics” beneath the vocabulary. Boris Cherny’s setup tweet (5 in terminal + 5-10 in cloud-web, always Opus 4.5, verification loops above all else) is practitioner-grade evidence that the framework matches how the Claude Code creator himself operates.
Sources (4): 2026-04-20-indydevdan-agent-threads-boris-cherny, 2026-04-20-indydevdan-big-3-super-agent, 2026-04-20-indydevdan-one-agent-to-rule-them-all, 2026-04-15-thariq-claude-code-session-management-1m-context — pairs with 2026-04-19-indydevdan-top-2-percent-plan-2026 and 2026-04-19-indydevdan-self-validating-hooks as supporting context
Status: Ripe — 4 sources, promotion-bar met; ready to draft as ~/rdco-vault/06-reference/concepts/agent-thread-vocabulary.md. Strong pairing with CA-004 (harness-era convergence) — both live in the agentic-engineering vocabulary space and could share scaffolding or cross-link heavily.
Why founder cares: Direct operational vocabulary for Ray’s autonomous loop. Once “P-thread” and “F-thread” are first-class concepts in skills/, any new skill spec can declare its execution shape unambiguously. Also unlocks four concrete vault-health KPIs that would tell us empirically whether RDCO is getting more or less leveraged on agents over time. Z-thread (zero-touch) is the technical north-star that matches SOUL.md’s “no babysitting” operational principle.
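The four improvement metrics above (more agents / longer runs / thicker runs / fewer checkpoints) would be trivial to compute once runs are logged. A minimal sketch, assuming a hypothetical JSONL log of sub-agent runs; the field names (runtime_s, tool_calls, review_pauses) are illustrative, not an existing RDCO schema:

```python
# Hypothetical sketch: compute the four thread-vocabulary KPIs from a
# JSONL log where each line describes one sub-agent run.
# Field names are assumptions, not a shipped RDCO log format.
import json
from statistics import mean

def thread_kpis(log_path: str) -> dict:
    with open(log_path) as f:
        runs = [json.loads(line) for line in f if line.strip()]
    if not runs:
        return {"spawns": 0, "avg_runtime_s": 0.0,
                "avg_tool_calls": 0.0, "review_pauses": 0}
    return {
        # "more": total sub-agent spawns in the window
        "spawns": len(runs),
        # "longer": average runtime per sub-agent
        "avg_runtime_s": mean(r["runtime_s"] for r in runs),
        # "thicker": average tool calls per sub-agent
        "avg_tool_calls": mean(r["tool_calls"] for r in runs),
        # "fewer-checkpoints": total founder-review pauses (lower is better)
        "review_pauses": sum(r.get("review_pauses", 0) for r in runs),
    }
```

Tracked per /check-board cycle, these four numbers would answer empirically whether the loop is getting more or less leveraged on agents over time.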

CA-010 — Stored-emotion vs present-signal protocols (when to disambiguate, when to discharge)

Surfaced: 2026-04-20 cycle (Tim Ferriss — Jordan Peterson Rules/Psychedelics/Bible + Jamie Foxx interview, building on existing Maté pieces)
Synthesis: Three independent traditions converge on the same problem (present emotional intensity drives bad action) but prescribe different protocols depending on whether the emotion is stored or live. Maté (somatic/clinical): stored childhood material requires RAIN — Recognize / Allow / Investigate / Nurture, body-first not cognitive-first. Peterson (clinical-psychological): present resentment is informational — disambiguate “someone treading on your territory” vs “you need to grow up,” then act. Foxx/CBT (behavioral-experiment): predicted-fear-vs-actual disconfirmation — “what’s on the other side of fear? nothing” works because the catastrophe rarely materializes when tested. The unification: the vintage of the emotion determines the right protocol. Old material = somatic discharge (Maté). Live signal = intellectual disambiguation (Peterson). Predicted-but-untested = behavioral disconfirmation (Foxx/CBT). A concept page would map the decision tree and let the founder (and Ray) pick the right tool by symptom.
Sources (4): 2026-04-20-tim-ferriss-jordan-peterson-rules-psychedelics-bible, 2026-04-20-tim-ferriss-jamie-foxx-interview, 2026-04-19-tim-ferriss-gabor-mate-anger-rage, 2026-04-19-tim-ferriss-gabor-mate-trauma-addiction-ayahuasca — CBT primary literature (Beck, Burns) optional 5th source
Status: Ripe — 4 sources, promotion-bar met; ready to draft. Pairs with CA-009 (addiction-as-coping) — both live in the founder-internal-state problem space and could share scaffolding.
Why founder cares: Direct operational vocabulary for Ray to use during high-intensity founder moments — “is this Maté, Peterson, or Foxx?” gives a decision tree instead of generic “manage your emotions” advice. Also seeds two candidate skills: /rain (somatic protocol invocation) and /disconfirm (predicted-vs-actual downside generator). Both are concrete builds.

CA-009 — Addiction-as-coping (founder workaholism reframe)

Surfaced: 2026-04-20 cycle (Tim Ferriss — Gabor Maté Anger/Rage + Trauma/Addiction/Ayahuasca)
Synthesis: Maté’s clinical worldview from In the Realm of Hungry Ghosts applied across two Ferriss interviews: addiction is not chemistry, it’s a coping mechanism for unprocessed pain — and the substance is incidental (drugs, work, achievement, status, validation all serve the same dampening function). The founder-workaholism / 80-hour-week pattern is structurally identical to the Downtown Eastside opioid pattern, just with a socially-celebrated substance. Healthy anger (in-the-moment boundary defense) vs. rage (stored childhood material that magnifies as it expresses) is the operational corollary — explains why “venting” makes things worse and why somatic-first processing (RAIN: Recognize / Allow / Investigate / Nurture) outperforms cognitive-first reframing. The authenticity-vs-attachment tradeoff is the third leg — explains the “yes-when-I-mean-no” pattern in client scope creep. 2 primary Maté sources currently in vault; needs 1-2 more (Bessel van der Kolk’s The Body Keeps the Score, or Anna Lembke’s Dopamine Nation) to graduate to a written concept page.
Sources (2 in-vault, 4+ when ripe): 2026-04-19-tim-ferriss-gabor-mate-anger-rage, 2026-04-19-tim-ferriss-gabor-mate-trauma-addiction-ayahuasca — pending: van der Kolk Body Keeps the Score, Lembke Dopamine Nation, Maté Hungry Ghosts (book-form)
Status: Inbox — 2 sources, below the 3+ ripeness bar; flagged because the pattern’s mapping into RDCO operations (workaholism-as-coping, venting-as-corrosive, scope-creep-as-attachment-pattern) is unusually direct and the founder is the relevant test case.
Why founder cares: Directly explains the structural pattern behind sustained-intensity startup operations and gives operational language (RAIN, “present boundary or stored grievance?”, body-check before commitment) for distinguishing productive intensity from corrosive intensity. Also feeds into a possible /rain skill and a “boundary-violations log” template for active client engagements.

CA-008 — Vertical integration as a founder-only move

Surfaced: 2026-04-19 (Hengsperger “Request for Startups That Will Reindustrialize America”)
Synthesis: CA-006 reads founder-as-strategic-edge as a personality/agency story (Bezos/Sinegal/Chang/Montezemolo/Ecclestone/Gates). Hengsperger reframes the same shape as a capital-structure story: vertical integration of capacity + technology requires holding both layers through a multi-year capex curve, which professionalized capital structurally cannot do. PE strips, VC ignores the $20-200M revenue band, and public-co CEOs optimize for one layer because margin pressure forces specialization. Only founders with high agency and patient capital can hold both. The industrial sector becomes the 8th cross-domain instance — the pattern is now genuinely cross-economy (tech + retail + auto + chips + industrial), not “tech founders are special.” Decision: keep CA-006 as the personality-shaped read; stand up CA-008 as the structural-capital read; both can graduate to a single concept page that names both reads.
Sources (8): All 7 from CA-006 + 2026-04-19-hengsperger-reindustrialize-america
Status: Ripe — promotion-bar exceeded; ready to draft as a paired concept page with CA-006.
Why founder cares: Tightens the strategic frame for the founder’s CAD/CNC discovery thread (small-scale participation in a real industrial wave, not hobbyist tinkering) and gives RDCO a structural-capital argument for why agentic-AI consultancies that own both ingest and deploy will out-compete pure-software peers.

CA-007 — State-as-path-dependent (biology + AI convergence)

Surfaced: 2026-04-19 cycle 13 (Tim Ferriss — Huberman Foundations)
Synthesis: Two independent communities arguing the same shape — current performance is dictated by the prior 24-72h state, not an independent draw. Huberman (neuroscience): the REM ratio in any 90-min sleep cycle depends on the slow-wave/REM ratio in the previous cycle; performance is path-dependent. Thariq (AI lab): model reasoning quality degrades as context grows; today’s reasoning is paid for in degraded reasoning later in the session. Cedric Chin (reading craft): peripheral capture protects against raw-load contamination of the reading mind. All three are saying the same info-theoretic thing: state is path-dependent, not memoryless. 3rd source already in vault; promotion-bar met.
Sources (3+): 2026-04-19-tim-ferriss-huberman-foundations-physical-mental-performance, 2026-04-15-thariq-claude-code-session-management-1m-context, 2026-03-25-seattle-data-guy-know-nothing-and-be-happy (Cedric quote)
Status: Ripe — strong cross-domain synthesis, ready to draft.
Why founder cares: Direct ammunition for both RDCO operating discipline (founder energy is path-dependent → schedule recovery, not just work) and harness-thesis positioning (the path-dependence point applies to agent context windows the same way it applies to neural state).

CA-006 — Founder-as-strategic-edge

Surfaced: 2026-04-19 cycle 7 (Acquired F1 + 10-Years-with-Michael-Lewis + Microsoft Vol II Ballmer)
Synthesis: The founder’s removal from the strategic seat (even when keeping board presence) costs the company a specific, irreplaceable execution-speed asset. Antitrust, succession planning, and “professional CEO” handoffs all eat this.
Sources (canonical, 7+): Bezos/Marketplace 2001 (2026-04-19-acquired-amazon-com), Sinegal/hot-dog-cap (2026-04-19-acquired-costco), Chang/Apple-9B-plant (2026-04-19-acquired-tsmc-remastered), Montezemolo/Ferrari-recovery (2026-04-19-acquired-ferrari), Ecclestone/F1-Concorde (2026-04-19-acquired-formula-1), Ben+David/Acquired-meta (2026-04-19-acquired-10-years-michael-lewis), Gates-leaving-MSFT-via-DOJ (2026-04-19-acquired-microsoft-volume-ii-ballmer)
Status: Inbox — strongest pattern of the day, ready to write.
Why founder cares: Direct mirror of his own RDCO posture (founder-operating discipline) + ammunition for any “should I take a board seat or stay operating” decision.

CA-005 — Ingestion economics

Surfaced: 2026-04-19 cycle 12 (Tim Ferriss reading-craft trio)
Synthesis: Cedric Chin (strategic high-volume reading) + Ferriss Speed Read (eye mechanics) + Ferriss Remember (retention via indexes) + Thariq context-rot guidance all converge on the same info-theoretic move — peripheral capture without raw-load. 4 sources currently disconnected; would unify a real practical reading discipline.
Sources (4): 2026-03-25-seattle-data-guy-know-nothing-and-be-happy (Cedric quote), 2026-04-19-tim-ferriss-how-to-speed-read, 2026-04-19-tim-ferriss-how-to-remember-what-you-read, 2026-04-15-thariq-claude-code-session-management-1m-context
Status: Inbox — highest-density orphan-prevention opportunity per cycle 12 subagent.
Why founder cares: Operationalizes how Ray reads the vault; exposes a missing revisit_score signal worth adding to vault entries.

CA-004 — Harness-era convergence (vocabulary map)

Surfaced: 2026-04-19 cycle 9 (Tobi Lütke ACQ2 + IndyDevDan /PLAN 2026)
Synthesis: Tobi (constitutions + evals), Dan (custom-agents + private-evals + out-loop + year-of-trust), Thariq (context-rot), Greyling (weights→context→harness language shift) — 4 independent voices from 4 different communities (public-co CEO / practicing engineer / AI lab / industry analyst) saying the same thing in different vocabularies. A concept page would map the vocabulary across the four sources and become the canonical RDCO reference for “scaffold matters more than weights” positioning.
Sources (4+ via existing harness-thesis cluster of 14+): 2026-04-19-acquired-tobi-lutke-shopify, 2026-04-19-indydevdan-top-2-percent-plan-2026, 2026-04-15-thariq-claude-code-session-management-1m-context, 2026-04-12-cobus-greyling-weights-context-harness
Status: Inbox — would be the canonical citation for any RDCO content touching the harness thesis.
Why founder cares: Direct ammunition for distribution strategy (Sanity Check “scaffold matters” positioning is a core thread).

CA-003 — Structural counter-positioning

Surfaced: 2026-04-19 cycle 6 (Acquired Trader Joe’s, with companions)
Synthesis: When the incumbent’s revenue model itself prevents them from copying you. Trader Joe’s vs CPG-grocery, Costco vs traditional retail, TSMC vs IDMs, Substack vs WordPress. The moat isn’t a feature you have — it’s a constraint they can’t escape.
Sources (4): 2026-04-19-acquired-trader-joes, 2026-04-19-acquired-costco, 2026-04-19-acquired-tsmc-remastered, 2026-04-12-substack-platform-deep-dive (or similar Substack-vs-WordPress note if filed)
Status: Inbox
Why founder cares: Maps to RDCO’s positioning question — what would an incumbent need to give up to copy us?

CA-002 — Demand-discipline as moat

Surfaced: 2026-04-19 cycle 4 (Acquired Ferrari + NFL + Sutton)
Synthesis: “Always ship one less than the market wants” is a cross-domain moat principle. Ferrari (production capping), NFL (one prime-time game per week), Sutton’s “transfer between states” (learning by withholding). All three independently arrive at the same shape applied to product/attention/learning.
Sources (3+): 2026-04-19-acquired-ferrari, 2026-04-19-acquired-nfl, 2026-04-19-dwarkesh-richard-sutton-rl-llm-dead-end
Status: Inbox
Why founder cares: Direct application to Sanity Check pacing (don’t publish more, publish better-spaced) and to RDCO Client Reporting capacity (don’t take more clients than we can serve well).

CA-001 — Verification-layer LLM contamination as the strongest harness-thesis dissent

Surfaced: 2026-04-19 (Kingsbury essay scoring against Tan’s rebuttal)
Synthesis: Tan’s “the harness verifies the model” answer collapses if the harness was written by the model. Of Kingsbury’s 10 arguments, this is the one Tan ducked and that survives engineering scrutiny.
Sources (4): 2026-04-19-kingsbury-future-of-everything-is-lies, 2026-04-19-garry-tan-build-the-car-jepsen-response, 2026-04-12-harness-thesis-dissent (already updated with this as the 6th counter-argument), 2026-04-11-garry-tan-thin-harness-fat-skills
Status: Partially-written — already lives as the 6th counter-argument inside the dissent doc. Could be promoted to its own concept page if we want it citable from outside the dissent context.
Why founder cares: Already in production via the Jepsen-style audit script (audit-newsletter-outputs.py) — RDCO’s concrete answer to this dissent is shipped.


Meta candidates (about the system)

MA-001 — Build “Concept Pages Backlog” Notion DB (path B)

Surfaced: 2026-04-19 (this conversation)
Synthesis: This vault file (path A) is the interim capture surface. The full fix is a Notion DB mirroring the Research Backlog: subagents auto-write candidates, the founder triages Inbox → Approved, and the /improve cron picks up Approved entries Monday morning and drafts the actual concept page. Replaces this file once shipped.
Sources: This conversation; the Research Backlog precedent
Status: Inbox (queue as a Notion task for next week)
Estimated build: 1-2 hours


How to use

When a subagent’s “SUGGESTED FOLLOW-UP” line surfaces a concept-page candidate worth keeping:

  1. Add a new entry at the top of the Active section with the next CA-N ID
  2. Include sources as wikilinks (must be 1+ to surface, 3+ to be ripe)
  3. State synthesis in 2-3 sentences
  4. Mark status Inbox (default), Ripe (3+ sources), Written, or Scrapped
  5. Cross-link to founder-relevance angle so triage is mechanical

When a candidate has 3+ sources AND fits an active project/positioning question, surface it to the founder via Discord for a write-or-skip decision.
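The promotion bar described above (1+ to surface, 3+ ripe, 5+ canon) is mechanical, so the triage pass could be scripted. A minimal sketch, assuming candidate entries are headed CA-N / MA-N and sources are written as [[wikilinks]] per step 2; both the entry format and the function names here are illustrative assumptions, not a shipped RDCO tool:

```python
# Hypothetical sketch: scan this backlog file and label each candidate
# by the promotion bar (1-2 = hunch, 3-4 = ripe, 5+ = canon).
# Assumes entries start with a "CA-" or "MA-" ID at line start and
# cite sources as [[wikilinks]] -- assumptions, not the real schema.
import re

def ripeness(source_count: int) -> str:
    if source_count >= 5:
        return "canon"
    if source_count >= 3:
        return "ripe"
    if source_count >= 1:
        return "hunch"
    return "empty"

def scan_candidates(text: str) -> dict:
    """Map each CA-/MA- candidate ID to its ripeness label."""
    results = {}
    # Split the file into blocks, one per candidate heading.
    for block in re.split(r"\n(?=[CM]A-\d+)", text):
        m = re.match(r"([CM]A-\d+)", block)
        if not m:
            continue  # preamble before the first candidate
        # Distinct wikilinked sources inside this candidate's block.
        sources = set(re.findall(r"\[\[([^\]]+)\]\]", block))
        results[m.group(1)] = ripeness(len(sources))
    return results
```

Run over the vault file, this would surface the ripe-but-unwritten candidates automatically; the same logic would become the auto-promotion rule in the MA-001 Notion build.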