RDCO bet architecture playbook — applying the four-layer framework to each bet

2026-04-29 · playbook · status: canonical

Why this exists

Founder articulated the canonical RDCO thesis on 2026-04-30 at 16:01 ET: targeting systems + instrumentation + tools + feedback loop, applied to a portfolio of niche bets (see 2026-04-30-rdco-thesis-targeting-systems-feedback-loops). At 17:21 ET the same day, he extended the thesis with a worked example (Squarely), a recursive structure (P&L as meta-targeting over sub-process targeting systems), and the modular-components insight (build capabilities once, apply across bets).

This page captures all three extensions as the canonical playbook for how to evaluate, build, and run any RDCO bet.

The recursive structure (added 2026-04-30)

Targeting systems nest. Two layers:

  1. Sub-process targeting system — defines “good” for a specific operational concern within a bet
  2. P&L targeting system — the meta-layer; defines “good” for the bet’s economic viability

The P&L always layers on top. Sub-process gains that violate P&L economics get vetoed at the meta-layer.

Founder’s vertical-farming example (canonical):

If it turns out that we can increase yield by using twice as much water, perhaps that makes it uneconomical given our utility bills. The bottleneck shifts and we would either have to cut down yield to not bleed the business dry or we would need to find a creative solution to reducing our water bill (rain collection & water treatment on site as an example).

Three implications:

Examples across active bets:

| Bet | Sub-process targeting system | P&L meta-layer |
| --- | --- | --- |
| Squarely | Puzzle-product viability (design quality + production cost + pricing power) | Bootstrapped P&L: revenue – KDP fees – ads spend – fulfillment > 0 |
| MAC | Data-pipeline reliability (test coverage, severity tiers, false-alarm rate) | Future P&L: client-engagement revenue > delivery cost; until then, brand/credibility ROI |
| Sanity Check | Reader comprehension + trust (article-level signal: replies, forwards, paid-tier conversion) | P&L: subscriber LTV > acquisition cost (currently subsidized by personal time, not yet a real P&L) |
| Vertical farming (hypothetical) | Yield × time-to-harvest | P&L: yield revenue > water + electricity + nutrient + labor cost |

When evaluating a candidate decision: ask “does it improve the sub-process targeting AND survive the P&L meta-layer?” If yes, prioritize. If no, look for a creative cross-layer move (rain capture, productized expertise, etc.).
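The two-layer check can be sketched as a small decision function. This is a toy illustration of the founder's vertical-farming example; every number (yield delta, costs, unit price) is hypothetical, not a real figure from any bet.

```python
# Hypothetical sketch of the two-layer evaluation: a candidate change must
# improve the sub-process targeting metric AND survive the P&L meta-layer.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    yield_delta: float   # change in units harvested per month (sub-process metric)
    cost_delta: float    # change in monthly operating cost (water, power, ...)

def evaluate(c: Candidate, unit_price: float, baseline_profit: float) -> str:
    sub_process_gain = c.yield_delta > 0
    new_profit = baseline_profit + c.yield_delta * unit_price - c.cost_delta
    if sub_process_gain and new_profit > 0:
        return "prioritize"
    if sub_process_gain:
        # Gain vetoed at the meta-layer: hunt for a cross-layer move
        # (e.g. rain capture to cut the water bill).
        return "look for a cross-layer move"
    return "reject"

# Founder's example: doubling water use raises yield but sinks the P&L.
double_water = Candidate("2x water", yield_delta=400, cost_delta=1200)
print(evaluate(double_water, unit_price=2.0, baseline_profit=300))
# -> "look for a cross-layer move" (new profit = 300 + 800 - 1200 = -100)
```

The veto lives entirely in the meta-layer: the sub-process metric still improves, but the decision flips once the P&L term goes negative.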

The Squarely worked example

Founder ran the four-layer audit on Squarely on 2026-04-30 at 17:21 ET. It is captured here as the canonical template for auditing any other bet.

Layer 1: Targeting system

Exists: Bootstrapped indie-puzzle-shop P&L. Revenue from KDP-printed-puzzle sales – production costs – ads spend – fulfillment.

Layer 2: Sensors / instrumentation

| Sensor | Status | Notes |
| --- | --- | --- |
| Monthly KDP report exports | ✅ | Royalty + sales-volume signal |
| Monarch MCP cost visibility | ✅ partial | Need cost-routing discipline (which expenses route to Squarely vs other bets) |
| Website tracking | ✅ partial | Have it for organic; haven't documented depth |
| Amazon ads performance tracking | ❌ GAP | No visibility on which ads convert, ad spend efficiency, or campaign-level ROAS |

Layer 3: Tools / actuators

| Tool | Status | Notes |
| --- | --- | --- |
| Website-update push | ✅ | Standard build/deploy |
| Amazon A+ copy review/create | ✅ | Manual today; could productize |
| Mailing-list capability | ✅ | Wired (Resend) |
| Image generation | ⚠️ workable | xAI integration; quality "so-so but workable" |
| Run/modify Amazon ads | ❌ GAP | Can't modify campaigns programmatically |
| Organic-traffic posting | ❌ GAP | No social/blog/community-posting capability |

Layer 4: Feedback loop

⚠️ Partial — formalize. Have signals, have actuators, don’t have a structured loop that says “P&L outcome → diagnostic on which layer’s bottleneck shifted → next experiment to run.” This is doable; just hasn’t been built.
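The missing loop can be sketched as a single dispatch function: P&L outcome in, suspected layer and next experiment out. A minimal sketch only; the ROAS threshold, branch logic, and experiment names are invented placeholders, not audited Squarely rules.

```python
# Hypothetical sketch of the structured loop: P&L outcome -> diagnostic on
# which layer's bottleneck shifted -> next experiment to run.
def diagnose(revenue: float, ad_spend: float, fulfillment: float) -> tuple[str, str]:
    profit = revenue - ad_spend - fulfillment
    if profit > 0:
        return ("none", "scale the current configuration")
    if ad_spend > 0 and revenue / ad_spend < 2.0:
        # Weak return on ad spend: the sensor+actuator gap in the ads layer
        # is the likely shifted bottleneck.
        return ("sensors/tools: paid ads",
                "instrument campaign-level ROAS, then cut the worst campaigns")
    return ("targeting: pricing/cost",
            "re-test price points or renegotiate fulfillment")

print(diagnose(revenue=900, ad_spend=600, fulfillment=500))
# profit = -200 and ROAS = 1.5 -> the ads layer is the next experiment
```

The point is not the specific thresholds but the shape: every monthly P&L outcome deterministically produces a named layer and a named next experiment, instead of an ad-hoc reaction.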

Synthesis: Squarely’s three load-bearing gaps

  1. Amazon ads performance tracking (sensors layer) — blind to ad spend efficiency
  2. Run/modify Amazon ads (tools layer) — can’t act on the diagnostic even if we had it
  3. Formalize the feedback loop (loop layer) — outcomes exist but the diagnostic-to-experiment cycle isn’t structured

The first two are coupled — sensor without actuator is wasted information; actuator without sensor is blind action.

The modular-components library (added 2026-04-30)

Founder’s insight: “This should be a playbook for each of our bets. I think we will build up modular components that we can apply to each bet.”

Recurring capabilities that pay off across multiple bets:

| Component | Layer | Bets it serves |
| --- | --- | --- |
| Image generation | Tools | SC (article visuals), Squarely (product imagery, A+ copy), MAC (diagram/infographic) |
| Copywriting | Tools | SC (drafts), Squarely (A+ copy, product descriptions), MAC (LP copy) |
| Website generation + SEO/GEO | Tools | Every bet with a public surface (sc.raydata.co, raydata.co, future MAC LP, future Squarely shop) |
| Traffic monitoring | Sensors | Every public-surface bet |
| Paid-ads run/modify | Tools | Squarely (Meta + Amazon), MAC (LinkedIn), SC (X) |
| Mailing-list management | Tools | SC (primary), Squarely (customer retention), MAC (drip course) |
| P&L instrumentation | Sensors | Every bet (the meta-targeting system needs sensor coverage too) |
| Cost-routing discipline | Sensors | Every bet (which expense maps to which bet's P&L) |

The pattern: build each component once as a skill or tool surface, parameterize it per-bet via prompt + config, reuse across the portfolio. Same shape as Garry Tan’s “fat skills, thin harness” framing — but applied at the bet-level instead of the skill-level.

When a new capability candidate surfaces, the question becomes: “Which layer does it tighten? Across which bets? Is the cross-bet payoff strong enough that building it once costs less than building it bet-by-bet?” The cross-bet reuse criterion is what distinguishes a load-bearing capability from a single-bet shiny object.
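The build-once-vs-bet-by-bet question is just a crossover calculation. The hour figures below are hypothetical illustrations of the shape of the comparison, not estimates for any real component.

```python
# Hypothetical cost comparison behind the cross-bet reuse criterion:
# build once (higher upfront, cheap per-bet config) vs. rebuild per bet.
def build_once_cost(base: float, config_per_bet: float, n_bets: int) -> float:
    return base + config_per_bet * n_bets

def build_per_bet_cost(per_bet: float, n_bets: int) -> float:
    return per_bet * n_bets

# Illustrative numbers (hours): 20h modular build + 2h config per bet,
# vs. 8h of bespoke work per bet.
for n in (1, 2, 3, 4):
    once = build_once_cost(20, 2, n)
    each = build_per_bet_cost(8, n)
    print(n, once, each, "modular wins" if once < each else "local fix wins")
```

With these numbers the modular build only wins from the fourth bet onward, which is exactly the "single-bet shiny object" filter: a component that plausibly serves one bet never crosses over, one that serves the whole portfolio does.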

Application template for any new bet

When evaluating a new candidate bet (e.g., vertical farming hypothetically, or whatever the next portfolio addition turns out to be):

Step 1: Identify the targeting systems

Step 2: Audit each layer (1-4)

Use the Squarely worked example as the template. For each layer, ask:

Step 3: Identify load-bearing gaps

The 1-3 gaps that block the bet from running. Coupled gaps (sensor + actuator pairs) get prioritized together.

Step 4: Map gaps onto modular-components library

For each gap, ask: “Does this build a capability that pays off across multiple bets?” If yes, it’s a high-priority modular investment. If no, it’s a single-bet local fix — either invest selectively or defer.

Step 5: Sequence the work

Apply Bush’s #7 (single biggest bottleneck → full attention → remove → next). Combined with the prioritization filter from feedback_targeting_system_prioritization_filter.md, this gives a coherent operating system for what to build next.

Step 6: Pressure-test the bottleneck pick against the false-bottleneck patterns

Adapted from Ole Lehmann's bottleneck skill (per 2026-04-30-theory-of-constraints-bottleneck-interview-x-post). When you've identified what looks like the binding gap, run it against this list of common symptoms disguised as constraints. If the named bottleneck matches one of these patterns, look upstream — the real constraint is usually one layer back.

| Tempting answer | What it usually actually is |
| --- | --- |
| "We need more leads" | Conversion or the offer is the real bottleneck |
| "We need to hire" | Founder hasn't built a system someone else can run |
| "We need better tools" | Almost never; a comfort move; existing tools are fine |
| "We need more time" | Priority confusion or decision-queuing on the founder |
| "Marketing is the bottleneck" | Often retention, churn, or LTV |
| "Our team is too small" | The work flowing in is unfocused, not the team undersized |
| "We need more capital" | Almost never; throughput is rarely capital-constrained at small scale |

Plus the canonical lock-in test: “If this step had infinite capacity starting tomorrow, would monthly revenue actually move within 90 days? By how much?” If the answer is hesitant or has no specific number, the candidate is a symptom — loop back.

Application to active bets — current snapshot

Squarely

Audited above. Three load-bearing gaps; paid-ads sensor+actuator pair is the highest-leverage cluster.

MAC

Audit not yet done. Anticipated layers:

Worth running the full audit when founder is ready.

Sanity Check

Audit not yet done. Anticipated layers:

Worth running the full audit when founder is ready.

RDCO ops (the meta-layer)

This playbook’s framework is itself the targeting system for RDCO ops. The autonomous loop (cron + sub-agents + bookshelf + Notion + memory) is the instrumentation + tools + feedback layer for “running RDCO well.” Eat-our-own-dog-food evidence.

Cross-references