RDCO bet architecture playbook — applying the four-layer framework to each bet
Why this exists
Founder articulated the canonical RDCO thesis on 2026-04-30 at 16:01 ET: targeting systems + instrumentation + tools + feedback loop, applied to a portfolio of niche bets (see 2026-04-30-rdco-thesis-targeting-systems-feedback-loops). At 17:21 ET the same day, he extended the thesis with a worked example (Squarely), a recursive structure (P&L as meta-targeting over sub-process targeting systems), and the modular-components insight (build capabilities once, apply across bets).
This page captures all three extensions as the canonical playbook for how to evaluate, build, and run any RDCO bet.
The recursive structure (added 2026-04-30)
Targeting systems nest. Two layers:
- Sub-process targeting system — defines “good” for a specific operational concern within a bet
- P&L targeting system — the meta-layer; defines “good” for the bet’s economic viability
The P&L always layers on top. Sub-process gains that violate P&L economics get vetoed at the meta-layer.
Founder’s vertical-farming example (canonical):
“If it turns out that we can increase yield by using twice as much water, perhaps that makes it uneconomical given our utility bills. The bottleneck shifts and we would either have to cut down yield to not bleed the business dry or we would need to find a creative solution to reducing our water bill (rain collection & water treatment on site as an example).”
Three implications:
- Bottleneck identification is recursive too. Sometimes the binding constraint is at the sub-process layer (“we can’t grow more”), sometimes at the P&L layer (“we can grow more but it costs more than it earns”). The right diagnostic asks BOTH.
- Sub-process optimization can be P&L-negative. Don’t reward yield optimization that bleeds margin.
- Creative solutions cross the layer boundary. The rain-capture move solves the P&L problem WITHOUT sacrificing the sub-process gain. Look for these.
Examples across active bets:
| Bet | Sub-process targeting system | P&L meta-layer |
|---|---|---|
| Squarely | Puzzle-product viability (design quality + production cost + pricing power) | Bootstrapped P&L: revenue – KDP fees – ads spend – fulfillment > 0 |
| MAC | Data-pipeline reliability (test coverage, severity tiers, false-alarm rate) | Future P&L: client-engagement revenue > delivery cost; until then, brand/credibility ROI |
| Sanity Check | Reader comprehension + trust (article-level signal: replies, forwards, paid-tier conversion) | P&L: subscriber LTV > acquisition cost (currently subsidized by personal time, not yet a real P&L) |
| Vertical farming (hypothetical) | Yield × time-to-harvest | P&L: yield revenue > water + electricity + nutrient + labor cost |
When evaluating a candidate decision: ask “does it improve the sub-process targeting AND survive the P&L meta-layer?” If yes, prioritize. If no, look for a creative cross-layer move (rain capture, productized expertise, etc.).
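To make the decision rule concrete, here is a minimal sketch in TypeScript. All types, field names, and the vertical-farming numbers are illustrative assumptions, not real RDCO code; the point is the shape of the check: sub-process gain first, P&L veto second, creative cross-layer move as the fallback.

```typescript
// Two-layer evaluation: a candidate decision must improve the
// sub-process targeting system AND survive the P&L meta-layer.
// All types and numbers below are hypothetical illustrations.

interface Decision {
  name: string;
  subProcessGain: number; // e.g. extra yield revenue per month
  plDelta: number;        // net monthly P&L impact after all costs
}

type Verdict = "prioritize" | "seek-cross-layer-move";

function evaluate(d: Decision): Verdict {
  const improvesSubProcess = d.subProcessGain > 0;
  const survivesPL = d.plDelta > 0; // the meta-layer veto
  return improvesSubProcess && survivesPL
    ? "prioritize"
    : "seek-cross-layer-move";
}

// Vertical-farming example: doubling water doubles yield revenue
// (+$2,000/mo, assumed) but the water bill eats it (-$2,500/mo).
const doubleWater: Decision = {
  name: "double irrigation",
  subProcessGain: 2000,
  plDelta: 2000 - 2500, // -500: P&L-negative despite the yield gain
};

// Rain capture removes the water cost without sacrificing yield:
// the creative cross-layer move that flips the verdict.
const withRainCapture: Decision = {
  ...doubleWater,
  name: "double irrigation + rain capture",
  plDelta: 2000 - 300, // on-site treatment cost only (assumed)
};

console.log(evaluate(doubleWater));     // "seek-cross-layer-move"
console.log(evaluate(withRainCapture)); // "prioritize"
```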
The Squarely worked example
Founder ran the four-layer audit on Squarely on 2026-04-30 at 17:21 ET. Captured here as the canonical template for auditing any other bet.
Layer 1: Targeting system
✅ Exists: Bootstrapped indie-puzzle-shop P&L. Revenue from KDP-printed-puzzle sales – production costs – ads spend – fulfillment.
Layer 2: Sensors / instrumentation
| Sensor | Status | Notes |
|---|---|---|
| Monthly KDP report exports | ✅ | Royalty + sales-volume signal |
| Monarch MCP cost visibility | ✅ partial | Need cost-routing discipline (which expenses route to Squarely vs other bets) |
| Website tracking | ✅ partial | Have it for organic; haven’t documented depth |
| Amazon ads performance tracking | ❌ GAP | No visibility on which ads convert, ad spend efficiency, or campaign-level ROAS |
Layer 3: Tools / actuators
| Tool | Status | Notes |
|---|---|---|
| Website-update push | ✅ | Standard build/deploy |
| Amazon A+ copy review/create | ✅ | Manual today; could productize |
| Mailing-list capability | ✅ | Wired (Resend) |
| Image generation | ⚠️ workable | xAI integration; quality “so-so but workable” |
| Run/modify Amazon ads | ❌ GAP | Can’t modify campaigns programmatically |
| Organic-traffic posting | ❌ GAP | No social/blog/community-posting capability |
Layer 4: Feedback loop
⚠️ Partial — formalize. Have signals, have actuators, don’t have a structured loop that says “P&L outcome → diagnostic on which layer’s bottleneck shifted → next experiment to run.” This is doable; just hasn’t been built.
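One possible shape for the missing structure, sketched in TypeScript. The field names are assumptions, not an existing artifact; the point is that each period produces a P&L outcome, a layer diagnostic, and exactly one next experiment.

```typescript
// A structured loop entry: P&L outcome -> diagnostic on which
// layer's bottleneck shifted -> next experiment. Hypothetical shape.

type Layer = "targeting" | "sensors" | "tools" | "loop";

interface LoopEntry {
  period: string;         // e.g. "2026-04"
  plOutcome: number;      // net P&L for the period
  bottleneckLayer: Layer; // where the binding constraint sits now
  diagnostic: string;     // why we believe that
  nextExperiment: string; // the single move it implies
}

// Illustrative entry, not real Squarely numbers:
const april: LoopEntry = {
  period: "2026-04",
  plOutcome: -120,
  bottleneckLayer: "sensors",
  diagnostic: "ad spend up, sales flat, no campaign-level ROAS visibility",
  nextExperiment: "wire Amazon ads performance export before touching spend",
};
```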
Synthesis: Squarely’s three load-bearing gaps
- Amazon ads performance tracking (sensors layer) — blind to ad spend efficiency
- Run/modify Amazon ads (tools layer) — can’t act on the diagnostic even if we had it
- Formalize the feedback loop (loop layer) — outcomes exist but the diagnostic-to-experiment cycle isn’t structured
The first two are coupled — sensor without actuator is wasted information; actuator without sensor is blind action.
The modular-components library (added 2026-04-30)
Founder’s insight: “This should be a playbook for each of our bets. I think we will build up modular components that we can apply to each bet.”
Recurring capabilities that pay off across multiple bets:
| Component | Layer | Bets it serves |
|---|---|---|
| Image generation | Tools | SC (article visuals), Squarely (product imagery, A+ copy), MAC (diagram/infographic) |
| Copywriting | Tools | SC (drafts), Squarely (A+ copy, product descriptions), MAC (LP copy) |
| Website generation + SEO/GEO | Tools | Every bet with a public surface (sc.raydata.co, raydata.co, future MAC LP, future Squarely shop) |
| Traffic monitoring | Sensors | Every public-surface bet |
| Paid-ads run/modify | Tools | Squarely (Meta + Amazon), MAC (LinkedIn), SC (X) |
| Mailing-list management | Tools | SC (primary), Squarely (customer retention), MAC (drip course) |
| P&L instrumentation | Sensors | Every bet (the meta-targeting system needs sensor coverage too) |
| Cost-routing discipline | Sensors | Every bet (which expense maps to which bet’s P&L) |
The pattern: build each component once as a skill or tool surface, parameterize it per-bet via prompt + config, reuse across the portfolio. Same shape as Garry Tan’s “fat skills, thin harness” framing, but applied at the bet level instead of the skill level.
When a new capability candidate surfaces, the question becomes: “Which layer does it tighten? Across which bets? Is the cross-bet payoff strong enough that building it once costs less than building it bet-by-bet?” The cross-bet reuse criterion is what distinguishes a load-bearing capability from a single-bet shiny object.
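The build-once-vs-bet-by-bet question is plain arithmetic. A hedged sketch in TypeScript, with placeholder costs rather than real estimates:

```typescript
// The cross-bet reuse criterion as arithmetic. Costs are
// illustrative placeholders, not real estimates.

interface ComponentCandidate {
  name: string;
  buildOnceCost: number; // build as a shared, parameterized component
  perBetCost: number;    // build a one-off version inside one bet
  betsServed: number;    // how many bets would actually use it
}

function isModularInvestment(c: ComponentCandidate): boolean {
  // Load-bearing capability: building once beats building bet-by-bet.
  return c.buildOnceCost < c.perBetCost * c.betsServed;
}

const imageGen: ComponentCandidate = {
  name: "image generation",
  buildOnceCost: 8, // e.g. days of work (assumed)
  perBetCost: 4,
  betsServed: 3,    // SC, Squarely, MAC
};

console.log(isModularInvestment(imageGen)); // true: 8 < 4 * 3
```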
Application template for any new bet
When evaluating a new candidate bet (e.g., the vertical-farming hypothetical, or whatever the next portfolio addition turns out to be):
Step 1: Identify the targeting systems
- Sub-process targeting system: what’s the operational “good” definition for the bet’s core process?
- P&L meta-layer: what’s the economic viability target?
Step 2: Audit each layer (1-4)
Use the Squarely worked example as the template. For each layer, ask (a minimal audit-record sketch follows this list):
- What sensors exist? What gaps?
- What tools exist? What gaps?
- What does the feedback loop look like? Is it formalized?
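A hypothetical audit-record schema in TypeScript (names assumed, not an existing file format), condensing the Squarely worked example into the template shape:

```typescript
// One possible shape for a per-bet, four-layer audit record.

type Status = "ok" | "partial" | "gap";

interface Capability {
  name: string;
  status: Status;
  notes?: string;
}

interface BetAudit {
  bet: string;
  targeting: { subProcess: string; plMetaLayer: string }; // Layer 1
  sensors: Capability[];                                  // Layer 2
  tools: Capability[];                                    // Layer 3
  loop: { status: Status; notes: string };                // Layer 4
}

// Squarely, condensed from the worked example above:
const squarely: BetAudit = {
  bet: "Squarely",
  targeting: {
    subProcess: "puzzle-product viability",
    plMetaLayer: "revenue - KDP fees - ads - fulfillment > 0",
  },
  sensors: [
    { name: "monthly KDP report exports", status: "ok" },
    { name: "Amazon ads performance tracking", status: "gap" },
  ],
  tools: [
    { name: "mailing list (Resend)", status: "ok" },
    { name: "run/modify Amazon ads", status: "gap" },
  ],
  loop: {
    status: "partial",
    notes: "signals + actuators exist; no structured diagnostic cycle",
  },
};
```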
Step 3: Identify load-bearing gaps
The 1-3 gaps that block the bet from running. Coupled gaps (sensor + actuator pairs) get prioritized together.
Step 4: Map gaps onto modular-components library
For each gap, ask: “Does this build a capability that pays off across multiple bets?” If yes, it’s a high-priority modular investment. If no, it’s a single-bet local fix — either invest selectively or defer.
Step 5: Sequence the work
Apply Bush’s #7 (single biggest bottleneck → full attention → remove → next). Combined with the prioritization filter from feedback_targeting_system_prioritization_filter.md, this gives a coherent operating system for what to build next.
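Bush’s #7 reduces to a loop: diagnose the single biggest bottleneck, give it full attention, remove it, re-diagnose. A control-flow sketch in TypeScript, where the severity ranking stands in for human judgment; a sketch of the discipline, not automation.

```typescript
// Single biggest bottleneck -> full attention -> remove -> next.

interface Gap {
  name: string;
  severity: number; // how hard it blocks the bet from running (assumed scale)
  removed: boolean;
}

function bindingConstraint(gaps: Gap[]): Gap | undefined {
  return gaps
    .filter((g) => !g.removed)
    .sort((a, b) => b.severity - a.severity)[0]; // the single biggest
}

function sequenceWork(gaps: Gap[]): void {
  for (let g = bindingConstraint(gaps); g; g = bindingConstraint(gaps)) {
    console.log(`full attention: ${g.name}`);
    g.removed = true; // "remove", then re-diagnose: the bottleneck shifts
  }
}

sequenceWork([
  { name: "ads sensor + actuator pair", severity: 3, removed: false },
  { name: "formalize feedback loop", severity: 2, removed: false },
]);
```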
Step 6: Pressure-test the bottleneck pick against the false-bottleneck patterns
Adapted from Ole Lehmann’s bottleneck skill (per 2026-04-30-theory-of-constraints-bottleneck-interview-x-post). When you’ve identified what looks like the binding gap, run it against this list of symptoms commonly disguised as constraints. If the named bottleneck matches one of these patterns, look upstream; the real one is usually one layer back.
| Tempting answer | What it usually actually is |
|---|---|
| “We need more leads” | Conversion or offer is the real bottleneck |
| “We need to hire” | Founder hasn’t built a system someone else can run |
| “We need better tools” | Almost never; comfort move; existing tools are fine |
| “We need more time” | Priority confusion or decision-queuing on the founder |
| “Marketing is the bottleneck” | Often retention, churn, or LTV |
| “Our team is too small” | Work flowing in is unfocused, not undersized |
| “We need more capital” | Almost never; throughput rarely capital-constrained at small scale |
Plus the canonical lock-in test: “If this step had infinite capacity starting tomorrow, would monthly revenue actually move within 90 days? By how much?” If the answer is hesitant or has no specific number, the candidate is a symptom — loop back.
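The lock-in test can be held as a structured check: a real constraint comes with a confident, specific revenue delta; anything else is a symptom. A hypothetical TypeScript sketch:

```typescript
// The lock-in test: "infinite capacity tomorrow -> revenue moves
// within 90 days, by a specific number?" Hypothetical shape.

interface BottleneckCandidate {
  step: string;
  revenueDeltaIn90Days?: number; // undefined = "no specific number"
  confident: boolean;            // false = the answer was hesitant
}

function lockInTest(c: BottleneckCandidate): "constraint" | "symptom" {
  const hasSpecificNumber =
    c.revenueDeltaIn90Days !== undefined && c.revenueDeltaIn90Days > 0;
  return c.confident && hasSpecificNumber
    ? "constraint"
    : "symptom"; // loop back; look one layer upstream
}
```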
Application to active bets — current snapshot
Squarely
Audited above. Three load-bearing gaps; paid-ads sensor+actuator pair is the highest-leverage cluster.
MAC
Audit not yet done. Anticipated layers:
- Sub-process targeting: data-pipeline reliability (severity tiers, test coverage). MAC framework IS this.
- P&L meta-layer: future client-engagement revenue. Currently pre-revenue; brand/credibility ROI is the proxy.
- Sensors: founder’s Progress engagement is the live instrumentation. Documentation of the implementation friction is the ongoing sensor signal.
- Tools: drafting MAC content, productizing the framework as a sellable thing. /generate-tests skill is one tool. Audit script is another.
- Feedback loop: each Progress sprint should produce diagnostic on what works + what doesn’t, fed back into MAC framework refinement. Partially formalized.
Worth running the full audit when founder is ready.
Sanity Check
Audit not yet done. Anticipated layers:
- Sub-process targeting: reader comprehension + trust (proxy: open rate, reply rate, paid-tier conversion if/when added).
- P&L meta-layer: not yet a real P&L; subsidized by personal time. Future state: subscriber LTV > acquisition cost.
- Sensors: site analytics, Resend metrics, founder’s qualitative read of replies.
- Tools: build-landing-page skill, design-critic skill, draft-review + voice-match skills, /research-brief, /remix.
- Feedback loop: each issue should produce diagnostic on which angle landed + which didn’t. Partially formalized.
Worth running the full audit when founder is ready.
RDCO ops (the meta-layer)
This playbook’s framework is itself the targeting system for RDCO ops. The autonomous loop (cron + sub-agents + bookshelf + Notion + memory) is the instrumentation + tools + feedback layer for “running RDCO well.” Eat-our-own-dog-food evidence.
Cross-references
- 2026-04-30-rdco-thesis-targeting-systems-feedback-loops — canonical thesis (the four-layer + portfolio framing)
- 2026-04-30-quality-gate-as-brain-org-boundaries-agentic-companies — the “targeting system = brain” sharpening
- 2026-04-30-mitohealth-founder-5-layer-agent-native-company-loop — the external articulation that catalyzed the thesis
- 2026-04-30-bookshelf-source-material-architecture-gap — instrumentation pattern (bookshelf for source material)
- ../01-projects/squarely-puzzles — Squarely operating notes
- ../01-projects/data-quality-framework — MAC framework
- ../01-projects/newsletter — Sanity Check
- ~/.claude/projects/-Users-ray/memory/feedback_targeting_system_prioritization_filter.md — Ray’s behavioral filter for capability questions
- ../06-reference/2026-04-30-dickie-bush-how-to-think-like-a-billionaire-without-ever-meeting-one — Bush #7 (ruthlessly prioritize / single bottleneck) is the operator discipline this playbook codifies
- 2026-04-28-mrbeast-production-playbook — Critical Component field discipline (one per project; daily check-in target) — the bet-level analog to this playbook’s “single biggest bottleneck per bet”