MAC bet-architecture audit (2026-04-30)
Why this exists
Founder asked to dogfood the four-layer playbook (2026-04-30-rdco-bet-architecture-playbook) on MAC — the data-quality framework defined in ../01-projects/data-quality-framework/testing-matrix-template and currently being implemented at the Mammoth Growth / Progress Software engagement. The playbook is only worth what it surfaces when applied to the bet that’s messiest in practice. MAC qualifies: the framework PDF is clean, the lived implementation isn’t.
Per feedback_calibrate_overconfidence: the audit reflects Progress’s lived friction, not the polished testing-matrix-template version. Where the gap between the doc and the reality matters, it’s called out.
The targeting systems
- Sub-process targeting system: data-pipeline reliability — defined as full Scope × Basis coverage on gold-layer models, with severity tiers (Stop / Pause / Go) so test results are triage-able rather than alert-fatiguing. The framework IS this targeting system; “good” means a gold model has at least one check in every scope level and explicit reasoning recorded for every Basis cell.
- P&L meta-layer: pre-revenue today. The anticipated path is a fork — (a) a productized drip course + anchor article generating subscriber-driven revenue and inbound consulting leads, (b) direct consulting engagements on the audit-model + generate-tests deliverables, (c) longer-tail tool licensing of the matrix surface to dbt-shop teams. Today the P&L proxy is brand/credibility ROI: does the framework hold up in real client work and produce artifacts a buyer would pay for? Progress is the live test of that proxy.
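The sub-process "good" definition above reduces to a coverage predicate. A minimal sketch, assuming illustrative scope-level names and cell fields (the real matrix defines its own axes and schema):

```python
from dataclasses import dataclass, field

# Assumed scope levels for illustration; the actual framework defines its own.
SCOPE_LEVELS = ["column", "row", "table", "cross-model"]

@dataclass
class MatrixCell:
    scope: str           # Scope axis (one of SCOPE_LEVELS)
    basis: str           # Basis axis (e.g. "completeness")
    tests: list = field(default_factory=list)
    reasoning: str = ""  # explicit rationale, required even for deliberately empty cells

def meets_targeting_bar(cells: list[MatrixCell]) -> bool:
    """'Good' = every scope level has at least one check and every cell records reasoning."""
    covered = {c.scope for c in cells if c.tests}
    return set(SCOPE_LEVELS) <= covered and all(c.reasoning for c in cells)
```

The point of encoding it this way is that "good" becomes checkable: a model either clears the bar or the missing scope level / unreasoned cell is named.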
Layer 1: Targeting system
Exists, is documented, and is in active production use. The Scope × Basis matrix is the canonical “good” definition. The 14→95 test gap surfaced by gold_opp_pipeline (../01-projects/data-quality-framework/case-studies/2026-04-13-gold-opp-pipeline-mg-progress) is the proof point that the targeting system has bite — the matrix surfaces bug classes (PRO-303 dual-source override) that conventional schema.yml testing misses by construction.
Friction (honest): the targeting system is well-defined for gold-layer dbt models. It’s underspecified for silver/bronze layers, for non-dbt platforms (raw Snowflake, Fivetran landing tables, ML feature stores), and for streaming pipelines. The framework reads as universal but the lived implementation is gold-dbt-specific so far.
Layer 2: Sensors / instrumentation
| Sensor | Status | Notes |
|---|---|---|
| Founder’s Progress engagement (lived implementation) | Yes — primary | gold_opp_pipeline 14→95 test count, PRO-303 surfacing, UAT retrofit. Qualitative: “what hurts during implementation” is the highest-signal sensor MAC has. |
| Audit script `~/.claude/scripts/audit-newsletter-outputs.py` | Yes — meta-sensor | Embodies MAC discipline (deterministic verification layer). Demonstrates the framework’s own outputs can be audited; sensor for whether RDCO eats its own dogfood. |
| dbt test pass/fail counts on Progress models | Yes — operational | Run-time signal. Tells us tests execute; doesn’t tell us whether the matrix is being filled correctly. |
| UAT mapping retrofit (existing client failures → matrix cells) | Yes — case-evidence | The 3,227-row Closed-Won UAT FAIL → R1 cell mapping is the canonical “framework retrofits existing findings” proof. |
| Adoption signal from external dbt teams | GAP | No instrumentation on whether anyone outside the founder uses the matrix. No download counter on testing-matrix-template, no GitHub stars, no inbound questions tagged MAC. |
| Time-to-fill-matrix per model | GAP | The matrix takes hours to fill. No measurement of how long, no measurement of which cells get skipped most often, no measurement of which cells produce the highest-value findings. The “friction sensor” is qualitative-only. |
| False-alarm rate on severity tiers | GAP | Stop/Pause/Go severity is the load-bearing claim that lets all tests run at once without alert fatigue. Today there’s no measurement of how often Stop alerts are real vs noise on Progress. If the tier mapping is wrong, the whole “agents triage, humans review exceptions” promise breaks silently. |
| Drip-course / anchor-article reader signal | GAP | Drafts exist (../01-projects/data-quality-framework/content/2026-04-15-mac-anchor-article-draft-v1, drip-day1) but no published surface, no LP, no email capture. Zero sensor on whether the productized path resonates. |
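The false-alarm-rate gap in the table admits a very small sensor: log every alert with a human verdict, then compute the noise rate per tier. A sketch under assumed record shapes (the `tier`/`verdict` keys are illustrative, not an existing log format):

```python
from collections import Counter

def false_alarm_rate(alerts: list[dict], tier: str = "Stop") -> float:
    """Fraction of alerts at `tier` that a reviewer marked as noise.

    Each alert record is assumed to look like
    {"tier": "Stop", "verdict": "real" | "noise"}.
    """
    verdicts = [a["verdict"] for a in alerts if a["tier"] == tier]
    if not verdicts:
        return 0.0  # no data yet: distinct from "zero false alarms"
    return Counter(verdicts)["noise"] / len(verdicts)
```

Even a hand-maintained verdict log on Progress would convert the Stop/Pause/Go claim from asserted to measured.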
Layer 3: Tools / actuators
| Tool | Status | Notes |
|---|---|---|
| `/audit-model` skill | Yes | Interactive Scope × Basis test-plan builder. Active, used. |
| `/generate-tests` skill | Yes | Emits dbt YAML or Snowflake SQL from a filled matrix. Active. |
| Testing matrix template (../01-projects/data-quality-framework/testing-matrix-template) | Yes | Canonical reference + populated examples. |
| Portable skills bundle | Yes — partial | Documents how to deploy MAC on a client project. Doesn’t yet include the productization layer (LP, drip, payment). |
| `audit-newsletter-outputs.py` | Yes | Cross-bet modular component (MAC discipline applied to newsletter QA). |
| Anchor article draft + drip-day1 draft | Yes — unpublished | Content exists in vault; not on a public surface. |
| MAC landing page / public surface | GAP | No mac.raydata.co or comparable. The productize-MAC Notion task (344f7d49-36d1-8102-b6ad-c0b1c0bd140f) exists but the LP isn’t built. Without a public surface, the content drafts can’t convert. |
| Drip-course delivery infrastructure | GAP | The Sanity Check content series task (341f7d49-36d1-81b9-a254-d3cdd5737c9b) covers article + drip course. Resend wiring exists for SC but no MAC-specific drip sequence has been provisioned. |
| Severity-tier triage actuator | GAP | The framework specifies Stop/Pause/Go but there’s no skill or tool that takes a dbt run output, classifies failures by tier, and routes accordingly (Slack ping for Stop, queue for Pause, log for Go). The “agents triage” claim is currently aspirational, not implemented. |
| Cross-model recon test runner | GAP — partial | A2 / A6 (cross-model aggregate reconciliation) is the highest-leverage MAC pattern. /generate-tests emits the SQL; there’s no orchestration that runs cross-model recon on a schedule and surfaces drift. Reconciliation today is run-when-someone-thinks-of-it. |
| Matrix-fill copilot | GAP | Filling the matrix is the friction. No tool that pre-populates likely cells from a model’s schema YAML, leaving the engineer to confirm/edit rather than start blank. |
Layer 4: Feedback loop
Partial. The Progress engagement produces a lived diagnostic per sprint (what hurts, what surfaces real bugs, where the matrix is overkill), but that diagnostic isn’t structured back into framework refinement on a cadence. A case-study artifact exists for gold_opp_pipeline; there is no ongoing loop that says “this week’s Progress findings → next testing-matrix-template revision → updated audit-model skill prompt.”
The productized loop (reader signal → article/drip refinement → conversion) doesn’t exist at all because the public surface doesn’t exist.
Synthesis: load-bearing gaps
- Productization surface (LP + drip delivery) — tools layer. Without a public MAC surface, the content drafts can’t convert, no reader sensor exists, and the P&L meta-layer has no path forward. This is the single most-binding gap because it blocks both the sensors-layer (reader signal) AND the P&L viability test simultaneously.
- Severity-tier triage actuator + false-alarm sensor — coupled tools+sensor gap. The Stop/Pause/Go promise is what differentiates MAC from “just run more tests.” If the tier mapping produces noise on Progress and we can’t measure it, the framework’s load-bearing claim is unverified. Same shape as Squarely’s paid-ads sensor+actuator pair: actuator without sensor is blind, sensor without actuator is wasted.
- Matrix-fill copilot — tools layer. Friction-reduction on the highest-cost step. Filling the matrix is hours per model; pre-population from schema YAML would compress that to minutes-of-review. This unlocks adoption velocity, which directly serves both the consulting and productized paths.
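The matrix-fill copilot bullet can be made concrete with a small pre-population sketch. It consumes an already-parsed schema.yml dict (dbt’s `models:`/`columns:` shape, e.g. from `yaml.safe_load`) and proposes starter cells; the heuristics are illustrative assumptions, not the planned tool:

```python
def prefill(schema: dict) -> list[dict]:
    """Propose starter matrix cells from a parsed dbt schema.yml.

    The engineer confirms/edits proposals instead of starting from a
    blank matrix.
    """
    proposals = []
    for model in schema.get("models", []):
        for col in model.get("columns", []):
            name = col["name"]
            # Illustrative heuristic: key-like columns get identity checks,
            # everything else a completeness starter. A real copilot would
            # also use types, docs, and lineage.
            suggested = ["not_null", "unique"] if name.endswith("_id") else ["not_null"]
            proposals.append({"model": model["name"], "column": name,
                              "scope": "column", "suggested": suggested})
    return proposals
```

Even this crude pass turns the blank-page problem into a review problem, which is where the hours-to-minutes compression comes from.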
Modular-components mapping
Mapping the gaps onto the cross-bet capability library from the playbook:
- Productization surface (LP + drip) → cross-bet. Same shape as Sanity Check’s sc.raydata.co + Resend wiring, same shape as Squarely’s future shop site. The `/build-landing-page` skill + `ray-data-co-design` umbrella already exist as modular components; MAC just needs to be the next tenant. High-priority modular reuse, NOT a single-bet build.
- Drip-course delivery infrastructure → cross-bet. Mailing-list management is already in the modular library (SC primary user). MAC drip is parameterization, not net-new capability.
- Severity-tier triage actuator → MAC-specific. The Stop/Pause/Go semantics are tied to dbt test output structure. Single-bet local fix, but high-priority because it’s load-bearing for the MAC sub-process targeting claim.
- False-alarm-rate sensor → MAC-specific in implementation, but the pattern (instrument the load-bearing claim of your targeting system, don’t trust the docs) is cross-bet. Squarely needs the same shape for ad-spend ROAS, SC needs it for reader-trust signals.
- Matrix-fill copilot → MAC-specific. Tied to the matrix structure.
- Reader signal on content drafts → cross-bet. Already exists for SC (Resend metrics, reply-tracking discipline). MAC inherits when LP+drip ship.
The pattern: 2 of the 3 load-bearing gaps resolve via existing modular components (LP-builder, drip-delivery). That’s the highest-leverage sequencing — build the productization surface first because it’s reuse, not net-new capital, AND it unlocks the reader-signal sensor that the rest of the bet needs.
Related
- 2026-04-30-rdco-bet-architecture-playbook — the playbook this audit applies
- 2026-04-30-rdco-thesis-targeting-systems-feedback-loops — canonical thesis
- ../01-projects/data-quality-framework/testing-matrix-template — MAC framework canonical
- ../01-projects/data-quality-framework/case-studies/2026-04-13-gold-opp-pipeline-mg-progress — anchor case study (lived instrumentation)
- ../01-projects/data-quality-framework/content/2026-04-15-mac-anchor-article-draft-v1 — productized-content draft (unpublished)
- ../01-projects/data-quality-framework/content/2026-04-15-mac-drip-day1-draft-v1 — drip course draft (unpublished)
- ../01-projects/data-quality-framework/portable-skills-bundle — client-deployment notes
- `~/.claude/skills/audit-model/SKILL.md` — primary tool actuator
- `~/.claude/skills/generate-tests/SKILL.md` — secondary tool actuator
- `~/.claude/scripts/audit-newsletter-outputs.py` — meta-sensor (MAC discipline applied to its own outputs)
- Notion task `341f7d49-36d1-81b9-a254-d3cdd5737c9b` — Sanity Check MAC content series
- Notion task `344f7d49-36d1-8102-b6ad-c0b1c0bd140f` — Productize MAC + Client Reporting