# Case Study: gold_opp_pipeline (MG Progress)
First production application of MAC. Canonical example for the drip course and consulting engagements.
## Context
- Client: Mammoth Growth / Progress Software engagement
- Model: `apps/dbt-edw/models/03_gold/opp_renewals/gold_opp_pipeline.sql`
- Grain: one row per `opportunity_id`
- Sources: `silver_fct_ips_opportunity` (core) + `silver_opp_classification` (derived) + `silver_fct_combined_transaction` (PRO-303 closed-won override)
- BRD: `requirements/opp-pipeline-business-requirements-doc.md`
- UAT: `test-plans/uat/qa_test_catalog_20260413.csv`
## Why this is the canonical example
- Real production complexity: 30+ columns, 3 source systems, conditional override logic (the PRO-303 COALESCE pattern), and cross-model reconciliation requirements (gold_opp_pipeline ↔ gold_txn_pipeline ↔ gold_pbi_arr_schedule).
- MAC surfaces a real bug class: the dual-path pattern for closed-won vs. open pipeline is exactly the kind of issue that conventional testing misses. Same column, same row, but the Relative:Source target depends on `is_closed_won`. The matrix forces explicit reasoning about this per cell.
- UAT integration proof: an existing 3,227-row UAT FAIL ("Opp txn type must align for Closed Won") maps cleanly to a specific matrix cell (the R1 Row rule × Relative:Production for the `opportunity_type` column). The framework retrofits existing client findings without redesign.
- Covers all three scopes meaningfully:
  - Column scope × 6 basis types = ~30 columns × 6 cells = ~180 cells evaluated (many marked n/a or skip)
  - Row scope = 8 cross-column rules (R1-R8)
  - Aggregate scope = 6 rules (A1-A6), 3 with cross-model joins
- 14 existing tests → ~95 total checks: demonstrates the typical gap between current-state dbt testing and MAC-level rigor (roughly 85% of the needed coverage is missing today).
## The PRO-303 Pattern (key insight)
The founder's `gold_opp_pipeline` has a COALESCE override: for closed-won opportunities, projected measures (O6), contract linkage (O5), classification (O7), and expansion flags (O8) are overridden with actuals from `silver_fct_combined_transaction`. Open pipeline retains projected values from Salesforce/OFC.
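A minimal sketch of what such an override can look like in the model SQL. This is illustrative only: the column name `actual_arr` and the exact join shape are assumptions, not the production code.

```sql
-- Sketch of the PRO-303 override pattern (assumed shape, not the
-- production model): closed-won rows take actuals from the
-- transaction model; open pipeline keeps the projected Salesforce/OFC value.
select
    o.opportunity_id,
    coalesce(
        case when o.is_closed_won then t.actual_arr end,
        o.projected_arr
    ) as projected_arr
from silver_fct_ips_opportunity o
left join (
    select opportunity_id, sum(arr) as actual_arr
    from silver_fct_combined_transaction
    group by opportunity_id
) t using (opportunity_id)
```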
This creates a dual-source testing requirement that the MAC matrix catches by construction:
- Relative:Source for `projected_arr` when `is_closed_won = FALSE` → must match `silver_fct_ips_opportunity.projected_arr`
- Relative:Source for `projected_arr` when `is_closed_won = TRUE` → must equal `SUM(silver_fct_combined_transaction.arr)` grouped by `opportunity_id`
Conventional `schema.yml`-based testing would never surface this gap because it treats each column as having a single source. The matrix forces the engineer to ask "what's the source for this cell *under this condition*?" The answer reveals that the override pattern needs two tests, not one.
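The two cells above translate into two separate dbt singular tests rather than one schema test. A sketch of the closed-won branch, using the table and column names from the bullets above (the overall test shape is an assumption, not the client's actual test file):

```sql
-- Singular test sketch: every closed-won opportunity's projected_arr
-- must equal the summed transaction ARR. dbt treats any returned
-- row as a failure.
with actuals as (
    select opportunity_id, sum(arr) as actual_arr
    from {{ ref('silver_fct_combined_transaction') }}
    group by opportunity_id
)
select g.opportunity_id, g.projected_arr, a.actual_arr
from {{ ref('gold_opp_pipeline') }} g
join actuals a using (opportunity_id)
where g.is_closed_won
  and g.projected_arr <> a.actual_arr
```

A second, symmetric test covers `is_closed_won = FALSE` against `silver_fct_ips_opportunity.projected_arr`.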
## Coverage Summary
| Scope | Existing | New | Total |
|---|---|---|---|
| Column × Absolute | 12 | ~15 (accepted_values, format checks, conditional not_null) | ~27 |
| Column × Relative:Source | 2 | ~30 (pass-through fidelity, PRO-303 override validation) | ~32 |
| Column × Relative:Production | 0 | ~5 (dashboard reconciliation) | ~5 |
| Column × Temporal | 0 | ~10 (distribution stability, null rate drift) | ~10 |
| Column × Human | 0 | ~5 (manual spot-checks) | ~5 |
| Row (all basis types) | 0 | 8 (R1-R8) | 8 |
| Aggregate (all basis types) | 2 | 6 (A1-A4, A6 + A5 temporal) | 8 |
| Total | 14 | ~79 | ~95 |
## Key Rules Worth Highlighting in the Drip Course
- R1: Type / Sub-Type Taxonomy — classic parent-child validation, demonstrates Absolute Row check
- R3: PRO-303 Override Consistency — demonstrates dual-source Relative:Source pattern
- R6: Expansion Flag Cascade — demonstrates BRD-derived business rules
- A2: Closed-Won Reconciliation to gold_txn_pipeline — demonstrates cross-model Aggregate:Production
- A6: Orphan Closed-Won Opportunities — demonstrates cross-model referential integrity at aggregate level
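As an illustration of the aggregate-scope, cross-model checks, A6 could be expressed as a singular test. The exact rule text lives in the filled matrix; this is an assumed shape:

```sql
-- A6 sketch (assumed implementation): closed-won opportunities with
-- no matching transaction rows are orphans; returned rows fail.
select o.opportunity_id
from {{ ref('gold_opp_pipeline') }} o
left join {{ ref('silver_fct_combined_transaction') }} t
    using (opportunity_id)
where o.is_closed_won
  and t.opportunity_id is null
```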
## What this case study proves
- MAC is applicable to real production dbt models — not just toy examples.
- The matrix surfaces bugs by construction — the dual-path pattern isn’t something you’d think to test; the framework forces it.
- Severity tiers (Stop/Pause/Go) allow all tests to run at once without alert fatigue — agents triage; humans review exceptions.
- The artifact is a consulting deliverable — this document, sent to a client, immediately communicates “you’re missing 80%+ of your coverage.”
## Related
- ../testing-matrix-template — the MAC framework this spec applies
- ../portable-skills-bundle — how to deploy MAC on a client project
- ../../../.claude/skills/audit-model/SKILL.md — the skill that produces this artifact
- ../../../.claude/skills/generate-tests/SKILL.md — generates dbt tests from the filled matrix