06-reference

commoncog amazon weekly business review

2026-04-14 · reference · source: Commoncog · by Cedric Chin

“The Amazon Weekly Business Review (WBR)” — @CedricChin

Why this is in the vault

This is the canonical operational playbook for the worldview in Chin’s “Becoming Data Driven, From First Principles” — it’s where SPC theory becomes a meeting you can run on Wednesday morning. For RDCO, the WBR is the single most important comparable artifact for what MAC (Model Acceptance Criteria) is supposed to become: a pre-packaged, culture-first process control mechanism that clients can install and run without the consultant in the room.

The core argument (paraphrased)

The WBR is not a reporting meeting — it’s a process control tool that disseminates a causal model of the business to everyone who runs it.

Chin’s structure (compressed):

  1. Three goals, in strict order. (1) What did our customers experience last week? (2) How did our business do last week? (3) Are we on track to hit targets? Customer question comes first on purpose — internal-first framing biases the whole metric set toward vanity.

  2. Controllable input metrics vs output metrics. Output metrics (revenue, DAU/MAU, FCF) are what you care about but “you are not allowed to discuss output metrics” operationally. You drive output by finding controllable input metrics — directly actionable levers — and tracking whether they still move the output. When an input stops correlating, you discard it and hunt for a new one. The causal model of the business lives in this input→output graph, and it’s expected to evolve.

  3. Exception-driven discussion. Every metric in the deck gets one second of stare-time; routine variation earns “nothing to see here” and the meeting moves on. Only exceptional variation gets discussed — and the owner must either explain it or say “I don’t know, still investigating.” Fabricated explanations are forbidden.

  4. Three visualization types, forever. The 6-12 graph (trailing 6 weeks + trailing 12 months, same axis, prior-year faded ghost line, target triangles, box scores), the 6-12 table, and plain tables. Same fonts, same colors, same layout every week — because fingertip-feel requires repetition. Novelty in presentation destroys pattern recognition.

  5. Static deck, tight choreography. Deck is generated overnight Sunday, metric owners review Monday, departmental WBRs Tuesday, company WBR Wednesday at 60 minutes sharp. Handoffs under two seconds. No loading dashboards. No strategy discussion — facilitator cuts it off and schedules an offline follow-up.

  6. Throw the WBR at anything you want to improve. Chin’s operational observation: the decision to include a business unit’s metrics in the top-level WBR is itself a forcing function — functions subjected to WBR have to find their controllable inputs and shape up. The meeting isn’t just measurement; it’s a governance lever.

  7. Culture before tooling. “Good data practices stem from culture, not from tooling.” A WBR runs correctly when it runs without the CEO present. Colin Bryar’s test: skip a week deliberately, return, ask “what did you discuss for metric X?” If the org has internalized the practice, the answer comes back sharp.
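The exception-driven discipline in items 2-3 is SPC in meeting form: routine variation gets a one-second stare, exceptional variation gets discussed. A minimal sketch of how that split can be computed, using XmR-style process behavior limits (the 2.66 moving-range constant is standard Wheeler SPC, not something this article specifies; the weekly series is invented):

```python
# Flag exceptional variation in a weekly metric series using XmR-style
# process behavior limits (an individuals chart). Points inside the limits
# are routine variation -- "nothing to see here"; points outside earn
# discussion, and the owner either explains or says "still investigating."

def xmr_limits(series):
    """Return (lower, upper) natural process limits for a metric series."""
    mean = sum(series) / len(series)
    # Average moving range between consecutive weeks.
    m_r = sum(abs(b - a) for a, b in zip(series, series[1:])) / (len(series) - 1)
    # 2.66 is the standard XmR scaling constant (3 / d2, with d2 = 1.128).
    return mean - 2.66 * m_r, mean + 2.66 * m_r

def flag_exceptions(series):
    """Indices of weeks showing exceptional (assignable-cause) variation."""
    lo, hi = xmr_limits(series)
    return [i for i, x in enumerate(series) if x < lo or x > hi]

# Hypothetical weekly input metric: 11 routine weeks, then a spike.
weekly_null_rate = [2.1, 2.3, 1.9, 2.2, 2.0, 2.4, 2.1, 1.8, 2.2, 2.0, 2.3, 6.5]
print(flag_exceptions(weekly_null_rate))  # → [11] (only the spike week)
```

Everything inside the limits is noise by construction, which is what licenses the facilitator to move on without discussion.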

Mapping against Ray Data Co

This is the single most important article for RDCO’s consulting productization thesis. Six mappings:

1. MAC is the WBR for AI-era data models. The WBR’s 3-question structure (customer experience / business state / target tracking) has a direct analog in MAC: (1) what did the model produce for users, (2) what is the data/model state, (3) are we on track against acceptance thresholds. The exception-driven discipline — stare, say “nothing to see here” or investigate — is exactly how MAC review cadences should run. Cross-ref 2026-04-15-commoncog-becoming-data-driven-first-principles mapping #1; this article turns the principle into a meeting format we can ship to clients.

2. Controllable input metrics = MAC test cells. Chin’s input/output distinction maps cleanly onto MAC’s 3×6 matrix: most MAC cells are controllable input metrics (column-level null rate, row-level dupe rate, reconciliation drift) that drive output metrics the client actually cares about (model accuracy, downstream decision quality, revenue impact). The insight: present MAC results in the same input→output causal order, deck layout, and governance cadence as a WBR. See ../01-projects/data-quality-framework/testing-matrix-template.

3. The 6-12 graph is the template for MAC reporting decks. When we productize MAC, we should stop inventing visualizations. Rip the 6-12 graph wholesale — 6 weeks on the left, 12 months on the right, prior-year ghost line, target triangle, box scores. Use it for every MAC cell. The pedagogical benefit (fingertip-feel) is identical.

4. “Throw the WBR at anything you want to improve” is the agent-deployer wedge. Per 2026-04-14-levie-agent-deployer-role-jd, enterprises are scrambling to instrument AI workflows. RDCO’s angle: the consulting engagement starts by putting the client’s AI agent outputs into a weekly process-control meeting, regardless of which agents they’re running. The meeting is the wedge; the skills and MAC framework follow.

5. “Culture before tooling” validates the state-ownership architecture. ../04-tooling/rdco-state-ownership-architecture argues the client owns vault + skills + data; model is a commodity. Chin’s point is that tooling fails without cultural adoption — which means RDCO’s deliverable has to include the operating ritual, not just the Python package or the Notion template. The 3-6 month coaching engagement is specifically about installing the ritual until it runs without us.

6. phData / MG comparison. Traditional data-consulting shops (phData, MG) deliver pipelines, dashboards, and Looker models — they sell tooling. RDCO’s bet (consistent with Chin) is that the durable value is the meeting discipline, not the dashboard. The WBR is the existence proof: Amazon’s edge isn’t better BI software, it’s twenty years of operators walking through the same deck every Wednesday. RDCO sells the ritual; MAC is the deck.

One thing this article implicitly challenges us on: Chin describes the WBR as a 60-minute meeting reviewing 400-500 metrics. An AI-era equivalent has to scale the “stare for one second, move on” model across potentially thousands of agent-output metrics. This is where LLM-augmented triage has a real role — not to replace the meeting, but to pre-flag exceptional variation so the human review stays under 60 minutes. The agent isn’t replacing the operator; it’s the facilitator.
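A minimal sketch of that triage layer: statistical pre-flagging does the "stare for one second" pass across thousands of series, and only the exceptional subset reaches the human meeting. The LLM's role, drafting a one-line hypothesis per flagged metric, is stubbed out as a placeholder string here, since nothing in the article specifies it:

```python
def outside_limits(series):
    """True if the latest point falls outside XmR-style process limits
    computed from the history before it (2.66 is the standard constant)."""
    history, latest = series[:-1], series[-1]
    mean = sum(history) / len(history)
    m_r = sum(abs(b - a) for a, b in zip(history, history[1:])) / (len(history) - 1)
    return abs(latest - mean) > 2.66 * m_r

def triage(metrics):
    """Reduce thousands of agent-output metrics to the exceptional few.
    In a real deployment an LLM might draft a one-line hypothesis for each
    flagged metric; here that step is just a placeholder string."""
    return {name: "exceptional -- owner explains or says 'still investigating'"
            for name, series in metrics.items() if outside_limits(series)}

# 1,000 hypothetical stable agent metrics plus one that just spiked.
metrics = {f"agent_metric_{i}": [5.0, 5.1, 4.9, 5.0, 5.2, 5.0]
           for i in range(1000)}
metrics["agent_metric_42"] = [5.0, 5.1, 4.9, 5.0, 5.2, 9.0]
print(len(triage(metrics)))  # → 1: only the spiked metric survives triage
```

The design choice matches the closing point above: the filter shrinks the review load so the meeting stays under 60 minutes, but the decision about what the flagged variation means stays with the metric owner.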