06-reference

commoncog secret heart continuous improvement

2026-04-14 · reference · source: Commoncog · by Cedric Chin

“The Secret at the Heart of Continuous Improvement” — @CedricChin

Why this is in the vault

This is Chin’s clearest statement of why continuous improvement is the discipline MAC is trying to operationalize: it’s a methodology for single-subject studies, where the subject is your own business. Directly reinforces the state-ownership + agent-deployer thesis — every client runs their own SPC loop on their own AI systems.

The core argument (paraphrased)

Deming’s favourite challenge to managers proposing process changes was “how would you know?” — how would you know your idea worked? Most operators answer with experience, which Deming called “superstition.” We all do this: launch a campaign, change our dev practices, update sales enablement, then shrug and say “I guess things are better.”

Continuous Improvement is the antidote, and Chin argues it reduces to three load-bearing concepts:

  1. The PDSA cycle (Plan-Do-Study-Act) — Deming’s formalization of trial and error. The critical step most people skip is Study: circling back to ask what was learned. Without it, no knowledge accumulates.
  2. The Three Questions of Continuous Improvement (via Wheeler):
    • What do you want to accomplish?
    • By what method?
    • How would you know?
    Most operators can answer questions 1 and 2. Almost none can answer question 3.
  3. The Process Behaviour Chart (XmR chart) — Deming’s default answer to question 3. Because “any number you attempt to track will wiggle,” you need statistically principled limits to separate signal from noise. Without this, you’re forced back onto vibes, and “you may all too easily trick yourself.”
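Deming's answer to "how would you know?" reduces to a small computation. A minimal sketch of the XmR limits, assuming Wheeler's standard scaling constants (2.66 for the individuals chart, 3.268 for the moving-range chart); the metric series and names here are invented illustrations, not from the article:

```python
# Hedged sketch of XmR (individuals / moving-range) natural process limits.
# The constants 2.66 and 3.268 are the standard XmR scaling factors.

def xmr_limits(baseline):
    """Compute natural process limits from a baseline run of measurements."""
    mean = sum(baseline) / len(baseline)
    # Moving range: absolute difference between successive points.
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return {
        "mean": mean,
        "upper": mean + 2.66 * avg_mr,  # upper natural process limit
        "lower": mean - 2.66 * avg_mr,  # lower natural process limit
        "mr_upper": 3.268 * avg_mr,     # upper limit for the mR chart itself
    }

# Eight weeks of (hypothetical) eval accuracy before a process change:
baseline = [0.91, 0.93, 0.90, 0.92, 0.91, 0.94, 0.92, 0.90]
limits = xmr_limits(baseline)

# First week after the change comes in at 0.79. Signal, or routine wiggle?
print(0.79 < limits["lower"])  # True: outside the limits, so it is a signal
```

Points inside the limits are noise ("any number you attempt to track will wiggle"); only points outside them justify a causal story. This is what replaces vibes in the Study step.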

Together these three constitute a methodology for single-subject studies. The subject is you, your team, or your business.

Why single-subject matters. RCTs are the gold standard in medicine but are expensive and slow, and even a successful RCT doesn’t guarantee the drug works for you. Doctors know this — “try this, come back in two weeks” is itself a PDSA cycle. Best practices from other companies suffer the same limitation: what worked elsewhere may not work here. Deming warned against copying best practices blindly; “a little Theory of Knowledge should make you suspicious of another company’s knowledge.” You must still test when you apply them.

The payoff: Japanese industrialists in 1950 listened, ran single-subject studies on their own production lines for two decades, and emerged with the Toyota/Honda/Mazda production systems. These ideas have since been largely forgotten outside manufacturing. Chin is handing them back.

Mapping against Ray Data Co

Mapping strength: strong. This piece is the companion to 2026-04-15-commoncog-becoming-data-driven-first-principles and sharpens three specific levers for RDCO:

1. MAC is the third question, operationalized for AI systems. The core gap Chin identifies — operators can answer “what” and “how” but not “how would you know?” — is exactly the gap MAC fills for AI-era teams. When an agent-deployer ships a new workflow, prompt, or eval, “how would you know it’s working?” must be answered with a process behaviour chart over accuracy/latency/reconciliation metrics, not vibes. The 3×6 MAC matrix is a pre-packaged answer to Deming’s third question, specialized for model outputs. See ../01-projects/data-quality-framework/testing-matrix-template.

2. Single-subject studies are the consulting unit of work. Chin’s key insight — that you don’t need RCTs across companies, you need one disciplined subject study in your own company — is the bones of RDCO’s consulting posture. We don’t hand clients industry benchmarks; we install the PDSA + MAC + XmR discipline so they can run studies on their own operations. The deliverable is the methodology, not the numbers. This reinforces the ../04-tooling/rdco-state-ownership-architecture thesis: the client owns the subject, the vault, the chart, and the learning. We are not the subject.

3. PDSA is the agent-deployer’s loop. Per 2026-04-14-levie-agent-deployer-role-jd, the modern agent-deployer instruments AI workflows and runs evals. That is PDSA: plan a prompt/agent change, do the deployment, study the eval metrics on an XmR chart, act on what you learn. Chin’s warning — that most trial-and-error in business skips the Study step because “you’re too busy, a business emergency occurs, you forget” — is the #1 failure mode the agent-deployer role must institutionalize against. MAC + weekly review cadence is the forcing function.

4. “You can improve whatever you set your mind to” is the RDCO pitch in one line. The Japanese industrialists didn’t have better tools than anyone else in 1950; they had the discipline. RDCO’s bet on phData/MG-style implementation partners is that the AI era rewards the same thing: not better models (those commoditize), but clients who have installed the discipline to run PDSA loops on their AI systems. The vault, the skills, the state — those persist the learning across cycles.

One implicit challenge. Chin says “most people don’t talk about how all of them are necessary.” The three ideas are useless individually — XmR without PDSA is reporting; PDSA without the three questions is busywork; the three questions without XmR is qualitative guessing. This is a warning for how RDCO packages MAC: shipping the matrix alone is shipping one-third of the discipline. The coaching engagement has to include PDSA cadence (weekly review) and the three questions (as a habit of mind), or clients will treat MAC as a dashboard and miss the point.