“There Is No Truth in Business, Only Knowledge” — @CedricChin
Why this is in the vault
This is the philosophical spine of the Becoming Data Driven series and the direct intellectual parent for RDCO’s state-ownership framing and the MAC severity tiers. Chin translates Deming’s epistemology — that business beliefs are evaluated on predictive validity, not truth — into actionable practice, which is exactly the posture the agent-deployer needs when auditing AI outputs.
The core argument (paraphrased)
Deming’s assertion: in business, there are no fixed truths, only knowledge — defined as “theories or models that lead to better predictions”. Chin frames this as epistemology applied to business judgment: “how do you know what you believe is true?”
The default worldview most of us carry — inherited from chemistry, math, school — is that the world has immutable foundational truths we can learn once. But business isn’t like that. Christensen’s disruption stories (integrated steel vs. mini mills, disk drives, hydraulic excavators), Andy Grove’s Pentium recall, and Formica’s failure against Ralph Wilson Plastics all share one pattern: “businesspeople think they know X” and keep doing X as Y emerges, until their companies die.
What killed the incumbents wasn’t stupidity. Christensen’s insight was that executives were doing the rational thing inside “the Church of New Finance” — ratios, margins, ROE — an orthodoxy with strong construct validity but poor predictive validity. As Ed Baker summarizes Deming: “A theory that is internally consistent … has construct validity but may not have predictive validity.”
The three implications Chin draws:
- Predictive validity is the only evaluation criterion. If downstream outcomes don't match your expectations, your beliefs have stopped being knowledge. Discard them sooner rather than later.
- The slowest-changing predictive beliefs are the most valuable. This is why Bezos asks “what’s not going to change in the next 10 years?” — stable predictions compound.
- An existence proof beats a wonderful internal argument. When an Operations Research grad dismissed SPC (statistical process control) as inapplicable outside manufacturing, Chin's reply was that Amazon's execs got real mileage from it anyway — that's predictive validity, and it should override even elegant objections.
Chin’s closing frame: “scientists are interested in what is true; practitioners are interested in what is useful.” Hold beliefs loosely; evaluate them on whether they predict.
Mapping against Ray Data Co
This piece is the intellectual parent of several RDCO primitives. Four tight mappings:
1. MAC severity tiers encode Deming’s predictive-validity test. The MAC framework (../01-projects/data-quality-framework/testing-matrix-template) treats an AI output as knowledge whose value equals its ability to predict reality when checked. Stop/Pause/Go isn’t about whether the model is “true” — it’s about whether its predictions survive reconciliation against source-of-truth, temporal baselines, and human judgment. When a row-level reconciliation check fails, the model’s construct validity (it trained, it ran, it produced confident output) is irrelevant. Predictive validity failed. Discard. This is Deming’s frame applied to LLM outputs.
2. State-ownership is knowledge-ownership. RDCO's ../04-tooling/rdco-state-ownership-architecture argument — the client owns the vault, skills, and data; the model is a commodity — is the same move Chin makes. What persists and compounds is the predictive causal model of the business. The vault is where that knowledge lives across sessions; the agent is the interpreter that refreshes predictions against reality. Deming sits upstream of both.
3. The agent-deployer’s core skill is predictive-validity testing. Per 2026-04-14-levie-agent-deployer-role-jd, the deployer doesn’t evaluate AI workflows on whether they seem correct — they run them against outcomes. That’s Chin’s Amazon existence-proof heuristic: elegant objections to AI-in-the-loop lose to deployed systems that predict well. The phData/MG decision pressure-tests on the same axis — which partner demonstrably helps clients predict outcomes from their data investments, not which one has the prettier deck.
4. “Earn the right to criticize” applies to harness-thesis dissent. Chin’s OR-grad anecdote is the same shape as Moura’s entangled-software critique of harnesses (2026-04-13-moura-entangled-software-agent-harnesses-dead): strong construct validity, weak predictive validity against the existence proof (Amazon, RDCO’s clients, any org shipping reliable agent workflows). Deming’s frame gives us the principled rebuttal without needing to argue ideology.
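The predictive-validity gate described in mapping 1 can be sketched in a few lines. This is a minimal illustration only: the function names, the 1%/5% thresholds, and the dict-based reconciliation are all hypothetical stand-ins, not the actual MAC spec in the testing-matrix template.

```python
# Illustrative sketch of a MAC-style predictive-validity gate.
# Thresholds and names are hypothetical, not RDCO's real framework.

def reconcile(predicted: dict, source_of_truth: dict) -> float:
    """Row-level reconciliation: fraction of keys where the AI
    output disagrees with the source-of-truth value."""
    mismatches = sum(
        1 for k in source_of_truth if predicted.get(k) != source_of_truth[k]
    )
    return mismatches / len(source_of_truth)

def severity_tier(mismatch_rate: float) -> str:
    """Map predictive failure straight to an action tier.
    Note what is absent: the model's own confidence (construct
    validity) never enters the decision."""
    if mismatch_rate > 0.05:
        return "Stop"   # predictions failed reconciliation; discard
    if mismatch_rate > 0.01:
        return "Pause"  # degraded; route to human judgment
    return "Go"         # predictions held against reality

tier = severity_tier(reconcile(
    {"q1_revenue": 120, "q2_revenue": 135},   # AI output
    {"q1_revenue": 120, "q2_revenue": 140},   # source of truth
))
print(tier)  # one of two rows mismatched (rate 0.5) -> "Stop"
```

The design point is the Deming frame made executable: the gate consumes only the comparison between prediction and reality, so a model that "trained, ran, and produced confident output" can still land in Stop.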
One implicit challenge: Chin says you should re-examine beliefs quickly when outcomes don’t match. For RDCO, this means the MAC framework itself has to be held on predictive-validity terms — if clients run MAC for a quarter and quality doesn’t improve, the framework is the thing to revise, not the client’s discipline. Build that feedback loop into the consulting engagement from day one.
Related
- 2026-04-15-commoncog-becoming-data-driven-first-principles — the series culmination; this essay is the philosophical foundation it builds on
- ../04-tooling/rdco-state-ownership-architecture — knowledge-over-truth as the state-ownership thesis
- ../01-projects/data-quality-framework/testing-matrix-template — MAC as predictive-validity test
- 2026-04-14-levie-agent-deployer-role-jd — deployer as Deming-style operator
- 2026-04-13-moura-entangled-software-agent-harnesses-dead — dissent; construct-validity-only argument
- 2026-04-12-corr-stagnitto-agile-data-warehouse-design-master-synthesis — data profiling as continuous knowledge-update