06-reference

commoncog how to read chart

2026-04-14 · reference · source: Commoncog · by Cedric Chin

“What To Think When Looking at a Chart” — @CedricChin

Why this is in the vault

Chin distills the entire Becoming Data Driven (BDD) series into a two-question reflex every operator needs when staring at a business time series. That reflex is the literacy test for the agent-deployer reading MAC outputs, and the Day-1 lesson for any phData/MG coaching engagement. It reframes "what does this chart mean?" from passive dashboard consumption into an action-generating habit.

The core argument (paraphrased)

Chin’s trigger question: “what should I think when I’m looking at a chart?” — the implicit question beneath the whole Becoming Data Driven series, never asked directly until now. He opens with a Commoncog traffic chart and names the universal operator experience: you look at a chart, see spikes, and have no idea what to do next.

The move is to install what he calls the process control worldview: “everything in business is a process.” A chart is the output of some underlying phenomenon — not the thing itself. “Process” here means phenomenon, broader than workflow: website visitors, mall footfall, safety incidents all qualify. Once you think this way, you stop reacting to the chart and start reasoning about the generator behind it.

From that worldview, three common-sense questions fall out:

  1. Have you successfully changed the process?
  2. Has something else changed in the underlying process?
  3. Is the data point routine variation, or exceptional?

Chin then collapses these. Questions 2 and 3 are the same question: if you can detect exceptional variation, you can detect change regardless of source. Question 1 reduces to hunting for inputs you can vary and measure. The final distilled reflex when looking at any operational chart:

  1. What are the inputs to this process, and which of them can I vary and measure?
  2. Is this data point routine variation, or exceptional variation?

That’s the whole literacy. Chin’s insistence: “the magic isn’t really in the charts” — the magic is that asking these two questions forces action, and action is what generates knowledge. You can’t answer “what are the inputs?” without running experiments. You can’t answer “routine vs exceptional?” without understanding your process at a depth most operators skip.
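The "routine vs exceptional" half of the reflex has a concrete mechanical form in the process-control tradition the essay leans on: a process behavior chart. A minimal sketch, assuming Wheeler-style XmR limits (the standard 2.66 moving-range constant) and made-up traffic numbers, neither of which is from Chin's essay:

```python
def xmr_limits(values):
    """Natural process limits for a series of individual values.

    Uses the standard XmR construction: mean +/- 2.66 times the
    average moving range between consecutive points.
    """
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

def exceptional(values):
    """Indices of points outside the natural process limits."""
    lo, hi = xmr_limits(values)
    return [i for i, v in enumerate(values) if v < lo or v > hi]

# Hypothetical weekly traffic with one obvious spike.
traffic = [100, 104, 98, 102, 99, 480, 101, 103]
print(exceptional(traffic))  # → [5]
```

Points inside the limits are routine variation and carry no signal worth reacting to; points outside them are the "exceptional" case that warrants hunting for a cause.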

Worked example from his own data: the Commoncog traffic chart is obviously exceptional variation — massive Hacker News spikes — therefore unpredictable, therefore unimprovable as-is. His fix: separate HN traffic onto its own graph. The residual (direct + organic search) looks predictable, which means now he can run experiments. That one decomposition moves him from paralysis to a testable roadmap.
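Chin's decomposition move can be sketched mechanically. A toy version with hypothetical source-tagged traffic rows (field names, source labels, and numbers are assumptions for illustration, not Commoncog data):

```python
def split_by_source(rows, spiky=("hn",)):
    """Split a composite daily series into spike and baseline sub-series.

    Rows tagged with a spiky source go to one dict; everything else
    (direct, organic, etc.) is summed into the baseline dict.
    """
    spike, baseline = {}, {}
    for r in rows:
        bucket = spike if r["source"] in spiky else baseline
        bucket[r["date"]] = bucket.get(r["date"], 0) + r["visits"]
    return spike, baseline

daily = [
    {"date": "d1", "source": "organic", "visits": 90},
    {"date": "d1", "source": "hn", "visits": 0},
    {"date": "d2", "source": "organic", "visits": 95},
    {"date": "d2", "source": "hn", "visits": 4000},
]

spike, baseline = split_by_source(daily)
# baseline now looks like a stable process you can experiment on;
# spike gets its own graph and is excluded from improvement work.
```

The point of the split is exactly Chin's: the mixed series is unimprovable, but each sub-series can be judged against its own variation band.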

Caveats Chin flags:

  - Not every chart is operational. Some are purely informational, like his mobile-vs-desktop traffic split: they build situational awareness but aren't meant to drive action, so the two-question reflex doesn't apply to them.

Mapping against Ray Data Co

1. The two-question reflex is the agent-deployer’s core literacy. The MAC framework produces time series — test pass rates, drift deltas, recon gaps, LLM eval scores. An operator without Chin’s reflex will either react to every wiggle (alert fatigue, MAC degrades into noise) or ignore genuine process shifts (MAC becomes theater). The agent-deployer JD (2026-04-14-levie-agent-deployer-role-jd) is functionally “person who runs the two-question reflex against AI agent outputs.” We should bake this essay into the Day-1 onboarding for any deployer we coach.

2. “Everything is a process” is the data-model worldview in one sentence. RDCO’s ../04-tooling/rdco-state-ownership-architecture argues the client owns the vault + skills + data. Chin’s framing sharpens it: what the client actually owns is a causal model of the business phenomenon generating the data. The vault persists that model; MAC charts are how the model gets tested. Without the process-worldview reflex, state-ownership degrades into “we have a folder of numbers.”

3. Chin’s decomposition move is the MAC matrix in microcosm. When he separates HN traffic from direct/organic, he’s doing exactly what the MAC 3×6 matrix does — splitting a composite signal into sub-processes where each sub-process can be evaluated on its own variation band. The matrix formalizes this: scope (column/row/aggregate) × basis (absolute/rel-source/rel-production/rel-recon/temporal/human) is a library of legitimate decomposition axes. See ../01-projects/data-quality-framework/testing-matrix-template. Chin’s essay gives us the motivating story for why that decomposition is non-negotiable: mixed processes are unimprovable.
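The 3×6 cross product named above can be enumerated literally. Labels are taken from this note; their semantics belong to the MAC framework itself:

```python
from itertools import product

scopes = ["column", "row", "aggregate"]
bases = ["absolute", "rel-source", "rel-production",
         "rel-recon", "temporal", "human"]

# Each cell is one legitimate axis along which a composite
# signal can be decomposed before evaluating its variation.
cells = [f"{s}/{b}" for s, b in product(scopes, bases)]
print(len(cells))  # → 18
```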

4. phData/MG coaching hook — this is the candidate Day-1 reading. The essay is short, concrete, and forces the reflex before any tooling arrives. It also pre-empts the failure mode we’ll see in every engagement: the client’s BI team shows us a chart and asks “what does this mean?” The right answer is Chin’s — “what process is this the output of, and what are its inputs?” — and that redirect reframes the entire engagement from “fix our dashboards” to “instrument your processes.” It is cheap, portable, and it lets the client test whether their existing team can already think this way.

5. The “informational chart” caveat protects against MAC over-application. Chin’s mobile/desktop-split example is the direct analog of MAC tests we should not write. Not every metric deserves a severity tier. Part of the coaching work is helping operators distinguish “this is actionable and deserves a band” from “this is situational awareness and deserves a readout.” Worth noting in the MAC playbook: the framework is opt-in per metric, not universal.

Implicit challenge this version of the essay surfaces: Chin’s whole argument turns on the word action. The two questions are only useful because answering them forces you to run experiments and understand your process mechanics. For RDCO this means the coaching engagement has to include a “run at least one input-hunting experiment” milestone — otherwise the client installs the vocabulary without installing the reflex. Add to the coaching curriculum: the first MAC drill should be “pick a noisy metric, decompose it like Chin does with HN traffic, identify one controllable input, run the experiment.”