Eric Weber: Data Team ROI in an AI-First World
Weber argues that the standard way data teams prove their worth — counting dashboards shipped, models deployed, tickets closed — was never right, and AI makes the problem impossible to ignore. When an LLM can spin up a dashboard in seconds, “artifacts produced” stops being a meaningful proxy for value. The uncomfortable admission: “We probably always measured ROI for data incorrectly.” Most analysis never actually changed a decision; counting deliverables just hid that fact.
Three Impact Metrics
Decision Velocity. Time from question asked to action executed. Not time-to-insight — time-to-decision. An 80-percent-right answer delivered today beats a perfect answer delivered next quarter. The metric forces teams to optimize for unblocking humans, not polishing artifacts.
Experiment Yield. Percentage of experiments that produce a clear ship-or-kill signal. A healthy team runs at 70%+ yield. Low yield means the data feeding those experiments is too noisy or too slow to be actionable — the team is burning cycles without producing learnable outcomes.
Revenue Affected. Incremental revenue directly traceable to data-informed decisions. Not “we supported the revenue team” — actual dollars moved by a decision that would not have happened without the data team’s work.
The through-line: shift from operational metrics (how busy are we?) to value metrics (did someone act differently because of us?). Data teams create value by accelerating better decisions, not by producing sophisticated analysis nobody uses.
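The three metrics above reduce to simple arithmetic once decisions are logged. A minimal sketch, assuming a hypothetical decision log (the field names `asked`, `acted`, `experiment_signal`, and `revenue_delta` are illustrative, not from Weber):

```python
from datetime import date
from statistics import median

# Hypothetical log: one record per data-informed decision.
decisions = [
    {"asked": date(2026, 1, 5), "acted": date(2026, 1, 8),
     "experiment_signal": "ship", "revenue_delta": 120_000},
    {"asked": date(2026, 1, 10), "acted": date(2026, 1, 31),
     "experiment_signal": "kill", "revenue_delta": 0},
    {"asked": date(2026, 2, 2), "acted": date(2026, 2, 4),
     "experiment_signal": "inconclusive", "revenue_delta": 0},
]

# Decision Velocity: days from question asked to action executed
# (median, so one slow decision doesn't mask the typical case).
velocity_days = median((d["acted"] - d["asked"]).days for d in decisions)

# Experiment Yield: share of experiments with a clear ship-or-kill signal.
conclusive = sum(d["experiment_signal"] in ("ship", "kill") for d in decisions)
yield_pct = 100 * conclusive / len(decisions)

# Revenue Affected: dollars traceable to decisions that actually moved.
revenue_affected = sum(d["revenue_delta"] for d in decisions)

print(velocity_days, round(yield_pct, 1), revenue_affected)
```

The point of the sketch is that none of these are vanity counts: each one is undefined unless the team records when a question was asked, what action followed, and what the experiment concluded.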
RDCO Mapping
Connects directly to the 2026-03-30-founder-data-quality-framework: the testing matrix ensures data is trustworthy enough to act on; Weber’s metrics measure whether anyone actually acts. Decision Velocity is the output variable that the testing framework’s input quality feeds into. Also ties to the 2026-04-07-dbt-semantic-layer-vs-text-to-sql-benchmark — semantic layers shrink the question-to-answer segment, the first leg of the Decision Velocity clock.
The 01-projects/data-quality-framework/testing-matrix-template gives consulting clients the “how” — Weber’s framework gives them the “why measure it this way.” Strong deliverable pairing for phData/MG engagements: implement the testing matrix, then measure its impact using Weber’s three metrics. Also reinforces the “Solve Everything” book chapter on the outcome-based economy — the shift from counting effort to counting results.