Reforge — Why Most Analytics Efforts Fail
Summary
Most companies describe their data as “a mess,” but that’s a symptom, not the disease. The article’s mental model breaks analytics failure into four symptoms and five root causes, then offers a framework for fixing them.
Four symptoms of broken analytics:
- Lack of shared language — Different teams define the same metric differently, rendering data discussions unproductive.
- Slow transfer of knowledge — When people switch roles, teams, or companies (on average every 18 months), institutional data knowledge walks out the door.
- Lack of trust — “Is that really right?” becomes the default reaction to any data presented.
- Inability to act quickly — All three symptoms above compound into paralysis. Teams skip data entirely because using it takes too long.
Five root causes (what most teams miss):
- Tracking metrics vs. analyzing them — The goal isn’t to report numbers; it’s to separate what successful users do from what failed users do. This distinction fundamentally changes what you track and how.
- Developer mindset vs. business user mindset — Data teams build for themselves instead of their actual customer (business users).
- Wrong level of abstraction — Events are either too broad or too specific; great tracking balances the two. Different “eras” of implementation create clashing abstractions.
- Written-only vs. visual communication — Bad teams have no documentation. Good teams have an event dictionary. Great teams combine written documentation with visual journey maps.
- Data as a project vs. ongoing initiative — Treating data as a one-time project instead of a product that requires constant iteration leads to the “Data Wheel of Death.”
The journey framework for event tracking: every user action should map to Intent -> Success -> Failure events. This is the right level of abstraction. Failure events split into implicit (user disappears from the journey) and explicit (something goes wrong).
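A minimal sketch of what the Intent -> Success -> Failure mapping could look like in practice. The event-name format (`_started` / `_completed` / `_failed`) and the `checkout` example are my own illustrative assumptions, not taken from the article; the key idea it shows is that implicit failure is never emitted as an event — it is the *absence* of a success or explicit-failure event after intent, so it is detected in analysis, not in application code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class JourneyEvents:
    """The three event names the journey framework assigns to one user action."""
    intent: str
    success: str
    failure_explicit: str  # something went wrong (e.g. payment declined)
    # Implicit failure = intent fired but neither success nor explicit failure
    # followed; it exists only in analysis queries, not as a tracked event.

def journey(action: str) -> JourneyEvents:
    """Map a single user action to its Intent/Success/Failure events.

    The naming convention here is a hypothetical example of keeping every
    action at the same level of abstraction.
    """
    return JourneyEvents(
        intent=f"{action}_started",
        success=f"{action}_completed",
        failure_explicit=f"{action}_failed",
    )

events = journey("checkout")
print(events.intent)            # checkout_started
print(events.success)           # checkout_completed
print(events.failure_explicit)  # checkout_failed
```

The payoff of this convention is that funnel queries become mechanical: for any action, conversion is `count(success) / count(intent)`, and implicit failure is whatever intent events have no matching terminal event.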
Diagnostic exercise: “Decisions Made Without Data” — Each quarter, track decisions the broader team made without data. This surfaces the highest-value gaps in your analytics coverage.
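The exercise above is essentially a log plus a frequency count. A toy sketch, assuming a simple in-memory list of entries; the field names and sample decisions are invented for illustration, not from the article:

```python
from datetime import date
from collections import Counter

# Each entry: (when, the decision made on gut feel, the data that would have helped)
log: list[tuple[date, str, str]] = []

def record(decision: str, missing_data: str, when: date) -> None:
    """Note a decision made without data and the gap that caused it."""
    log.append((when, decision, missing_data))

def gaps_by_frequency() -> list[tuple[str, int]]:
    """Rank missing data sources; the top entries are the highest-value
    analytics gaps to close next quarter."""
    return Counter(missing for _, _, missing in log).most_common()

# Hypothetical quarter of entries:
record("Paused paid ads", "channel-level CAC", date(2026, 1, 12))
record("Rewrote onboarding copy", "activation funnel by variant", date(2026, 2, 3))
record("Raised prices", "channel-level CAC", date(2026, 3, 1))

print(gaps_by_frequency())
# [('channel-level CAC', 2), ('activation funnel by variant', 1)]
```

In practice this could be a spreadsheet or a weekly journal entry; the only mechanics that matter are recording the gap alongside the decision and ranking gaps at quarter's end.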
Relevance to projects:
- 01-projects/phdata/index — This is essentially a diagnostic framework for client engagements. The four symptoms / five root causes structure could be a discovery questionnaire. The “Decisions Made Without Data” exercise is a killer opening move for new client relationships. See 06-reference/2026-04-03-selling-data-science on how to position this value.
- 01-projects/data-marketplace/index — The Intent -> Success -> Failure framework should be the tracking standard from day one. Don’t repeat the “different eras of implementation” problem. See 06-reference/2026-04-03-data-maturity-processes-tools on matching process maturity to tooling decisions.
- 01-projects/newsletter/index — Content about analytics failures resonates with the target audience (data practitioners). The symptoms/root causes framework is shareable content that demonstrates expertise.
Connects to 06-reference/2026-04-03-analytics-engineering-everywhere (analytics engineering as a response to these failures), 06-reference/2026-04-03-analytics-at-a-crossroads (industry-level view of analytics maturity), 06-reference/2026-04-03-headless-bi (tooling approach to the abstraction problem), and 06-reference/2026-04-03-scaling-data-informed-driven-led (organizational maturity model).
Signals of success (from the article):
- Bad: single person knows tracking, analysts needed for basic analysis, duplicate/inconsistent event names, decisions-without-data growing
- Good: multiple teams using event tracking, other teams contributing, tracking updated with new features
- Great: tracking embedded in goal-setting, non-technical teams self-serve, tracking survives major redesigns, data trusted enough for monetary decisions (referrals, discounts)
Open Questions
- For 01-projects/phdata/index, could the symptoms/root causes framework become a productized diagnostic? A “data health assessment” that generates a score and roadmap?
- The “Decisions Made Without Data” exercise is simple but powerful. How do you implement this for a solo operator? A weekly journal entry noting when you made a gut call that data could have informed?
- The journey framework (Intent -> Success -> Failure) is compelling but requires discipline. What’s the minimum viable implementation for a small product team?