Reforge — Strategic vs. Ad Hoc Experimentation
Summary
Most teams think they have an experimentation program when they really have ad hoc testing dressed up with process. The mental model is a six-dimension comparison between strategic and ad hoc experimentation:
| Dimension | Ad Hoc | Strategic |
|---|---|---|
| Orientation | Reactive — tests whatever’s handy | Proactive — tests flow from strategy |
| Alignment | Disconnected from business objectives | Grounded in broader business goals |
| User understanding | Relies on pre-existing assumptions | Seeks deeper understanding of user experience |
| Failure response | Writes off failed tests, moves to next | Iterates on failed experiments to generate wins |
| Idea sourcing | Siloed — one person or team generates ideas | Cross-organizational idea solicitation |
| Analysis depth | Surface-level results (win/lose) | Drills into why results happened |
The core insight: ad hoc testing optimizes for test velocity (how many tests can we run?) while strategic experimentation optimizes for learning velocity (how much do we understand about our users and business?). A failed experiment in a strategic system is often more valuable than a win in an ad hoc system, because the strategic team will dig into WHY it failed and iterate.
Relevance to projects:
- 01-projects/squarely-puzzles/index — With limited traffic, every experiment is expensive (slow to reach statistical significance). That makes the strategic approach even more critical: the project can't afford to waste experiments on ad hoc ideas, and each test needs to be grounded in a specific hypothesis about the user journey.
- 01-projects/data-marketplace/index — Early-stage products often default to ad hoc testing because “we don’t have enough data yet.” But the framework suggests the opposite: with limited resources, being strategic about what you test matters MORE, not less.
- 01-projects/phdata/index — Client analytics engagements often help companies move from ad hoc to strategic experimentation. This framework could be a diagnostic tool in initial client conversations.
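The claim that low traffic makes each experiment expensive can be made concrete with a standard sample-size calculation for a two-proportion test. A minimal sketch using only the Python standard library; the baseline rate and lift below are illustrative assumptions, not figures from the note:

```python
import math
from statistics import NormalDist

def required_n_per_arm(p_base, p_variant, alpha=0.05, power=0.80):
    """Approximate visitors needed per arm for a two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    effect = p_variant - p_base
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Illustrative: detecting a 20% relative lift on a 3% baseline conversion
n = required_n_per_arm(0.03, 0.036)
print(n)  # roughly 14,000 visitors per arm before the test can conclude
```

At a few hundred visitors a day, that is months per test, which is why a low-traffic project gets very few experiments per year and can't spend them on unfocused ideas.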
Connects to 06-reference/2026-04-03-five-myths-of-experimentation (common misconceptions that lead to ad hoc approaches), 06-reference/2026-04-03-reforge-defining-strategy (strategy as the foundation for what to test), and 06-reference/2026-04-03-reforge-monetization-defensibility (experimentation should target the growth pillars).
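As a thought experiment for the diagnostic use case, the six dimensions could be phrased as an intake questionnaire and scored. A hypothetical sketch: the dimension names come from the table above, but the questions and thresholds are invented for illustration, not part of the Reforge framework:

```python
# Each dimension from the comparison table, phrased as a yes/no question
# where "yes" indicates the strategic end of the spectrum.
DIMENSIONS = {
    "orientation": "Do tests flow from an explicit strategy?",
    "alignment": "Is each test tied to a broader business goal?",
    "user_understanding": "Do tests deepen understanding of the user experience?",
    "failure_response": "Are failed experiments iterated on, not written off?",
    "idea_sourcing": "Are test ideas solicited across the organization?",
    "analysis_depth": "Do analyses ask why a result happened, beyond win/lose?",
}

def diagnose(answers: dict) -> str:
    """Classify a team's practice from yes/no answers.

    Thresholds are invented for illustration; missing answers count as "no".
    """
    score = sum(bool(answers.get(d)) for d in DIMENSIONS)
    if score >= 5:
        return "strategic"
    if score >= 3:
        return "mixed"
    return "ad hoc"

print(diagnose({"orientation": False, "failure_response": True}))  # ad hoc
```

Even as a back-of-napkin exercise, walking a client through the six questions surfaces which dimension to fix first.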
Open Questions
- What’s the minimum viable experimentation program for a solo operator running multiple small bets? Full strategic rigor isn’t practical, but pure ad hoc wastes cycles.
- How do you build “iterate on failures” into a workflow when you’re moving fast between projects? The temptation to write off a failed test and move on is strongest when you have many parallel bets.
- Is there a lightweight version of cross-organizational idea sourcing that works for a one-person company? Customer interviews? Community feedback loops?