Four Levels of AI Use
A framework for classifying AI adoption maturity, originally from an Anthropic growth marketer. Shared by @shannholmberg on X (687 likes, 1,403 bookmarks, 84K impressions). Resonated widely because it’s simple enough to teach and precise enough to be actionable.
The Framework
Level 1 — Automate what you already do
Reporting, copy, data pulls. Tasks that existed before AI and now run faster. The work is the same; the labor is reduced.

Level 2 — Use AI as a thinking partner where it’s better than you
Negotiation, brainstorming, pattern-matching across domains. AI acts as a collaborator, not just an executor. The work product is better, not just cheaper.

Level 3 — Do work that was below the ROI threshold before
This is the novel category — work that never existed because doing it manually was never worth the cost. AI doesn’t just speed up existing work; it makes previously uneconomical work viable. Column-level dbt documentation. Comprehensive cross-referencing. Full CRM enrichment. Nobody did these because the payoff didn’t justify the hours. Now the math flips.

Level 4 — Build custom tools only you would ever build
Tools shaped to your specific data, workflows, and edge cases. No generic product covers them. ROI compounds because the tools improve with use and can’t be replicated by a competitor running off-the-shelf.
Key Insight
Level 3 is where AI creates new value rather than just redistributing existing value. Level 4 is where that value becomes defensible — the tools are idiosyncratic by design, which means they can’t be commoditized.
The framework generalizes beyond marketing to any discipline. In data/analytics: Level 3 is comprehensive column documentation in dbt — it took too long before, but now it’s quick, can be updated incrementally, and becomes valuable context for agents to do more accurate work downstream. The documentation becomes a product for agents, not just for humans.
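The dbt example can be sketched programmatically. A minimal Python sketch of building a column-documented schema entry, assuming the descriptions come from an upstream LLM pass over the model SQL (the model and column names here are hypothetical):

```python
import json

def build_schema_entry(model: str, columns: dict[str, str]) -> dict:
    """Build a dbt-style schema entry (models: -> columns:) from
    column-name -> description pairs. In practice the descriptions
    would be drafted by an LLM; here they're supplied directly."""
    return {
        "name": model,
        "columns": [
            {"name": col, "description": desc} for col, desc in columns.items()
        ],
    }

# Hypothetical model: the kind of descriptions nobody wrote by hand.
entry = build_schema_entry(
    "fct_orders",
    {
        "order_id": "Surrogate key for the order.",
        "ordered_at": "UTC timestamp when the order was placed.",
        "net_revenue": "Order revenue after discounts, in USD.",
    },
)
print(json.dumps(entry, indent=2))
```

Serialize the same structure to YAML and it drops straight into a dbt schema.yml, where agents can read it as context for downstream work.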
RDCO Examples by Level
Level 1 — Automating existing work
- /process-inbox — reading and filing content that previously required manual triage
- /check-board — surfacing Notion task status that previously required manual review
- Tax document extraction — pulling numbers from PDFs into structured tables
Level 2 — AI as thinking partner
- Negotiation framework development (phData offer, client terms)
- Interview prep — mock Q&A where AI stress-tests answers better than a solo rehearsal
- Growth strategy analysis — pattern-matching across cases the founder hasn’t seen
Level 3 — Work that never existed
- 859-contact CRM import with Sivers-style relationship tiers — manually uneconomical at that scale, now done incrementally
- Vault maintenance — 556 cross-linked docs is a Level 3 artifact; no solo operator maintained a knowledge base this dense before AI made it viable
- Comprehensive tax cross-referencing across all entity types in a single session
- Sivers book processing — every highlight enriched with commentary, cross-links, and vault connections
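The CRM tiering above can be sketched as a small follow-up-cadence model. The tier names and cadence windows below are assumptions for illustration — the actual Sivers-style tiers in the CRM may differ:

```python
from datetime import date, timedelta

# Hypothetical tier -> follow-up cadence in days.
CADENCE_DAYS = {"A": 30, "B": 90, "C": 365}

def next_touch(tier: str, last_contact: date) -> date:
    """Date a contact in a given tier is next due for outreach."""
    return last_contact + timedelta(days=CADENCE_DAYS[tier])

def overdue(contacts: list[dict], today: date) -> list[str]:
    """Names of contacts whose next touch date has passed."""
    return [
        c["name"]
        for c in contacts
        if next_touch(c["tier"], c["last_contact"]) < today
    ]

contacts = [
    {"name": "Ada", "tier": "A", "last_contact": date(2024, 1, 1)},
    {"name": "Grace", "tier": "C", "last_contact": date(2024, 1, 1)},
]
print(overdue(contacts, today=date(2024, 3, 1)))  # → ['Ada']
```

At 859 contacts this loop is trivial for a machine and hopeless by hand — which is exactly the Level 3 math flip.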
Level 4 — Custom tools only we’d build
- Skills architecture (~/.claude/skills/) — built for our specific operating model, incompatible with a generic Claude setup
- 1Password MCP wrapper pattern — secrets management without .env files; shaped to our exact security posture
- Notion CRM with Sivers relationship tiers — our specific contact taxonomy, not a generic CRM feature
- Newsletter pipeline (/research-brief, /draft-review, /remix) — the full Sanity Check production stack, tuned to our voice and distribution model
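The 1Password wrapper pattern rests on the real op read command, which resolves op:// secret references at runtime. A minimal Python sketch — the vault and item names are hypothetical, and it requires an authenticated op session:

```python
import subprocess

def op_ref(vault: str, item: str, field: str) -> str:
    """Build a 1Password secret reference (op:// URI)."""
    return f"op://{vault}/{item}/{field}"

def read_secret(ref: str) -> str:
    """Resolve a secret at runtime via the 1Password CLI (`op read`),
    so nothing ever lands in a .env file on disk."""
    return subprocess.run(
        ["op", "read", ref], capture_output=True, text=True, check=True
    ).stdout.strip()

# Example usage (names are placeholders):
# api_key = read_secret(op_ref("Infra", "notion", "api-key"))
```

The design point: secrets exist only in the 1Password vault and in process memory, which is the security posture the MCP wrapper enforces.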
Vault Connections
The vault itself is a Level 3 artifact. Maintaining 556 cross-linked, semantically searchable documents was never worth doing manually — Karpathy’s compounding knowledge loop describes exactly why: the work only becomes valuable once the base is dense enough that retrieval starts surfacing non-obvious connections. We crossed that threshold.
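The compounding-retrieval claim above is mechanical: once cross-links are dense, inverting them surfaces connections nobody placed deliberately. A minimal sketch of a backlink index over wiki-linked notes — note names here are hypothetical, and the real vault adds semantic search on top of the link graph:

```python
import re

# Matches [[Target]] and [[Target|alias]] wiki-links.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def backlinks(vault: dict[str, str]) -> dict[str, set[str]]:
    """Invert wiki-links: for each note, the set of notes linking to it.
    This inverted index is what makes retrieval surface non-obvious
    connections once the vault crosses a density threshold."""
    index: dict[str, set[str]] = {name: set() for name in vault}
    for name, text in vault.items():
        for target in WIKILINK.findall(text):
            index.setdefault(target.strip(), set()).add(name)
    return index

vault = {
    "four-levels": "See [[karpathy-idea-file]] and [[products-for-agents]].",
    "karpathy-idea-file": "Compounds via [[four-levels]].",
}
print(backlinks(vault)["four-levels"])
```

Maintaining this by hand across 556 documents was never economical; regenerating it on every edit is nearly free.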
The skills architecture is Level 4. A ~/.claude/skills/ directory is infinitely customizable, but the specific skills we’ve built — /compile-vault, /research-brief, /check-board — are built for our operating model and our data. See Claude Code architecture teardown for how this maps to the layers of the harness.
Products for agents is the Level 3/4 pattern applied to data: structuring knowledge for AI consumption rather than human browsing. Level 3 produces the artifacts; Level 4 builds the infrastructure that makes those artifacts useful to downstream agents.
The Karpathy idea file pattern is Level 3 knowledge management — maintaining a compounding wiki was never economical before. The RDCO vault operationalizes this at the business level.
Consulting Application
This framework is immediately useful for client conversations — it gives enterprises a self-assessment tool and a roadmap in one.
Assessment questions by level:
- Level 1: Where are humans still manually running reports or pulling data that could be templated?
- Level 2: Where is analysis happening in silos, without a thinking partner pressure-testing assumptions?
- Level 3: What work has been consistently deprioritized because the manual cost was too high? (Documentation, enrichment, cross-referencing, audit trails)
- Level 4: What workflows are idiosyncratic enough that no off-the-shelf tool will ever cover them?
Most enterprises are stuck between Levels 1 and 2. They’ve added AI to existing workflows but haven’t asked what work becomes viable now that never existed before. The Level 3 question is often the most productive one in a discovery session — it surfaces latent demand the client didn’t know how to articulate.
At phData, this maps directly to the AI Workforce team’s mandate: helping enterprises on Snowflake Intelligence and Cortex AI identify where Level 3 and 4 work is possible with the data infrastructure they already own. The phData project context: clients have the data; the consulting question is what AI-enabled work that data now makes viable.
The framework also provides a maturity benchmark for roadmapping — “you’re Level 1/2 today; here’s what Level 3 looks like in your context; here’s what you’d need to build to reach Level 4.” Concrete, actionable, doesn’t require the client to have AI literacy to engage with it.
Newsletter Angle
Strong candidate for a Sanity Check issue mapped to data/analytics. The Level 3 framing — work that was always worth doing but never economical — translates directly to analytics backlogs: column documentation, lineage annotation, data quality scoring, semantic layer enrichment. Every data team has a list of things they’d do “if they had time.” AI just gave them the time.