

2026-04-07 · concept · source: x.com/@shannholmberg · by Shannon Holmberg
ai-maturity · consulting · frameworks · productivity

Four Levels of AI Use

A framework for classifying AI adoption maturity, originally from an Anthropic growth marketer. Shared by @shannholmberg on X (687 likes, 1,403 bookmarks, 84K impressions). Resonated widely because it’s simple enough to teach and precise enough to be actionable.

The Framework

Level 1 — Automate what you already do
Reporting, copy, data pulls. Tasks that existed before AI and now run faster. The work is the same; the labor is reduced.

Level 2 — Use AI as a thinking partner where it’s better than you
Negotiation, brainstorming, pattern-matching across domains. AI acts as a collaborator, not just an executor. The work product is better, not just cheaper.

Level 3 — Do work that was below the ROI threshold before
This is the novel category — work that never existed because the manual cost was never worth it. AI doesn’t just speed up existing work; it makes previously uneconomical work viable. Column-level dbt documentation. Comprehensive cross-referencing. Full CRM enrichment. Nobody did these because the payoff didn’t justify the hours. Now the math flips.

Level 4 — Build custom tools only you would ever build
Tools shaped to your specific data, workflows, and edge cases. No generic product covers them. ROI compounds because the tools improve with use and can’t be replicated by a competitor running off-the-shelf.

Key Insight

Level 3 is where AI creates new value rather than just redistributing existing value. Level 4 is where that value becomes defensible — the tools are idiosyncratic by design, which means they can’t be commoditized.

The framework generalizes beyond marketing to any discipline. In data/analytics: Level 3 is comprehensive column documentation in dbt — it took too long before, but now it’s quick, can be updated incrementally, and becomes valuable context for agents to do more accurate work downstream. The documentation becomes a product for agents, not just for humans.
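As a concrete sketch of what that Level 3 artifact looks like in dbt (the model and column names here are illustrative, not taken from any real project), column-level documentation lives in a schema.yml properties file:

```yaml
# Illustrative sketch — model and column names are hypothetical.
version: 2

models:
  - name: fct_orders
    description: "One row per completed order."
    columns:
      - name: order_id
        description: "Primary key; unique per order."
        tests:
          - unique
          - not_null
      - name: order_total
        description: "Order value in USD, tax included."
```

Writing these descriptions by hand across hundreds of columns was the work nobody did; drafting them with AI and reviewing incrementally is what flips the math.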

RDCO Examples by Level

Level 1 — Automating existing work

Level 2 — AI as thinking partner

Level 3 — Work that never existed

Level 4 — Custom tools only we’d build

Vault Connections

The vault itself is a Level 3 artifact. Maintaining 556 cross-linked, semantically searchable documents was never worth doing manually — Karpathy’s compounding knowledge loop describes exactly why: the work only becomes valuable once the base is dense enough that retrieval starts surfacing non-obvious connections. We crossed that threshold.
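The retrieval loop behind “semantically searchable” can be sketched in a few lines. This is a minimal stand-in, not the vault’s actual implementation: it uses bag-of-words cosine similarity where a real system would use embedding vectors, and the document names are made up.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Bag-of-words term counts; a real vault would use embeddings here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, docs: dict[str, str]) -> list[tuple[str, float]]:
    # Rank every document by similarity to the query, best first.
    q = vectorize(query)
    return sorted(
        ((name, cosine(q, vectorize(body))) for name, body in docs.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Toy corpus standing in for cross-linked vault notes.
docs = {
    "dbt-docs": "column level documentation for dbt models",
    "crm-enrichment": "full crm enrichment pipeline notes",
    "negotiation": "ai as a thinking partner in negotiation",
}
print(search("dbt column documentation", docs)[0][0])  # prints: dbt-docs
```

The compounding-loop point falls out of the ranking step: retrieval only starts surfacing non-obvious neighbors once the corpus is dense enough that second- and third-best matches are meaningful.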

The skills architecture is Level 4. A ~/.claude/skills/ directory is infinitely customizable, but the specific skills we’ve built — /compile-vault, /research-brief, /check-board — are shaped to our operating model and our data. See Claude Code architecture teardown for how this maps to the layers of the harness.

Products for agents is the Level 3/4 pattern applied to data: structuring knowledge for AI consumption rather than human browsing. Level 3 produces the artifacts; Level 4 builds the infrastructure that makes those artifacts useful to downstream agents.
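One way to picture the Level 3/4 split is the artifact itself: structured data an agent can parse, rather than prose a human skims. The schema below is a hypothetical illustration, not the RDCO format.

```python
import json

# Illustrative, hypothetical schema: a column-documentation artifact
# packaged for downstream agents rather than human browsing.
artifact = {
    "kind": "column_documentation",  # what kind of artifact this is
    "model": "fct_orders",           # hypothetical dbt model name
    "columns": [
        {"name": "order_id", "description": "Primary key; one row per order."},
        {"name": "order_total", "description": "Order value in USD, tax included."},
    ],
    "links": ["products-for-agents"],  # cross-references to other notes
}

payload = json.dumps(artifact, indent=2)
print(payload)
```

Producing artifacts like this is the Level 3 work; the Level 4 work is the infrastructure (search, linking, skills) that makes them cheap to generate and retrieve.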

The Karpathy idea file pattern is Level 3 knowledge management — maintaining a compounding wiki was never economical before. The RDCO vault operationalizes this at the business level.

Consulting Application

This framework is immediately useful for client conversations — it gives enterprises a self-assessment tool and a roadmap in one.

Assessment questions by level:

Most enterprises are stuck between Level 1 and 2. They’ve added AI to existing workflows but haven’t asked what work becomes viable now that didn’t exist before. The Level 3 question is often the most productive one in a discovery session — it surfaces latent demand the client didn’t know to articulate.

At phData, this maps directly to the AI Workforce team’s mandate: helping enterprises on Snowflake Intelligence and Cortex AI identify where Level 3 and 4 work is possible with the data infrastructure they already own. The phData project context: clients have the data; the consulting question is what AI-enabled work that data now makes viable.

The framework also provides a maturity benchmark for roadmapping — “you’re Level 1/2 today; here’s what Level 3 looks like in your context; here’s what you’d need to build to reach Level 4.” Concrete, actionable, doesn’t require the client to have AI literacy to engage with it.

Newsletter Angle

Strong candidate for a Sanity Check issue mapped to data/analytics. The Level 3 framing — work that was always worth doing but never economical — translates directly to analytics backlogs: column documentation, lineage annotation, data quality scoring, semantic layer enrichment. Every data team has a list of things they’d do “if they had time.” AI just gave them the time.