
Apr 7, 2026 · article · source: x.com/@geoffintech · by Geoff Charles (CPO, Ramp)
tags: ai-adoption · enterprise · consulting · culture · frameworks · phdata

How to Get Your Company AI-Pilled — Geoff Charles (CPO, Ramp)

A practitioner playbook from Ramp’s CPO on how they drove a 6,300% increase in AI usage year-over-year. Not a theory piece — this is a working model from a company that shipped 1,500+ internal apps in six weeks, got 84% of their engineering team using coding agents weekly, and had non-engineers author 12% of all human-initiated PRs. The numbers are unusually specific, which means they’re measuring, which means the outcomes are real.


The Numbers

The non-engineer PR stat is the most interesting signal. It means the abstraction layer got thin enough that people who don’t think of themselves as builders started building. That’s not tool adoption — that’s a culture shift.


The Eight Strategies

1. Start today — no plan needed

The instinct is to design the rollout before launching it. Charles argues the opposite: launch now, build culture and velocity, let the plan emerge from what actually works. Planning AI adoption is slower than doing it. Culture moves at the speed of visible examples, not documents.

2. AI proficiency is a learning curve, not a light switch

Ramp uses an L0–L3 proficiency ladder to frame where people are and where they’re going. The mistake most companies make is treating AI adoption as binary — you either use it or you don’t. The reality is a progression, and different people are at different points.

Ramp’s L0–L3 framework:

- L0: sometimes uses ChatGPT personally
- L1: custom GPTs, Notion agents, dabbling
- L2: has built an app automating part of their job, committed code
- L3: builds infrastructure for others

The L2→L3 transition is the critical one. L2 is still personal productivity. L3 is leverage — you’re now multiplying other people’s capacity, not just your own. See Four Levels of AI Use for the mapping to RDCO’s framework.

3. Creative destruction — tools should be obsolete in weeks

If your team’s AI tooling from three months ago looks basically the same as today, you’re not moving fast enough. Charles frames this as a feature, not a problem. Build things knowing they’ll be replaced. The goal is momentum, not elegance. Shipping a workflow that lasts six weeks and gets replaced by something better is a win, not a waste.

4. Build from center, drive from spokes

Ramp’s org design for AI: a central platform team owns the foundation (the infrastructure, the internal app platform, the guardrails), while functional builders at the edges own the use cases. This prevents both chaos (no central team = fragmented, insecure, unscalable) and bottlenecks (all central = too slow, wrong priorities, low adoption).

The platform team makes it easy and safe to build. The spoke builders make it actually used. The spoke builders are also the ones who understand domain problems deeply enough to build things worth using.

5. Give people a stage, not a mandate

Top-down mandates generate compliance. Stages generate culture. Ramp ran Slack channels, office hours, and all-hands demos specifically to surface what people were building and let it spread organically. The mechanism: make visible examples easy to share. The social proof of seeing a peer ship something real is more motivating than any policy.

This is Sivers’s conference protocol applied internally: the real work happens in the follow-up and the social graph, not in the announcement. Put people in rooms (or channels) where they can learn from each other, and get out of the way.

6. Get to the “aha” moment fast

Ramp built two internal products, Glass and Dojo, specifically to accelerate individual breakthroughs.

Both are products-for-agents and products-for-builders — structured artifacts that lower the floor for the next person who wants to build something similar. The underlying insight: adoption isn’t stalled on motivation, it’s stalled on not knowing where to start. Dojo solves the “I don’t know what to build” problem by showing what others already built.

7. Make it a competition

Ramp ran a 700-person hackathon and built leaderboards for AI usage. They made AI proficiency a hiring requirement. The mechanism: peer pressure and status games, when pointed in the right direction, are among the most powerful behavior-change tools available. Engineers who know they’ll be asked about their AI tooling in interviews will develop AI tooling.

8. Remove every constraint

Unlimited tokens. No approval workflows. Pre-connected tools. Charles is explicit that friction is the enemy of adoption. Every time someone has to submit a request to access a tool, a meaningful fraction of potential users never do it. The ROI calculation on removing friction is asymmetric — the cost of a few wasted tokens is negligible compared to the adoption lost to a single extra approval step.


The L0–L3 / Four Levels Mapping

Ramp’s framework and RDCO’s Four Levels of AI Use cover the same ground from different angles. The mapping:

| Ramp Level | Description | RDCO Level |
| --- | --- | --- |
| L0 | Sometimes uses ChatGPT personally | Below Level 1 |
| L1 | Custom GPTs, Notion agents, dabbling | Level 1 — automating what already exists |
| L2 | Built an app automating part of their job, committed code | Level 3 — doing work that was previously below the ROI threshold |
| L3 | Builds infrastructure for others | Level 4 — custom tools only you would build |

A few notes on the mapping:

Ramp skips the equivalent of RDCO’s Level 2 — using AI as a genuine thinking partner where it’s better than you. This makes sense for a fintech engineering org; their frame is builder-centric. The thinking-partner level matters more in consulting and knowledge work contexts.

Ramp’s L3 maps cleanly to RDCO Level 4. Systems builders who build for others are the ones creating compounding leverage. The 12% of PRs from non-engineers is a downstream effect of L3 builders lowering the floor — they built the platform that non-engineers could ship on. This is the multiplier effect.

The gap between L1 and L2 is the hardest jump. Dabbling with Notion agents is low-stakes and reversible. Committing code that automates part of your job is irreversible and visible. Organizations that stall between L1 and L2 usually have one of two problems: lack of permission (no stage to show work) or lack of examples (nobody near them has done it yet). Ramp’s Glass and Dojo both directly attack this gap.


Actionable for RDCO

1. The spoke builder model is the right frame for RDCO’s AI Workforce consulting work. The center/spoke architecture maps directly to how phData can position AI rollouts: phData as the platform team (governance, integration, infrastructure), client champions as the spoke builders. This gives clients agency without chaos.

2. The “get to aha fast” strategy is a product question, not a training question. Ramp built Glass and Dojo specifically because documentation and training don’t work as well as “see what your colleague already built and copy it.” For RDCO consulting engagements, the equivalent is building a visible internal wins catalog in week one — not a slide deck of use cases, but a living log of things that actually shipped.

3. The non-engineer PR metric is the right north-star for enterprise clients. Most orgs measure AI adoption by tool licenses purchased or training sessions attended. Ramp measures it by output from people who weren’t previously builders. Track that metric from day one on engagements.

4. Remove friction before adding features. The insight from strategy 8 applies directly to the Claude Architect course material on tool scoping and MCP configuration: pre-connecting tools removes a decision point that kills momentum. For internal RDCO tooling, this means every skill should ship with pre-loaded context, not require the user to configure it first.

5. The hackathon model is underused. 700 participants in a hackathon is a culture signal, not just a fun event. The format accelerates the L1→L2 transition by creating social permission and a deadline. Worth proposing as a structured kickoff format for enterprise AI rollout engagements.
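The non-engineer PR metric from point 3 is straightforward to instrument. A minimal sketch, assuming a hypothetical `PullRequest` record where the engineer/non-engineer flag would come from your own org data (names and structure here are illustrative, not Ramp's tooling):

```python
from dataclasses import dataclass

# Hypothetical PR record; author_is_engineer would be joined in
# from HR/org-chart data, not from the version-control system itself.
@dataclass
class PullRequest:
    author: str
    author_is_engineer: bool

def non_engineer_pr_share(prs):
    """Share of human-initiated PRs authored by non-engineers."""
    if not prs:
        return 0.0
    return sum(1 for pr in prs if not pr.author_is_engineer) / len(prs)

# Illustrative sample: 3 of 25 PRs from non-engineers -> 12%,
# matching the Ramp figure cited above.
prs = [PullRequest(f"eng{i}", True) for i in range(22)]
prs += [PullRequest(f"ops{i}", False) for i in range(3)]
print(f"{non_engineer_pr_share(prs):.0%}")  # prints "12%"
```

The key design choice is excluding bot-authored PRs upstream ("human-initiated" in Ramp's phrasing) so the metric measures people becoming builders, not automation volume.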


Consulting Application

This is directly useful for positioning RDCO’s consulting work with phData clients. Here’s how to translate each strategy into a client conversation:

The L0–L3 ladder as a diagnostic tool. Before any AI rollout conversation, map the team. Ask: where are your engineers? Where are your domain experts? The diagnostic reveals not just maturity but where the biggest leverage is. A team at 80% L1 has a different problem than a team split 50/50 between L0 and L2.
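The diagnostic above can be run as a simple tally over survey data. A sketch assuming self-reported levels from a team questionnaire (the names and levels below are invented for illustration):

```python
from collections import Counter

# Hypothetical survey responses: person -> self-reported Ramp level.
team = {
    "alice": "L2", "bob": "L1", "carol": "L0", "dan": "L1",
    "erin": "L1", "frank": "L3", "grace": "L1", "heidi": "L0",
}

def level_distribution(levels):
    """Fraction of the team at each rung of the L0-L3 ladder."""
    counts = Counter(levels)
    total = len(levels)
    return {lvl: counts.get(lvl, 0) / total for lvl in ("L0", "L1", "L2", "L3")}

dist = level_distribution(list(team.values()))
# A team clustered at L1 signals the L1->L2 gap described earlier:
# lack of permission (no stage) or lack of nearby examples.
print(dist)
```

The output shape makes the two failure modes from the note easy to spot: a mass at L1 points to the permission/examples gap, while a bimodal L0/L2 split points to uneven access rather than uneven motivation.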

“Creative destruction” reframes the ROI conversation. Enterprise clients often stall on ROI because they want tools that last. Ramp’s frame flips this: the right question isn’t “will this tool still be worth it in two years?” — it’s “what will we learn in the first six weeks?” Velocity generates more value than permanence at this stage.

The center/spoke model solves the governance vs. adoption tension. The biggest tension in enterprise AI rollouts is between IT/security (who want centralized control) and business units (who want to move fast). The center/spoke design gives IT the control surface they need (the platform) while giving business units the autonomy they want (spoke builders). This is a negotiation frame, not just an architecture diagram.

The non-engineer PR stat is a closing argument. When a client asks why they should invest in broader AI rollout beyond their engineering team, this number is the answer: at Ramp, builders who had never committed code were authoring 12% of all PRs within a year of rollout. The floor dropped far enough that the definition of “who can build” changed. That’s the enterprise transformation story.

The “stage not mandate” principle maps to change management. Most enterprise AI rollouts fail because they’re pushed top-down without creating visible social proof. The Ramp model — Slack channels, office hours, all-hands demos — is a structured social proof generation engine. Early adopters become the sales force. The consulting engagement design should include “stage creation” as an explicit deliverable, not an afterthought.


Vault Connections