We Built Every Employee at Ramp Their Own AI Coworker — Seb Goddijn
The technical companion to Geoff Charles’s strategy article. Geoff covered culture, org design, and the eight strategies. Seb covers what they actually built — Glass, the internal AI platform that made 99% adoption mean something. These two articles are designed to be read together.
The Problem They Were Solving
Ramp got to 99% tool adoption and still had a failure mode: most people were stuck. The surface-level issue was technical — terminal setup, npm installs, MCP configuration. These are real barriers, but they’re symptoms. The deeper issue was that the people who pushed through those barriers and built something powerful for themselves had no way to share it. Every breakthrough was siloed. The organization created urgency without infrastructure to route that urgency into compounding gains.
The result: high tool adoption, low capability ceiling. Getting from “I have Claude installed” to “I have a working AI coworker” required more activation energy than most people would expend unprompted. And each person who did expend it started from scratch.
Three Design Principles
1. Don’t limit upside — make complexity invisible. The common response to complex tooling is to simplify it, which means removing capability. Ramp’s design principle was different: preserve full capability, hide the configuration. Power users shouldn’t be constrained by what the median user needs. The goal is to make the floor higher without lowering the ceiling. This is the “raise the floor, don’t lower the ceiling” principle as a concrete architectural commitment, not just a slogan.
2. One person’s breakthrough becomes everyone’s baseline. When someone discovers a workflow that works, it should automatically propagate. The infrastructure should make the best individual discovery the default starting point for the next person. This is the compounding logic — each iteration of the system starts where the last iteration ended, not at zero.
3. The product IS the enablement. Workshops, onboarding sessions, and documentation don’t transfer capability at scale. The thing people install should teach them what’s possible by showing them results on day one. If you need training to get value, the product isn’t good enough yet.
Glass Architecture
Glass is Ramp’s internal AI platform built on the Claude Agent SDK. The architecture is a set of interlocking systems, each solving a different piece of the “from installed to indispensable” problem.
Auto-Configuration via Okta SSO
On install, Glass connects 30+ tools immediately using Okta SSO. No manual configuration, no API keys to hunt down, no per-tool setup steps. The principle: the first session should already be more powerful than anything a user would have built manually. This is the same insight behind RDCO’s MCP Server Setup SOP — “everything connects on day one” eliminates the decision point that kills most users’ momentum before they start.
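Glass’s actual SSO mechanics aren’t public, but the shape of the idea is simple: one identity assertion fans out into per-tool connections, so the user configures nothing. A minimal sketch, with an entirely hypothetical tool registry and token exchange (real Glass connects 30+ tools via Okta):

```python
from dataclasses import dataclass

# Hypothetical registry of tools to connect automatically — illustrative
# names only; Glass's real list is 30+ tools.
TOOL_REGISTRY = ["slack", "github", "linear", "notion", "gmail"]

@dataclass
class Connection:
    tool: str
    user: str
    token: str  # short-lived credential minted per tool from the SSO session

def exchange_sso_for_tool_token(sso_token: str, tool: str) -> str:
    """Stand-in for an OAuth-style token exchange: one SSO session becomes
    one scoped, per-tool credential. A real implementation would call the
    identity provider; here we just derive a fake token."""
    return f"{tool}-token-for-{sso_token}"

def auto_configure(user: str, sso_token: str) -> list[Connection]:
    """On first launch, connect every registered tool with zero user input."""
    return [
        Connection(tool, user, exchange_sso_for_tool_token(sso_token, tool))
        for tool in TOOL_REGISTRY
    ]

connections = auto_configure("seb", "okta-session-abc")
print(len(connections))   # one connection per registered tool
print(connections[0].tool)
```

The point of the sketch is the decision it removes: there is no per-tool setup step for the user to abandon halfway through.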
Dojo — Skills Marketplace
Dojo is Glass’s internal skills marketplace with 350+ shared skills. Skills are Git-backed, versioned, and code-reviewed before merging — the same quality bar as production software. When someone discovers a workflow that works, they publish it to Dojo. From that point forward, it’s available to everyone, not just the person who invented it.
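The publishing flow described above can be sketched as a registry with a review gate. Everything here is illustrative — the `Skill` fields and the `Dojo` class are assumptions, not Glass’s real schema — but the invariant is the one Seb describes: nothing merges without review.

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """A Dojo-style skill: versioned and reviewed before merge.
    Field names are illustrative, not Glass's real schema."""
    name: str
    version: str
    owner: str
    description: str
    reviewed_by: list[str] = field(default_factory=list)

    def mergeable(self) -> bool:
        # The quality bar: no skill lands without at least one review.
        return len(self.reviewed_by) >= 1

class Dojo:
    """In-memory stand-in for the shared, Git-backed marketplace."""
    def __init__(self):
        self._skills: dict[str, Skill] = {}

    def publish(self, skill: Skill) -> bool:
        if not skill.mergeable():
            return False  # rejected: same bar as production software
        self._skills[skill.name] = skill
        return True

    def available(self) -> list[str]:
        return sorted(self._skills)

dojo = Dojo()
draft = Skill("quarterly-summary", "1.0.0", "seb", "Summarize the quarter in Slack")
print(dojo.publish(draft))        # False: unreviewed
draft.reviewed_by.append("geoff")
print(dojo.publish(draft))        # True: one breakthrough, now everyone's baseline
```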
This is the infrastructure that converts individual breakthroughs into organizational baselines. It maps directly to Level 4 AI use — custom infrastructure that only this organization would build, designed to raise the capability floor for everyone else. Compare this to our own ~/.claude/skills/ architecture: same logic (shared, versioned, reusable), different scope (team of one vs. company of thousands). See also Anthropic’s internal skills practices for the original pattern Ramp is implementing at scale.
The contrast with Every’s shared database architecture is instructive: Every coordinates four agents through a shared Notion graph; Ramp coordinates hundreds of humans through a shared skills graph. Different coordination primitives, same underlying insight — the shared artifact is what prevents everyone from starting from scratch.
Sensei — Skill Recommendation Engine
Sensei is the AI guide inside Glass that recommends skills based on three inputs: the user’s role, the tools they have connected, and what they’re currently working on. Rather than requiring users to browse a 350-item catalog and figure out what’s relevant, Sensei surfaces the right skill at the right moment.
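Sensei’s internals aren’t described beyond those three inputs, but a minimal scoring sketch shows the shape. The weights, skill metadata, and catalog below are all assumptions for illustration — the only thing taken from the source is the three signals: role, connected tools, current work.

```python
from dataclasses import dataclass

@dataclass
class SkillMeta:
    name: str
    roles: set[str]           # roles the skill was built for
    required_tools: set[str]  # connections it needs to run
    keywords: set[str]        # matched against the user's current work

def recommend(skills, role, connected_tools, work_words, top_n=3):
    """Rank skills on Sensei's three signals: role fit, tool availability,
    and overlap with current work. Weights are illustrative."""
    def score(s: SkillMeta) -> float:
        role_fit = 1.0 if role in s.roles else 0.0
        tools_ok = 1.0 if s.required_tools <= connected_tools else 0.0
        relevance = len(s.keywords & work_words) / max(len(s.keywords), 1)
        return 2 * role_fit + 2 * tools_ok + relevance
    return [s.name for s in sorted(skills, key=score, reverse=True)[:top_n]]

# Hypothetical three-item catalog (real Dojo has 350+).
catalog = [
    SkillMeta("close-the-books", {"accountant"}, {"netsuite"}, {"close", "ledger"}),
    SkillMeta("pr-review-digest", {"engineer"}, {"github", "slack"}, {"review", "pr"}),
    SkillMeta("daily-standup", {"engineer"}, {"slack"}, {"standup", "status"}),
]
print(recommend(catalog, "engineer", {"github", "slack"}, {"pr", "review"}, top_n=2))
# → ['pr-review-digest', 'daily-standup']
```

Even this crude version captures the difference between a browsable catalog and a surfaced recommendation: the user never sees the 348 skills that don’t fit their context.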
This is the “get to aha fast” mechanism applied at the skill level. The aha moment Geoff described in the strategy article isn’t just about installing Glass — it’s about seeing a skill do something useful immediately. Sensei closes the gap between “skill exists in Dojo” and “user knows it exists and applies it.”
The pattern worth stealing for RDCO: skill recommendation as a function of context, not just taxonomy. A catalog you have to browse is a library. A catalog that surfaces the right thing at the right moment is a coworker.
Memory System
Glass builds a memory profile from connected tools on first open, then runs a 24-hour synthesis pipeline to keep it current. The memory isn’t a static onboarding form — it’s derived from actual work: calendar, Slack, code, docs. The agent that knows your role, your current projects, and your communication patterns before you’ve typed a single message is categorically different from one that starts fresh every session.
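The 24-hour cycle can be sketched as a staleness check plus a refresh. The source names, profile fields, and the trivial “synthesis” below are placeholders — a real pipeline would summarize signals from calendar, Slack, code, and docs rather than copy them — but the cadence logic is the part the article describes.

```python
import time

SYNTHESIS_INTERVAL = 24 * 60 * 60  # seconds; Glass re-synthesizes daily

class MemoryProfile:
    """Derived from actual work signals, not a static onboarding form.
    Fields and source names are illustrative."""
    def __init__(self):
        self.facts: dict[str, str] = {}
        self.last_synthesized: float = 0.0

    def stale(self, now: float) -> bool:
        return now - self.last_synthesized >= SYNTHESIS_INTERVAL

    def synthesize(self, sources: dict[str, str], now: float) -> bool:
        """Refresh the profile from connected tools once a day has passed."""
        if not self.stale(now):
            return False
        self.facts = dict(sources)  # stand-in: a real pipeline summarizes
        self.last_synthesized = now
        return True

profile = MemoryProfile()
signals = {
    "calendar": "3 recurring meetings about Q3 close",
    "slack": "most active in #finance-eng",
    "code": "recent commits touch the billing service",
}
print(profile.synthesize(signals, now=time.time()))  # True: first open
print(profile.synthesize(signals, now=time.time()))  # False: not yet stale
```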
This is the infrastructure version of what the Claude Code architecture teardown calls persistent state — the investment in context that makes each session start from a higher floor than the last.
Scheduled Automations
Glass treats the user’s laptop as a server, running cron-based automations that post to Slack on schedule. The same pattern as RDCO’s Mac Mini always-on agent: the machine doesn’t need to be actively used to do useful work. Recurring summaries, daily digests, status updates — these run whether or not the user is at their desk.
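One tick of that laptop-as-server loop can be sketched in a few lines. The schedule entries and the Slack stub are hypothetical; the design point is that firing depends only on the clock, not on whether anyone is using the machine.

```python
import datetime

def post_to_slack(channel: str, text: str) -> None:
    """Stand-in for a Slack post; a real automation would call the API."""
    print(f"[{channel}] {text}")

# Hypothetical schedule: (hour, minute, channel, message builder).
SCHEDULE = [
    (9, 0, "#daily-digest", lambda: "Morning digest: 4 PRs awaiting review"),
    (17, 30, "#status", lambda: "EOD summary posted"),
]

def run_due(now: datetime.datetime) -> int:
    """One tick of a cron-style loop: fire every job whose time matches.
    Runs whether or not the user is at their desk."""
    fired = 0
    for hour, minute, channel, build in SCHEDULE:
        if now.hour == hour and now.minute == minute:
            post_to_slack(channel, build())
            fired += 1
    return fired

print(run_due(datetime.datetime(2026, 4, 10, 9, 0)))   # → 1 (morning digest)
print(run_due(datetime.datetime(2026, 4, 10, 12, 0)))  # → 0 (nothing due)
```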
This is the bridge between “AI assistant I have to talk to” and “AI coworker that does things while I’m doing other things.”
Slack-Native Assistants
Glass can deploy assistants that listen in Slack channels with the full Glass setup — all 30+ tool connections, the user’s memory profile, the full Dojo skill library. The assistant isn’t a limited Slack bot; it’s Glass, operating where the work conversation already happens. The interface moves to where attention is, rather than requiring attention to move to the interface.
Headless Mode
Kick off a long-running task, step away, approve permission requests from your phone when they come in. The agent keeps working between approvals. This decouples “task started” from “human watching.” The same mental model as our channels agent setup — the agent runs continuously; the human checks in when needed, not constantly.
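The decoupling of “task started” from “human watching” reduces to a gate pattern: privileged steps park the task until an approval arrives; everything else keeps running. A minimal simulation, with hypothetical step names (the real Glass flow routes the approval to a phone):

```python
from collections import deque

class HeadlessTask:
    """Long-running task with human gates. Steps marked as privileged
    pause until approved remotely; ungated steps run unattended."""
    def __init__(self, steps):
        self.steps = deque(steps)       # (name, needs_approval) pairs
        self.pending: str | None = None
        self.done: list[str] = []

    def run_until_blocked(self):
        """Work through steps; stop only when a gate needs a human."""
        while self.steps and self.pending is None:
            name, needs_approval = self.steps.popleft()
            if needs_approval:
                self.pending = name     # parked; no one is watching
            else:
                self.done.append(name)

    def approve(self):
        """Remote approval (e.g. from a phone): unblock and continue."""
        if self.pending:
            self.done.append(self.pending)
            self.pending = None
            self.run_until_blocked()

task = HeadlessTask([
    ("analyze repo", False),
    ("push branch", True),   # needs a human gate
    ("open PR", False),
])
task.run_until_blocked()
print(task.pending)          # → 'push branch' (waiting on approval)
task.approve()
print(task.done)             # → all three steps complete
```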
Workspace UI
Glass has a proper workspace: split panes, tabs, persistent layout. Not a chat window with history, but a working environment with structure. The design choice signals the intended use case — not “ask it a question,” but “work in it all day.”
Why Build vs. Buy
Seb’s argument for building Glass internally rather than buying an off-the-shelf tool: internal productivity is a moat. The team that builds the internal platform learns faster than the team that adopts a vendor’s roadmap. The iteration speed is higher because the builders and the users are in the same building, often the same Slack channels. And in Ramp’s case, what they learn building Glass directly informs what they ship externally — the internal platform is a forcing function for product intuition.
This connects to Compound Engineering: the internal platform doesn’t just produce productivity, it produces knowledge that compounds into the external product. Each Glass feature is also a signal about what enterprise AI tooling should look like.
Key Lesson
The people who got the most value weren’t the ones who attended training sessions. They were the ones who installed a skill on day one and immediately got a result. The activation energy to adopt something is highest before the first win. Dojo + Sensei exist to make the first win arrive before users have time to decide it’s not worth the effort.
This is the “product IS the enablement” principle made concrete. You can’t train people into capability. You can build something that demonstrates capability so clearly that people train themselves.
Actionable for RDCO
Steal the Sensei pattern. Context-aware skill recommendation is more powerful than a well-organized catalog. When building out our ~/.claude/skills/ library further, the question isn’t just “what skills do we have?” — it’s “how does the right skill surface at the right moment?” Role + active tools + current work = the three signals Glass uses. We can approximate this with good CLAUDE.md context, but a Sensei-equivalent would recommend skills proactively rather than requiring the user to know to ask.
Build the 24-hour memory synthesis pipeline. Right now RDCO’s agent memory is largely session-based or manually maintained. A synthesis pipeline that reads connected tools on a cycle and updates the memory profile changes the quality ceiling. This is a project worth scoping as a standalone build.
Headless mode is the missing piece for async work. The channels agent (Mac Mini, LaunchAgent, tmux) handles the “always running” dimension. Headless mode in Glass adds the “approve from phone” dimension — long-running tasks that need occasional human gates. Worth building a lightweight permission approval mechanism into the channels architecture.
The Dojo model is the right direction for skills governance. Our skills are currently Git-backed but without a formal review process. As the library grows, Dojo’s “code-reviewed before merge” model prevents skill drift and quality degradation. The overhead is low; the compounding benefit to the skill library is high.
Consulting Application
Glass is the reference implementation for what enterprise AI platform work should look like. When pitching phData clients on AI workforce strategy, Glass answers the question “what does the infrastructure look like?” concretely:
Auto-configuration via SSO = the client’s IT team doesn’t have to build an onboarding workflow for every user. The identity layer does it. This is the technical argument for SSO-first MCP integration in enterprise deployments.
Dojo = the internal skills marketplace clients need to build. The center/spoke model from Geoff’s article requires this artifact: the center team builds and curates skills; spoke builders consume and contribute them. Without a Dojo equivalent, every spoke starts from zero. Proposing “we help you build your Dojo” is a concrete deliverable with a clear value proposition.
Sensei = the adoption acceleration layer. The reason most enterprise AI rollouts stall at tool adoption without productivity gains is that users don’t know what skills exist or which ones apply to their work. A recommendation layer that maps role + tools + current work to skills closes this gap. This is buildable as a lighter-weight version — even a structured skill catalog with good role-based filtering is better than nothing.
The “raise the floor, don’t lower the ceiling” principle is the right frame for enterprise AI platform architecture conversations. The failure mode clients fear is shadow IT — powerful tools in the hands of a few, chaos at the edges. The Ramp design principle shows that the answer isn’t to restrict capability, it’s to make configuration invisible. You can have governance and power simultaneously if the infrastructure handles complexity rather than removing it.
Vault Connections
- 06-reference/2026-04-08-ramp-ai-adoption-playbook — Geoff Charles’s culture and strategy article. This document is the technical companion. Read together, they cover the full Ramp model.
- 06-reference/2026-04-08-four-levels-of-ai-use — Dojo is Level 4 infrastructure: custom tools built specifically for this organization that raise the capability floor for everyone else. Sensei makes Level 4 skills accessible to Level 1 users.
- 06-reference/2026-04-09-every-four-ai-agents — Every’s shared Notion database vs. Ramp’s skills marketplace: two different approaches to the same problem of preventing individuals from starting from scratch. Every’s coordination is agent-to-agent; Ramp’s is human-to-human via shared artifacts.
- 06-reference/2026-04-07-claude-code-architecture-teardown — Glass is built on the Claude Agent SDK. The four-tier architecture Rohit reverse-engineered (system prompt, tools, memory, harness) maps directly to Glass’s components. Understanding the SDK layer explains why Glass can do what it does.
- 06-reference/2026-04-04-anthropic-skills-internally — Anthropic’s internal skills practices are the upstream pattern that both RDCO’s ~/.claude/skills/ and Ramp’s Dojo implement. The same compounding logic, different scales.
- 02-sops/mcp-server-setup — The “everything connects on day one” principle in Glass’s Okta SSO auto-configuration is the same insight behind our MCP Server Setup SOP. Pre-connection removes the friction point that kills most users’ momentum.
- SOUL.md — “The product IS the enablement” aligns directly with the RDCO compound engineering loop: each skill, each SOP, each system we build is itself the training material. We don’t document how to do the work and then do the work — the work is the documentation.