How We Run a 25-Person Company on Four AI Agents — Every
Summary
The Every team (Brandon, Austin, and others) runs a 25-person media company using four purpose-built AI agents that collectively replace what would have been a COO function. The agents don’t talk to each other — they coordinate implicitly through a shared Notion database graph. The result is a lightweight operating system that produces daily prioritization, meeting-to-task conversion, aligned OKRs, and business scorecards without a dedicated operations hire.
The Four Agents
Anton (Prioritization). Each morning Anton analyzes the launch calendar, strategic priorities, and project dependencies across Notion, then posts a ranked priority list to Slack. The agent replaced the manual coordination work that previously fell to whoever was acting as COO — the “what should we work on today and why” question answered automatically before anyone opens their laptop.
Max (Meeting Processing). Max ingests meeting transcripts after calls end, extracts numbered action items, and creates Notion tasks linked to the relevant project. The output lands in Slack as a clean list. No one has to do the post-meeting admin; the context captured during the call becomes structured work immediately.
Strategy Interviewer. Rather than running a traditional OKR planning process (weeks of async docs, multiple rounds of revision), the team deployed a structured interview agent. It conducted one-on-one conversations with team members, asked consistent questions about goals and dependencies, and synthesized aligned 2026 OKRs in roughly two days. The agent replaced facilitation bandwidth, not just writing bandwidth.
Campaign Reporter. Daily scorecards generated from PostHog and Stripe data: key metrics, pace-to-goal indicators, explicit ahead/behind status. Every person on the team gets the same factual picture of the business each morning without anyone manually pulling numbers.
Architecture: Shared Database as Coordination Layer
All four agents query the same interconnected Notion databases — calendar, tasks, strategy, people, notes. There’s no explicit inter-agent communication protocol. Coordination happens because every agent reads and writes to the same structured reality. Anton knows what Max extracted because Max wrote it to Notion. The Strategy Interviewer’s outputs are available to Anton because they live in the same strategy database.
This is the key architectural insight: shared structured data makes multi-agent coordination tractable without orchestration complexity. The database is the message bus.
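The pattern can be sketched in a few lines. This is a toy illustration, not Every's actual implementation: a Python dict-of-lists stands in for the Notion databases, and the two "agents" never call each other, only read and write the shared store. All names (`SharedStore`, `max_agent`, `anton_agent`) and the keyword-overlap ranking are invented for the sketch.

```python
from dataclasses import dataclass, field

# The shared store stands in for the interconnected Notion databases.
@dataclass
class SharedStore:
    tasks: list = field(default_factory=list)
    okrs: list = field(default_factory=list)

def max_agent(store: SharedStore, transcript_items: list) -> None:
    """'Max': write extracted action items into the shared store as tasks."""
    for item in transcript_items:
        store.tasks.append({"title": item, "status": "todo"})

def anton_agent(store: SharedStore) -> list:
    """'Anton': rank open tasks. It sees Max's output only via the store."""
    open_tasks = [t for t in store.tasks if t["status"] == "todo"]
    # Toy ranking: tasks sharing words with an OKR float to the top.
    keywords = {word for okr in store.okrs for word in okr.lower().split()}
    return [
        t["title"]
        for t in sorted(
            open_tasks,
            key=lambda t: -sum(w in keywords for w in t["title"].lower().split()),
        )
    ]

store = SharedStore(okrs=["Grow newsletter revenue"])
max_agent(store, ["Draft newsletter revenue plan", "Order office chairs"])
print(anton_agent(store))  # the OKR-aligned task ranks first
```

Nothing here resembles a message queue or an orchestration layer; coordination falls out of both agents agreeing on the schema of the shared store.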
Five Lessons
- Describe outcomes, not steps. Agent instructions that specify the desired end state (“post a ranked priority list with reasoning”) outperform step-by-step procedures. Outcomes are stable; procedures become brittle.
- Your database is the agent’s intelligence. The quality of the Notion graph determines the quality of the agents. Anton isn’t smart because the prompt is clever — it’s smart because the calendar and task data is well-structured and consistently maintained. Garbage in, garbage out applies at the schema level, not just the data level.
- Let AI generate its own instructions. The team’s most productive technique: have the agent interview you about the problem before writing the prompt. Describe what you’re trying to accomplish, answer its questions, then let it draft the instructions. You get prompts shaped to real constraints, not idealized ones.
- Start with the dumbest system. The first question to ask is “what’s the simplest version of this that would actually help?” Not “what would be impressive?” The Campaign Reporter started as a cron job that formatted a Stripe export. That’s the right instinct.
- Expand incrementally, reusing infrastructure. Each new agent leverages the same Notion databases the previous ones built. Max created structured task records; Anton consumes them. The Strategy Interviewer wrote OKR data; Anton weights priorities against it. Compounding infrastructure value, not compounding complexity.
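The first lesson is easiest to see side by side. These two instruction strings are invented for illustration (the channel name and wording are not from the article); the structural difference is the point.

```python
# Procedural framing: encodes today's workflow as fixed steps.
# It breaks the moment the calendar moves or a new input appears.
procedural = (
    "1. Open the launch calendar. "
    "2. List tasks due this week. "
    "3. Sort them by deadline. "
    "4. Post the sorted list to #priorities."
)

# Outcome framing: specifies the end state and the inputs that matter,
# leaving the steps to the agent. Stable as the workflow shifts.
outcome = (
    "Each morning, post a ranked priority list to #priorities with one "
    "sentence of reasoning per item, based on the launch calendar, "
    "strategic priorities, and task dependencies."
)

print(outcome)
```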
RDCO Mapping
| Their Agent | What It Does | Our Equivalent | Gap? |
|---|---|---|---|
| Anton (Prioritization) | Daily Slack priorities from calendar + tasks | /check-board (Notion task scan) | Partial — lacks calendar + strategy synthesis |
| Max (Meeting Processing) | Transcript → action items → Notion tasks | No equivalent | Gap: /process-meeting |
| Strategy Interviewer | Structured OKR interviews → aligned goals | Manual (vault + SOUL.md) | Partial — vault captures strategy, no interview loop |
| Campaign Reporter | Daily PostHog/Stripe scorecard | No equivalent | Gap: /financial-pulse |
The two high-leverage gaps are clear:
/process-meeting — Every meeting that ends without structured extraction is a context leak. The RDCO operating model runs on meetings with phData, Squarely, and external partners. A skill that ingests a transcript (from Fathom, Fireflies, or manual notes), extracts numbered actions, and pushes them to the Notion task board is Max, translated to a one-person firm.
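A minimal sketch of the extraction half of that skill, assuming action items show up as numbered lines in the transcript or notes. The function name and the task-dict shape are hypothetical; pushing the results onto the Notion task board would be a separate step via the official Notion API.

```python
import re

def extract_action_items(transcript: str) -> list:
    """Pull numbered action items (e.g. '1. Send deck to phData')
    out of a meeting transcript or manual notes."""
    items = []
    # Match lines beginning with '1.' or '1)' followed by the item text.
    for match in re.finditer(r"^\s*(\d+)[.)]\s+(.+)$", transcript, re.MULTILINE):
        items.append({"order": int(match.group(1)),
                      "task": match.group(2).strip()})
    return items

notes = """Action items:
1. Send updated deck to phData
2) Follow up with Squarely on contract
"""
print(extract_action_items(notes))
```

In practice the extraction itself would be done by the model rather than a regex; the sketch only shows the shape of the transcript-to-structured-task step.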
/financial-pulse — We have the financial data (see 01-projects/financials/financial-overview). We don’t have the daily habit of surfacing it. A skill that pulls Stripe ARR/MRR pace, pipeline movement, and any relevant activity metrics into a morning Slack post or vault entry is the Campaign Reporter at RDCO scale. The value isn’t the data — it’s the daily rhythm of looking at it.
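The core of such a scorecard is the pace-to-goal arithmetic. A sketch under two assumptions: pace is straight-line over the period, and the MRR figures and goal below are made up for illustration (real numbers would come from Stripe).

```python
from datetime import date

def pace_status(mrr_now: float, mrr_goal: float,
                start: date, end: date, today: date) -> str:
    """Explicit ahead/behind status: compare actual MRR against the
    straight-line pace needed to hit the goal by the end date."""
    elapsed = (today - start).days
    total = (end - start).days
    expected = mrr_goal * elapsed / total  # linear pace-to-goal
    delta = mrr_now - expected
    label = "ahead" if delta >= 0 else "behind"
    return (f"MRR ${mrr_now:,.0f} vs pace ${expected:,.0f} "
            f"-> {label} by ${abs(delta):,.0f}")

# Hypothetical numbers: $42k MRR at midyear against a $100k year-end goal.
print(pace_status(42_000, 100_000,
                  date(2026, 1, 1), date(2026, 12, 31), date(2026, 7, 1)))
```

The formatted string is the whole deliverable: one line per metric, posted every morning, so the ahead/behind call never requires anyone to open a dashboard.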
“Database is the Intelligence” — Validated by Our Architecture
The Every team’s second lesson isn’t just advice — it’s a structural argument. And our vault confirms it from the other direction.
The reason /check-board, /vault-health, and /process-inbox work as well as they do is that the vault is a well-structured, consistently maintained knowledge graph. QMD can find a relevant cross-link in 500+ documents because the documents are formatted consistently and the connections are explicit. Anton can rank priorities because Notion has clean schema.
The Karpathy knowledge base framing makes the same claim: the compilation layer (summaries, cross-references, index maintenance) is the intelligence, not the raw storage. Every’s agents demonstrate this in production at a team scale. Our vault + QMD architecture demonstrates it at an individual operator scale. Same principle, different surface area.
This also connects to Compound Engineering — every structured record added to Notion or the vault increases the quality of every future agent query. The database compounds. The agents harvest the compound.
Consulting Application: The phData Selling Vision
This article is one of the clearest enterprise-ready demonstrations of the Level 4 AI use pattern: custom tools built for specific organizational workflows, not generic AI adoption. Every built Anton, Max, Strategy Interviewer, and Campaign Reporter because no off-the-shelf tool does what they needed. That’s the Level 4 distinction.
The Ramp center/spoke model maps directly to how this scales in enterprise contexts. The “spoke” teams (sales, marketing, customer success) get their own purpose-built agents. The “center” (IT, data, AI platform team) maintains the shared data infrastructure those agents run on. Every’s Notion graph is the center. Their four agents are the spokes.
For phData enterprise deployments, this article is the selling story: “Here’s a 25-person company that replaced COO-level coordination with four agents on a shared Notion schema. Your 2,500-person company has the same coordination problems at 100x scale. The infrastructure to solve it is Snowflake Intelligence and Cortex AI. The pattern is the same.”
The key selling point isn’t the technology — it’s the shared database architecture. The data platform is the intelligence layer. Agents are the harvest mechanism. phData’s positioning as a data platform consultancy makes them the right partner for building the center that enables the spokes.
Vault Connections
- SOUL.md — Our own operating model follows the same implicit coordination pattern. Skills (check-board, vault-health, process-inbox) each query the same vault + Notion graph. No orchestration; shared data does the work.
- 06-reference/2026-04-04-compound-engineering — The Plan > Work > Review > Compound loop is what Every is running. Each agent is a loop component. Anton = prioritize. Max = extract. Reporter = review. The loop compounds because the database compounds.
- 06-reference/2026-04-08-four-levels-of-ai-use — Every is operating at Level 4 throughout. Custom agents for specific workflows, not generic assistants. This is the ceiling most enterprises haven’t reached yet.
- 06-reference/2026-04-08-ramp-ai-adoption-playbook — Center/spoke architecture maps to Every’s shared Notion database (center) + four agents (spokes). Creative destruction lens: Anton made manual COO coordination obsolete by design.
- 06-reference/sivers-your-music-and-people — Sivers argues that systems outlast talent and transfer to anyone who inherits them. Every’s agents are the system. No individual team member holds the coordination knowledge anymore — it lives in Notion schema and prompt instructions.
- 06-reference/concepts/skills-as-building-blocks — RDCO’s skills architecture (check-board, process-inbox, vault-health) is the direct analog to Every’s agent suite. Composable, incrementally expanded, built on shared infrastructure.
- 04-tooling/2026-04-03-notion-workspace-discovery — Our Notion workspace is the RDCO equivalent of Every’s agent data substrate. The task board and CRM are the coordination layer for any future /check-board expansion.
- 06-reference/2026-04-01-karpathy-llm-knowledge-bases — Karpathy’s argument that LLM-compiled structure is the moat. Every’s well-maintained Notion graph is the practical proof. Our vault + QMD is the same bet.