“Meta-Meta-Prompting: The Secret to Making AI Agents Work” — @garrytan
Why this is in the vault
Founder shared 2026-05-09 ~20:05 ET as the 5th Garry Tan piece (vault has 4 prior: Thin Harness Fat Skills, Build the Car, Skillify It, plus the Fat-Skills-Thin-Harness commentary doc). Founder’s read: “I’m just passing all Garry Tan articles at this point. This one didn’t pique my interest as much but maybe the article is more interesting.” Founder’s instinct is correct: 80% of this is synthesis plus an open-source pitch for gbrain, and we already have that thesis on file. The 20% that IS new is two specific patterns: the book-mirror pipeline and the brain-repo page schema. Both are directly liftable into RDCO.
⚠️ Sponsorship
Heavy self-promotion throughout. Tan promotes:
- gbrain (github.com/garrytan/gbrain) — his open-source knowledge-infrastructure project, “39 installable skillpacks,” “97.6% recall on LongMemEval, beating MemPalace with no LLM in the retrieval loop”
- gstack — his open-source coding skill framework, “87,000+ stars”
- OpenClaw (openclaw.ai) and Hermes Agent (hermes-agent.nousresearch.com) — harness alternatives to Claude Code
The article doubles as the all-in-one funnel for the gbrain stack. No third-party paid placement; this is Tan-product-marketing dressed as architectural commentary. Bookmark-to-like ratio of 2.8x suggests readers ARE saving it, but mostly as a single-link-to-everything-Tan-has-shipped reference rather than for net-new ideas.
The core argument
Tan has been coding until 2am for 5 months building his personal AI system. He treats it as “an operating system, not a chat window.” The thesis is the same one from his April pieces (Fat Skills, Fat Code, Thin Harness, Naked-Models-Are-Stupider, Build the Car), but now with concrete examples of what the architecture produces in practice.
The architectural primitives (already in vault from prior Tan pieces):
- Harness is thin — OpenClaw routes messages to skills. A few thousand lines.
- Skills are fat — 100+ markdown files, each a self-contained instruction set for one task.
- Data is fat — 100,000-page brain repo of every person/company/meeting/book/idea.
- Code is fat — 100+ crons, scripts for transcription/OCR/social/calendar/API integrations.
- Models are interchangeable — Opus 4.7 1M for precision, GPT-5.5 for recall, DeepSeek V4-Pro for creative, Groq+Llama for speed. Skill picks per task.
This is repetition of the existing thesis. The new content is the specific demonstrations.
Two genuinely new patterns
1. The book-mirror pipeline (highest-value lift for RDCO)
Tan was reading Pema Chödrön’s When Things Fall Apart (162 pages, 22 chapters on Buddhist approaches to suffering, groundlessness, letting go).
He asked his AI to “do a book mirror”:
- System extracted all 22 chapters
- Ran one sub-agent per chapter, IN PARALLEL, each doing two things:
- Summarize the author’s idea
- Map every idea to Tan’s actual life context (his immigrant family history, his YC presidency, what his therapist works on, what he’s been reading at 2am)
- Output: 30,000-word two-column artifact (left: what Pema says, right: how it maps to Tan’s actual life)
- 40 minutes total
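A minimal Python sketch of the fan-out shape described above. Everything here is hypothetical: `run_subagent` and `extract_chapters` stand in for whatever gbrain / the harness actually exposes; this shows the pattern, not Tan’s implementation.

```python
# Hypothetical sketch only: helper names stand in for whatever the harness
# actually exposes; none of this is gbrain's real API.
from concurrent.futures import ThreadPoolExecutor


def run_subagent(prompt: str) -> str:
    """Stand-in for a sub-agent/LLM call routed through the harness."""
    raise NotImplementedError


def extract_chapters(book_path: str) -> list[str]:
    """Stand-in for chapter extraction (PDF/EPUB split, table of contents, etc.)."""
    raise NotImplementedError


def mirror_chapter(chapter_text: str, personal_context: str) -> dict:
    """One sub-agent job: summarize the chapter, then map it onto the reader's context."""
    summary = run_subagent(f"Summarize the author's ideas in this chapter:\n\n{chapter_text}")
    mapping = run_subagent(
        "Given this personal context:\n"
        f"{personal_context}\n\n"
        "Map each idea in the summary onto the reader's actual life:\n"
        f"{summary}"
    )
    return {"left": summary, "right": mapping}


def book_mirror(book_path: str, personal_context: str) -> list[dict]:
    chapters = extract_chapters(book_path)
    # Fan out: one worker per chapter, chapters processed in parallel.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(lambda ch: mirror_chapter(ch, personal_context), chapters))
```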
Tan’s claim: “A $300/hour therapist reading this book and applying it to my life couldn’t do this in 40 hours, because they don’t have the full graph of my professional context, my reading history, my meeting notes, and my founder relationships all loaded and cross-referenceable.”
He’s done this with 20+ books. Each gets richer because the brain accumulates: “the 20th knew about all 19.”
How book-mirror got better through iteration:
- v1 had factual errors (said his parents were divorced when they weren’t, said he grew up in Hong Kong when he was born in Canada)
- Added a mandatory fact-check step: a cross-model eval (Opus 4.7 1M for precision, GPT-5.5 for missing context, DeepSeek V4-Pro for genericness); a rough sketch follows after this list
- v3 added per-section deep retrieval (every right-column entry cites actual brain pages — meeting notes with specific founders, conversations with his brother James, IM chats from when he was 19, etc.)
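For the fact-check step, a hedged sketch of what a cross-model eval pass could look like: the finished mirror goes to several models, each with a different rubric (precision, missing context, genericness). The model identifiers and `call_model` are placeholders, not Tan’s actual wiring.

```python
# Illustrative cross-model eval pass; model identifiers and call_model are
# placeholders for a model router, not Tan's implementation.
FACT_CHECK_RUBRICS = {
    "opus-4.7-1m": "Flag any factual claim about the reader that is wrong or unsupported.",
    "gpt-5.5": "List relevant personal context the mirror failed to use.",
    "deepseek-v4-pro": "Flag right-column entries generic enough to apply to anyone.",
}


def call_model(model: str, prompt: str) -> str:
    """Stand-in for a routed model call."""
    raise NotImplementedError


def fact_check(mirror_text: str) -> dict[str, str]:
    """Send the finished mirror to each model with its own rubric; collect findings."""
    return {
        model: call_model(model, f"{rubric}\n\n---\n\n{mirror_text}")
        for model, rubric in FACT_CHECK_RUBRICS.items()
    }
```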
2. The brain-repo page schema
Every entity in his 100,000-page brain follows the same schema:
COMPILED TRUTH (current best understanding) — at top
─────────────────────────────────────
APPEND-ONLY TIMELINE (events in chronological order)
─────────────────────────────────────
RAW DATA SIDECARS (source material)
Pages exist for:
- Every person he meets — with timeline, current state, open threads, score
- Every meeting — with transcript + structured summary + entity propagation (after every meeting, system walks through every person/company mentioned and updates their pages)
- Every book — with chapter-by-chapter mirror
- Every article, podcast, video — ingested, tagged, cross-referenced
Key claim: “This is the difference between having a filing cabinet and having a nervous system. The filing cabinet stores things. The nervous system connects them, flags what’s changed, and surfaces what’s relevant to right now.”
Other notable bits (not load-bearing)
- Skillify is a meta-skill that creates skills (not just a workflow he ran once). When he encounters a workflow he’ll repeat, he says “skillify this” and the system extracts the pattern, writes a tested skill file with triggers + edge cases, and registers it in the resolver. RDCO already has /skillify per ~/.claude/skills/skillify/. No update needed; Tan validates the pattern.
- Demis Hassabis fireside-chat prep as the showcase example: under 2 min for the system to pull Demis’s full brain page (months of accumulated articles + podcasts + Tan’s own notes), Mallaby biography highlights, published beliefs about AGI timelines, demo scripts for the brain’s multi-hop reasoning, and conversation hooks where worldviews overlap and diverge. “Preparation that used my accumulated context about Demis, my own positions, and the strategic goals for the conversation. The system prepped not just facts, but angles.”
- Specific skills he’s open-sourced (gbrain ships 39, here are a few he names):
- meeting-ingestion — entity propagation back to every person/company page after a meeting
- enrich — give it a name, pulls from 5 sources, merges into a brain page with cited sources
- media-ingest — handles video/audio/PDF/screenshots/GitHub repos
- perplexity-research — brain-augmented web research that checks the brain first to flag what’s actually new vs. already captured
Mapping against Ray Data Co
Two patterns directly liftable
1. New skill candidate: /book-mirror — most concrete and immediately valuable lift:
- Input: book (PDF or just a title we can pull a chapter list for)
- Process: extract chapter list, dispatch one subagent per chapter, each summarizes + maps to Ben’s vault context (~/rdco-vault/03-contacts/, /05-meetings/, /01-projects/, prior 06-reference/ notes)
- Output: vault note with two-column markdown (“what the author says” / “how it lands for Ben right now”) + cited brain pages
- Substrate is already in place: vault is the personal-context graph, qmd + graph DB enable the cross-references, sub-agent fan-out is a known pattern (process-newsletter / process-youtube already do it)
- Implementation effort: ~2-3 hours to build a v0
- Test target: pick a book Ben’s actually reading (or already read recently — Beck’s Tidy First? is in vault, Pema Chödrön’s When Things Fall Apart would be a fair canary if Ben’s interested in the same source) and dispatch the pipeline
- Trigger: founder green-light needed before queueing
2. Brain-repo schema (compiled truth + append-only timeline + raw data sidecars):
- Lower urgency, higher impact-per-hour
- Current vault structure: 03-contacts/ files are flat notes (one document per person, frequently rewritten in place); 06-reference/ files are static once-written; 05-meetings/ files are point-in-time captures
- Tan’s schema would refactor each entity-page into:
- Top: compiled truth (current best understanding, frequently rewritten)
- Middle: append-only timeline (every meeting / mention / interaction logged in chronological order)
- Bottom: raw data (source materials linked or embedded)
- /sync-contacts already partly does the timeline pattern via touch logs. Could be extended to the full schema across vault entity-pages.
- Implementation effort: schema design + migration script for 03-contacts/ first (~4-6 hours), then iterate to 06-reference/ if it lands well; a rough migration sketch follows below
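A rough sketch of what the 03-contacts/ pilot migration could look like, assuming flat markdown notes and an illustrative heading convention for the three sections; nothing here is an agreed RDCO schema yet.

```python
# Hypothetical migration of a flat contact note into the three-part schema.
# Section headings, the timeline seed, and the .bak convention are illustrative
# choices, not a settled RDCO format.
from datetime import date
from pathlib import Path


def migrate_contact(original: str, backup_name: str) -> str:
    today = date.today().isoformat()
    return "\n".join([
        "## Compiled truth",
        "<!-- current best understanding; rewritten freely -->",
        original.strip(),  # seed compiled truth with the existing flat note
        "",
        "## Timeline (append-only)",
        f"- {today}: migrated from flat note; earlier history lives in raw data",
        "",
        "## Raw data sidecars",
        f"- original flat note preserved as {backup_name}",
        "",
    ])


def migrate_all(contacts_dir: Path) -> None:
    for note in sorted(contacts_dir.glob("*.md")):
        original = note.read_text(encoding="utf-8")
        backup = note.with_name(note.name + ".bak")
        backup.write_text(original, encoding="utf-8")  # keep the pre-migration note
        note.write_text(migrate_contact(original, backup.name), encoding="utf-8")
```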
What this validates retroactively
- Our /skills/ ecosystem matches Tan’s “fat skills” pattern. Counts converge: he reports 100+, RDCO has ~65+. We’re on the same trajectory, slightly behind.
- Our cron loop (13 active per the loop re-arm) approximates his “100+ crons” structure. Same architectural intuition.
- HQ /vault/ surface (shipped today) is the visible-internal version of his brain-repo. Same idea: render the canonical knowledge as queryable HTML.
Where Tan’s piece exposes a real gap (less urgent than book-mirror)
- We don’t yet have entity propagation: when a meeting happens, Ray writes the meeting note but doesn’t walk every person/company mentioned and update their pages. Could be an /improve queue item: a post-meeting hook that scans the meeting note for entity mentions and patches each entity’s brain page (rough sketch after this list).
- Our retrieval is good (qmd lex/vec/hyde + graph DB) but doesn’t yet do “brain-augmented web research” (check what we already know before going to the web). Could be an /improve queue item for /deep-research or /research-brief.
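A rough sketch of the entity-propagation hook, assuming contact pages live under ~/rdco-vault/03-contacts/ (per the lift notes above) and that mentions can be matched by page name; the matching heuristic and timeline line format are illustrative only.

```python
# Hypothetical post-meeting hook: scan the meeting note for known contacts and
# append a timeline entry to each matched page. Matching contacts by filename
# is a naive heuristic for illustration only.
import re
from datetime import date
from pathlib import Path

CONTACTS_DIR = Path.home() / "rdco-vault" / "03-contacts"


def propagate(meeting_note: Path) -> list[str]:
    text = meeting_note.read_text(encoding="utf-8")
    today = date.today().isoformat()
    touched = []
    for page in CONTACTS_DIR.glob("*.md"):
        name = page.stem.replace("-", " ")  # e.g. "jane-doe.md" -> "jane doe"
        if re.search(rf"\b{re.escape(name)}\b", text, flags=re.IGNORECASE):
            entry = f"- {today}: mentioned in {meeting_note.name}\n"
            with page.open("a", encoding="utf-8") as fh:  # append-only timeline
                fh.write(entry)
            touched.append(page.name)
    return touched
```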
Sanity Check candidate
The rhetorical hook is good: “The difference between keeping a journal and having a nervous system.” Frames the canonical-vault-with-active-skills pattern as a categorical step beyond note-taking. Could anchor a Sanity Check piece on RDCO’s vault-as-nervous-system architecture, written from the practitioner perspective (Ben’s been operating Ray for ~6 months, here’s what changed when the brain compounded).
But the “no derivative Sanity Check pieces” memory rule applies. Don’t restate Tan’s article. An original re-frame would need to come from RDCO’s actual experience: what our vault enabled that we couldn’t have done without it. That’s a real piece.
Lower-priority candidate than the Tobi Lütke “agent that refuses to work in private” piece filed earlier today — that one had cleaner contrarian framing.
Notable quotes (≤15 words each, in quotation marks)
- “The system that runs my life didn’t exist as a monolith. It was assembled from skills.”
- “When someone asks how I ‘prompt’ my AI, the answer is: I don’t. The skills are the prompts.”
- “Wrong question. The model is just the engine. Everything else is the car.”
- “Filing cabinet stores things. Nervous system connects them, flags what’s changed.”
Open follow-ups
- /book-mirror skill candidate — queue for founder green-light before building. ~2-3 hours of work.
- Brain-repo schema migration — design memo + 03-contacts/ pilot. ~4-6 hours of work.
- Entity propagation hook — post-meeting note-write triggers a walk over mentioned entities to patch their brain pages. ~3-4 hours.
- Brain-augmented retrieval hook — /research-brief and /deep-research check vault first to flag what’s already known. ~2 hours.
Related
- 06-reference/2026-04-11-garry-tan-thin-harness-fat-skills — the original thesis
- 06-reference/2026-04-19-garry-tan-build-the-car-jepsen-response — Tan’s response to Kingsbury, “naked models are stupider”
- 06-reference/2026-04-22-garry-tan-skillify-it-workflow — the skillify pattern Tan extends here as meta-skill
- 06-reference/commentary-tan-fat-skills-thin-harness-2026-04-14 — RDCO’s own commentary
- 06-reference/synthesis-harness-thesis-dissent-2026-04-12 — the harness-thesis dissent synthesis (Tan vs. Kingsbury)
- 06-reference/2026-05-09-tobi-lutke-river-public-channel-agent — Tobi’s piece from earlier today (different angle on agent deployment, complements Tan’s “what” with “where”)
- 06-reference/2026-05-08-jaya-gupta-shape-as-moat — Jaya’s piece from yesterday (orgs are the moat; Tan’s brain-repo is one example of organizational shape)
- 06-reference/2026-05-09-smart-ape-md-vs-html-three-questions — same week, same theme of “the harder-to-copy layer is what matters”
- ~/.claude/skills/skillify/SKILL.md — RDCO’s skillify already exists; Tan validates the pattern
Source caveat
Article body retrieved via xmcp getPostsById with tweet.fields: ["article", ...] + expansions: ["article.cover_media", "article.media_entities"]. Same fetch path validated repeatedly this week. Plain text returned full ~2200-word body cleanly.