Raw transcript — Agent Experts: Finally, Agents That ACTUALLY Learn
Source: https://www.youtube.com/watch?v=zTcDwqopvKE Duration: 19m Captured: 2026-04-20
Full clean transcript (3,330 words) was stored at /tmp/backfill-2026-04-20/agent-experts.txt during ingestion. Per copyright policy, the raw transcript is preserved for internal reference only. Re-download via:
yt-dlp --write-auto-sub --sub-lang en --skip-download --sub-format vtt -o "/tmp/yt-process/%(id)s" "https://www.youtube.com/watch?v=zTcDwqopvKE"
python3 ~/.claude/scripts/vtt-to-text.py /tmp/yt-process/zTcDwqopvKE.en.vtt
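The vtt-to-text.py helper is not reproduced here. As a rough sketch of what that conversion step does, assuming YouTube auto-captions (drop the WEBVTT header and cue timing lines, strip inline tags, deduplicate rolling repeats), something like:

import re
import sys

def vtt_to_text(vtt_path):
    # Collapse a WebVTT caption file into plain text. This is an
    # assumed reimplementation, not the actual vtt-to-text.py script.
    out = []
    with open(vtt_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            # Skip the header, blank lines, and cue timing lines.
            if not line or line.startswith(("WEBVTT", "Kind:", "Language:")):
                continue
            if "-->" in line:
                continue
            # Strip inline cue tags such as <c> and <00:00:01.000>.
            line = re.sub(r"<[^>]+>", "", line).strip()
            # Auto-captions repeat lines as they scroll; keep one copy.
            if line and (not out or line != out[-1]):
                out.append(line)
    return " ".join(out)

if __name__ == "__main__":
    print(vtt_to_text(sys.argv[1]))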
Key segments (timecoded)
- [00:00–02:00] The persistent problem: agents forget. Memory files, prime prompts, sub-agents, and skills all require manual updates. Introduces the agent expert as the agent that “executes AND learns.”
- [02:00–04:00] Definition: an agent expert is a self-improving template metaprompt. The mental model is a data structure that evolves over time with each useful action.
- [04:00–07:00] Meta-agentics walkthrough: meta-prompts, meta-agents, meta-skills. "The system that builds the system." Demos: a meta-prompt creating a question-with-mermaid-diagrams prompt, a meta-agent creating a planner agent, and a meta-skill creating a start-orchestrator skill.
- [07:00–10:00] First agent expert demo: the database expert. The expertise file (YAML) is NOT a source of truth; it's a working mental model. The question prompt reads the expertise file FIRST, then validates its claims against the codebase (read-then-validate sketch after this list).
- [10:00–13:00] WebSocket expert demo. Three-step workflow: /plan, /build, /self-improve. Three parallel WebSocket experts deployed against a single question for higher-confidence answers (fan-out sketch after this list).
- [13:00–16:00] Multi-agent expertise compounding. Per-agent context isolation: the plan step consumed 80k tokens, but the top-level orchestrator's context stayed "completely protected." Three-of-three WebSocket expert agreement vs. the five-agent ultra-validate option.
- [16:00–18:54] R&D framing (Reduce and Delegate). Core Four (context, model, prompt, tools). The self-improve step is the key: agents update their own expertise file based on what they learned during the run (self-improve sketch after this list). "Just-in-time context engineering" via expertise files vs. always-on memory files.
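Read-then-validate sketch (database expert, [07:00–10:00]). The video shows the pattern, not this code; the YAML schema, file path, and the grep stand-in for the agent's search tools are all assumptions:

import subprocess
from pathlib import Path

import yaml  # pip install pyyaml

EXPERTISE_FILE = Path("experts/database/expertise.yaml")  # hypothetical path

def load_expertise():
    # Step 1: read the expertise file FIRST. It is a working mental
    # model, not a source of truth, so every claim gets re-verified.
    return yaml.safe_load(EXPERTISE_FILE.read_text())

def claim_still_holds(claim):
    # Step 2: validate a remembered claim against the live codebase.
    # A plain grep stands in for the agent's real search tools.
    result = subprocess.run(
        ["grep", "-rq", claim["symbol"], "src/"],
        capture_output=True, text=True,
    )
    return result.returncode == 0

def answer(question):
    expertise = load_expertise()
    verified = [c for c in expertise.get("claims", []) if claim_still_holds(c)]
    # Only verified entries ground the answer; stale ones are queued
    # for the self-improve step instead of being trusted.
    return {"question": question, "grounded_on": verified}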
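Fan-out sketch (parallel WebSocket experts, [10:00–13:00]): the same question goes to n isolated experts, and agreement raises confidence. The three-expert count is from the video; ask_expert is a placeholder for the real agent call:

import asyncio
from collections import Counter

async def ask_expert(expert_id: int, question: str) -> str:
    # Placeholder for launching one isolated expert agent. Each runs
    # in its own context window, so heavy token use here never leaks
    # into the top-level orchestrator's context.
    await asyncio.sleep(0)
    return "placeholder-answer"

async def fan_out(question: str, n_experts: int = 3) -> tuple[str, float]:
    answers = await asyncio.gather(
        *(ask_expert(i, question) for i in range(n_experts))
    )
    # Majority vote: 3/3 agreement gives confidence 1.0, 2/3 about 0.67.
    best, votes = Counter(answers).most_common(1)[0]
    return best, votes / n_experts

# asyncio.run(fan_out("How is the websocket handshake authenticated?"))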
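Self-improve sketch ([16:00–18:54]): after a run, the expert writes verified or corrected findings back into its own expertise file, so the next run starts with a better mental model. Field names and the merge strategy are assumptions, matching the schema in the read-then-validate sketch above:

from datetime import date
from pathlib import Path

import yaml  # pip install pyyaml

EXPERTISE_FILE = Path("experts/websocket/expertise.yaml")  # hypothetical path

def self_improve(learnings):
    # Merge this run's verified findings into the expertise file.
    # The file is just-in-time context: loaded only when this expert
    # runs, unlike an always-on memory file injected into every prompt.
    if EXPERTISE_FILE.exists():
        expertise = yaml.safe_load(EXPERTISE_FILE.read_text()) or {}
    else:
        expertise = {}
    claims = {c["symbol"]: c for c in expertise.get("claims", [])}
    for item in learnings:
        item["verified_on"] = str(date.today())
        claims[item["symbol"]] = item  # newer knowledge overwrites stale
    expertise["claims"] = list(claims.values())
    EXPERTISE_FILE.write_text(yaml.safe_dump(expertise, sort_keys=False))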