06-reference

arscontexta vault agent series

Sun Jan 18 2026 19:00:00 GMT-0500 (Eastern Standard Time) · article-series · source: x.com/@arscontexta · by arscontexta (heinrich)
claude-code · obsidian · vault · knowledge-management · agents · context-engineering · skill-graphs · progressive-disclosure

arscontexta: Building an OS for Thinking with AI

Six-part series (plus a later seventh installment on skill graphs) by heinrich (@arscontexta) on using Claude Code as an operating layer for an Obsidian vault — treating the vault not as a note-taking app but as a living knowledge graph that an agent operates. Directly relevant to how we run our own vault + Claude Code setup.


Article 1: The Foundational Vault Concept

Tweet 2013045749580259680 — “obsidian + claude code 101”

The core argument: vibe coding changed how we write software; vibe note-taking changes how we think. A vault is just markdown files that link to each other, but it gives LLMs (which have no persistent memory) something to work against.

Key ideas:

Navigation layers (how the agent orients without reading everything):

  1. Folder structure visible at session start
  2. Index file with one-sentence descriptions per note
  3. Topic pages (MOCs) that the agent uses as tables of contents and leaves breadcrumbs on for future sessions
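The index layer can be generated mechanically. A sketch in shell, assuming each note keeps a one-sentence `description:` in its YAML frontmatter; the `vault/` layout and demo note are hypothetical:

```shell
# Demo layout: a tiny vault with one described note (hypothetical).
mkdir -p vault/notes
printf -- '---\ndescription: quality is the hard part\n---\nbody\n' > vault/notes/quality.md

# Rebuild the index: one line per note, filename plus its YAML description,
# so the agent can orient without opening any file.
find vault -name '*.md' ! -name 'index.md' | sort | while read -r f; do
  desc=$(awk '/^description:/ {sub(/^description:[ ]*/, ""); print; exit}' "$f")
  printf '%s - %s\n' "${f#vault/}" "${desc:-no description}"
done > vault/index.md

cat vault/index.md
# notes/quality.md - quality is the hard part
```

Regenerating the index from frontmatter keeps it honest: the one-sentence descriptions live with the notes, and the index is a derived view rather than a second source of truth.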

Vault types: different purposes need different philosophies. Heinrich runs separate vaults for thinking/AI work and for client/project work. Same underlying patterns (markdown, claude.md, index), different rules.

Human role evolution: writer → editor; creator → curator. Your job becomes judgment.


Article 2: Yapping to PRDs

Tweet 2013718955576250466 — “Yapping to PRDs: Claude Code & Obsidian”

Recorded conversations (meetings, brainstorms) get transcript-mined into structured vault documents — not summarized, but deeply extracted. This externalizes tacit knowledge that you can’t easily write down because you don’t know you’re doing it.

The mining mindset: a 1-hour meeting should yield 10+ idea notes, multiple framework notes, several decisions with reasoning, state updates across multiple project hubs — not a 3-bullet summary. If you’re getting a short summary, you’re leaving knowledge on the table.

A well-structured transcript extraction for one meeting might produce:

Why transcripts work better than writing: you naturally include reasoning paths, uncertainties, alternatives considered, and explanation depth — all the tacit context that never makes it into written docs.

The PARA parallel: the folder structure maps to Tiago Forte’s PARA system (Projects, Areas, Resources, Archives), repurposed for team knowledge with agent navigation in mind.

Context engineering: everything is defined in CLAUDE.md — vault philosophy, folder structure, navigation rules. Each folder has its own README for granular context. Without structure, you have a pile of transcripts. With structure, you have a knowledge system Claude can build on.


Article 3: Build Claude a Tool for Thought

Tweet 2015201046469943660 — “Build Claude a Tool for Thought”

The meta-move: use the vault system to research how humans built tools for thought, then apply those findings to agent architecture. The system builds itself a tool for thought.

Historical lineage: Llull’s rotating wheels, Bruno’s memory palaces, Luhmann’s Zettelkasten, Evergreen Notes, MOCs — all were tools to think with, not just store in. The shift here: the operator is now an agent, not a human.

Technical primitives:

Discovery layer: every note has a YAML description field. Before loading any file, the agent grabs descriptions and decides if the content is worth the context budget. Most decisions can be made at description level without opening files — this is the key curation move.

Filenames as claims: before opening anything, the file tree already tells you what each note argues. “quality is the hard part” tells you more than “quality notes”.

The self-engineering loop: the system logs observations across sessions, reflects on learnings, and proposes changes to its own rules. Every rule starts as a hypothesis.

The Cornell Notes adaptation: Claude found the Cornell 5R framework while researching, adapted it for agents, and added a 6th phase for self-improvement. The system can request deep research to learn more about specific topics.


Article 4: Context Engineering (Progressive Disclosure)

Tweet 2015585363318743071 — “Obsidian & Claude Code 101: Context Engineering”

The core context engineering technique: progressive disclosure — force the agent to earn each level of detail before loading more. Four layers:

  1. File tree — injected at session start via hook. Descriptive filenames give first-impression signal without opening anything. “queries evolve during search so agents should checkpoint.md” > “search notes.md”.
  2. YAML descriptions — every note has a one-sentence description in frontmatter. If something looks interesting, query it with ripgrep before loading.
  3. Outline — if the description passes, check the note’s heading structure. Often only one section is needed; loading the full file adds noise.
  4. Full content — only for notes that passed all three filters. Most notes never get here, and that’s the point.
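Layers 2 and 3 are plain text queries. A sketch in shell — the series uses ripgrep, but plain grep is shown here for portability, and the note path and frontmatter are made up:

```shell
# Demo note (hypothetical path and content).
mkdir -p vault/notes
printf -- '---\ndescription: queries evolve during search so agents should checkpoint\n---\n# Claim\n## Evidence\nbody text\n' \
  > vault/notes/checkpoint.md

# Layer 2: scan descriptions without loading any note body.
grep -m1 -r '^description:' vault/notes

# Layer 3: the description passed, so check only the heading outline
# before committing the full file to context.
grep -n '^#' vault/notes/checkpoint.md
```

Both queries cost a handful of tokens; the full note body is only loaded once it has survived the description and outline checks.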

The MCP parallel: this mirrors how Claude handles 50+ tools — tool specs are available but deferred until actually searched. Same structure: lazy loading, progressive commitment.

Implementation: a SessionStart hook that runs tree, YAML frontmatter with a description field, and CLAUDE.md instructions telling Claude to check descriptions before reading. Low-code, high-leverage.
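A sketch of that wiring. The JSON field names follow Claude Code's hooks configuration as best understood here, so verify the exact schema against the current hooks docs before relying on it:

```shell
# Write a project-level settings file with a SessionStart hook
# that prints the vault's file tree into the session context.
# (Schema is a best-effort sketch; check the Claude Code hooks docs.)
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "hooks": {
    "SessionStart": [
      { "hooks": [ { "type": "command", "command": "tree -L 2 --noreport" } ] }
    ]
  }
}
EOF
```

With descriptive filenames, this one hook delivers the entire layer-1 signal at session start for the cost of a short tree listing.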


Article 5: Editing Workflow (Spatial Comments)

Tweet 2015909609999941965 — “Vibe Note-Taking 101: Editing Workflow”

The problem: editing long content with Claude Code requires constant copy-paste — pull text out, give it context, wait for edits, repeat. This breaks flow.

The spatial editing solution: leave {edit instructions} inline, embedded in the text where they apply. Position IS context. The agent knows what the comment refers to because of where it sits.

Workflow:

  1. Write draft without stopping
  2. Do a pass and drop {thoughts} wherever something needs work
  3. Run /edit
  4. Review changes — the command outputs a summary of what changed

If run with no file open, /edit searches the vault for all pending {thoughts} and lets you pick which files to edit — useful for cross-file consistency changes.
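Finding pending comments is a plain text search; a sketch with a made-up draft file (the series' /edit command itself is not reproduced here):

```shell
# Demo draft with one inline spatial comment (hypothetical file).
mkdir -p drafts
printf 'The intro works. {tighten this paragraph} The middle drags.\n' > drafts/post.md

# List every pending {thought}, with file and line, before running /edit.
grep -rn -o '{[^}]*}' drafts
# drafts/post.md:1:{tighten this paragraph}
```

Because the braces sit exactly where the edit should land, the file-and-line listing doubles as a work queue: each hit carries its own context.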


Article 6: Async Hooks for Note History

Tweet 2016587691505164749 — “Obsidian & Claude Code: Async Hooks for Note History”

Auto-commit every edit to git using Claude Code’s async hooks, then add an interpretation layer that reads diffs conceptually, not just syntactically.

The insight: notes are living documents. The history of how a note changed is itself valuable — it’s a journal of how thinking evolved, written automatically.

Technical setup:
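A minimal sketch of the auto-commit step such a hook would run. The repo setup below is demo scaffolding; the real version would fire from an async Claude Code hook, which is not shown:

```shell
# Demo: a vault directory under git (illustrative setup).
mkdir -p notes-repo
git -C notes-repo init -q
git -C notes-repo config user.email demo@example.com
git -C notes-repo config user.name demo
echo 'first thought' > notes-repo/note.md

# The step an async hook would run after every edit:
# stage everything, commit only if something actually changed.
git -C notes-repo add -A
git -C notes-repo diff --cached --quiet || \
  git -C notes-repo commit -q -m "auto: vault edit $(date -u +%Y-%m-%dT%H:%MZ)"
```

The `diff --cached --quiet` guard keeps no-op edits out of history, so the interpretation layer only ever reads diffs that represent real changes in thinking.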

The outcome: every note has a complete, interpretable history. The vault becomes a timeline of how thinking evolved, reconstructable at any point.


Alignment with Our Setup

Where we’re doing the same thing:

| arscontexta | RDCO |
| --- | --- |
| Vault index for agent orientation | QMD hybrid search (BM25 + vector) over 561 docs |
| CLAUDE.md teaches vault philosophy | SOUL.md + project-level CLAUDE.md files |
| Every note is a skill (composable, injectable) | skills-as-building-blocks in ~/.claude/skills/ |
| Folder structure as navigation signal | rdco-vault/ directory convention (01-projects, 02-sops, etc.) |
| PARA for project knowledge structure | Same PARA influence in our folder architecture |
| MOCs as topic hubs with agent breadcrumbs | Index files per project directory |
| Recording + transcript mining → vault | Not yet systematic — currently ad-hoc |

Where we differ:

| Difference | Notes |
| --- | --- |
| arscontexta uses Obsidian as the agent’s IDE; we use it as human UI | We interface with the vault through Claude Code + QMD MCP, not direct Obsidian file access |
| File tree injected via SessionStart hook | We have QMD semantic search instead; worth considering whether a tree hook adds signal |
| Description-level filtering (YAML + grep before loading) | QMD abstracts this with its snippet/scoring layer — similar effect, different mechanism |
| Async auto-commit hook for note history | We don’t do this — see “steal” below |
| Spatial {comment} editing pattern | Not in our workflow — this one is directly applicable |
| Self-engineering loop (system researches tools for thought to improve itself) | Adjacent to compile-vault skill; not yet self-directed |

Where we’re ahead:


Ideas to Steal

High priority:

Medium priority:

Low priority / already addressed:


Article 7: Skill Graphs > SKILL.md

Tweet 2023957499183829467 — “Skill Graphs > SKILL.md” — February 18, 2026 · 8,756 likes · 25,731 bookmarks · 4M impressions

The argument: single skill files are fine for simple tasks but real depth requires something structurally different. A skill for summarizing is one file. But a therapy skill that covers cognitive behavioral patterns, attachment theory, active listening, and emotional regulation frameworks can’t live in one file — the scope is too large and the interconnections too important.

Skill graphs are the answer: a network of skill files connected by wikilinks. Instead of one monolithic skill, many small composable pieces that reference each other. Each file is one complete thought, technique, or skill. Wikilinks between them create a traversable graph. The same skill discovery pattern applies recursively inside the graph itself.

The progressive disclosure stack: Index → descriptions → links → sections → full content

Most decisions happen before reading a single full file. Every node has YAML frontmatter with a description the agent can scan. Every wikilink carries meaning because it’s woven into prose — the agent follows relevant paths and skips what doesn’t matter.
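The traversal primitive underneath this is just wikilink extraction; a sketch with a made-up skill node:

```shell
# Demo skill node with wikilinks to its neighbors (hypothetical content).
mkdir -p skills
printf -- '---\ndescription: active listening techniques\n---\nBuilds on [[attachment-theory]] and pairs with [[cbt-patterns]].\n' \
  > skills/active-listening.md

# The graph's edges: pull wikilink targets out of a node so the agent
# can decide which neighbors are worth loading next.
grep -o '\[\[[^]]*\]\]' skills/active-listening.md | tr -d '[]'
# attachment-theory
# cbt-patterns
```

Because the links are woven into prose, the sentence around each target tells the agent why the neighbor matters before it decides whether to follow the edge.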

The primitives (you already have them):

The arscontexta plugin itself is a skill graph — ~250 connected markdown files teaching an agent how to build a knowledge base. The files cover cognitive science, zettelkasten, graph theory, and agent architecture, each piece linking to others. One skill file couldn’t hold that. A graph can.

What this enables:

How to build one:

The evolution: individual skills are context engineering — curated knowledge injected where it matters. Skill graphs are the next step: instead of one injection, the agent navigates a knowledge structure, pulling in exactly what the current situation requires. The difference between an agent that follows instructions and an agent that understands a domain.

Alignment with our setup:

This directly extends what Articles 1-6 established for vault design. The skill graph pattern is exactly how our ~/.claude/skills/ directory should be architected — individual skills that reference each other, navigated by the agent via description scanning before full file loading. Our current skills are mostly flat (one file per skill). Adding wikilink cross-references between related skills, plus an index with a YAML description per skill, would convert the skills directory into a traversable skill graph.

The vault itself already functions as a knowledge graph (files = nodes, wikilinks = edges, YAML frontmatter = queryable metadata). The insight here is that the same architecture applies to the skills layer, not just the reference layer.

Ideas to steal:


Connections