
indydevdan self validating hooks transcript

Sat Apr 18 2026 20:00:00 GMT-0400 (Eastern Daylight Time) · transcript · source: IndyDevDan YouTube · by IndyDevDan
Tags: indydevdan, claude-code, hooks, sub-agents, skills, slash-commands, self-validation, deterministic-validation, ralph-wiggum, agents-plus-code, transcript

IndyDevDan — The Claude Code Feature Senior Engineers KEEP MISSING (Transcript)

Source: https://www.youtube.com/watch?v=u5GkG71PkR0
Title: The Claude Code Feature Senior Engineers KEEP MISSING
Channel: IndyDevDan
Duration: 27m 29s
Published: 2026-01-19


If you want your agents to accomplish loads of valuable work on your behalf, they must be able to validate their work. Why is validation so important? Validation increases the trust we have in our agents, and trust saves your most valuable engineering resource: time.

The Claude Code team has recently shipped a ton of features, but one in particular stands above the rest: you can now run hooks inside of your skills, sub-agents, and custom slash commands. This is a big release most engineers have missed, because it means you can build specialized self-validating agents inside of your codebase.

We kick off /review-finances for February, pointing it at a CSV. This is an end-to-end pipeline of agents that reviews finances, formats data, generates graphs, and offers insights. The actual tool isn’t the point: whenever there’s a new valuable feature, build against it and truly understand its value proposition. This entire multi-agent pipeline runs specialized self-validation every single step of the way.

Let’s start with the most foundational and most important piece: the prompt. In Claude Code, prompts come in the form of custom /commands. We create a new csv-edit.md markdown file. We’re specializing this command to do one thing extraordinarily well, and therefore we’re specializing an agent to do just that. The agentic-prompt-template snippet has a huge front-matter payload, with support for pre-tool-use, post-tool-use, and stop. This is what’s supported in prompt, sub-agent, AND skill hooks.
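A minimal sketch of what that front matter could look like. This is an assumption, not a verbatim copy of the video’s template: the key names (hooks, PostToolUse, matcher, command) are borrowed from Claude Code’s settings.json hook schema, and the validator path is illustrative.

```markdown
---
description: Edit or report on a single CSV file
allowed-tools: Read, Write, Edit
hooks:
  PostToolUse:
    - matcher: "Read|Write|Edit"
      hooks:
        - type: command
          command: "uv run .claude/hooks/validators/csv_single_validator.py"
---

Read the CSV file given as the first argument, make the modification or
report described by the second argument, then report the results.
```

The point of the shape: the deterministic validation travels with the prompt itself instead of living in a global settings file.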

You have meta-agentics to help quickly spin up new prompts, sub-agents, and skills, but I still like to write prompts by hand. When you’re working on your agents and really trying to understand agentic patterns and tools, you want to slow down and do it by hand.

CSV edit. Tools: search, read, write, edit. Run in the current agent. Model-invocable: false. Now the juicy part: the hook. We want our agent to validate that the CSV is in the correct format. We want the post-tool-use hook because this runs after edit, write, and read. We’re going to run a specific script.

Project layout in the age of agents with self-validation: in your .claude/ directory, alongside commands/, agents/, and skills/, keep a validators/ directory inside hooks/. We have a CSV-single-validator. The agent runs it via uv run with the path. Each validator outputs its own log file. After every post-tool-use call, it runs this script.

The prompt: purpose = “make modifications or report on CSV files”. Workflow: 3 steps — read the CSV file (first arg), make the modification or report (second arg), report the results.

Test it. New Claude Code instance running Opus. /csv-edit with a mock-input-data raw savings February file. Request: “read and report on the file structure”. Found the file, reports the data structure. The magic: after our agent runs, our self-validation specific to this use case has run. This self-validation is HYPER-FOCUSED on the purpose of this prompt. The prompt extends to the sub-agent extends to the skill — it doesn’t really matter the format. It all boils down to the core four: context, model, prompt, tools. In the end, every abstraction adds a powerful deterministic layer that is specialized.

The CSV-single-validator log shows everything passed fine: valid CSV file. Now let’s break it: remove the last quote. Run the same prompt. It read the file, the post-tool-use hook fired on the read, and validation broke. The validator said “resolve this error”. The agent immediately fixed it, reran, completed the report properly, and mentioned the fixed issue.

What happened? Our post-tool-use hook ran and inserted determinism into our agent’s workflow. Not only did it do whatever we asked, it ran our specialized self-validation. We can push this further with specialized hooks embedded in prompts, sub-agents, and skills. Why critical? We can push specialization further. A focused agent with one purpose outperforms an unfocused agent with many purposes — many tasks, many end states.

Side rant: I’m seeing way too many engineers and vibe coders just go to the top of the page, copy, open Claude Code, paste, and say “build a POC of this”. You learn absolutely nothing. The big difference between real engineering and vibe coding is that engineers know what their agents are doing. If you want to know what they’re doing, you have to read the documentation. Highly recommend you take the time, read through the documentation, and understand what you can do so you can teach your agents to do it.

Self-validation is now specializable. Before, we were stuck writing global hooks in our settings.json. That’s still very important: we built a Claude Code damage-control skill that protects your codebase with powerful hooks to block commands. But this is something extraordinary: specialized self-validation.

Now into a sub-agent and a skill. From the release notes, “merge slash command and skills”: the Claude Code team is combining skills and custom slash commands into one. This validates the foundational core-four idea: everything just turns into a prompt that runs in your agent.

CSV-edit-agent. Sub-agents give us two key things over a prompt: parallelization (we can deploy multiple agents at one time) and context isolation (effectively delegate our context window). Same setup. Same hooks. “Determine from prompt” instead of “pull from prompt arg” because sub-agents take the prompt passed in. Same prompt structure.
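Under the same assumptions as before, a hypothetical .claude/agents/csv-edit-agent.md could carry the identical hooks block; the name, description, and tools fields follow Claude Code’s sub-agent front matter, and everything else is illustrative:

```markdown
---
name: csv-edit-agent
description: Edits or reports on a single CSV file. Use one agent per file.
tools: Read, Write, Edit
hooks:
  PostToolUse:
    - matcher: "Read|Write|Edit"
      hooks:
        - type: command
          command: "uv run .claude/hooks/validators/csv_single_validator.py"
---

Determine the target CSV file and the requested modification from the
prompt you receive, make the change, then report the results.
```

Note the body says “determine from the prompt” rather than reading positional arguments, since the sub-agent only receives the prompt it is handed.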

Demo: 8-minute agentic workflow that automatically handled this month’s finances (mock data).

Then: “Use one csv-edit-agent per file in mock-input-data and append three new rows to expenses file. We properly increment the balance.” Four CSV agents in parallel, each editing a file. After every one of these agents runs, they validate the file they just operated on. Not only do we have individual prompts that can self-validate, we have sub-agents that we can scale that self-validate. You can scale specific commands.

Build.md has a linter and a formatter using brand-new Astral tooling (uv, ty) alongside Ruff. Two hooks run on stop: when the agent finishes, it looks over all the code, and only when the build agent runs. We don’t run commands when we don’t need them.
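A hedged sketch of what build.md’s stop hooks could look like. Whether Stop hooks nest exactly like this in front matter is an assumption, and the two commands (Ruff lint and Ruff format check, run via uv) are illustrative:

```markdown
---
description: Build the project, then lint and format-check everything
hooks:
  Stop:
    - hooks:
        - type: command
          command: "uv run ruff check ."
        - type: command
          command: "uv run ruff format --check ."
---
```

Because these fire on stop rather than post-tool-use, they run once over the whole codebase when the build agent finishes, not after every individual edit.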

Imagine you’re doing a migration, updating fields in a database, doing any type of work that you yourself would come in and validate. You can now teach your agents to do this. This is a closed-loop prompt. And we don’t even have to prompt-engineer it anymore. The massive benefit of throwing this inside the hook is you know it will always run. Every time one of these tools is called inside this agent, it validates its work. Guaranteed. This is why the Ralph Wiggum technique — agents-plus-code — is gaining popularity. Agents plus code beats agents. That’s it. That’s what self-validation is. That’s what the closed-loop prompt is.

Every engineer, every good engineer at least, validates their work. Soon every good specialized agent, great at doing one thing well, will validate that one thing. I’m highly convicted: if you want to build an agent that outperforms over and over, build a focused specialized agent. Even down to a CSV-edit agent that just edits CSV files. That’s it. This will perform better over tens, hundreds, thousands, and millions of runs. You want your agents specializing their self-validation.

Don’t need to walk through building the skill. Same shape — csv-edit-skill/skill.md plus the same hooks block.

Demo finished: all four agents ran in parallel, validated their work — proof in the CSV-single-validator logs at the bottom, every file validated within a second. Edits we know worked because we gave them the tools to validate their own work.

Generative UI: the agent created some tables I had no idea it was going to create. Insights about spending for the month, sortable tables, burn, income, balance.

If we open the review-finances prompt: same agentic workflow pattern. An HTML validator runs at the top level. This super-workflow kicks off multiple agents underneath. Categorize-CSV agent → CSV validator. Generative-UI agent → HTML validator. Merge-accounts → CSV validator. The normalized-CSV agent runs two validators on the stop event: global validators on all files in this codebase. General rule of thumb: if you want to test a bunch of files at the end, use the stop hook. If you want to test just one file, use post-tool-use, so the script gets the path that was read, edited, or written.
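That rule of thumb can be sketched in code. The hook_event_name and tool_input.file_path fields are assumptions about the payload shape, used purely for illustration:

```python
# Sketch: what a hook script sees on stdin differs by event type.
# Field names here are illustrative assumptions, not the official schema.

def target_paths(event: dict) -> list[str]:
    """PostToolUse payloads name the single file the tool touched;
    Stop payloads don't, so a stop validator must sweep the repo itself."""
    if event.get("hook_event_name") == "PostToolUse":
        path = event.get("tool_input", {}).get("file_path")
        return [path] if path else []
    return []  # Stop: no single target, validate everything instead


post = {"hook_event_name": "PostToolUse",
        "tool_input": {"file_path": "mock-input-data/expenses.csv"}}
stop = {"hook_event_name": "Stop"}
```

This is why per-file validators belong on post-tool-use and whole-codebase sweeps belong on stop.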

Push further: claude --settings lets you pass in an entire settings file as JSON, including hooks. Take validating agents to another level.
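A hedged sketch of such a settings payload; the hooks shape mirrors Claude Code’s documented settings.json schema, and the validator command is illustrative:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "uv run .claude/hooks/validators/csv_single_validator.py"
          }
        ]
      }
    ]
  }
}
```

You would then launch something like `claude -p "append three rows to expenses.csv" --settings ./validator-settings.json`, giving a one-shot run the same deterministic validation layer.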

Don’t delegate learning to your agent. It doesn’t do you or your agent any good outside of that one shot. Read the docs. Follow the big releases. Claude Code is the leader in agentic coding and lets us tap into agentic engineering. Opus 4.5 has been changing the game. But be very careful: do not overuse these models to the point where you’re not learning anymore. The worst thing any engineer can do is start the self-depreciation process by not learning anything new.

Definitely check out specialized self-validation in the form of hooks inside your custom /commands, sub-agents, and skills. Stay focused and keep building.