06-reference

claude code best practices

2026-04-03 · article · source: https://sankalp.bearblog.dev/my-experience-with-claude-code-20-and-how-to-get-better-at-using-coding-agents/ · by Sankalp

Claude Code 2.0 Best Practices — A Practitioner’s Guide to Coding Agents

Sankalp’s comprehensive guide to Claude Code 2.0 and Opus 4.5, framed around a core thesis: treat AI tooling as augmentation of your judgment and taste, not as a replacement for engineering skill. The post covers context engineering, sub-agents, skills architecture, and practical workflow patterns.

Key mental models

Self-augmentation has three axes: stay current on tooling, upskill in your domain (use faster implementation cycles to focus on architecture and quality), and keep an open mind about which models to use where. This maps directly to how we think about SOUL.md — the founder provides vision and taste, the agent provides execution leverage.

Context is the bottleneck, not capability. LLMs are stateless; every tool call result stays in the window. Token bloat from extended sessions degrades performance. Sankalp’s rule of thumb: compact at 60% context utilization for complex tasks. “Context engineering answers what configuration of context most likely generates desired behavior.” This is the same principle behind on-demand skill loading in 06-reference/2026-04-04-anthropic-skills-internally — keep skills under 500 lines, load only when relevant.
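The 60% rule of thumb is easy to mechanize. A minimal sketch, assuming a hypothetical `should_compact` helper (the function and threshold names are illustrative, not part of any Claude Code API):

```python
# Hypothetical sketch of the "compact at 60% utilization" rule of thumb.
# The names here are illustrative; Claude Code does not expose this function.

COMPACT_THRESHOLD = 0.60  # compact once 60% of the window is consumed

def should_compact(used_tokens: int, context_window: int,
                   threshold: float = COMPACT_THRESHOLD) -> bool:
    """Return True once the session has consumed enough of the window
    that token bloat starts to degrade performance."""
    return used_tokens / context_window >= threshold

# e.g. a 200k-token window:
assert should_compact(130_000, 200_000)      # 65% used -> compact now
assert not should_compact(100_000, 200_000)  # 50% used -> keep going
```

The same check generalizes to any long-running session: track cumulative tool-call output and trigger a summarize-and-restart pass at the threshold rather than waiting for degraded answers.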

Faster feedback loops unlock visceral progress. The speed of iteration matters more than raw model capability. Opus 4.5 won the author back from OpenAI Codex primarily on speed and communication quality, not benchmark scores. Echoes the 06-reference/2026-04-04-talking-to-agents-is-all-you-need insight that how you talk to agents shapes what you get back.

Practical workflow

Three-phase execution

  1. Exploration — Ask clarifying questions, use ultrathink for rigorous analysis, generate ASCII diagrams for visualization, do a “throw-away first draft” for complex features to learn the model’s tendencies before committing to an approach.
  2. Execution — Close monitoring, cross-validation with a second model when uncertain, iterative refinement. This is the 06-reference/2026-04-04-compound-engineering loop in action — each pass compounds understanding.
  3. Review — Use a different model (author prefers GPT-5.2 Codex) for bug detection and severity classification, specifically because it produces fewer false positives than Claude on review tasks.
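The three phases above can be sketched as a single loop. Everything here is hypothetical scaffolding: `run_model` is a stand-in for whatever CLI or API drives each model, and the prompts are placeholders, not Sankalp's actual prompts.

```python
# Hedged sketch of the three-phase workflow; run_model is a stub standing
# in for a real model call (CLI invocation or API request).

def run_model(model: str, prompt: str) -> str:
    """Hypothetical stub; in practice this would call the model's CLI or API."""
    return f"[{model}] {prompt[:40]}"

def build_feature(spec: str) -> str:
    # 1. Exploration: clarify requirements, then a throw-away first draft
    #    to learn the model's tendencies before committing to an approach.
    plan = run_model("opus-4.5", f"Ask clarifying questions, then plan: {spec}")
    draft = run_model("opus-4.5", f"Throw-away first draft for: {plan}")
    # 2. Execution: implement with close monitoring and iterative refinement.
    impl = run_model("opus-4.5", f"Implement, learning from the draft:\n{draft}")
    # 3. Review: a *different* model finds bugs and classifies severity,
    #    since a fresh model produces fewer false positives on review.
    return run_model("gpt-5.2-codex", f"Find bugs, rank by severity:\n{impl}")
```

The design point is the model switch at step 3: the reviewer has none of the implementer's context, so it cannot rationalize the implementer's mistakes.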

Sub-agent architecture

Sub-agents are specialized Claude instances spawned via the Task tool. Four types: Explore (read-only codebase search), Plan (architecture), General-purpose (full tool access), and claude-code-guide (documentation lookup). The critical insight: full context inheritance enables better cross-attention between information pieces compared to summary-only handoffs. This validates our approach in 06-reference/2026-04-04-planning-with-files-skill — planning works better when the agent can see everything, not just a digest.
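The full-inheritance vs. summary-handoff distinction can be made concrete. A minimal sketch, assuming hypothetical `SubAgent`/`Message` types (this is not the Task tool's actual interface):

```python
# Illustrative model of the two handoff styles described above.
# SubAgent, Message, and the spawn functions are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Message:
    role: str     # "user" | "assistant" | "tool"
    content: str

@dataclass
class SubAgent:
    kind: str     # e.g. "explore", "plan", "general-purpose", "claude-code-guide"
    context: list[Message] = field(default_factory=list)

def spawn_with_full_context(kind: str, parent_history: list[Message]) -> SubAgent:
    # Full inheritance: the sub-agent attends over every parent message,
    # so cross-references between pieces of information survive the handoff.
    return SubAgent(kind, context=list(parent_history))

def spawn_with_summary(kind: str, summary: str) -> SubAgent:
    # Summary-only handoff: cheaper on tokens, but detail that the summary
    # dropped is unrecoverable by the sub-agent.
    return SubAgent(kind, context=[Message("user", summary)])
```

The trade-off is token cost versus cross-attention: full inheritance lets the sub-agent correlate any two facts in the parent's history; a summary fixes, in advance, which facts are allowed to matter.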

Skills, plugins, hooks

The taxonomy Sankalp describes matches what we’ve been building: skills as on-demand domain expertise (folders with SKILL.md + scripts, <500 lines each), plugins as shareable bundles of skills/commands/hooks/MCP servers, and hooks as lifecycle triggers (Stop, UserPromptSubmit). This is the 06-reference/concepts/skills-as-building-blocks pattern — small, composable, loaded on demand. Each skill we build becomes a 06-reference/concepts/compounding-knowledge asset.

Context engineering strategies

Relevance to our setup

Sankalp runs a similar multi-model workflow to what we’re building on the Mac Mini (04-tooling/2026-03-29-infrastructure-decisions). His primary is Claude Code (Opus 4.5), secondary is Codex for review, tertiary is Cursor for manual edits. Our always-on agent architecture takes this further — we don’t switch tools manually, we layer skills and dedicated instances. But his compaction strategy (60% threshold) and sub-agent discipline are directly applicable to our long-running sessions.

The article also reinforces a pattern from 06-reference/2026-04-04-anthropic-skills-internally: Anthropic themselves keep skills small and on-demand. The convergence between internal Anthropic practice, Sankalp’s independent findings, and our own architecture is a strong signal that we’re on the right track.

Open questions