06-reference

indydevdan top 2 percent plan 2026

Sat Apr 18 2026 20:00:00 GMT-0400 (Eastern Daylight Time) · reference · source: IndyDevDan YouTube · by IndyDevDan
indydevdan · agentic-coding · 2026-predictions · custom-agents · multi-agent-orchestration · agent-sandboxes · out-loop-agents · claude-code · gemini-3 · opus-4-5 · anthropic · benchmarks · trust-in-agents · agi-hype · karpathy-decade-of-agents

IndyDevDan — TOP 2% Engineering: /PLAN 2026

Why this is in the vault

This is Dan’s annual “bets” video — 11 predictions + a recap of his 2025 bets. It’s vault-worthy not because every bet is right (he’s already admitting misses on 2025), but because Dan is the single most-watched practicing agentic-coding voice on YouTube, he publishes bets and grades them publicly, and he’s one year ahead of mainstream developer discourse. Three reasons:

  1. The “year of trust” framing unifies every other pattern the vault has been tracking since January. Custom agents, evals, sandboxes, out-loop systems, hooks — Dan organizes all of them under a single primitive: how do you build and defer trust in autonomous agentic systems? This is the strongest single-word unifier the vault has for the 2026 agentic-coding moment.
  2. Dan’s 2025 bets graded honestly — roughly 12 of 15 hits. He nails the agentic-coding-in-terminal bet, the cost-of-code-to-zero bet, the skill-gap-earthquake bet (-25% entry-level roles), the no-wall bet, hyperspecialized LLMs, exponential slop, and data > UX > benchmarks. He misses on infinite memory (“I did not understand this problem deeply enough”) and on OpenAI remaining #1 (it didn’t). That batting average earns the next 11 bets serious attention.
  3. Independent convergence with Tobi Lütke (same ingestion cycle) and Thariq (April 15 Anthropic guidance in vault). Dan saying “custom agents above all, private evals, context over prompts, out-loop trust-building” is the same claim set as Tobi saying “constitutions, Toby evals, context engineering” and Thariq saying “more context isn’t free, route long artifacts through subagents.” Three independent voices from three different communities (practicing engineer / public-company CEO / AI lab) are converging on the same architecture.

Core argument

The unifying frame: 2026 is the year of trust in agents. Every bet is about how top 2% engineers build and defer trust in increasingly autonomous systems. Dan’s 11 bets:

  1. Bet on the right labs. Anthropic owns coding, Google owns price+intelligence+speed at scale, OpenAI has dropped to third. Gemini 3 Flash is the specific anchor — top-three intelligence, top-five price, top speed (“this model should not exist”). Opus 4.5 is the coding baseline. Dan’s prescription: don’t be model-monogamous; use Opus for coding-heavy work, Gemini 3 Flash for breadth and cost.
  2. Tool calling is the foundation. The Core 4 = context, model, prompt, tool. Everything — agentic coding, orchestration, custom agents — reduces to the Core 4. Don’t get baited by new framework marketing; if you can’t trace it to the Core 4, it’s noise.
  3. Custom agents above all. The highest-ROI bet of 2026. Custom = your specific system prompt, your tool set, your context, your evals. Generic agents are commoditized; custom agents that know your codebase and your problem are not.
  4. Multi-agent orchestration (not parallelization). Lead agent + command/worker agents. The lead agent is itself a custom agent with CRUD-over-agents tool access. You talk to the orchestrator; the orchestrator handles routing, spawning, and coordination. Agentic Coding 2.0 is this pattern.
  5. Agent sandboxes — defer trust by giving agents their own dev environment. Best-of-N: spin up 10 agents in 10 sandboxes, only merge the winner. You don’t need trust until merge time. This is what senior engineers already do in staging/dev — now we give it to agents too.
  6. In-loop vs out-loop. In-loop = terminal/babysit/one-prompt-at-a-time. Out-loop = Slack/Discord/GitHub/your own system, agent ships a PR, you review. Top engineers maximize out-loop to free in-loop time for the highest-leverage work.
  7. Agentic Coding 2.0. The UI for coding becomes “talk to the lead agent.” No more sub-agent micromanagement. This will require a new UI/application. Dan doesn’t predict the shape but predicts the category.
  8. Public benchmarks get saturated (90–100% across models). Top engineers build private evaluation systems they never publish. Your private benchmark is your alpha. Without it, you can’t tell when a new model is actually a step-change for your use case. Tobi’s “Toby evals” is the same claim.
  9. UIs vs agents — agents eat SaaS. Any SaaS app whose value is CRUD-over-database is cooked. Either the company eats itself with agents first, or a competitor does. The Google search bar is the canonical example.
  10. AGI hype dies. Stop caring about AGI/ASI marketing. Focus on agents. “The decade of agents” (Karpathy) is the operative framing; AGI is vaporware. Top engineers stop responding to AGI discourse entirely.
  11. Bonus: first end-to-end agentic engineer blog post emerges. Someone writes a blog post describing an agent chain that ships a feature prompt-to-production with no human in the loop. Dan calls this the polar opposite of the “AI can’t engineer” crowd. This is the north star for the year-of-trust frame.
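Bet 5’s best-of-N sandbox pattern can be sketched in a few lines. This is a hypothetical illustration, not Dan’s implementation: `run_agent_in_sandbox` is a stand-in for spawning a real agent in an isolated environment (a real version would clone the repo, run the agent, and run the test suite), and the pass counts are simulated. The point is the trust-deferral shape — N candidates run unattended, and only the winner ever reaches a merge decision.

```python
import concurrent.futures
import random

def run_agent_in_sandbox(task: str, seed: int) -> dict:
    # Stand-in for "spawn an agent in its own dev environment".
    # Here we just simulate a test-pass count deterministically by seed.
    rng = random.Random(seed)
    passed = rng.randint(0, 10)  # pretend number of tests passed
    return {"sandbox": seed, "tests_passed": passed, "diff": f"patch-{seed}"}

def best_of_n(task: str, n: int = 10) -> dict:
    # Trust is deferred: all N candidates run with no babysitting;
    # a human only looks at the single highest-scoring result.
    with concurrent.futures.ThreadPoolExecutor(max_workers=n) as pool:
        results = list(pool.map(lambda s: run_agent_in_sandbox(task, s), range(n)))
    return max(results, key=lambda r: r["tests_passed"])

winner = best_of_n("add retry logic to the HTTP client")
print(winner["sandbox"], winner["tests_passed"])
```

The merge step is where trust is finally exercised: everything before it is disposable.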
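Bet 8’s private-eval claim also reduces to a small amount of code. A minimal sketch, assuming nothing about Dan’s or Tobi’s actual harnesses: a fixed set of tasks drawn from your own codebase, each with a programmatic check, scored against any `generate` callable. Because the cases are never published, they can’t be trained against, and a new model can be graded in minutes. The case contents and `stub_generate` below are invented for illustration.

```python
from typing import Callable

# Hypothetical private eval cases: tasks specific to *your* system,
# each paired with a cheap programmatic check.
EVAL_CASES = [
    {"prompt": "write a function name for exponential backoff",
     "check": lambda out: "backoff" in out.lower()},
    {"prompt": "name the config key for request timeout",
     "check": lambda out: "timeout" in out.lower()},
]

def score_model(generate: Callable[[str], str], cases=EVAL_CASES) -> float:
    """Fraction of private cases the model passes."""
    hits = sum(1 for case in cases if case["check"](generate(case["prompt"])))
    return hits / len(cases)

# Stub model for illustration; swap in a real API call to compare models.
def stub_generate(prompt: str) -> str:
    return "retry_with_backoff" if "backoff" in prompt else "request_timeout_seconds"

print(score_model(stub_generate))
```

Running the same `score_model` against two different `generate` backends is the whole alpha: a single number telling you whether a new release is a step-change for your use case.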

2025 bet grading (honest self-review):

Hits: AI coding as standard (84% adoption), agentic coding begins, terminal as the primary agentic surface (he expected a UI, got a CLI), cost of code declines, skill-gap earthquake (-25% entry-level roles), no wall, hyperspecialized LLMs, small language models on-device, industry-breaking architecture (world models), exponential slop, big tech shrinks / SMB grows, data > UX > benchmarks.

Misses: infinite memory (“got this wrong, I did not understand the problem deeply enough”), OpenAI remains #1.

Mapping against Ray Data Co

Open follow-ups