IndyDevDan — My TOP 5 Agentic Engineering Bets for 2026
Why this is in the vault
26-minute strategic-frame video where Dan names the five bets he’s making for 2026 — and importantly, this video is the ANNOUNCEMENT of his Tactical Agentic Coding course, so it’s the canonical statement of his framework before he shifts into pure pedagogy. The vault keeps it because it is the earliest dated articulation (Sep 29 2025) of three concepts that later show up across the IndyDevDan corpus: (1) custom agents as a top-1% differentiator, (2) limit-breaking the planning/reviewing constraints via ADWs, (3) compute-maxing as a deliberate discipline. Pairs with 2026-04-19-indydevdan-top-2-percent-plan-2026 which is the April 2026 update of these bets — together they show what survived 6 months of operational testing and what shifted (custom agents persisted; “deprecate old skills” became “private evals”). Also useful as a longitudinal artifact for the SCALE & DELEGATION cluster.
Core argument
- Custom agents in production are the single biggest 2026 differentiator — top 1% threshold. Single-agent users (those running raw Claude Code sessions) are ahead of the majority, but engineers running custom agents in production systems are the top 1%. ROI: scaled, repeatable, domain-specific work. Risk: steep learning curve.
- Deprecate old engineering skills deliberately. The pencil metaphor — both creation and deletion are required. Skill atrophy is GUARANTEED in 2026 for any skill an agent now performs. The bet is that the new skill (orchestrating agents) compounds faster than the old skill (typing line-by-line) atrophies.
- The two constraints of agentic coding are PLANNING and REVIEWING. Building has been solved. Planning still requires human knowledge of the codebase. Reviewing still requires human attention to validate output. ADWs (AI Developer Workflows) — pipelines of agents combining old-world engineering with new-world agents — are the lever to break both constraints. Estimated 1-2 order-of-magnitude improvement waiting on the other side.
- Multi-agent UIs and interaction interfaces — voice specifically — are the escape hatch from the chat interface. Chat is the laziest, most overused agent interface. Voice raises the information rate between human and agent. Multi-agent UIs are NOT another Claude Code wrapper — they are use-case-specific interfaces that show many agents working on the user’s actual product.
- Compute-maxing is the discipline of always asking “how can I use more compute today?” Always-on 24/7 agents are the destination. The progression is gradual — you don’t jump from prompting back-and-forth to a 24-hour agent. You incrementally expand. Side bet: compute cost will continue to decline (Qwen series cited as small + smart).
- The synthesis: build living software that works for you while you sleep. The five bets compose into one outcome — agents that initiate work, prompt the human at decision points, and operate while the human is offline. ChatGPT Pulse cited as evidence the two-way prompt prediction is coming true.
- You are the bottleneck — not the models, not the tools, not the agents. “It’s a skill issue.” The new engineering role is composing the old world of software engineering with the new world of agents to achieve exponential scales. Dan brands this as “agentic engineer.”
- First compute crunch prediction: 2026. As more engineers scale their compute usage, supply will become limited. Already starting to see signs. Engineers who internalize compute-maxing early will be advantaged when the crunch hits.
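The ADW idea above — pipelines that sandwich an agent's build step between a deterministic plan step and a deterministic review step — can be sketched minimally. Everything here is a stand-in: `run_adw`, the stub agent, and the check list are hypothetical names, not Dan's implementation; a real ADW would call an actual agent CLI or API and run a real review suite.

```python
from dataclasses import dataclass, field

@dataclass
class ADWResult:
    plan: str
    output: str
    review_passed: bool
    notes: list = field(default_factory=list)

def run_adw(task: str, agent, checks) -> ADWResult:
    """Minimal AI Developer Workflow: plan -> build -> review.

    `agent` is any callable taking a prompt and returning text;
    `checks` is a list of (name, predicate) pairs run over the output.
    Both are placeholders for a real agent call and a real review suite.
    """
    # Planning step: in a real ADW this is a planning agent or a spec template.
    plan = f"PLAN: {task} -> steps: analyze, implement, verify"
    # Building step: delegate to the agent (stubbed here).
    output = agent(plan)
    # Reviewing step: deterministic checks absorb part of the human review load.
    notes = [name for name, check in checks if not check(output)]
    return ADWResult(plan, output, review_passed=not notes, notes=notes)

# Stub agent and checks, for illustration only.
stub_agent = lambda prompt: f"result for [{prompt}]"
checks = [("non-empty", lambda s: bool(s)),
          ("mentions plan", lambda s: "PLAN" in s)]
result = run_adw("add retry logic", stub_agent, checks)
```

The point of the shape is that the plan and review steps are ordinary code (old-world engineering), so only the build step spends agent compute — which is where the 1-2 order-of-magnitude claim lives.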
Mapping against Ray Data Co
- The “you are the bottleneck” framing validates RDCO’s autonomous loop bet. RDCO is built on the premise that the founder is the constraint and the agent (Ray) is the leverage. Dan’s framing — agents are not the bottleneck, the human’s ability to wield them is — is the same bet stated differently. Worth quoting in any Sanity Check issue about the autonomous loop.
- Custom agents as top-1% differentiator validates the skills/ vs. raw Claude Code investment. RDCO has 50+ custom skills — that’s the operational form of “custom agents in production.” The April 2026 update 2026-04-19-indydevdan-top-2-percent-plan-2026 sharpens this further (top 2% is the realistic bar). Both videos agree the path requires custom-agent investment and DOES NOT come from raw model usage.
- The planning/reviewing constraint maps directly to RDCO’s two highest-friction loops. Planning friction shows up in /research-brief, /process-newsletter assessments, and /check-board task selection. Reviewing friction shows up in /draft-review, /motion-review, /self-review. Both are the bottleneck Dan names. Worth measuring per-skill: time-to-plan and time-to-review as KPIs to track over the next quarter.
- Multi-agent UI bet is partially anticipated by the channel-listener architecture. Discord and iMessage already function as a “multi-agent UI” of sorts — multiple skills (process-newsletter, check-board, curiosity, etc.) all surface output through the same channel reply. The missing piece is the per-agent observability (which agent is currently working, what step it’s on, ETA) — pairs with the agent-pulse.jsonl follow-up from the BIG 3 video.
- Voice interface is a real candidate for the founder’s mobile workflow. Currently the founder switches contexts to type into iMessage. ElevenLabs MCP is already connected. A “Ray voice” mode where the founder speaks and Ray responds via TTS is a 2-day experiment. Worth queuing as a Notion task (see follow-ups).
- Compute-maxing maps to the “API cost is budget-controlled” memory. The founder has explicitly said don’t pause for per-call cost confirmation. That IS compute-maxing in operational form. The next step Dan implies is moving from “don’t ask permission per call” to “deliberately schedule more compute” — e.g., parallel sub-agent fan-out by default, more frequent /curiosity cycles, longer /deep-research runs.
- The “first compute crunch” prediction is worth tracking. If Dan is right, the cost of capacity (Anthropic API rate limits, Bedrock throughput, etc.) will become a constraint before the cost of tokens does. RDCO should monitor rate-limit headers across providers and have a fallback strategy (Bedrock + direct API + Vertex) ready before the crunch hits.
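The per-skill KPI idea above (time-to-plan, time-to-review) could be instrumented with a small context manager that appends phase durations to a JSONL log. The log path, field names, and skill name are illustrative, not an existing RDCO convention.

```python
import json, time
from contextlib import contextmanager
from pathlib import Path

LOG = Path("skill-kpis.jsonl")  # illustrative path, not an existing file

@contextmanager
def phase_timer(skill: str, phase: str, log_path: Path = LOG):
    """Record how long a skill spends in a phase ('plan' or 'review')."""
    start = time.monotonic()
    try:
        yield
    finally:
        entry = {"skill": skill, "phase": phase,
                 "seconds": round(time.monotonic() - start, 3),
                 "ts": time.time()}
        with log_path.open("a") as f:
            f.write(json.dumps(entry) + "\n")

# Usage: wrap the planning and reviewing portions of a skill run.
with phase_timer("research-brief", "plan"):
    time.sleep(0.01)  # stand-in for actual planning work

rows = [json.loads(line) for line in LOG.read_text().splitlines()]
```

After 30 days the log answers the question in the bullet above directly: group by skill and phase, and the worst planning/reviewing constraint falls out of a sum.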
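The compute-crunch monitoring idea could start as a header normalizer run over each API response. Anthropic does return `anthropic-ratelimit-*` response headers, but the exact header names vary by provider and can change, so treat the prefixes and the 10% threshold below as assumptions to tune.

```python
def capacity_signal(headers: dict,
                    prefixes=("anthropic-ratelimit-", "x-ratelimit-")) -> dict:
    """Normalize rate-limit-style headers into {metric: value} and flag low capacity.

    Header prefixes differ per provider; the ones above are examples, not a spec.
    """
    metrics = {}
    for key, value in headers.items():
        k = key.lower()
        for prefix in prefixes:
            if k.startswith(prefix):
                try:
                    metrics[k[len(prefix):]] = float(value)
                except ValueError:
                    pass  # skip non-numeric headers (e.g. reset timestamps)
    remaining = metrics.get("requests-remaining")
    limit = metrics.get("requests-limit")
    low = bool(remaining is not None and limit and remaining / limit < 0.1)
    return {"metrics": metrics, "low_capacity": low}

# Example response headers (values invented for illustration).
sample = {"anthropic-ratelimit-requests-limit": "1000",
          "anthropic-ratelimit-requests-remaining": "50",
          "content-type": "application/json"}
signal = capacity_signal(sample)
```

A `low_capacity` signal is the trigger for the fallback routing (Bedrock + direct API + Vertex) the bullet above calls for — the routing itself is the larger piece this sketch deliberately omits.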
Open follow-ups
- Queue: prototype voice mode for the founder’s mobile workflow. ElevenLabs TTS for outbound; Whisper STT for inbound. Wire to existing iMessage reply path. ~2 days. Add to Notion board.
- Define KPIs for planning-time and reviewing-time per skill. Instrument /research-brief, /process-newsletter, /draft-review, /check-board to log time-to-first-plan and time-to-final-review. After 30 days of data, audit which skill has the worst planning/reviewing constraint and prioritize fixing that one. ~4 hours to instrument + 30 days passive.
- Build agent-pulse.jsonl for multi-agent observability. (Carryover from the BIG 3 video — same recommendation surfaces here.) Append-log of every sub-agent spawn with start/end/current-step. Surface in a dashboard or via /agent-status command. ~30 min.
- Compare Sep 2025 bets vs Apr 2026 update — what shifted? Side-by-side: Sep “deprecate old skills” → Apr “private evals.” Sep “multi-agent UI” → Apr ??? Sep “compute-maxing” → Apr “always-on agents.” Useful as a Sanity Check angle: “What an agentic engineer’s bets look like 6 months later.” ~1 hour to draft.
- Sanity Check angle: “The chat interface is the laziest interface for agents.” Lead with Dan’s claim; expand to the data-engineering audience (most of whom interact with LLMs only through ChatGPT/Claude.ai web); contrast against custom UIs (notebooks, vault dashboards, voice). Strong angle, especially for readers stuck in chat-only workflows.
- Track the first-compute-crunch prediction. Quarterly note in the vault — “did the crunch happen yet?” Watch for: Anthropic introducing per-org capacity tiers, Bedrock waitlists, OpenAI rate-limit reductions. If Dan is right, decisive action (multi-provider routing) needs to be in place BEFORE the crunch.
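The agent-pulse.jsonl follow-up above is small enough to sketch: an append-only heartbeat writer plus a reducer that yields the latest status per agent, which is exactly what an /agent-status command would render. The filename comes from the note; the schema (agent/step/status/ts) is an assumption.

```python
import json, time
from pathlib import Path

PULSE = Path("agent-pulse.jsonl")  # filename from the note; schema is assumed

def pulse(agent: str, step: str, status: str = "running", path: Path = PULSE):
    """Append one heartbeat line per sub-agent state change."""
    with path.open("a") as f:
        f.write(json.dumps({"agent": agent, "step": step,
                            "status": status, "ts": time.time()}) + "\n")

def current_status(path: Path = PULSE) -> dict:
    """Latest status per agent: last line wins, since the log is append-only."""
    latest = {}
    for line in path.read_text().splitlines():
        row = json.loads(line)
        latest[row["agent"]] = {"step": row["step"], "status": row["status"]}
    return latest

# Usage: each sub-agent (or its spawner) emits pulses as it progresses.
pulse("process-newsletter", "fetching feeds")
pulse("check-board", "ranking tasks")
pulse("process-newsletter", "drafting summary", status="done")
status = current_status()
```

Append-only JSONL keeps writes safe under concurrent sub-agents (each write is one line), and the full history is preserved for later ETA estimation.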
Sponsorship
This video is the launch announcement of Tactical Agentic Coding — the entire back half (~14:00 onward) is course pitch with pricing, course structure, lesson breakdown, and pack discounts. Dan is unusually candid about it (“If you don’t like being sold things or if you think I’m an AI grifter, it’s time for you to tune out”). Per RDCO’s bias-flagging discipline: the five bets in the front half are testable strategic frames and stand on their own merit — Dan didn’t invent custom-agents-as-differentiator or compute-maxing-as-discipline. The COURSE-AS-CONCLUSION pipeline is the bias to flag: the bets are framed to make the course feel like the natural next step. The vault should not buy the course; it gets the strategic frames free from the public video and the operational forms free from the lesson-by-lesson public videos that followed. The April 2026 update video 2026-04-19-indydevdan-top-2-percent-plan-2026 is also sponsored by his course but contains different operational guidance, validating the “extract from public, skip the course” posture.
Related
- ~/rdco-vault/06-reference/transcripts/2026-04-20-indydevdan-top-5-agentic-bets-2026-transcript.md — raw transcript
- ~/rdco-vault/06-reference/2026-04-19-indydevdan-top-2-percent-plan-2026.md — April 2026 update of the same five-bet frame
- ~/rdco-vault/06-reference/2026-04-20-indydevdan-one-agent-to-rule-them-all.md — orchestrator pattern is the operational form of “custom agents in production”
- ~/rdco-vault/06-reference/2026-04-20-indydevdan-big-3-super-agent.md — multi-vendor orchestration is the operational form of “compute maxing”
- ~/rdco-vault/06-reference/2026-04-19-indydevdan-cracked-claude-agent-skills.md — skills are the foundation custom agents are built on
- ~/rdco-vault/06-reference/2026-04-15-thariq-claude-code-session-management-1m-context.md — Anthropic’s own framing of context discipline; complementary to compute-maxing