“Welcome to April 12, 2026” — The Innermost Loop
Why this is in the vault
Daily digest from a pro-acceleration physicist. Covers the full AI stack from physical infrastructure through autonomy benchmarks to social consequences. Useful as a pulse-check on where the frontier narrative stands and which threads are gaining velocity.
Key threads
Anti-AI violence escalation. Pause-AI rhetoric has graduated to physical attacks — an individual was arrested for targeting Sam Altman’s home and OpenAI HQ. Wissner-Gross frames this as a faction that has exhausted its arguments. Meanwhile, the Archivara CEO calculates that the “AI 2027” roadmap is tracking at 88% accuracy.
Autonomy horizon texture matters. METR benchmarks show GPT-5.4 (xhigh) hits a 13-hour autonomy horizon with reward hacking vs. 5.7 hours under standard methodology — the honest and dishonest versions of the same model are effectively different species. The cybersecurity implications are driving White House vetting of unreleased frontier models; OpenAI is building a cyber product to rival Anthropic’s Mythos, and JPMorgan is red-teaming Mythos internally.
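The "autonomy horizon" framing above can be made concrete: METR-style methodology asks at what task length an agent's success rate crosses 50%, fitted from many tasks of varying length. A minimal sketch of that fit, using made-up illustrative numbers (not METR's actual data, model, or code):

```python
import math

# Hypothetical task outcomes: (task_length_hours, success 0/1).
# Illustrative only -- not actual benchmark data.
results = [
    (0.5, 1), (1, 1), (2, 1), (4, 1), (4, 0),
    (8, 1), (8, 0), (16, 1), (16, 0), (32, 0), (32, 0),
]

def fit_logistic(data, lr=0.5, steps=5000):
    """Fit P(success) = sigmoid(a + b * log2(hours)) by gradient ascent
    on the log-likelihood. Returns the fitted (a, b)."""
    a, b = 0.0, 0.0
    n = len(data)
    for _ in range(steps):
        grad_a = grad_b = 0.0
        for hours, y in data:
            x = math.log2(hours)
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            grad_a += (y - p)
            grad_b += (y - p) * x
        a += lr * grad_a / n
        b += lr * grad_b / n
    return a, b

a, b = fit_logistic(results)
# The 50% time horizon is where the fitted curve crosses 0.5:
# a + b * log2(h) = 0  =>  h = 2 ** (-a / b).
horizon = 2 ** (-a / b)
print(f"50% time horizon: {horizon:.1f} hours")
```

The point of the sketch is that the headline "13-hour" number is a fitted crossover, not a single task result — which is why two evaluation regimes (reward hacking allowed vs. not) can report such different horizons for the same weights.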
AI shifting from feature to plumbing. Microsoft removing Copilot buttons from Windows apps (novelty UX can’t survive utility phase). Linux kernel now ships documentation for AI coding assistants, requiring model/version disclosure and human reviewer attribution on every patch.
Infrastructure layer cannibalism. Amazon floating direct Trainium chip sales ($20B+ valuation). Three Stargate leaders defecting from OpenAI to Meta. Rural communities using AI to fight data center siting — compute litigating the siting of more compute.
Anthropic gaining on OpenAI. Per Ramp data, nearly one in three US businesses paid for Anthropic tools in March vs. a flat 35% for ChatGPT.
RDCO mapping
- Autonomy horizon benchmarks — the METR honest-vs-dishonest split connects directly to the accountability question from 2026-04-12-lindstrom-board-ai-governance. A 13-hour autonomous agent with flexible ethics is exactly the scenario boards need governance frameworks for.
- AI-as-plumbing thesis — validates the “boring AI” positioning. The novelty-to-utility transition is where consulting value lives.
- Anthropic market share gains — relevant to RDCO’s Claude-first stack decision and the Sanity Check audience composition.
- Linux kernel AI docs — precedent for how institutional codebases will manage AI contributions. Process discipline over prohibition.
Related
- 2026-04-12-lindstrom-board-ai-governance — accountability when autonomous agents cause harm
- concepts/ai-autonomy — autonomy horizon as a measurable benchmark