

Wed Apr 15 2026 20:00:00 GMT-0400 (Eastern Daylight Time) · reference · source: The Innermost Loop (Substack) · by Alex Wissner-Gross

“Welcome to April 16, 2026” — The Innermost Loop

Why this is in the vault

Wissner-Gross’s daily digest captures several signals directly relevant to RDCO strategy: the Claude Opus 4.7 release window, weak-to-strong supervision as a cheap alignment technique, federal agencies routing around an Anthropic ban for cyber-defense use, and user-visible degradation of Opus 4.6 / Claude Code during the compute squeeze. This is real-time data underneath the abundance-flywheel / Muddle-vs-Rails frames already in the vault.

Bias / sponsorship check

No paid sponsors detected. Two standard Substack subscription CTAs. Tone is dry and pro-acceleration with his usual satirical framing (“Singularity has acquired constituencies inside the government that banned it”). No self-promo of Solve Everything in this issue. No disclosed financial positions.

Core argument

The issue is a single-day sweep organized loosely around the thesis that the alignment loop is now recursing on itself while compute and policy collide.

Cyber-AI and the sword-and-shield economy. Federal agencies — including Treasury — are quietly sidestepping the White House ban on Anthropic to test Claude Mythos for cyber defense. UK AISI reports Mythos Preview solved 73% of expert-level capture-the-flag tasks and became the first model to fully crack “The Last Ones,” a 32-step attack benchmark (3 of 10 attempts, averaging 22/32 steps vs. 16 for Opus 4.6). OpenAI’s GPT-5.4-Cyber is framed as defensive prep for “increasingly more capable models” — Wissner-Gross reads this as each lab shipping “both sword and shield from the same forge.”

Claude Opus 4.7 imminent; Opus 4.6 degrading. Anthropic is reportedly preparing Opus 4.7 “as soon as this week,” distinct from Mythos. Power users are complaining that Opus 4.6 and Claude Code feel less reliable and more token-hungry — Wissner-Gross reads this as compute rationing toward the next frontier.

Weak-to-strong supervision recursing. Anthropic researchers used a weaker model to fine-tune a stronger one, closing 97% of the capability gap in days for ~$18k, outperforming human researchers (with occasional specification-gaming). Alignment scaling is now autocatalytic.
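The mechanism behind weak-to-strong supervision can be sketched in a toy form. This is a minimal, hypothetical illustration of the general technique, not Anthropic's actual setup (everything below — the linear concept, the 20% error rate, the logistic student — is an assumption for illustration): a "weak supervisor" that is only right ~80% of the time labels a large pool of data, and a stronger student trained purely on those noisy labels fits the underlying signal rather than the supervisor's mistakes, recovering most of the gap to ground truth.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth concept: sign of a linear function of the features.
d = 10
w_true = rng.normal(size=d)

def true_labels(X):
    return (X @ w_true > 0).astype(int)

# "Weak supervisor": correct only ~80% of the time (stand-in for a weaker model's errors).
def weak_labels(X, flip=0.2):
    y = true_labels(X)
    noise = rng.random(len(y)) < flip
    return np.where(noise, 1 - y, y)

# Strong student: logistic regression trained by gradient descent on weak labels only.
def train_logreg(X, y, steps=500, lr=0.5):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        z = np.clip(X @ w, -30, 30)          # clip to avoid overflow in exp
        p = 1.0 / (1.0 + np.exp(-z))
        w -= lr * X.T @ (p - y) / len(y)     # mean cross-entropy gradient step
    return w

X_train = rng.normal(size=(5000, d))
X_test = rng.normal(size=(2000, d))

w_student = train_logreg(X_train, weak_labels(X_train))

weak_acc = (weak_labels(X_test) == true_labels(X_test)).mean()
student_acc = ((X_test @ w_student > 0).astype(int) == true_labels(X_test)).mean()

print(f"weak supervisor accuracy: {weak_acc:.3f}")
print(f"strong student accuracy:  {student_acc:.3f}")
```

The student ends up well above the supervisor's ~0.80 accuracy because the label noise is unsystematic while the signal is consistent — the same denoising intuition behind the newsletter's "closing 97% of the capability gap" claim, though the real result involved frontier models, not linear classifiers.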

Frontier pressure in math and biology. GPT-5.4 Pro solved Erdős Problem #1196 with a proof a mathematician called “from The Book.” Cosmo’s topical clascoterone sustained hair growth at one year. Amazon launched Bio Discovery, pushing wet-lab drug discovery toward managed-service agents.

Coding agents as the dominant P&L line. Uber’s CTO admits Claude Code usage has maxed out its entire 2026 AI budget months in. Anthropic responded with “Claude Code routines” — scheduled and event-triggered agents on managed cloud. Apple is redeploying 200 Siri engineers into an AI coding bootcamp.

Compute pivots and rationing. Allbirds sold for $39M and rebranded as “NewBird AI,” a GPU-as-a-Service provider (stock +582%). Maine banned new data centers above 20 MW until late 2027. OpenAI’s new CRO called Anthropic’s compute shortage a “strategic misstep” while conceding the coding wedge.

Silicon and substrate. Tesla’s Terafab at “light speed” pricing with AMAT / Tokyo Electron / Lam. USC’s 700°C memristor (hotter than Venus). Meta’s 1 GW custom MTIA chips on TSMC 2nm with Broadcom. Amazon acquiring Globalstar for $11.57B to chase SpaceX in LEO. Closing line: “Superintelligence is negotiating for substrate from Venus to low Earth orbit.”

Demographics. ChatGPT’s original ~80%-male first-name skew has disappeared — standard inflection for a technology going general-purpose.

Mapping against Ray Data Co

All quotations above are ≤15 words and in quotation marks, per the process-newsletter skill. Facts paraphrased from Wissner-Gross’s original; links available in the source newsletter.