“Welcome to April 16, 2026” — The Innermost Loop
Why this is in the vault
Wissner-Gross’s daily digest captures several signals directly relevant to RDCO strategy: the Claude Opus 4.7 release window, weak-to-strong supervision as a cheap alignment technique, federal agencies routing around an Anthropic ban for cyber-defense use, and user-visible degradation of Opus 4.6 / Claude Code during the compute squeeze. This is real-time data underneath the abundance-flywheel / Muddle-vs-Rails frames already in the vault.
Bias / sponsorship check
No paid sponsors detected. Two standard Substack subscription CTAs. Tone is dry and pro-acceleration with his usual satirical framing (“Singularity has acquired constituencies inside the government that banned it”). No self-promo of Solve Everything in this issue. No disclosed financial positions.
Core argument
The issue is a single-day sweep organized loosely around the thesis that the alignment loop is now recursing on itself while compute and policy collide.
Cyber-AI and the sword-and-shield economy. Federal agencies — including Treasury — are quietly sidestepping the White House ban on Anthropic to test Claude Mythos for cyber defense. UK AISI reports Mythos Preview solved 73% of expert-level capture-the-flag tasks and became the first model to fully crack “The Last Ones,” a 32-step attack benchmark (3 of 10 attempts, averaging 22/32 steps vs. 16 for Opus 4.6). OpenAI’s GPT-5.4-Cyber is framed as defensive prep for “increasingly more capable models” — Wissner-Gross reads this as each lab shipping “both sword and shield from the same forge.”
Claude Opus 4.7 imminent; Opus 4.6 degrading. Anthropic is reportedly preparing Opus 4.7 “as soon as this week,” distinct from Mythos. Power users are complaining that Opus 4.6 and Claude Code feel less reliable and more token-hungry; Wissner-Gross reads this as compute rationing toward the next frontier.
Weak-to-strong supervision recursing. Anthropic researchers used a weaker model to fine-tune a stronger one, closing 97% of the capability gap in days for ~$18k, outperforming human researchers (with occasional specification-gaming). Alignment scaling is now autocatalytic.
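The weak-to-strong pattern is easy to sketch concretely. A toy sketch follows, assuming nothing about Anthropic's actual setup: a low-capacity "teacher" pseudo-labels data for a higher-capacity student, and we measure what fraction of the weak-to-ceiling accuracy gap the student recovers. The dataset, model choices, and split sizes are all illustrative.

```python
# Toy weak-to-strong supervision sketch (illustrative; NOT Anthropic's
# method). A weak teacher pseudo-labels a large unlabeled pool; a
# higher-capacity student trains on those labels; we measure how much
# of the weak-to-ceiling gap the student closes.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=6000, n_features=20,
                           n_informative=10, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Weak teacher: low-capacity model, trained on a small labeled slice.
teacher = LogisticRegression(max_iter=1000).fit(X_pool[:200], y_pool[:200])

# Ceiling: the strong architecture trained on ground-truth labels.
ceiling = RandomForestClassifier(random_state=0).fit(X_pool, y_pool)

# Student: same strong architecture, trained only on the teacher's
# pseudo-labels for the (nominally unlabeled) pool.
student = RandomForestClassifier(random_state=0).fit(
    X_pool, teacher.predict(X_pool))

weak_acc = teacher.score(X_test, y_test)
ceil_acc = ceiling.score(X_test, y_test)
stud_acc = student.score(X_test, y_test)
gap_recovered = (stud_acc - weak_acc) / (ceil_acc - weak_acc)
print(f"weak={weak_acc:.3f} ceiling={ceil_acc:.3f} "
      f"student={stud_acc:.3f} gap recovered={gap_recovered:.0%}")
```

The point of the sketch is the loop shape, not the numbers: the expensive ingredient is the strong architecture, and the cheap ingredient is a weak supervisor, which is why the reported ~$18k price tag is the interesting part of the story.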
Frontier pressure in math and biology. GPT-5.4 Pro solved Erdős Problem #1196 with a proof a mathematician called “from The Book.” Cosmo’s topical clascoterone sustained hair growth at one year. Amazon launched Bio Discovery, pushing wet-lab drug discovery toward managed-service agents.
Coding agents as the dominant P&L line. Uber’s CTO admits Claude Code usage has maxed out its entire 2026 AI budget months in. Anthropic responded with “Claude Code routines” — scheduled and event-triggered agents on managed cloud. Apple is redeploying 200 Siri engineers into an AI coding bootcamp.
Compute pivots and rationing. Allbirds sold for $39M and rebranded as “NewBird AI,” a GPU-as-a-Service provider (stock +582%). Maine banned new data centers above 20 MW until late 2027. OpenAI’s new CRO called Anthropic’s compute shortage a “strategic misstep” while conceding the coding wedge.
Silicon and substrate. Tesla’s Terafab at “light speed” pricing with AMAT / Tokyo Electron / Lam. USC’s 700°C memristor (hotter than Venus). Meta’s 1 GW custom MTIA chips on TSMC 2nm with Broadcom. Amazon acquiring Globalstar for $11.57B to chase SpaceX in LEO. Closing line: “Superintelligence is negotiating for substrate from Venus to low Earth orbit.”
Demographics. ChatGPT’s original ~80%-male first-name skew has disappeared — standard inflection for a technology going general-purpose.
Mapping against Ray Data Co
- Opus 4.7 release timing (this week) — directly affects RDCO’s Claude-first stack. If 4.7 ships with meaningfully different behavior or pricing, every Claude Code routine and every vault skill that assumes 4.6 should be reverified. Strong: this is operational, not theoretical, and Ray is currently running on Opus 4.7 per the model banner.
- Opus 4.6 / Claude Code degradation reports — matches anecdotal friction in RDCO’s own autonomous loop. Worth noting that user-visible degradation during compute squeezes is the cost of being on a frontier lab’s product rather than a stable tier. Reinforces the case for observability on agent runs (token use per task, retry rates), not just output checks.
- “Claude Code routines” = scheduled/event-triggered managed agents — this is exactly the architecture pattern RDCO’s /check-board autonomous loop and skill scheduler already implement locally. If Anthropic ships this as a managed product, RDCO’s harness becomes either redundant or differentiated by vault-grounding and channel routing. Worth a Sanity Check angle: “when the platform catches up to your harness, what’s the moat?”
- Weak-to-strong supervision at $18k / few days — this is the abundance-flywheel pattern applied to alignment itself. It reinforces the Ch 6 thesis from Solve Everything: efficiency in the loop does not reduce spend, it expands capability per dollar. Newsletter angle: “your cheapest evaluator is a weaker model.”
- Federal ban-routing for cyber-AI — relevant to any RDCO client in regulated sectors. The signal is that formal bans get routed around inside the banning institution when capability is high enough, which changes the risk model for “is this model blocked at my customer?” questions.
- Uber maxing its 2026 AI budget on Claude Code — the strongest possible revealed-preference signal for coding-agent spend. Supports RDCO’s positioning that coding-assist is the first-order AI line item, not a garnish.
- Maine data-center ban — a local-regulation signal worth tracking alongside the NYC grocery-store line. Content-as-a-product angle: the “public sector gathers reviews while the private sector ships universal services” framing is exactly the tone Sanity Check lands well.
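The observability point above (token use per task, retry rates) can be made concrete. A minimal sketch, assuming nothing about RDCO's actual harness: log one record per agent task attempt, then roll up tokens-per-task and retry rate per model version, so degradation like the Opus 4.6 reports shows up as a trend rather than a vibe. All names (`RunRecord`, the field set) are hypothetical.

```python
# Minimal agent-run observability sketch (hypothetical names; not
# RDCO's actual harness). Track per-attempt token use and success,
# then roll up tokens-per-task and retry rate per model so
# degradation shows up as a number instead of an anecdote.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class RunRecord:
    task_id: str
    model: str        # e.g. "opus-4.6"
    tokens: int       # total tokens consumed by this attempt
    success: bool

def rollup(records):
    """Per-model tokens-per-task and retry rate."""
    by_model = defaultdict(list)
    for r in records:
        by_model[r.model].append(r)
    out = {}
    for model, recs in by_model.items():
        attempts_by_task = defaultdict(list)
        for r in recs:
            attempts_by_task[r.task_id].append(r)
        n_tasks = len(attempts_by_task)
        retries = sum(len(a) - 1 for a in attempts_by_task.values())
        tokens = sum(r.tokens for r in recs)
        out[model] = {
            "tokens_per_task": tokens / n_tasks,
            "retry_rate": retries / n_tasks,
        }
    return out

records = [
    RunRecord("t1", "opus-4.6", 1200, False),
    RunRecord("t1", "opus-4.6", 1500, True),   # one retry on t1
    RunRecord("t2", "opus-4.6", 900, True),
]
stats = rollup(records)
print(stats["opus-4.6"])  # {'tokens_per_task': 1800.0, 'retry_rate': 0.5}
```

Trended per model version, these two numbers are enough to distinguish "4.6 is degrading under compute rationing" from ordinary task-mix noise, and to reverify a 4.7 rollout against a 4.6 baseline.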
Related
- 2026-04-13-innermost-loop-welcome-apr-13 — same-author daily digest; Jevons paradox, agents at every layer, Anthropic revenue trajectory
- 2026-04-12-innermost-loop-singularity-immune-response — prior issue; autonomy horizons and institutional friction
- book-solve-everything-master-synthesis-2026-04-13 — Wissner-Gross’s book; abundance flywheel and L0-L5 framework cited above
- book-solve-everything-ch6-the-engine-2026-04-13 — abundance flywheel directly validated by weak-to-strong-supervision cost curve
- book-solve-everything-ch8-muddle-vs-machine-2026-04-13 — eval-layer thesis reinforced by agent proliferation
- 2026-04-04-claude-code-best-practices — RDCO’s internal Claude Code reference; reverify against 4.7 when it ships
- 2026-04-01-stratechery-axios-attack-claude-code-leaked-security — prior cyber-AI / Claude Code security thread
- 2026-04-13-stratechery-mythos-muse-compute — Anthropic compute strategy / Mythos context
Copyright / quoting
All quotations above are ≤15 words and in quotation marks, per the process-newsletter skill. Facts paraphrased from Wissner-Gross’s original; links available in the source newsletter.