“Welcome to April 9, 2026” — The Innermost Loop
Why this is in the vault
This is the first Innermost Loop issue in the vault chronologically: it sits four days before the master Solve Everything synthesis and three days before the Apr 12 issue. The opening Mythos framing (“artisanal for-loop” as the new hazard) marks the moment Anthropic’s cyber-capable model goes from rumor to commentary anchor, and the Scott Wu compute-scissor stat (FLOPs growing ~3x annually vs inference demand growing ~10x annually) is the most actionable economic signal in the issue. Both feed directly into the abundance-flywheel and Muddle-vs-Rails arguments we are already tracking.
Bias / sponsorship check
No paid sponsors detected. Two standard Substack subscription CTAs (“Subscribe for free to receive new posts and support my work”). Wissner-Gross writes as an independent commentator; no disclosure of personal positions in any of the named entities (Anthropic, Nvidia, Perplexity, OpenAI, Meta, ByteDance, Alibaba). Tone is dry-satirical and pro-acceleration — the framing routinely treats the Singularity as already underway, which the reader should price in.
The core argument
A single-day sweep across six layers of the AI stack — frontier models, model speciation, application layer, substrate, human-stack adjacencies, and capital markets — unified by the implicit thesis that the Singularity is now visible in every column of the spreadsheet at once.
Key threads:
- Mythos as inflection. Anthropic’s Mythos is described as superhuman at vulnerability discovery, prompting commentary that human-written code itself is becoming unsafe. It is the first model class trained at scale on Blackwells, with Vera Rubin GPUs queued behind. OpenAI is reportedly running its own staggered cyber-model rollout to selected partners.
- xAI behind despite firepower. Elon Musk claims Colossus 2 is now training 7 models simultaneously (Imagine V2, twin 1T and 1.5T variants, up to a 10T behemoth, ~2-month runs each). A leaked memo from xAI’s new president (who also runs Starlink) admits the lab is “clearly behind” frontier shops; Wissner-Gross’s punchline: “7 simultaneous training runs cannot manufacture taste.”
- Model speciation. Meta’s Muse Spark, the first model shipped under Alexandr Wang, is labeled “a data labeling CEO’s model” for crushing data-quality benchmarks while flubbing reasoning (“you ship the org chart you have”). Alibaba dropped HappyHorse-1.0 anonymously and seized #1 on the Artificial Analysis text-to-video and image-to-video leaderboards. ByteDance counters with In-Place Test-Time Training, repurposing MLP projection matrices as fast weights so a 4B model can dominate at 128k context (a toy sketch of the fast-weights idea follows this list). OpenAI researchers solved 5 more Erdős problems.
- The compute scissor (load-bearing). Cognition’s Scott Wu: global FLOPs are growing ~3x annually while inference demand grows ~10x annually. Wissner-Gross’s read: this scissor “forecasts price hikes and a flight to smaller, leaner models.” (The compounding math is worked after this list.)
- Application layer. Perplexity’s ARR has doubled to $500M since New Year’s. Tubi launched the first native ChatGPT app from a major streamer (“the chat window is the new channel guide”). Google folded NotebookLM into the Gemini app as Notebooks. Syncere’s Lume lamp-robot is pitched as the first mass-market home robot disguised as furniture.
- Substrate strain. TSMC’s CoWoS packaging capacity is compounding at 80% annually, with the majority earmarked for Nvidia. Meta committed an additional $21B to CoreWeave through 2032 (atop a prior $14.2B deal). OpenAI paused its UK Stargate buildout, citing energy and regulation. Epoch AI calculates that Chinese and open labs run on ~10x less compute than the frontier, “a gap that explains both their creativity and their urgency.” Germany is building the world’s tallest wind turbine (364m) inside a coal mine.
- Human stack. Life Biosciences raised $80M for anti-aging gene-therapy clinical trials. GLP-1 drugs are projected to add $13B in apparel sales as Americans “shrink out of their wardrobes.” The iPhone Fold is reportedly tracking for September.
- Disclosure timeline. Rep. Ogles says the White House registered “Aliens.gov”; Rep. Burchett’s HR 8197 would dissolve AARO entirely.
- Capital markets repricing. UBS’s HOLT model now pegs Nvidia fair value at $22T. OpenAI CFO Sarah Friar says retail will “for sure” get IPO shares. Closing line: “Capital markets are attempting to buy in while the Singularity is still priced in dollars.”
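Since the ByteDance item names a concrete mechanism, here is a minimal toy sketch of the generic fast-weights idea it gestures at: a projection matrix treated as writable memory, updated in place with a Hebbian outer-product rule as tokens stream by. The shapes, learning rate, and decay constant are illustrative assumptions, and this is emphatically not ByteDance’s published method, just the textbook pattern the name implies.

```python
import numpy as np

# Toy fast-weights sketch (NOT ByteDance's actual In-Place Test-Time Training):
# an MLP projection matrix doubles as writable memory that is updated in place
# as the context streams by, so long-range information persists without growing
# a KV cache. All dimensions and constants below are illustrative assumptions.

d_model, d_ff = 64, 256
rng = np.random.default_rng(0)

W_slow = rng.normal(0.0, 0.02, size=(d_ff, d_model))  # frozen pretrained weights
W_fast = np.zeros_like(W_slow)                        # test-time, in-place delta
LR, DECAY = 0.1, 0.99

def step(x: np.ndarray) -> np.ndarray:
    """Process one token embedding: read with slow+fast weights, then write."""
    global W_fast
    h = np.tanh((W_slow + W_fast) @ x)             # read: standard MLP projection
    W_fast = DECAY * W_fast + LR * np.outer(h, x)  # write: Hebbian outer product
    return h

# Stand-in for a long-context stream (e.g. the 128k-token regime in the item).
for _ in range(1000):
    _ = step(rng.normal(size=d_model))
```

The design point the item is gesturing at: the memory lives inside weights the model already has, so a small model can amortize long context without paying a growing attention bill.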
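And the compute-scissor arithmetic, worked as a napkin model. Assuming the two Scott Wu growth rates are the only drivers (supply ~3x/yr, demand ~10x/yr, both normalized to 1.0 at year 0, no feedback between the curves), the demand/supply ratio compounds at 10/3 ≈ 3.3x per year:

```python
# Napkin model of the compute scissor: supply (FLOPs) grows ~3x/yr, inference
# demand ~10x/yr, per the Scott Wu numbers quoted in the issue. Everything is
# normalized to 1.0 at year 0; the absolute units are irrelevant to the ratio.

SUPPLY_GROWTH, DEMAND_GROWTH = 3.0, 10.0

for year in range(6):
    supply = SUPPLY_GROWTH ** year
    demand = DEMAND_GROWTH ** year
    print(f"year {year}: demand/supply = {demand / supply:,.1f}x")

# year 0: 1.0x, year 1: 3.3x, year 2: 11.1x, year 3: 37.0x, year 4: 123.5x,
# year 5: 411.5x -- if prices clear the market, per-token price pressure
# compounds at ~3.3x annually until one of the two curves bends.
```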
Mapping against Ray Data Co (RDCO)
- The compute scissor (3x FLOPs vs 10x inference) is the strongest pull-quote. It approaches the Apr 13 issue’s “cheaper inference doesn’t mean cheaper AI” angle from the supply side and reinforces it: even before Jevons-style usage growth, the supply/demand ratio alone forecasts price hikes, compounding at roughly 3.3x per year (see the worked sketch after the Key threads list). RDCO clients pricing AI workloads on a flat per-token assumption are budgeting against the wrong curve. Sanity Check angle: “the per-token price is going up, not down — here’s the math.”
- “You ship the org chart you have” (Muse Spark) is a direct restatement of Conway’s law applied to model behavior, and it maps onto our internal critique of bolt-on AI strategies. If a data-labeling org ships a data-quality-strong, reasoning-weak model, then a consulting firm buying a generic chatbot ships a generic chatbot. Worth a Sanity Check Data Dot.
- Mythos as the “code is hazardous” moment is the specific event the Apr 13 master synthesis refers back to. Filing this issue gives us the contemporaneous framing, useful when the abundance-flywheel and Muddle-vs-Rails arguments need a “this is when the frame flipped” anchor.
- Furniture-shaped robots (Lume). Reinforces the Apr 13 “robots colonizing niches” thread. RDCO positioning note: the first mass-market embodied AI is unlikely to look like the humanoid in the keynote.
- xAI “clearly behind” memo. Useful corroboration for the Apr 13 thread on Anthropic’s revenue trajectory — the frontier is consolidating around 2-3 labs, not democratizing across 7.
- CoWoS at 80% annual compounding + UK Stargate pause. Substrate is the binding constraint, not training algorithms. Any RDCO recommendation that assumes “we’ll just buy more compute” needs to be stress-tested against packaging and energy bottlenecks; a napkin check follows this list.
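The napkin check, assuming CoWoS packaging (80% annual growth, per the TSMC figure above) stands in for all compute supply while inference demand compounds at the quoted ~10x per year. Treating packaging as the sole supply proxy is a deliberate simplification, but it shows how fast “just buy more compute” hits the substrate:

```python
# Stress-test for "we'll just buy more compute": CoWoS packaging capacity
# compounds at ~80%/yr (TSMC figure cited above) while inference demand grows
# ~10x/yr (Scott Wu figure). Using packaging as the sole supply proxy is a
# deliberate simplification; both series are normalized to 1.0 at year 0.

COWOS_GROWTH, DEMAND_GROWTH = 1.8, 10.0

gap = 1.0
for year in range(1, 5):
    gap *= DEMAND_GROWTH / COWOS_GROWTH
    print(f"year {year}: demand outruns packaging by {gap:,.1f}x")

# year 1: 5.6x, year 2: 30.9x, year 3: 171.5x, year 4: 952.6x -- any plan
# priced on elastic compute supply should be stress-tested against this curve.
```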
Related
- 2026-04-12-innermost-loop-singularity-immune-response — next issue in the series; anti-AI violence, autonomy horizons
- 2026-04-13-innermost-loop-welcome-apr-13 — Jevons paradox, abundance flywheel, Anthropic revenue trajectory; this Apr 9 issue is the upstream “Mythos arrival” anchor
- 2026-04-16-innermost-loop-welcome-apr-16 — later issue in the series for trend continuity
- 2026-04-10-stratechery-myth-and-mythos — Thompson’s contemporaneous take on the Mythos announcement
- 2026-04-13-stratechery-mythos-muse-compute — Mythos + Muse Spark + compute economics, three days later
- 2026-04-14-stratechery-openai-memos-anthropic — OpenAI’s response posture continues the staggered-rollout thread
- book-solve-everything-master-synthesis-2026-04-13 — same author; the synthesis already cited the abundance flywheel that this issue’s compute-scissor stat reinforces
- book-solve-everything-ch6-the-engine-2026-04-13 — abundance flywheel; the inference-demand scissor is the demand side of this engine
- book-solve-everything-ch3-the-mechanics-2026-04-13 — substrate constraints (CoWoS, Stargate pause) are the L0-L5 maturation curve’s physical-world friction
Copyright note
All quoted phrases above are 15 words or fewer and used for assessment-and-commentary purposes. The full article is at the source URL in the frontmatter; no body text was reproduced verbatim.