06-reference

innermost loop singularity measured by minds

Sun May 03 2026 20:00:00 GMT-0400 (Eastern Daylight Time) · reference · source: The Innermost Loop · by Alexander Wissner-Gross
singularity · agi-timeline · hyperscaler-capex · robotics · biology-indexing · ai-governance

“Welcome to May 4, 2026” — @Alexander Wissner-Gross

Why this is in the vault

Wissner-Gross opens with the issue’s load-bearing line: “The Singularity is being measured by the very minds it’s about to outpace.” That is the through-line for the L5 question of who is qualified to call the timeline. Filing because (a) this is the first issue where AWG explicitly stitches the Brockman 80%-to-AGI estimate, the Hassabis-plays-Gemini chain-of-thought trace, and the Harmonic original-math-proof datapoint into a single “lab heads now read their creations like grandmasters” frame, and (b) the macro thread (Morgan Stanley $805B 2026 / $1.1T 2027 hyperscaler capex, xAI 11% GPU utilization, Toto pivoting to electrostatic chucks) directly informs the RDCO L5 capex-substrate read from yesterday’s 2026-05-03-innermost-loop-all-that-is-solid-melts-into-compute.

The core argument

Hybrid issue: an essayistic spine (“the Singularity is being measured by the very minds it’s about to outpace”) plus a curated week-in-review with ~25 outbound links. Six thematic clusters:

  1. AGI proximity claims from the lab heads themselves. Brockman estimates “about 80% of the way to AGI.” Altman concedes smarter still beats cheaper-and-faster, warns users to “get ready for their lives to be changed” by the post-GPT-5.5 leap. Hassabis (former chess prodigy) plays Gemini chess specifically to trace its chain-of-thought and sense when it reasons itself into trouble. The signal is that the people best positioned to measure the curve are the ones building it, which is itself the title’s frame.

  2. Models contributing original work back. Harmonic’s formal-reasoning agent solving recently posed research problems with proofs that leading number theorists call “correct, simple, elegant, and beautiful,” with novel ideas of its own. NIST CAISI puts Chinese frontier models 8 months behind the US; an independent adjustment for token usage and eval freshness widens that gap beyond the crude 4-5 month benchmark consensus.

  3. Hyperscaler capex as macroeconomic substrate. Morgan Stanley: $805B in 2026, $1.1T in 2027 across the five hyperscalers — roughly equal to all non-tech S&P 500 capex combined. Sacks: AI = 75% of Q1 GDP growth, 2.5-3% capex tailwind; “polls may show AI to be unpopular but economic growth never is.” xAI reportedly using only 11% of its 550K Nvidia GPUs vs Meta/Google at 43-46% — large reservoir of latent compute. Form factor going vertical (52m towers in Tokyo parking lots, Japan’s $23B DC market growing 50% by 2030) and orbital (Starcloud at $2.2B valuation one month after $1.1B). Toto shares +18% to five-year high after revealing it is now the world’s #2 producer of electrostatic chucks for NAND.

  4. Robots filling human-shaped service-economy holes. China May Day humanoid retail kiosks. Hyundai pushing Boston Dynamics from four Atlas units/month toward the tens of thousands needed across its plants, with a new manufacturing facility opening soon. Sonic Fire Tech testing acoustic fire suppression with CAL FIRE. Suno: 2M+ paying users, $300M ARR on AI-generated music alone.

  5. Biology being indexed at every scale. fMRI revealing three distinct ADHD subtypes (one with severe emotional dysregulation), giving the disorder a brain-resolved taxonomy. Johns Hopkins whole-organism 3D mapping of macaque/mouse/turtle embryo vasculature (fractal dimension ~3, space-filling) vs nerves (fractal dimension ~2, sheet-like, signal-optimized). Taiwanese 89-91 year old grandmothers training with barbells. Tangentially: 31-year-old French toy spaniel, cocaine-exposed Atlantic salmon dispersing 2x further (vertebrate reward circuit older than the jawbone), Project CETI’s autonomous “backseat driver” glider with four-element hydrophone array silently steering toward sperm whale pods.

  6. Government rewritten with or without invitation. UAE directing 50% of federal operations to run on agentic AI within two years. South Africa’s communications minister forced to withdraw a draft national AI policy after discovering it had been written by AI, complete with fictitious academic citations.
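The utilization gap in cluster 3 can be sanity-checked with back-of-envelope arithmetic. A minimal sketch: the fleet size (550K GPUs) and utilization figures (11% vs 43-46%) are from the issue; the "latent reservoir" framing, the function name, and the 45% peer midpoint are my own illustrative assumptions.

```python
# Back-of-envelope on the xAI utilization datapoint. Fleet size and
# utilization rates are from the issue; everything else is illustrative.

def idle_gpus(fleet: int, utilization: float) -> int:
    """GPUs not in active use at a given utilization fraction."""
    return round(fleet * (1 - utilization))

XAI_FLEET = 550_000                       # reported Nvidia GPU count
xai_idle = idle_gpus(XAI_FLEET, 0.11)     # idle at reported 11% utilization
peer_idle = idle_gpus(XAI_FLEET, 0.45)    # assumed midpoint of Meta/Google 43-46%

print(xai_idle)               # 489500 — nearly nine-tenths of the fleet idle
print(xai_idle - peer_idle)   # 187000 — headroom vs. peer-level utilization
```

Roughly 490K idle GPUs at 11% utilization, versus ~300K idle even at peer rates: the "large reservoir of latent compute" claim is about 187K GPU-equivalents of recoverable headroom, before any new capex lands.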

Closing aphorism: “Any sufficiently advanced AI is indistinguishable from civilization.” A direct rewrite of Clarke’s third law, swapping magic for civilization itself as the indistinguishability target. Companion to yesterday’s “All that is solid melts into compute” Marx rewrite — AWG is now in a sustained literary register, recoding canonical aphorisms with compute as the load-bearing noun.

Mapping against Ray Data Co

Verdict: medium-strong. Two load-bearing implications for the L5 build-out timeline.

Implication 1 — the AGI-timeline measurer problem feeds directly into RDCO’s “agents-vs-platforms-vs-tooling” thesis. From the L5 north star doc (project_l5_north_star_strategic_direction): the operational bet is that agent capability scales fastest, so RDCO should unhobble the COO agent first and let bets be downstream. The title argument (“the Singularity is being measured by the very minds it’s about to outpace”) is a dependency check on that bet. If the people building the models are the only ones with the read, then external timeline signals (AISI evaluations, Brockman 80% claims, Hassabis chain-of-thought intuition) are downstream of lab-internal qualia. RDCO cannot calibrate L5 timing from public benchmarks alone — the load-bearing input is what the lab heads are saying about model behavior they have private access to. This argues for tracking lab-head primary statements (Brockman, Altman, Hassabis, Amodei) as a higher-signal layer than benchmark headlines. Worth a single line in the curiosity skill or discover-sources skill: lab-head interviews are tier-1 evidence for L5 timing recalibration.

Implication 2 — capex-substrate read from yesterday’s issue is reinforced and given a number. $805B / $1.1T hyperscaler capex (≈ all non-tech S&P 500 capex combined) is the macro context Ray is being deployed into. Combined with xAI’s 11% GPU utilization, the bottleneck is not compute supply but agent-throughput-per-dollar-of-compute. This is exactly the metric the L5 unhobbling work optimizes for: better tools + better visibility = more useful agent cycles per token spent. Reinforces deferring small bets until the agent-leverage ramp is in place.
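The agent-throughput-per-dollar metric above can be made concrete. A minimal sketch: the metric name comes from the note itself, but the function signature, the cycle counts, and the "2x from unhobbling" multiplier below are all illustrative assumptions, not figures from the issue.

```python
# Illustrative definition of the L5 metric: useful agent cycles delivered
# per dollar of compute spend. All numbers below are made up for the example.

def agent_throughput_per_dollar(useful_agent_cycles: float,
                                compute_spend_usd: float) -> float:
    """Unhobbling (better tools + better visibility) raises the numerator
    without touching the denominator."""
    return useful_agent_cycles / compute_spend_usd

# Same spend; assume unhobbling doubles useful cycles per token.
baseline = agent_throughput_per_dollar(1_000_000, 10_000)    # 100.0 cycles/$
unhobbled = agent_throughput_per_dollar(2_000_000, 10_000)   # 200.0 cycles/$
```

The point of writing it this way: in a world of 11% utilization, the denominator (compute spend) is already sunk, so the only lever the L5 work controls is the numerator. That is why the note argues for deferring small bets until the agent-leverage ramp is in place.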

Specific items worth surfacing into other workstreams:

Where this DOESN’T extend: no operational tactic, no new framework name, no new author worth adding to the CRM. Same register as the May 3 issue — this is positioning evidence and timeline-input, not a tool to ship.

Curation section — notes

All ~25 outbound links are Substack-redirect-wrapped third-party primary sources (Morgan Stanley reports, NIST evaluations, lab announcements, news outlets). No self-cross-promo from Wissner-Gross’s own body of work — he does not link to his own papers, his lab Gemedy, or prior Innermost Loop issues. No sister-publication promo. No affiliate patterns beyond standard Substack tracking.

The two “Subscribe for free” CTAs are standard Substack platform inserts, not paid sponsorship.

Sponsor status: clean. No paid sponsor block, no advisor pitch, no consulting CTA. Wissner-Gross’s commercial interest (Gemedy AI lab) is not surfaced in this issue, consistent with his pattern.