06-reference

tim urban ai revolution part 1

2026-05-07 · reference · source: Wait But Why · by Tim Urban

ai-history · ani-agi-asi · public-narrative · exponential-thinking · intelligence-substrate

“The AI Revolution: The Road to Superintelligence (Part 1)” — @waitbutwhy

Date caveat

This piece is from January 2015: pre-transformer (June 2017), pre-GPT-3 (2020), pre-ChatGPT (Nov 2022), pre-Anthropic-as-we-know-it. It predates every LLM artifact that shapes the 2026 conversation about AI. The specific predictions, capability examples, and timeline calibrations are 2015 museum pieces: DO NOT cite this as a current technical reference, do not let its example set anchor any present-day capability claim, and do not pull its expert-survey AGI dates into 2026 strategy work.

What IS evergreen and worth citing: the public-narrative frameworks (ANI/AGI/ASI staircase, exponential-vs-linear projection, intelligence-as-substrate, train-station metaphor). These are the dominant scaffolds non-technical readers — and the agents trained on their writing — still use to think about AI’s trajectory. Cite for framework, never for fact.

Why this is in the vault

Urban’s three-tier staircase is the load-bearing public mental model for AI. Most readers of Sanity Check (and most LLM agents that scrape it) were shaped either directly by this essay or by the downstream commentary that adopted Urban’s vocabulary. For Sanity Check v3 practitioner positioning to land, we need to know which scaffold the audience is silently filtering against. This note is the baseline reference: what to lift (the exponential intuition) and what to push back on (the staircase as the operational frame for 2026).

Also: the essay is one of the most-cited AI explainers in the last decade. GEO-strategy decisions (will agents that scrape Sanity Check align our framing with prior-trained narratives, or will they treat us as off-distribution?) need this as a known anchor.

The core argument

Urban builds a three-tier taxonomy: Artificial Narrow Intelligence (ANI) — task-bounded systems already pervasive in 2015 (spam filters, search, autonomous-vehicle perception); Artificial General Intelligence (AGI) — human-level reasoning across domains, which expert surveys (cited as median ~2040) treat as 25-ish years out; Artificial Superintelligence (ASI) — capability vastly exceeding human cognition across every domain.

The mechanism between AGI and ASI is the central provocation: recursive self-improvement compresses the transition. Once a system can improve its own architecture, each improvement enables larger next-step improvements, and the curve goes vertical. Urban sketches a scenario where a system reaches four-year-old comprehension and within 90 minutes is 170,000x human capacity. Hence the “train station” metaphor — AGI is not a destination, it’s a station the system passes through on the way to ASI, and the dwell time is short.
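The compression claim can be made concrete with a toy model. All parameters below are invented for illustration — the essay specifies no numbers for this dynamic — but the shape matches Urban’s argument: if each improvement cycle multiplies capability and also enlarges the next cycle’s multiplier, the trajectory spends almost no time inside the human band.

```python
def recursive_self_improvement(cycles, cap=0.001, gain=1.2, meta=1.1):
    """Toy trajectory: each cycle multiplies capability `cap` by `gain`,
    and `gain` itself grows by `meta` -- improvements beget bigger
    improvements. All parameters are invented for illustration."""
    history = []
    for _ in range(cycles):
        cap *= gain
        gain *= meta
        history.append(cap)
    return history

traj = recursive_self_improvement(40)
# Cycles spent in the "human band" (0.5x to 2x human capacity, cap = 1.0):
dwell = sum(1 for c in traj if 0.5 <= c <= 2.0)
```

With these made-up parameters the system sits in the human band for only a cycle or two out of forty — the train-station picture in miniature. The point is structural (superexponential curves have short dwell times at any fixed level), not a prediction about real systems.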

Surrounding this is the exponential-vs-linear epistemology: humans calibrate against recent-decade change rates and extrapolate linearly, but progress actually follows S-curves stacked on top of each other. The Law of Accelerating Returns posits that advancement rates themselves accelerate — more advanced civilizations progress faster — so the 1750-to-2015 jump dwarfs the 1500-to-1750 jump even though the two spans are roughly the same length.
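The projection gap reduces to simple arithmetic. A sketch with invented numbers (one abstract "unit of progress" per year today; a 5%/yr compounding of the rate stands in for the Law of Accelerating Returns — neither figure comes from the essay):

```python
def linear_total(rate, years):
    """Extrapolate by freezing the recent annual rate of progress."""
    return rate * years

def accelerating_total(rate, growth, years):
    """Let the annual rate itself compound by `growth` each year
    (geometric series: rate * (growth**years - 1) / (growth - 1))."""
    return rate * (growth ** years - 1) / (growth - 1)

linear_total(1.0, 50)              # 50 units over 50 years
accelerating_total(1.0, 1.05, 50)  # ~209 units -- 4x the linear projection
```

Even a modest compounding rate makes the linear projection badly undershoot over multi-decade horizons, which is the whole of Urban’s calibration point.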

Plus the substrate argument: even a modest AGI would dominate biological humans on hardware (neurons fire at ~200 Hz vs processors at >2 GHz; neural signals travel ~120 m/s vs near-light-speed optical links), software (patchable, networkable, copyable), and collective capability (instantaneous synchronization across instances).
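The hardware gap is just two ratios computed from the essay’s own 2015 figures (arithmetic only — clock speed is of course not a direct proxy for intelligence, and Urban doesn’t claim it is):

```python
# Figures as cited in the essay: neurons peak around 200 Hz vs processors
# over 2 GHz; axonal signals travel ~120 m/s vs ~3e8 m/s over optical links.
neuron_hz, cpu_hz = 200, 2e9
axon_mps, optical_mps = 120, 3e8

clock_ratio = cpu_hz / neuron_hz       # 10,000,000x cycle-rate advantage
signal_ratio = optical_mps / axon_mps  # 2,500,000x signal-speed advantage
```

Six-to-seven orders of magnitude on both axes is why the substrate framing survives even heavy discounting of the specific numbers.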

Key frameworks

Mapping against Ray Data Co

Notable quotes

Open follow-ups