“The Realistic Future of AI With Brian Keating” — Moonshots EP #48
Episode summary
Diamandis and astrophysicist Brian Keating (UC San Diego, Arthur C. Clarke Center) have a wide-ranging conversation spanning AI, extraterrestrial intelligence, simulation theory, and the philosophy of consciousness. Keating argues we need a “Drake equation for AGI” with proper error bars rather than fear-mongering or hype. They debate whether AI can experience pain or joy (relevant to AGI alignment), whether intelligence inevitably leads to compassion, and the four factors driving AI acceleration (compute, labeled data, algorithm efficiency, and capital). Keating is skeptical of both extraterrestrial intelligence and AGI catastrophism, while Diamandis maintains optimism about AI as humanity’s most important tool.
Key arguments / segments
- [00:01:00] Introduction: conversation framed around Arthur C. Clarke’s vision, extraterrestrial intelligence debate, and AGI
- [00:09:00] Three new intelligences: AI alone, human-AI hybrid, and humans who opt out; is this an inevitable universal pattern?
- [00:14:00] Can AI experience joy or pain? Einstein’s “happiest thought” (free-fall equivalence principle) as a test case for whether AI could replicate the visceral experience that leads to breakthroughs
- [00:17:00] Nassim Taleb’s critique: AI tests being passed says more about the tests than about AI capability
- [00:22:00] Moore’s Law saturation: Keating reports DOE supercomputer allocations dropping annually despite hardware improvements; demand outpaces supply
- [00:29:00] Four factors driving AI acceleration: compute (still tracking Moore's Law), labeled data doubling yearly, algorithm efficiency (roughly a 99.5% improvement over five years), and capital inflow
- [00:31:00] Drake equation for AGI: AI risk predictions need error analysis; without error bars, both catastrophizing and optimism are unfounded (the original Drake equation is reproduced after this list for reference)
- [00:36:00] AI risk taxonomy: job displacement (real, near-term), malicious use by bad actors (real), AGI “terrible twos” (possible), Terminator scenario (highly improbable)
- [00:39:00] 2024 election deepfakes: Diamandis predicts AI-generated voice clones will cause havoc in the next election cycle
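For reference (the episode invokes the analogy without writing it down), the original Drake equation expresses the expected number of detectable civilizations in the Milky Way as a product of factors, each carrying large uncertainties; Keating's point is that an AGI analogue would need error bars on every term:

```latex
% Drake equation: N = expected number of detectable civilizations
% in the Milky Way. Every factor carries its own error bar.
N = R_* \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L
```

Here R_* is the star-formation rate, f_p the fraction of stars with planets, n_e the habitable planets per system, f_l / f_i / f_c the fractions developing life, intelligence, and detectable technology, and L the lifetime of the detectable phase.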
Notable claims
- Algorithm efficiency for training LLMs improved roughly 99.5% over the five years preceding mid-2023 (see the annualization sketch after this list)
- DOE supercomputer allocations are shrinking year over year despite hardware improvements; demand is outpacing supply
- ChatGPT’s core modules are a few thousand lines of code (corroborated by Gawdat in EP #51)
- We have known that galaxies exist beyond the Milky Way for only about 100 years (Hubble, 1923)
- Greater intelligence correlates with greater compassion — both Diamandis and Keating agree on this
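A quick sanity check on the efficiency figure, assuming (the episode does not define the metric precisely) that it means the compute cost of training to a fixed capability fell 99.5% over five years: that is a 200x improvement overall, or roughly a 65% cost reduction per year when compounded.

```python
# Annualize the "99.5% improvement over 5 years" claim, assuming it
# means a 99.5% reduction in the cost of training to a fixed capability.
total_reduction = 0.995  # 99.5% cheaper after five years
years = 5

improvement_factor = 1 / (1 - total_reduction)     # 200x overall
annual_factor = improvement_factor ** (1 / years)  # ~2.89x per year
annual_reduction = 1 - 1 / annual_factor           # ~65% cheaper per year

print(f"Overall: {improvement_factor:.0f}x")
print(f"Annualized: {annual_factor:.2f}x/year "
      f"(~{annual_reduction:.0%} annual cost reduction)")
```

Read this way, the figure stacks multiplicatively with the compute and data factors, which is presumably the compounding effect the four-factor framework points to.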
Bias / sponsor flags
- Episode sponsored by Levels (continuous glucose monitor) and Eight Sleep
- Keating and Diamandis are friends with shared social circles; the conversation lacks adversarial tension
- Keating’s self-described “AI minimalism” and skepticism about extraterrestrial intelligence are contrarian positions stated without strong evidence
- The “Drake equation for AGI” analogy is interesting but underdeveloped — no actual parameters proposed
Guests
- Brian Keating — Professor of Physics, UC San Diego. Executive Director, Arthur C. Clarke Center for Human Imagination. Author of Losing the Nobel Prize and Into the Impossible. Astrophysicist specializing in cosmic microwave background radiation.
Mapping against Ray Data Co
Low-medium relevance. The four-factor AI acceleration framework (compute, data, algorithm efficiency, capital) is a clean model for explaining AI progress to non-technical audiences and useful Sanity Check material. The "Drake equation for AGI" concept (demanding error bars on AI predictions) is a good skepticism tool. The 2024 election deepfake prediction has since been validated and is worth referencing in retrospective content. Otherwise this is a physics/philosophy conversation with limited operational applicability.
Related
- 2023-06-22-moonshots-ep51-mo-gawdat-ai-threat
- 2023-06-17-moonshots-ep49-rana-el-kaliouby-ai-ethics