Moonshots EP 183: Ex-Google CEO Eric Schmidt — What Artificial Superintelligence Will Actually Look Like
Summary
A deep conversation between Peter Diamandis, Dave Blundin, and Eric Schmidt (former Google CEO, author of Genesis, co-author with Kissinger on AI deterrence). Schmidt’s framing: AI is underhyped because it is a learning machine inside network-effect businesses, and its natural limit is electricity, not chips. He testified that the US needs 92 additional gigawatts for AI — equivalent to 92 large nuclear power stations — and notes that essentially zero new nuclear plants are being started. Schmidt outlines the “San Francisco consensus”: within 1-2 years, AI will produce world-class mathematicians and programmers, and since math and programming underpin everything else, this will accelerate physics, chemistry, biology, and materials science. He frames the timeline as 1.5-2x slower than Leopold Aschenbrenner’s predictions, putting specialized AI savants in every field within 5 years.
On China, Schmidt admits he was wrong about the 2-year lead — DeepSeek’s arrival showed that inference-time compute and distillation collapsed the gap faster than expected. His most provocative framework is “Mutual AI Malfunction” (co-authored with Dan Hendrycks and Alexandr Wang): the AI equivalent of mutually assured destruction, in which nations maintain the capability to cyber-attack each other’s AI infrastructure as a deterrent against crossing sovereignty-threatening capability thresholds. He compares the current moment to 1938 — the Einstein letter has been written, and the conversation about deterrence needs to start before Chernobyl-level AI events occur. Schmidt predicts the endgame is 10 nationalized super-models in multi-gigawatt data centers guarded like plutonium facilities, with the major proliferation risk being open-source models that could eventually run on small servers.
On the business side, he notes that MCP (Model Context Protocol) is enabling LLMs to connect directly to enterprise databases and write code, threatening 100,000 middleware companies. Dave highlights the economics of voice AI: customer service conversations worth $10-$1,000 each cost only 10-20 cents of compute.
Key Segments
- [00:00-09:00] AI is underhyped, energy as the bottleneck (92GW needed), nuclear/fusion too slow, chip efficiency startups proliferating
- [09:00-18:00] San Francisco consensus (AI math/coding in 1-2 years), savants in all fields within 5 years, scaffolding and self-improvement
- [18:00-28:00] China race, DeepSeek distillation, Mutual AI Malfunction deterrence framework, Kissinger’s realism, chip tracking
- [28:00-35:00] 10 nationalized super-models endgame, plutonium-level data center security, open-source proliferation risk
Notable Claims
- US needs 92 additional gigawatts for AI (one gigawatt = one large nuclear plant); essentially none being built
- Schmidt was “clearly wrong” about China being 2 years behind — inference-time compute and distillation collapsed the gap
- Endgame: ~10 nationalized super-models (5 US, 3 China, 2 elsewhere) in multi-gigawatt data centers guarded like nuclear facilities
- MCP enabling LLMs to connect directly to enterprise databases threatens 100,000 middleware companies built over 30 years
- Voice AI conversations worth $10-$1,000 each cost 10-20 cents of compute — demand massively exceeds supply
- Schmidt co-authored the “Mutual AI Malfunction” deterrence paper with Dan Hendrycks and Alexandr Wang