“Ex-Google CEO on Government AI Policy & Deepfakes” — Peter H. Diamandis Moonshots EP #99
Episode summary
Diamandis interviews Eric Schmidt (former Google CEO) in a compact 35-minute conversation covering AI policy, national security, and the shift from “language to action” AI. Schmidt argues we’re approaching a world in which everyone has access to a digital polymath, and that the critical near-term shift is AI moving from generating text to generating programs and taking autonomous action. He outlines threshold danger points (recursive self-improvement, agents inventing their own languages, advanced math capabilities) and discusses the US-China AI competition, deepfake threats to elections, open- vs closed-model dynamics, and the promise of quantum simulation through Sandbox AQ.
Key arguments / segments
- [00:04:00] Language to action: the next phase of AI moves from text-to-text to text-to-program, enabling autonomous execution of complex tasks
- [00:07:00] AI safety thresholds: recursive self-improvement, agents inventing their own languages, and advanced autonomous math are the key danger tripwires; Schmidt’s group of ~20 scientists believes “we’re okay now but worried about the future”
- [00:10:00] US-China competition: China is stuck at ~7nm chips (US at 3nm heading to 1.4nm), but the gap is temporary; China compensates with 5x spending on training runs
- [00:12:00] Elections and deepfakes: social media companies bear responsibility for misinformation spread; Taylor Swift deepfakes showed that even strong guardrails can be circumvented by motivated actors
- [00:18:00] Ukraine as AI warfare preview: with drones ubiquitous, tanks and artillery become obsolete; the cost-exchange ratio (a ~$5K drone destroying a ~$5M tank) makes territorial invasion economically impossible
- [00:23:00] Open vs closed models: Schmidt predicts a small number of heavily regulated AGI systems alongside a large ecosystem of mid-size open-source models; restricting training data makes models more brittle
- [00:34:00] Sandbox AQ and quantum simulation: using quantum simulators (not quantum computers) to perturb drug molecules for improved efficacy and shelf life
Notable claims
- Training runs approaching $500M each and escalating quickly
- Restricting training data to “only good information” produces more brittle models, not safer ones
- The drone cost-exchange ratio ($5K drone vs $5M tank) could make territorial invasion economically impossible
- Red teaming AI will become its own standalone industry
Bias / sponsor flags
- Viome sponsorship: mid-roll by Diamandis promoting gut health testing company (viome.com)
- Fountain Life sponsorship: standard mid-roll
- Schmidt is chairman of Sandbox AQ, which is discussed positively; he also has personal financial interests in AI policy outcomes
- Short format (35 min) means many claims are surface-level assertions without supporting evidence
Relevance to Ray Data Co
Moderate. The “language to action” framing is the most valuable takeaway — AI shifting from generating text to generating and executing programs. The open-vs-closed model prediction (few regulated AGI + many mid-size open-source) is worth tracking against reality. The red-teaming-as-industry prediction could be a market opportunity signal.