“Who Will Govern the Future of AGI? with Emad Mostaque (Stability AI Founder)” — Peter H. Diamandis Moonshots (X Spaces)
Episode summary
Diamandis hosts Emad Mostaque (Stability AI founder) on X Spaces during the chaotic OpenAI board crisis (Sam Altman’s firing and rehiring, Nov 2023). The conversation centers on AI governance, safety, and alignment. Mostaque argues that the AI safety debate has been “hijacked” by AGI existential-risk discussions, which trigger precautionary-principle thinking and push toward centralized authority, in his view exactly the wrong outcome. His alternative framework: separate governance from safety; focus on inputs (training-data quality and transparency) rather than outputs; establish data standards analogous to food ingredient standards; and build nationally owned AI models so countries aren’t dependent on foreign black boxes (“not your models, not your mind”). He claims containment is impossible, asserting that China and Russia have already downloaded the GPT-4 weights via USB, and that giant supercomputers are “a shortcut for bad quality data.” Within 12-18 months, he predicts GPT-4-level performance on smartphones. On alignment, Mostaque defines it as objective-function alignment (YouTube’s engagement optimization inadvertently serving extremist content), not just preventing AI from killing humanity. He advocates “quality and diversity” in AI training as a resilience strategy, arguing that monoculture AI is more fragile than a diverse AI ecosystem. The OpenAI board crisis prompts his pointed question: “how can we align AI with humanity’s interests if we can’t align a company’s board with its employees’ interests?”
Key arguments / segments
- [00:03:00] Governance vs. safety: safety debate hijacked by AGI risk; precautionary principle leads to centralization; separate the two
- [00:05:00] National AI models: Stability training models for half a dozen nations; plan to give ownership back to citizens; “not your models, not your mind”
- [00:08:00] Inputs matter more than outputs: high-quality data without anthrax knowledge won’t produce anthrax instructions; data transparency is the real safety lever
- [00:10:00] Containment is impossible: China/Russia already have GPT-4 weights; giant supercomputers are shortcuts for bad data quality; 92% of Stable Diffusion data unused 99% of the time
- [00:11:00] GPT-4 on smartphones in 12-18 months; distributed training makes centralization obsolete
- [00:15:00] OpenAI crisis as governance case study: “better politics in a teenage sorority”; an unelected board making decisions about technology that could “upend entire society” is anti-democratic
- [00:20:00] Alignment as objective function: YouTube’s engagement optimization served extremist content; the real alignment risk is persuasive AI serving ads, not killer robots (first sketch after this list)
- [00:22:00] Quality and diversity: merging models trained in different cultures improves performance; monoculture is more fragile (second sketch after this list)
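
A minimal sketch of the “alignment as objective function” point. Everything here is invented for illustration (item names, scores, and the penalty weight `lam` are hypothetical, not from the episode): the same engagement signal, with and without an explicit cost on extremity, ranks content very differently.

```python
# Toy illustration (not from the episode): outrage tends to drive engagement,
# so a recommender maximizing raw engagement drifts toward extreme content.
# Changing the objective function, not the model, changes what gets served.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_engagement: float  # e.g. expected watch minutes (hypothetical)
    extremity: float             # hypothetical 0-1 harm/extremity score

CATALOG = [
    Item("calm explainer", predicted_engagement=3.0, extremity=0.1),
    Item("heated debate", predicted_engagement=5.0, extremity=0.4),
    Item("outrage bait", predicted_engagement=8.0, extremity=0.95),
]

def engagement_only(item: Item) -> float:
    # Objective A: maximize engagement alone ("misaligned" in Mostaque's sense).
    return item.predicted_engagement

def penalized(item: Item, lam: float = 6.0) -> float:
    # Objective B: same signal, minus an explicit cost on extremity.
    return item.predicted_engagement - lam * item.extremity

print(max(CATALOG, key=engagement_only).title)         # -> outrage bait
print(max(CATALOG, key=lambda i: penalized(i)).title)  # -> heated debate
```
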
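The episode does not specify how models would be “merged”; one common technique the claim may refer to is uniform weight averaging (a “model soup”), sketched below under that assumption. It only applies to checkpoints sharing one architecture (identical parameter names and shapes).

```python
# Hedged sketch of model merging via uniform weight averaging ("model soup").
# Assumption: this is what "merging models" means here; the episode doesn't say.
import numpy as np

def merge_state_dicts(dicts: list[dict[str, np.ndarray]]) -> dict[str, np.ndarray]:
    """Average each parameter tensor across checkpoints of the same architecture."""
    keys = dicts[0].keys()
    assert all(d.keys() == keys for d in dicts), "architectures must match"
    return {k: np.mean([d[k] for d in dicts], axis=0) for k in keys}

# Toy usage: two "checkpoints" with one weight matrix each.
a = {"layer.w": np.ones((2, 2))}
b = {"layer.w": 3 * np.ones((2, 2))}
print(merge_state_dicts([a, b])["layer.w"])  # all entries 2.0
```
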
Notable claims
- China/Russia already downloaded GPT-4 weights via USB; containment is a false premise
- 92% of Stable Diffusion training data is unused 99% of the time
- GPT-4 level performance on smartphones within 12-18 months (from Nov 2023)
- The PaLM architecture (a 540B-parameter model) recreated in 206 lines of Python (lucidrains on GitHub)
- Algorithms are not complicated; structuring data and running supercomputers are the hard parts (see the sketch after this list)
- OpenAI’s own “Road to AGI” document says democracy likely won’t survive AGI
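
To ground the “algorithms are not complicated” claim, here is scaled dot-product attention, the core operation behind transformer models like PaLM, in a handful of lines of numpy. This is an illustrative sketch of the claim’s plausibility, not the lucidrains implementation referenced above.

```python
# The math at the heart of a transformer: softmax(QK^T / sqrt(d)) V.
# Per Mostaque, the hard parts are data curation and large-scale training
# engineering, not this.
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention over a sequence of token vectors."""
    d = q.shape[-1]
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d)  # token-to-token similarity
    return softmax(scores) @ v                    # weighted sum of values

# Tiny smoke test: 4 tokens, 8-dim vectors.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(q, k, v).shape)  # (4, 8)
```
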
Bias / sponsor flags
- Mostaque is founder of Stability AI, a direct competitor to OpenAI; his advocacy for open-source models serves his commercial interests
- The “containment is impossible” argument conveniently supports his open-source business model
- The GPT-4-on-smartphones timeline was aggressive and has not fully materialized as described
- Mostaque resigned from Stability AI in March 2024 amid financial and leadership challenges; his governance prescriptions did not survive contact with his own company
- No counterpoint from closed-model advocates (OpenAI, Anthropic, Google)
- The “China already has the weights” claim is presented as fact without evidence
Relevance to Ray Data Co
Medium-high. This is the most relevant episode in the batch for our work. The governance framework (inputs > outputs, data standards as food standards, national model ownership, alignment as objective function) is directly applicable to how we think about AI tooling decisions. The “not your models, not your mind” framing is potent for the Sanity Check newsletter. The tension between open and closed models is a recurring theme in our content. Worth cross-referencing with existing vault content on AI governance.