
moonshots emad mostaque agi governance

2023-11-22 · reference · source: Peter H. Diamandis (YouTube) · by Peter Diamandis / Emad Mostaque

“Who Will Govern the Future of AGI? with Emad Mostaque (Stability AI Founder)” — Peter H. Diamandis Moonshots (X Spaces)

Episode summary

Diamandis hosts Emad Mostaque (Stability AI founder) on X Spaces during the OpenAI board crisis (Sam Altman's firing and rehiring, November 2023). The conversation centers on AI governance, safety, and alignment. Mostaque argues that the AI safety debate has been "hijacked" by AGI existential-risk discussions, which trigger precautionary-principle thinking and push toward centralized authority, exactly the wrong outcome. His alternative framework: separate governance from safety; focus on inputs (training data quality and transparency) rather than outputs; establish data standards analogous to food ingredient standards; and build nationally owned AI models so countries aren't dependent on foreign black boxes ("not your models, not your mind").

He claims containment is impossible, asserting that China and Russia already have the GPT-4 weights (downloaded via USB), and that giant supercomputers are "a shortcut for bad quality data." Within 12-18 months, he predicts GPT-4-level performance on smartphones.

On alignment, Mostaque defines it as objective-function alignment (e.g., YouTube's engagement optimization inadvertently serving extremist content), not merely preventing AI from killing humanity. He advocates "quality and diversity" in AI training as a resilience strategy, arguing that a monoculture of AI models is more fragile than a diverse ecosystem. For Mostaque, the OpenAI board crisis poses the obvious question: how can we align AI with humanity's interests if we can't align a company's board with its employees' interests?
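To make the objective-function point concrete, here is a minimal toy sketch (all item names and numbers are invented for illustration, not from the episode): a ranker that maximizes raw watch time surfaces the inflammatory item, while adding even a crude quality term changes the winner.

```python
# Toy illustration of objective-function misalignment: the "aligned" and
# "misaligned" systems differ only in what they optimize, not in capability.

items = [
    # (title, expected_watch_minutes, quality_score in [0, 1])
    ("calm explainer", 7.0, 0.9),
    ("outrage clip", 11.0, 0.2),
    ("balanced debate", 8.0, 0.7),
]

def engagement_only(item):
    # Pure engagement objective: rewards watch time and nothing else.
    _, watch_minutes, _ = item
    return watch_minutes

def engagement_with_quality(item, quality_weight=8.0):
    # Same signal plus a simple quality term; weight is an arbitrary choice.
    _, watch_minutes, quality = item
    return watch_minutes + quality_weight * quality

print(max(items, key=engagement_only)[0])          # -> "outrage clip"
print(max(items, key=engagement_with_quality)[0])  # -> "calm explainer"
```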

Key arguments / segments

- The AI safety debate has been "hijacked" by AGI existential-risk framing, which triggers precautionary-principle thinking and pushes toward centralized authority
- Separate governance from safety; regulate inputs (training data quality and transparency) rather than outputs
- Establish data standards analogous to food ingredient standards (see the sketch after this list)
- National model ownership as sovereignty: "not your models, not your mind"
- Alignment understood as objective-function alignment, not only existential-risk prevention
- "Quality and diversity" in training as a resilience strategy; monoculture AI is more fragile than a diverse ecosystem
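As a concrete gloss on the food-label analogy, here is a minimal sketch of what a declared "ingredient list" for a training corpus might look like. Every field name and value here is a hypothetical for illustration; no such standard is defined in the episode or elsewhere in this note.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetLabel:
    """A declared 'ingredient list' for a training corpus."""
    name: str
    sources: list[str]            # where the data came from
    license_terms: str            # usage rights, like allergen warnings
    collected_through: str        # end of the collection window
    known_gaps: list[str] = field(default_factory=list)  # documented blind spots

label = DatasetLabel(
    name="example-corpus-v1",
    sources=["public web crawl", "licensed news archive"],
    license_terms="mixed; see per-source manifests",
    collected_through="2023-06",
    known_gaps=["low coverage of non-English legal text"],
)
print(label)
```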

Notable claims

- Containment is impossible: China and Russia already have the GPT-4 weights ("downloaded via USB")
- Giant supercomputers are "a shortcut for bad quality data"
- GPT-4-level performance on smartphones within 12-18 months

Bias / sponsor flags

Relevance to Ray Data Co

Medium-high. This is the most relevant episode in the batch for our work. The governance framework (inputs > outputs, data standards as food standards, national model ownership, alignment as objective function) is directly applicable to how we think about AI tooling decisions. The “not your models, not your mind” framing is potent for the Sanity Check newsletter. The tension between open and closed models is a recurring theme in our content. Worth cross-referencing with existing vault content on AI governance.