“Why I’m Leaving My Company Immediately (Stability AI)” — Peter H. Diamandis Moonshots EP #93
Episode summary
Diamandis interviews Emad Mostaque five days after his departure as CEO of Stability AI. Mostaque frames his exit through the Japanese concept of ikigai: he excels as a research leader and strategist, not as a CEO handling HR, operations, and business development. The bulk of the conversation is his vision for decentralized AI: a national AI strategy for every country, data governance owned by citizens, web3-based coordination infrastructure, and a “human operating system” built on open models. He argues there is only a 1-2 year window before centralized AI becomes the entrenched default. The episode also covers the cost deflation of model training (LLaMA 2 70B from $10M to a projected $10K within a year), Intel GPUs reportedly running Stable Diffusion 3 training faster than Nvidia’s, and his plans to catalyze a decentralized AI stack.
Key arguments / segments
- [00:02:00] Why he left: the ikigai framework; there had been calls for his departure since 2022; the company has momentum and now needs a media-focused business CEO
- [00:08:00] AI governance crisis: OpenAI board dysfunction proves centralized governance of transformative AI doesn’t work; “who governs the data that teaches your child?”
- [00:15:00] Decentralized AI framework, three pillars: accessibility (open to everyone, not gated), governance (democratic, not corporate), and modular infrastructure (not monolithic)
- [00:17:00] Small window: 1-2 years before centralized AI becomes the entrenched default; every government must have an AI strategy by end of year
- [00:24:00] Data as infrastructure: national data sets (broadcast, curriculum, legal) are more important than models; models are “data wrapped in algorithms with compute”
- [00:25:00] Cost deflation: LLaMA 2 70B training cost from $10M to a projected sub-$10K within a year; Stable Diffusion runs on MacBooks
- [00:28:00] Render Network + OTOY: first web3 move; 1M distributed GPUs to create open 3D datasets; goal of 1 billion high-quality 3D assets
- [00:31:00] GPU commoditization: Intel GPUs already faster than Nvidia for SD3 diffusion transformer training; Nvidia’s 87% margins are temporary
Notable claims
- Training cost for LLaMA-class models will drop from $10M to under $10K in one year, a 1,000x improvement (see the arithmetic sketch after this list)
- Stable Diffusion 3 runs faster on Intel GPUs than Nvidia GPUs (unoptimized for either)
- Stability AI had 300M+ downloads across models in two years with a fraction of competitor resources
- Mostaque still majority shareholder of Stability AI at time of interview
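For scale, here is a back-of-envelope sketch of what the claimed $10M-to-$10K trajectory implies. The dollar figures are the podcast’s; the assumption of a smooth monthly decline is ours, for illustration only:

```python
# Back-of-envelope check of the claimed training-cost deflation.
# Figures are the podcast's claims, taken at face value; the smooth
# monthly decline is our assumption, not anything Mostaque stated.
start_cost = 10_000_000  # claimed current LLaMA-class training cost ($)
end_cost = 10_000        # projected cost one year out ($)
months = 12

total_factor = start_cost / end_cost            # 1,000x overall
monthly_factor = total_factor ** (1 / months)   # ~1.78x per month

print(f"Overall deflation: {total_factor:,.0f}x")
print(f"Implied smooth monthly deflation: {monthly_factor:.2f}x "
      f"(costs fall ~{(1 - 1 / monthly_factor) * 100:.0f}% each month)")
```

Spread evenly, the claim amounts to roughly a 44% cost drop every single month for a year, which is a useful sense of how aggressive the projection is.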
Bias / sponsor flags
- Fountain Life sponsorship: standard mid-roll (note: Diamandis is a co-founder of Fountain Life)
- Mostaque has massive financial and reputational incentive to frame his departure positively
- His “decentralized AI” vision conveniently positions him as the independent voice the world needs, exactly when he lost his corporate platform
- Cost deflation claims ($10M to $10K) are aspirational projections, not demonstrated
- The “Intel faster than Nvidia” claim needs context: it is specific to one workload (SD3 diffusion transformer training) on stacks unoptimized for either vendor (a rough way to sanity-check such claims is sketched below)
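A minimal sketch of how one might sanity-check a cross-vendor throughput claim like this, assuming PyTorch 2.4+ (for Intel `xpu` backend support) and using a toy transformer block as a stand-in for the actual SD3 workload, which is not public in reproducible form:

```python
# Rough harness for comparing per-step training time across vendors.
# Assumptions: PyTorch >= 2.4 (torch.xpu for Intel GPUs); the toy
# TransformerEncoderLayer is a stand-in, NOT the real SD3 workload.
import time
import torch

def step_time(device: str, steps: int = 20) -> float:
    """Average seconds per training step for a toy transformer block."""
    torch.manual_seed(0)
    model = torch.nn.TransformerEncoderLayer(
        d_model=1024, nhead=16, batch_first=True
    ).to(device)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    x = torch.randn(8, 256, 1024, device=device)  # (batch, seq, dim)

    def sync():
        # Wait for queued kernels so wall-clock timing is meaningful.
        if device.startswith("cuda"):
            torch.cuda.synchronize()
        elif device.startswith("xpu"):
            torch.xpu.synchronize()

    for _ in range(3):  # warmup so allocation/compilation don't skew timing
        opt.zero_grad()
        model(x).mean().backward()
        opt.step()
    sync()

    t0 = time.perf_counter()
    for _ in range(steps):
        opt.zero_grad()
        model(x).mean().backward()
        opt.step()
    sync()
    return (time.perf_counter() - t0) / steps

# e.g. compare step_time("cuda:0") vs step_time("xpu:0") on comparable tiers
```

A single toy workload on two arbitrary cards will not settle the claim; comparable hardware tiers, realistic model sizes, and tuned software stacks all matter, which is exactly the caveat flagged above.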
Relevance to Ray Data Co
Moderate. The 1,000x cost deflation prediction for model training is the most actionable signal — if true, it means the moat in AI shifts entirely from compute to data and distribution. The “data as infrastructure” framing and national AI model concept are worth tracking. The founder-CEO separation narrative is a useful case study for startup governance. The web3+AI convergence thesis remains speculative.