Summary
FII panel moderated by Peter Diamandis with three AI CEOs: Prem Akkaraju (Stability AI), Richard Socher (You.com, former Stanford professor, Hugging Face early investor), and Kai-Fu Lee (01.AI). Discussion covers what comes after GPT-4o: multimodal models, protein language models for medicine, AI-generated film/TV, the Jevons Paradox of intelligence, and the US-China AI dynamic. Kai-Fu Lee reiterates 01.AI’s efficiency story ($3M training cost vs GPT-4’s $80-100M).
Key Segments
- [00:04] Prem on James Cameron joining Stability AI's board; Avatar 2 frames took 6,000-7,000 hours to render — AI could cut this to minutes
- [00:06] Prem predicts 5-20x more content creation; different “time signatures” for media (2-minute to feature-length)
- [00:09] Richard Socher on NLP evolution: multimodal models as next frontier; protein LLMs as breakthrough modality
- [00:11] Upper bounds of intelligence discussion: electromagnetic spectrum, quantum limits, speed of light constraints
- [00:14] Jevons Paradox of intelligence: cheaper AI leads to more AI usage, not less
- [00:15] Richard’s plumber example: domains without data collection are safest from AI disruption
- [00:17] Kai-Fu Lee on why he switched from investor to entrepreneur for generative AI
- [00:20] 01.AI trains competitive model for $3M (vs GPT-4’s $80-100M); inference at 10 cents/M tokens vs $4.40
- [00:24] Career advice split: Prem says don't learn to code (English is the new programming language); Richard disagrees (learn the foundations); Lee says follow your heart
Notable Claims
- 80% of all AI-generated images in 2023 were created with Stability AI's Stable Diffusion model
- Richard Socher invested in Hugging Face at a $5M valuation; it's now worth $4.5B
- 01.AI inference cost: 10 cents/million tokens (1/30th of comparable models)
- GPT-5 rumored to cost ~$1B to train (as of Nov 2024)
- Socher's team created the first LLM-generated protein that is 40% different from any naturally occurring protein, with antibacterial properties
Guests
- Prem Akkaraju — CEO of Stability AI
- Richard Socher — CEO/founder of You.com, AIX Ventures, former Stanford professor
- Kai-Fu Lee — CEO of 01.AI, Sinovation Ventures ($3B AUM), 12 AI unicorns
Assessment
Compact FII panel with three substantive AI leaders. The protein LLM discussion (Socher) is the highest-signal segment — the idea that LLMs can generate novel proteins with specific binding properties, trained via simulation loops, is a concrete example of AI capability expansion beyond text. The Jevons Paradox framing for intelligence commoditization is a useful mental model. Kai-Fu Lee’s efficiency narrative overlaps heavily with EP134 (same data points, same timeframe). Prem’s film/TV predictions are directionally obvious but light on specifics. The career advice split at the end is genuinely interesting — three AI CEOs disagreeing on whether to learn programming. No sponsor contamination (panel format at FII event).