So the number one concern globally is cost of living, and tied very closely to that is unemployment: will I get a job? And then the third concern is poverty and social inequities. And we talk about, you know, a future of abundance. We talk about demonetization. But this is the reality of what people are feeling. This is a story about preparing for the worst, maybe the worst thing any of us can imagine. The wealth isn't going to go to, um, the people who are doing the work or the people who get unemployed. It's going to go to making the rich richer and the poor poorer. >> You've got a third of your income going into your phone and your data plan, and all that money funnels out of the country and kind of lands in, like you said, Silicon Valley and Boston, you know. And then you add AI as a layer on top of that, and the gap is going to get really, really wide. So that's the reality for a huge fraction of the world's population. The question is, how do we help people believe in a hopeful and compelling future? Cuz if they don't believe it's a hopeful and compelling future, um,
[00:01:02] >> now that’s a moonshot, ladies and gentlemen. >> Hey everybody, welcome to our episode on WTF Just Happened in Tech. Here with my Moonshot mates, Dave London. >> Hey Dave, Sim Ismael, good morning Salem. >> And Alex Quezner Gross. So guys, uh, it’s been quite a week. Uh, you know, you’ve been in Brazil, Sim, just back. >> Yeah, I was just back from 3 days in Brazil. Turns out we have a massive viewership and listenership there demanding [clears throat] that we be able to convert translate this into Portuguese. So, we should look at that. >> Okay, we’ll do that. I just got back from Milan and Madrid, right, the middle of Europe, talking about AI and uh we’ll talk about it, but there’s a lot of of concern and angst about how far they’re falling behind. Um, and we’ve got a we’ve got a subject to discover on this episode as well on that. Dave, how’s your week been? >> Phenomenal. I got to get to hang out with Alex face to face, which is a rare
[00:02:01] treat. And we brainstormed a ton of things going on. Can't wait to talk about them today. >> Awesome. >> I have a general complaint. >> What's that? >> Um, you know, over the last few weeks since we recorded, there's been so much stuff happening. I need an Alex, Dave, and Peter AI next to me that just [laughter] real-time helps me interpret stories. We should maybe think about creating a GPT. >> Good news: it's in the queue, Salim. We're working on it. I budgeted it and we're doing it. >> All right. All right, Alex. Um, whether this is the physical Alex or the AI Alex, I have no idea. But how are you doing, pal? >> What's the difference? [laughter] >> Probably nothing. [gasps] >> Probably the world's convinced I'm an AI already. [laughter] >> Well, we'll figure that out soon enough. So, everybody, this is the news that's worth listening to. Hopefully, the news that helps you keep up with how fast things are moving and keeps a positive vision of the potential in your mind. Um, as we say every week, I know we all
[00:03:01] spend at least 20 hours independently and together getting ready. So, let's jump in. Um, let's open up with the hyperscalers: news about Anthropic, Google, OpenAI. Um, still going on. In fact, this week in particular, uh, a lot of news about Anthropic, which hasn't hit our WTF episodes in the recent past. Here we go: Anthropic overtakes OpenAI in enterprise LLM API market share. All right, over to you, Alex. What's the significance here? We see this chart: OpenAI's dropping, Anthropic's rising. What does it mean? >> I think the central question, Peter, is whether code generation is the critical path to recursive self-improvement. If code generation is the critical path, then one can expect amazing outcomes from Anthropic, which has quite publicly focused its strategy on code generation, perhaps to the exclusion of other modalities like, say, video generation, like we see from OpenAI with Sora. [00:04:00] On the other hand, if code generation turns out to be missing some special sauce needed for superintelligence, broad superintelligence and recursive self-improvement, maybe this trend won't last. But I think that's sort of the core question here. >> Well, I have a completely firm opinion on this. I don't want to lead the witness, but I could throw it out there first, Alex, or you can tell me. So, what is the answer? Is the LLM scaling recursive self-improvement loop enough to crack the singularity and infinite intelligence, or not? What's your guess? >> I don't know. If I had supreme confidence on this one, uh, it would be far easier to make investments in this space. I can make steelman arguments on both sides. The steelman argument in favor of code generation as critical path to singularity, to the extent it's a fixed point and not sort of an extended object, would be something like: [00:05:00] we leverage code generation to rewrite the core algorithms and the key models and architectures and post-training architectures underneath frontier models, and that just spins the flywheel faster and faster. The steelman in favor of code generation missing something looks something like: maybe we need visual chain of thought, or maybe there's some grounding in the physical world that's essential for general-purpose knowledge, general-purpose reasoning, that you can't just get from looking at large source code bases and the internet of text tokens and maybe a little bit of imagery. So I'm not sure. >> Well, I'm hardcore in the camp that Dario is on the right track. And it's not just Anthropic. Look at the chart: Google's also on this same trend line. Uh, and so it's either Demis or Dario. And by scaling what they've already got and turning a huge amount of the compute internally and having it generate the next test, the
[00:06:01] next test. And to me, the tipping point there is Humanity's Last Exam. And what we're hearing, and what you're hearing, is that that will be saturated very soon, which is mind-blowing given how hard those questions are. But to me, the solution to those questions and the innovation in AI are incredibly correlated problems. >> I took a different take on this one. I think what's interesting here is that AI is actually showing that it has a real business model, and that will be a really powerful feedback loop going forward. The enterprises that I talk to, the banks, you know, they're all using Anthropic because they trust it with sensitive data, while OpenAI is going consumer, you know, and not kind of going after that corporate market. So, they're both going to thrive, but very differently. >> The reliability that they're providing is amazing. >> Mhm. All right. Well, let's go to the next story here on Anthropic. Uh, and again, congratulations to Dario.
[00:07:00] Anthropic projects 70 billion in revenue, 17 billion in cash flow, in 2028. So, I mean, we haven't heard a lot about Anthropic over the last couple of months, right? It's been OpenAI and xAI and Google taking the headlines. Uh, but Dario is gaining ground here. How do you see them competing against the other hyperscalers, Dave? >> I think he's positioned really, really well. You know, the numbers aren't as big as OpenAI's, but the enterprise market is wide open. And, you know, what OpenAI is doing is directly competing with Google, which is very aggressive, very cool, but also risky. Uh, so I like the angle they're taking here, because, you know, enterprises need AI, and using AI as a management tool is the gold mine of all gold mines. And I'd love to riff on that for hours on some other podcast. >> But everybody doing that is using Claude and using Anthropic as their backbone.
[00:08:00] >> And so he’s not facing a lot of competition right now in that market. Uh, it’s not as fast growth. It’s not as sexy. Uh, but it’s a really good strategy. And I I think you’ll hit these numbers. you know when I was with my friends at at Google you know there’s interesting point that they view anthropic as the other you know uh friendly AI company right they’re obviously at each other’s throats between open AAI and Google and open AAI and XAI enthropic is sort of the the friendly little brother to the other hyperscalers >> isn’t it ironic though that OpenAI’s original mission was what Daario is now actually known for you know the machines of loving grace and the you know we are if you want to work in AI but you want it to be guaranteed to be good for the world come to anthropic he’s really grabbed that high bar you know that high ground on that topic >> I think there there’s something fundamental though like we’ve seen this happen over and over again that what what becomes ultimately a frontier lab starts as an alignment lab that I I
[00:09:01] think it there’s almost um a perverse duality between alignment and capabilities that if you’re if you’re the world’s best lab at aligning AI with human interests that immediately whether it’s for economic reasons like you need to raise capital in order to train super aligners or just for purely technical reasons that if if you can align a model really well with human intent that immediately itself is a strong capability. I think every alignment project almost inevitably ends up as a capabilities project. So I I think it’s it’s not just coincidence that open AI started as an alignment oriented effort to ensure that there wasn’t just a global singleton in the form of of deep mind for for super intelligence and anthropic similarly in the long tradition of Silicon Valley and the fair children also started as being alignment focused and then almost immediately pivoted to capabilities and super intelligence. I think that’s just the the law of economic nature here. >> Totally right. I have a counterpoint to that. You know, Facebook started off as
[00:10:01] being very aligned with protecting user privacy, never leaking private information, and then they sold it for profit. Uh, not AI-related, but just in terms of business model. So at some point this could become extractive, right? >> One of the stories here that I want to hit on is the economics. So, uh, interestingly enough, when I did some digging: Anthropic is projecting 70 billion in revenue by 2028 at a 77% profit margin, right? That's pretty extraordinary if they can hold on to that. On the flip side, OpenAI is projecting 100 billion of revenue and staying unprofitable until 2029, right? I mean, just their deployment of capital into data centers and model growth. Uh, I'm super curious: the two business models, Anthropic versus OpenAI. What do you guys think about that? >> Well, I think a lot of these companies are capable of having high margins on short notice, including OpenAI, and
[00:11:01] they’re trying to tell the market, Open AAI in particular, that I intend to keep investing ahead of the curve. So, if you’ll give me the trillion dollar valuation, and that’s a better, if you can pull that off, it’s a much much better. I can tell you having taken a company public, as soon as you switch to profitability, it’s very hard to go back to your shareholders and say, “Oh, I want to burn a trillion dollars building a data center.” >> So Sam is declaring that up front, which is great strategy as long as the shareholders believe it, which is what might Yes, exactly. >> Do you remember Bezos’s famous newsletter at the beginning when he started? He goes, “Listen, uh, I am not going to be profitable. I’m going to be spending money. if you want a profitable company, go someplace else. Otherwise, I’m buying, you know, customers. I’m buying revenue. And he did. And then he flipped the knob. He flipped the switch. >> Yeah, that’s exactly right. And what Dario will do here in all likelihood is declare this kind of margin, 77% gross margin. But then as the date approaches, he’ll launch a new project under a new
[00:12:01] name and say, "Well, we're going to consume all that money building Stargate," or, you know, whatever Anthropic's Stargate turns out to be. And that's a good strategic shift. But before we leave the story: these numbers are much bigger than anything in the history of the world. Much bigger. The growth rates and the scales. And I just want to make that point, because we get numb to these stories so quickly. >> And numb to the trillions, right? A numbillion here. >> Oh, and also, in terms of life plan: anyone who's doing something that's not this, you [snorts] have got to consider, how do I get into this? This is so much bigger than all other endeavors combined. >> The next article here, which is a fascinating one, is Fei-Fei Li's World Labs unveils world-generating AI models. And we had this conversation with Fei-Fei backstage at FII. Let's take a look at the video, cuz the implications of this are absolutely huge.
[00:13:07] >> [music] >> Finally. Thought you bailed again. >> Please. I would. [music] >> All right, let's reset to the next world. >> All right. If you were listening to this podcast and not watching that beautiful video, what you saw is an extraordinary, immersive, photorealistic virtual world, a world model that Fei-Fei has been building. And I, for one, you know, I'm both fascinated and concerned, right? I'm fascinated by these world models because it's magic. I'm concerned that it's
[00:14:03] going to be where we spend a huge amount of our time. Uh, I just finished reading a book called The Unincorporated Man for the second time. And in this book, which is a fascinating conversation by itself, the world has a crisis because everybody starts spending all of their time in these virtual worlds, to the exclusion of work and the exclusion of eating, and it decimates the population. So, you know, our kids, my boys right now, maybe your son as well, Salim, spend a lot of time in video games. But if these worlds become so photorealistic, so immersive, and you are the god of your world, why would you want to spend time any place else? >> I think it proves that we're living in a simulation. >> Well, yeah. I mean, I tweeted that after I had the conversation with Fei-Fei, which was, you know, I have no question we're living in a simulation. But even if we are, um, you know, what would you do
[00:15:00] differently? And people are just so tired of this conversation about whether we're living in a simulation. Um, but before we get >> Yeah, go ahead, Dave. >> Well, yeah, before we get too deep into the story, because I really want to hear from Alex, this is very, very different from what you think it is, but Alex will explain it to us. But I'll tell you, on a personal note, when we were backstage with Fei-Fei in Saudi: anyone out there who's an aspiring leader, entrepreneur, visionary, we're backstage, she's one of the gods of AI, and she says, "Oh, Peter, Dave, I'm so excited, let me show you." She whips out her phone, and she's showing us the product in like 5 seconds, and she's like a kid in a candy store, excited about what it can do. And that enthusiasm, but also the ability to show your thing in under 5 seconds, is so infectious. And, you know, at her level, the fact that she's still doing that, learn from that. Everybody should be able to do that. No matter what you're excited about, you should be able to project it in 5 seconds or less on your phone, pull it out, have it ready to go. It's so cool. >> All right, Alex, [00:16:00] talk to us. >> So, the consumer story here is that we're seeing the beginning of the holodeck wars. The technical story is that underneath the holodeck wars, we're seeing different approaches for generating entire holodeck-type simulations from scratch. We see on the one hand the Google Genie 3 approach, where every pixel is being generated by a single model. On the other hand, I think what World Labs is demonstrating here with Marble, their Marble model, is sort of the opposite end of the spectrum: it's generating not individual pixels but so-called 3D Gaussian splats, or 3DGS, which are sort of transparent blobs that cumulatively build up into what looks like a photorealistic, traversable 3D world.
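What a Gaussian-splat scene actually stores can be sketched in a few lines. This is a toy illustration, not World Labs' code: real 3DGS uses full 3D covariance matrices projected to screen space, but the hypothetical `Splat` record and single-ray compositor below show why the representation is so lightweight: each blob is just a handful of floats.

```python
from dataclasses import dataclass

@dataclass
class Splat:
    """One 3D Gaussian 'blob': a few floats instead of millions of pixels."""
    position: tuple[float, float, float]  # center in world space
    scale: tuple[float, float, float]     # per-axis extent (simplified covariance)
    color: tuple[float, float, float]     # RGB in [0, 1]
    opacity: float                        # alpha in [0, 1]

def composite(splats: list[Splat], view_depths: list[float]) -> tuple:
    """Front-to-back alpha compositing along one camera ray, the core of
    splat rendering: each splat contributes its color weighted by its own
    opacity and by how much light the splats in front of it let through."""
    order = sorted(range(len(splats)), key=lambda i: view_depths[i])  # near first
    rgb = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light still unblocked
    for i in order:
        s = splats[i]
        w = s.opacity * transmittance
        for c in range(3):
            rgb[c] += w * s.color[c]
        transmittance *= 1.0 - s.opacity
    return tuple(rgb)

# A fully opaque red splat in front of a blue one shows pure red;
# a half-transparent red splat lets half the blue through.
red = Splat((0, 0, 1), (1, 1, 1), (1.0, 0.0, 0.0), 1.0)
blue = Splat((0, 0, 2), (1, 1, 1), (0.0, 0.0, 1.0), 1.0)
```

Because each splat is just a small record, a client GPU can re-sort and re-composite millions of them per frame locally, which is where the compute-efficiency point comes from.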
But 3D Gaussian splats are so compute-efficient that you can just dynamically recompute visualizations locally, on your computer, on your client, whereas the
[00:17:01] pixel-wise generation approach of Genie 3 and other competing models requires server-side GPUs to compute. So I think we're starting to see the beginnings of almost an efficient frontier of trade-offs between compute and versatility, and there are going to be upsides and downsides to each of these. But in the same sense that we saw with frontier models, some models live on the edge and are relatively compute-light, some are server-intensive, I think we're going to see a range of different levels of worlds that we're able to generate. And all of these, I mean, the consumer use case, I think, is chicken feed compared to the larger addressable market in my mind, which is using these models to generate synthetic training data for more capable vision-language-action models, for robots, for scientific discovery. That's the much larger market, but this is fun in the short term. >> Our next story: new method helps AI forget memorized data
[00:18:02] without losing reasoning skills. This is a big deal. Uh, I'm going to go to you first on this, Alex. >> Yeah, I often speak about, in the near-term future, reaching a diamond-like, perfect micro-model of a frontier model that externalizes almost all knowledge. So it's a pure reasoning model, and all of the knowledge can live outside the weights of the model, in some external database or some external tool call. And I think this paper by Goodfire is such a clever way to externalize all of that knowledge, to distill down, no pun intended, to the essence of a core model, toward this vision. The basic idea is to look at the weights inside the model and to distinguish which weights represent knowledge versus some sort of general reasoning capability, by looking at which weights, if, uh, if training over multiple
[00:19:00] examples and running the model over multiple examples, which weights impact the overall so-called loss, uh, or the ability of the model to match the desired output, if they're changed a little bit. And the weights that are important to generalization will have a dramatic impact on the overall loss of the model if those weights are just tweaked a little bit. >> So can I take a slightly different frame on this? If you've trained your AI model on all of my company's healthcare data or all of my company's financial data, and it has some incredible results coming out of that, do I trust you not to give up my core healthcare data or financial data while still adding value? Right? So apparently what they're saying is they found a way to make AI forget specific memorized content without retraining the model from scratch and without wrecking its intelligence. So you can forget the
[00:20:01] specifics about all the healthcare data, forget the specifics about all the financial data, and the model still delivers the same value. That's what I understood it to be. Alex, is that correct? >> Half correct. It is correct that this is a pruning, or more generally a regularization, technique for helping a pre-trained model forget knowledge that one might call memorized. But the emphasis and the inflection are less about some sort of enterprise privacy or some sort of ML data-firewall feature, and more about figuring out which parts of the model matter. And models are huge, in many cases hundreds of billions if not low trillions of weights, which is very compute-intensive. It would be highly desirable to figure out which of those weights are actually needed for general capabilities, and which of them are just the AI memorizing arcane and quite possibly wrong facts from the internet or from an enterprise environment. So this is more
[00:21:01] about generalization capability, less about filtering out enterprise private data. >> Is it also about making it a lightweight model to run on systems? >> That's the holy grail here. The holy grail is: could we have maybe a sub-billion-parameter, maybe aspirationally a million-parameter model that's generally intelligent? That would be an incredible outcome. >> Wow. >> You know, if we solve memory in some of this, that's a huge breakthrough, and then maybe the next thing you could add in is some mechanism for adding curiosity into the model, because that feedback loop would be unbelievable. >> Yeah, there's a whole cottage industry focused on active inference and on building curiosity from scratch, as an instrumentally convergent motivation, into these models. >> Wait, instrumentally Can you repeat that? >> Instrumentally convergent. >> Instrumentally convergent. >> So, instrumental convergence, very important term. Instrumental convergence is the idea that in order to achieve a variety of different long-term objectives, you're almost forced to achieve common short-term objectives. For example, if I
[00:22:01] have a superintelligence, and on the one hand it's instructed to build lots of paper clips, and on the other hand it's instructed to cure cancer: probably both of those long-term objectives require that in the short term it accumulate capital, maybe solve science, maybe build a bunch of factories. So those are instrumentally convergent motivations. >> Instrumentally convergent near-term. >> Yes. Convergent in the near term, divergent in the long term. >> Yeah. Well, my high-level thought is, this is so ridiculous: I learn something magical every single time, every single goddamn slide, for God's sake. >> Well, this topic is incredibly important and somewhat obscure. You know, there are many, many, many people working on outside-the-box chain-of-thought reasoning and vertical use cases, but very few dorking around with the weights. And there's so much opportunity when you're messing around with the weights. I mean, these 90% reductions in parameter count through distillation are pretty common. And, you know, you think about, okay, hey, we need a hundred gigawatts of power. Oh,
[00:23:00] wait, I distilled it by 10x; now we only need 10 gigawatts. The implications of that are trillions of dollars. So there really ought to be a lot more people playing with the open-source weights and trying things, because these things are really working. And this is a great case study. >> So this really gives you the brain equivalent of neuroplasticity. And also, you know, the psychology researchers who for years have been trying to deal with, you know, surveys and outside-the-body tests: you can make so much more progress in understanding the nature of intelligence by playing with the weights of a big neural net and trying things like this. So a lot more people in that area, cognitive psychology types, should be working on this too. >> All right, here's our next article, which is related but different, and I think more about neuroplasticity. So, Google introduces nested learning, a machine learning paradigm for continual learning. Um, and as I read into this, I'm like, wow, this really is a big deal. Alex, I'll go to you first on this one.
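The weight-sensitivity test from the forgetting story above can be pinned down with a toy numerical sketch. This is a hypothetical illustration, not Goodfire's actual procedure: a two-weight linear "model" stands in for a frontier model, and a finite-difference nudge to each weight stands in for the loss-impact measurement described in the discussion.

```python
def loss(weights, data):
    """Mean squared error of a tiny linear 'model': y = w0 * x + w1."""
    w0, w1 = weights
    return sum((w0 * x + w1 - y) ** 2 for x, y in data) / len(data)

def loss_sensitivity(weights, data, eps=1e-3):
    """Nudge each weight by eps and measure how much the loss moves.
    Weights with large sensitivity carry generalization ability (keep them);
    weights with small sensitivity are candidates for forgetting/pruning."""
    base = loss(weights, data)
    sens = []
    for i in range(len(weights)):
        bumped = list(weights)
        bumped[i] += eps
        sens.append(abs(loss(bumped, data) - base) / eps)
    return sens

# On data generated by y = 3x, the slope weight matters far more than the bias:
data = [(x, 3 * x) for x in range(-5, 6)]
slope_sens, bias_sens = loss_sensitivity([3.0, 0.0], data)
```

On this toy data the slope weight's sensitivity comes out about 10x the bias weight's, so a pruning pass would keep `w0` and treat `w1` as forgettable; at frontier scale the same question is asked across hundreds of billions of weights, which is why doing it efficiently matters.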
[00:24:01] >> Yeah. So, this is a pretty dense, if I may say so, NeurIPS paper. NeurIPS is arguably the premier AI and machine learning conference. Parenthetically, it's happening in early December; I'll definitely be there, and many folks I work with will be there. I encourage the community to reach out to me if you're going to NeurIPS and would like to connect. But the core thesis of Google's nested learning paper appears to be one focused on higher-order meta-learning. So meta-learning, as a reminder, is learning to learn. It's of core interest to the AI/ML research community: if you could learn to learn, that almost obviates a lot of machine learning research. So this is focused on higher-order meta-learning, that is, learning to learn to learn, and so on. And the core insight here is that models and model architectures on the one hand, and the optimizers that right now we use to
[00:25:00] train the models on the other, may actually be two facets of a common object. Reading the paper, it almost aspirationally seems to be fishing for a grand unified theory of machine learning, an M-theory of machine learning, if you will, where all of these different processes are actually just facets, at different levels, different orders of abstraction, of a single common paradigm, which is, as I and others have argued in the past, compression of information. So I like to say: if we could send a message back in time and explain how we got to AGI, it turns out it's very easy. You just take a large amount of knowledge about the world and you compress it, and if you compress it beyond a certain point, you get some sort of phase transition, and general intelligence pops out. That's basically, I think, the story of AGI. >> You know, when I think about college, I think the entire college experience for me was learning how to learn. Everything I learned was, you know, irrelevant, you know, some number
[00:26:00] of years later. And so when I read this, uh, Alex, what I'm seeing, and I'm curious, is that it's about enabling continuous, human-like learning, right? Sort of a step towards >> Lifelong learning. >> Yeah, a step towards truly adaptive, lifelong learning for AIs. Because historically, we'd train up an AI system, and you would freeze it, and then you'd do inference on it. Here, this is something that's continuously learning, in an almost human-like fashion. Um >> My college experience wasn't about, um, learning how to learn. It was learning how to avoid most of the that was taught there. [laughter] Okay. >> Learning how to forget. There you go. >> Dave, any thoughts on this particular article? >> Uh, not yet. But, um, I think Alex's explanation was perfect anyway. But I think that one thing people overlook is that the machines doing this have
[00:27:00] access to incredible numbers of tools. And when you're learning how to learn, you're thinking about, like, never forgetting. But you write things down, you refer to them, you have your laptop, you have your phone. And the AI version of that has immense bandwidth between its brain and its notes. And so a lot of the innovation is taking advantage of that; it's clearly superintelligent in that way before it's even, you know, intelligent. >> On that little explanation by Alex: I'm going to have to go back and listen to it like four times over, just to parse all the stuff. We don't need to get into it here, but we may need a whole podcast episode just on that. Pretty fundamental, right? >> Yeah. One of the things we talked about was also getting questions from our listeners and doing an AMA session based on those questions. So, if folks are interested in doing that, you know, drop us some hints in the chat. >> And drop us some questions in the chat, like, what do you want us to focus on? We'll do an AMA session.
[00:28:02] There’s just so much news week to week to cover. It’s like um otherwise we were to be publishing an episode every day. So, an Alibaba backed company called Moonshot AI is launching a new ultra- lowcost AI model. Uh, as I read this, I’m like, “Wow.” Uh, this is a big deal. Uh, Dave, >> they clearly watch the [clears throat] pod. >> They watch the pod. Why? Because they called it moonshots. >> Yeah. The name. [laughter] >> Okay. We’re going to we’re going to claim Yeah. I I I think uh I think Astroteller claims Moonshots as the captain of moonshots at Google. We try to enforce your trademark in China anyway. I don’t think [laughter] >> Dave, what do you what is your take on this one? >> I I hope people appreciate what a huge deal this is. This is the in my mind the biggest thing that happened in the last month. So, you know, these Kimmy models actually are top of Sweetbench, right up there with with Anthropic, you know, just 1% or so below. Uh, but they run on Gro hardware, GROQ, not not Elon, Grock
[00:29:02] hardware, uh, which, you know, we learned all about in Saudi Arabia, at blazing speeds, incredible performance. And they're the best open-source models. So, if you want to play with the weights, you know, you've got to come to the Chinese stuff, cuz, you know, Meta stopped open-sourcing, OpenAI stopped open-sourcing their weights. So now these are the absolute best open models in the world, and they're all coming from China. Uh, but the fact that you can train it, $4.6 million to train it, is within the budget of almost any company. And it's not 10x, it's more like 30, 40x cheaper than what it took for OpenAI and Anthropic to build their original models. And so part of it is you're drafting off their innovations, which is why those companies stopped open-sourcing. But, you know, you can grab these weights, you can grab this open source, and you can build on top of it. And so, just be careful there's no spyware or anything in there. But, you know, I checked, and I'm using it, and it seems to be okay. But do your own spot-checking. This is a huge deal in terms of
[00:30:01] the power available to somebody who wants to play with the guts of one of these things. The amount you can get done on a limited budget just skyrocketed. >> And you know what I found most significant? There's been a conversation going on, we heard Eric Schmidt talk about this, that it's the US financial markets and their efficiency that are allowing these hyperscalers to, you know, raise billions of dollars to do what they do. But if all of a sudden the cost of training a trillion-parameter model is 5 million bucks, um, you don't need efficient capital markets. That money is available from a lot of places. >> Isn't that wild? I mean, you're so right. Just think about that. I don't know if anyone appreciates what you just said. The implications are massive. Just absolutely massive. >> Alex, what's your take on this? >> Yeah, a few takes. One, as Sam and others have pointed out, there's hyperdeflation going on right now on both the training and the inference side of AI. So Sam's number is 40x year-over-year
[00:31:02] hyperdeflation. So on the one hand, I >> Wait one second. 40x hyperdeflation year over year on the cost >> On the cost per unit of intelligence. >> That's insane. That's so >> I mean, I think this is way faster than Moore's law. >> Yes. This is why I speak of the innermost loop as a catchphrase: because if we can see sustained 40x year-over-year hyperdeflation in the cost of intelligence, everything else is going to get dragged down. The price of everything else, rather, is going to get dragged inevitably down with that. So this is sort of the nuclear core, if you will, that's going to pull down the cost of everything else. >> And because the demand is growing, like, a thousand-x a year, that's just where the capital expenditure goes. >> Yeah. Pick your analogy. >> Well, when you walk around academia or corporate boardrooms,
[00:32:01] you find deniers everywhere. I mean, just absolutely everywhere. And a huge amount of the denial is, well, I tried this yesterday, it was hard, it didn't work, so therefore we're nowhere near AI. When you talk about 40x hyperdeflation, deniers are saying, well, look, the evidence is that as we scale these things, they're getting decreasing improvements in intelligence. So they don't think 40x for two years in a row, two back-to-back 40x's, is going to do much. That's a really, really risky position to take, dude. [laughter] Because there is significant upslope in the data, and you don't know what 40x is going to do. But if you had to bet, is 40x going to be mind-blowing, or is 40x going to be not much of an improvement? You're crazy to take the position that it's going to be a little bit of an improvement. Crazy. Especially back-to-back 40x's. >> Yeah. No, I think we're going to start to see grand challenges in math, science, engineering, medicine start to fall over the next two to three years thanks to that. >> So, can I take the other side of this?
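For intuition on what "two back-to-back 40x's" means numerically, here's a quick back-of-the-envelope sketch. The 40x/year rate and the $5 million training-run figure are the numbers quoted in the conversation; treating the rate as sustained is purely illustrative, not a forecast:

```python
# Back-of-the-envelope: compounding 40x/year cost deflation.
# 40x/year and the $5M training-run cost come from the conversation above;
# assuming the rate holds for multiple years is the speculative part.

annual_deflation = 40          # cost drops 40x each year
training_run_cost = 5_000_000  # dollars, at today's prices

for years in (1, 2, 3):
    factor = annual_deflation ** years
    cost = training_run_cost / factor
    print(f"year {years}: {factor:,}x cheaper -> ${cost:,.0f}")

# Two back-to-back 40x's compound to 40 * 40 = 1,600x,
# which is why betting on "not much of an improvement" is risky.
```

So the same $5M run would cost roughly $3,125 two years out if the curve holds.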
[00:33:01] >> Yeah, because we'll beat the crap out of you. Let's do it. [laughter] >> No, no. I think if you look at the demonetization curves over history with other stuff, like solar energy, cost of compute, etc., you would almost expect this, because you want to see that curve go in this direction. 40x is way faster than I thought. But I'll give you the macro counterexample. I remember when the Google car first came out, all the car manufacturers said, well, the cost of that lithium-ion battery is too expensive, this is never going to work. And they all ignored it. And then literally over the decade the cost of lithium-ion batteries dropped 90%. That's what Elon banked on in building the Tesla, the cost of the batteries dropping. And he was right, and all the car makers, all the typically minded folks, were wrong. So there's a powerful lesson here: always watch for those deflationary curves and go where the curve is pointing you. Well, what's surprising is the world's most powerful technology, right,
[00:34:02] by far. You know, if I had gone back five years and described what an AI model could do today, and asked how much you would charge for it per day or per month, I would have guessed millions of dollars or hundreds of thousands of dollars. I would never have expected it'd be free. I mean, it's effectively >> eat our own dog food and kind of go, this is where the curve's going, let's expect this at this point and see if we get it right. >> Yeah. I'm constantly running around the office and saying, guys, within these virtual areas like computing and AI, it's really hard to visualize 40x. But imagine we had a factory that makes widgets or cars and we 40xed our production year-over-year. Everybody in the office would be going crazy. You'd see 40 times more stuff coming out the door. It would be obvious to you that's what's happening. And you have to really stretch your brain around the implications of this, because >> there's a great quote on the demonetization, and the fact that the energy guys all got it wrong and
[00:35:00] whatever. This is that same issue, right? And to Alex's point about the inner loop, if you rolled this out to the broader macro things that we do, you should expect a 40x drop in the cost of healthcare and food production and everything over time as this bites. Correct? >> Yeah, agreed, assuming intelligence can solve the problem. There's a great quote from Gordon Moore, and I'm going to botch it, I don't have the exact numbers. He said, if cars had improved at the same rate as Moore's law, a Rolls-Royce would get like a million miles per gallon, and you'd throw it away at the end of your trip because it's so cheap. >> Yeah. The stat I heard was that if the top speed of a car had evolved at the same rate, we'd have a car today that went faster than the speed of light. Yes. All right. I want to jump into this next article, especially coming back from a week in Italy and Spain. The story headline here is: Brussels to loosen GDPR rules to enable
[00:36:00] and feed the AI boom. So, a really important conversation. There was a lot of concern and angst when I was meeting with CEOs, consulting companies, and investors in Europe, in Italy and in Spain, right? The GDP of Italy is about 2.7 trillion, about the size of New York; the GDP of Spain, about 1.7 trillion, the size of Florida. But serious concern about whether they can compete. And this is one of the biggest issues that's keeping them from competing: the ability to access data. Salim, what's been your experience here? >> You know, this is part of the problem with overregulation: you just slow this down. The cost of compliance for stuff like GDPR has proven to be a ridiculous thing. And this is a function of the historical issue with Europe. And to be fair, it's
[00:37:00] not just mindset here. There's just a bunch of historical legacy that's really important. I'll give you a specific example. After World War II, the German constitution added a clause saying no media organization could cover the whole country, >> so that you could never have another rise. And that prevented a regional player from covering the country. And then Google came along and rolled up the whole thing, right? So they've got structural issues going back in history, and you have to figure out how to undo some of those. And that's a really, really hard thing to do. They're doing kind of the best they can in a difficult environment, but it's just a massive problem. >> I mean, for the longest time, and you and I remember this at Singularity University, right, Europe in general prided itself on having the strictest privacy laws out there. And a lot of that simply meant that, okay, we're going to exclude everybody in Europe from a US product or service. I think what's interesting is the data showing that venture funding in Europe dropped up
[00:38:00] to 30% as a result of this, and that the AI models in Europe have been 6 to 12 months slower to market than in the United States, on top of the compliance burden. So get this: there is an AI audit that's required before you put out a product, and those audits on average cost €260,000 and take 8 to 15 months, delaying 40% of projects. So imagine that. During your compliance audit you actually have to prove, you know, they review the data sets, the model transparency, bias, documentation, safety standards. Some third party is auditing you and just putting sand in the gears. Now, I understand why people want that to some degree, but you're trading that against your economy. It's been really interesting. If I think about my grandparents, they assumed that there was no structure in the world, it was chaos. You know, the military protects us, but other than that it's just a zoo,
[00:39:01] >> And then when you look at the generations, if I talk to my kids, they assume that there's some rational thing out there that's thinking through these issues, that there are adults in the room. You know, is it a strange thing when all of a sudden you say, oh my god, I'm the adult in the room? [laughter] >> It gets really, really freakish when you hit that. >> Yeah, it's why I'm hoping for a benevolent superintelligence to actually be the adult in the room someday. >> Yeah, we need Mo Gawdat's thinking: move as quickly to AI running the world as possible. That should speed up Alex's inner loop considerably. >> Alex, any thoughts here on the GDPR rules being changed? >> Maybe just a broad comment that under the current construct it's up to individual sovereign countries to define the parameters of how much they want to participate in the superintelligence explosion, and maybe just leave it at that. >> Yeah. I mean, the conversation I had with a lot of the leaders in the tech industry in Italy and Spain is, okay, if
you guys are interested in playing in this, you need to build out your energy sources and identify where you're going to set up your data centers. And I think the time frame, and I'm curious what you guys think about this, I think the time frame to make those decisions and implement them is the next 5 years. I mean, in the next five years we're going to set the objective for the next century. >> More like five months than five years. >> Okay, I think it's a little bit shorter than five years. >> Okay, I was being generous, because you can't do anything in five months. Which is a big concern, and we can talk about that in the US when we get to the conversation on energy. This is my chance for a rant. I want to share something that we found when Salim and Dave and I were at FII9. I'm on the board at FII, the Future Investment Initiative, and one of the things they do every year is something called the Priority Global Survey. This is a survey that they do
[00:41:01] in 32 countries. They have over 60,000 respondents, and it represents two-thirds of the world's population. And, you know, we've had some criticism on this pod that we're not focused on the reality in different parts of the world. So I want to discuss the reality. What are people seeing and feeling outside of Silicon Valley, outside of Boston? Here's some of the data, and I'd love to discuss it with you guys, because the data is important and it's concerning. So they surveyed and asked the question, what are your top concerns? And we see this across the global south and the global north. The number one concern globally is cost of living, by far. Can we afford to live in this world? And tied very closely to that is unemployment. Will I
[00:42:00] get a job? And if I get a job, will it pay me enough to live? And then the third concern is poverty and social inequities. I mean, this is what two-thirds of the world is feeling right now. Here's the next chart, and we can look at it by region: Africa, Asia, Europe, MENA, North America, Oceania, and South America. And here we see Africa's number one concern is unemployment. For the rest of the world, it's the cost of living. It's just expensive to live. And we talk about a future of abundance, we talk about demonetization, but this is the reality of what people are feeling. What are your thoughts here, Salim? >> Yeah, I mean, look, this is an extrapolation of the basic nature of human reality. We've been living in fear since the beginning of time, right? In the cave-dwelling days, you were worried that a hyena would come and steal your baby at night. Now we're worried
[00:43:01] about jobs. I think we need to flip over completely to a UBI-type structure to navigate the world going forward, because the concept of a job is going away. I mean, think about it: all of our education systems are designed to take a young child and train them through their early 20s to be ready for a job market, and we have no idea what a job looks like in five years. So this may be the thing that breaks the educational model, and breaks all the other models, into a totally new reality where most of these mechanisms for subsistence, education, healthcare, etc. are basically free. Take that 40x curve and apply it to some of these domains. That's for me the incredible opportunity. So I'd flip this into the massive opportunity there, but people aren't seeing what's going on, therefore they get stuck in that fear. >> I get it, but, you know, it's not evenly distributed yet. This is the reality that people are feeling right now. They're feeling fear about can I get a job and can I afford to live. It's very real. This is our job as leaders
[00:44:02] and podcasters and message conveyors: to show that, like, take Amjad, you know, a little developer out of Jordan, boom, builds a multi-billion dollar company. Take Vitalik, take Elon, coming from nothing to building global paradigm-changing things, just from mindset. And therefore now the inner loop, I'm going to go back to that again, is literally mindset and entrepreneurship. And Peter, you talk about this all the time: flip this around and see the opportunity in it. >> I'll give you a little snapshot of what a big part of the world looks like. One of our summer interns is from Iran. Her parents are still in Iran. She grew up as a young child in Iran, and she said that her parents spend one-third of their annual income on their iPhone and data plan. >> Whoa. >> But she's like, look, you can't live without information, and actually the currency is no good, so everything's Bitcoin. So how are you going to manage your Bitcoin without an iPhone? So you've got a third of your income going into your phone and your data plan, and all that money
[00:45:01] funnels out of the country and kind of lands in, like you said, Silicon Valley and Boston. And so that wealth disparity, you know, just from the phone, and then you add AI as a layer on top of that, and the gap is going to get really, really wide. So that's the reality for a huge fraction of the world's population. >> I hear you. I don't know if you want to add anything, Alex, but this is one of my biggest concerns, right? This data for me is worrisome. I'm clear that we are going to get to an abundant future, maybe it's a decade out. We're going to have continuous demonetization and all kinds of uplifting of healthcare and education by AI. But the next two to seven years are what really concern me, right? If young men are not getting jobs, and if people are losing their jobs as a result of this before we flip the economics into an abundance model, you know,
[00:46:02] the question is, how do we help people believe in a hopeful and compelling future? Because if they don't believe in a hopeful and compelling future, they're going to believe what they see from Hollywood, which is dystopian AIs and killer robots. >> I think you've hit the crux of it, right? How do we get narratives out there that demonstrate that future, and do it fast, and overcome the fact that people are 10x more likely to listen to fear stories than stories of a positive future? That has to be overcome; therefore you need ten times more stories on the positive side. People worldwide are really worried. >> One of the consistent pieces of positive feedback I get about this podcast is that we're relentlessly optimistic about the future. Why? Because technology is a major driver of progress in the world, maybe the only major driver of progress, and now it's moving exponentially. >> Do you guys remember what we were talking about before we hit record on this pod? The idea that it would be amazing to bring together a community of
[00:47:00] builders and coders and entrepreneurs to work on uplifting humanity in the near term. You know, I think we should do that. I think we should pull together this moonshot community and >> see if they want to discuss how we make the world a better place, right? How do we build moonshots that really uplift mindset and help address unemployment and the cost of living in the near term? I mean, Elon's built an incredible community around going to Mars. Satoshi created an incredible community around Bitcoin and crypto. >> Um, yeah, I'm talking about, you know, do we organize a meetup of the moonshot listeners? Do we pull folks together and talk about solving grand challenges together? >> I spend a lot of time with college undergrads and seniors, and they would flock to that mission like you wouldn't
[00:48:01] believe. you’d get incredible talent coming to that mission. You know, when they’re in their early 20s, mid20s before scar tissue of life has accumulated too much. Uh they are all in on that and so you’d get really really smart people working on it. >> I mean bring bring together the builders, the visionaries, you know, the folks who want to really build, you know, I like to say that the world’s biggest problem is the world’s biggest business opportunities. Want to be a billionaire, help a billion people. I mean, that’s the conversation, you know, and I know none of us have extra time to to actually pull an event together. But, uh, if if folks listening to this, >> I think it’s mandatory because the only way you’re going to change the world is to have people shift their mindset, come to listen to stuff like this, and then actually activate it and go do something. So, imagine we did an event, brought everybody together, talked about things, and then people actually activated on that, formed teams, and went off and did stuff, and then we tracked that over time. That would be kind of pretty cool. >> All right. I mean, so so here’s, you know, Alex, are you in on that?
[00:49:01] >> I think it’s a benchmark problem. I I think it’s less about events and less about teams and more about just rigorously defining benchmarks for all of these problems. How about a a benchmark for cost of living that then the world and this 40x year-over-year hyperdelation of intelligence can optimize towards? Same with crime and delinquency and and healthare cost of healthare. We’re going to be drowning in humanoid robots that are generalist in terms of their capabilities in the next few years. >> We’ll talk about benchmarks. >> We’ll talk about the benchmarks then at this. So, I mean, if you guys are in and my feeling is, you know, none of us have time to put an event together, but if if there’s interest in the community, so to everyone listening, this is what we talked about earlier. If you have an interest in uh joining us at some kind of a moonshot gathering, a moonshot summit, whatever it is, uh if we can get uh enough of you, let’s say a thousand who say yes, we want to do this and you
[00:50:00] want to spend time with the moonshot mates, then we'll pull this together. Here's what I'm proposing. We'll set up an email, let's call it moonshots@diamandis.com. If you're interested in this idea of a moonshot summit to bring everybody together, talk about the world's biggest problems, talk about the benchmarks, talk about the moonshots required, send us an email. And if we can get a thousand people who say they want to be in on this, then we'll pull it together. We'll bring together the moonshot mates, we'll bring together sort of the most exciting CEOs and moonshot engineers, and have an epic two-day event. I think two days is the right length for this. >> Can I riff on this for a second? >> Yeah. >> If you take Hans Rosling's work, which showed that over the last 100 years we dropped the cost of electricity, transportation, and telecommunication by thousands of times each, right? And then you say, okay, we
[00:51:02] want benchmarks that in the next 2 to 3 years drop those by a thousand times each. If you extrapolate the 40x, that gives you the target to go after, to the benchmark comment that Alex made. And then you basically bring in the Sage engine to say, okay, what policy changes do you need to make if technology can reach this? How do you get this implemented? You could bring that together and make it a showcase for the world in a very powerful way. >> I think it would be incredible. I really would love to get everybody together and have that conversation, and really ignite a passion and interest among entrepreneurs to focus on this, because there are real challenges out there in the world. All right, so here's the deal. If a thousand of you who are listening want to join us, let's say sometime next fall, then send us an email at moonshots@diamandis.com, and if there's enough interest, we'll pull this together.
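As a sanity check on the riff above, here's a short calculation of how quickly sustained 40x/year deflation would deliver a Rosling-scale thousandfold drop. Both numbers come from the conversation; assuming the rate actually holds is the speculative part:

```python
import math

# Years of sustained 40x/year cost deflation needed for a 1,000x drop.
# 40x/year is the figure quoted on the pod; 1,000x is the Rosling-scale
# benchmark riffed on above. Illustrative only.
annual_rate = 40
target_drop = 1000

years_needed = math.log(target_drop) / math.log(annual_rate)
print(f"~{years_needed:.2f} years to a {target_drop:,}x drop")
```

This works out to just under two years, so the "next 2 to 3 years" target is consistent with the 40x curve, with room to spare, if the rate holds.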
[00:52:01] all right. Uh, let’s get back. There’s a lot a lot more to cover. All right. Uh, our next segment here is data centers, energy, and space. So, multiple data centers are reaching 1 gawatt in 2026. Uh, you know, we’re tiling the world in data centers. Enthropic and Amazon, XAI, Microsoft, uh, Meta and OpenAI, Stargate. Alex, what’s the story here? >> Well, I I think the trillion dollar question, Peter, is will we see a peak in the amount of coherent power needed for coherent training runs of large frontier models? If we do, one could imagine, as as incredible as it may sound, looking at this curve where everything’s going up and to the right in terms of total facility power, we might actually see a peak maybe in a few gigawatts sometime over the next few years and then decline if there are algorithmic innovations that enable us to do distributed training runs rather
than needing one large, power-intensive coherent supercluster to do it. Tiling the earth could look like tiling the earth with relatively lower power density compute. Totally imaginable. On the other hand, I spoke earlier about AGI being, as it turns out, essentially compression of information. If it turns out that there are further phase changes we can achieve by compressing more and more with larger and larger facilities, then maybe eventually, in extremis, we end up in, as I've spoken about on the pod previously, more of a desktop black hole computer regime, where we're just building these incredibly power-dense facilities to train more and more. Again, I could go either way on this, but I think that's the trillion-dollar question: will this peak or not? >> Besides peak data centers, a question is, are we going to see peak energy? That's a question for you, Alex. So the US government, Brookfield, and Cameco have launched an $80
billion partnership to build nuclear reactors. You know, as I researched this, what I found frustrating is that the time frame for building out these nuclear reactors is still on the order of 5 to 10 years. Alex, what are you seeing here? >> Yeah, I've gotten a surprising amount of feedback from the community and the audience reminding me that I shouldn't ignore existing generation 3+ nuclear reactors in favor of SMRs and fusion reactors. So I want to make sure I nail this point. There are right now at least six AP-1000s. These are made by Westinghouse, which ironically went bankrupt in 2017 building a couple of these in Georgia and South Carolina. Now it's hot again, because superintelligence is hungry for power, and now it's incredibly valuable. They cost
maybe about $7 billion each to build. So an $80 billion partnership might build 10 reactors all across the US. This is going to be a very big deal. And critically, unlike SMRs, where there are maybe only two or three and they're relatively emerging technologies, this is by comparison a relatively mature format for nuclear power. And when we talk about the bridge to power for superintelligence, from natural gas to nuclear fission to nuclear fusion, with solar plus battery sprinkled throughout, I think generation 3+ reactors like the AP-1000 have a very important role to play. So that's me appeasing the audience: I'm not ignoring generation 3+. >> No, and I think that's really important. These AP-1000s are 1.1 gigawatt power plants. You know, when Eric Schmidt was testifying in front of Congress, he said we need 92 gigawatts by 2030. Right? So, this
[00:56:01] particular deal might put 10 of these 1.1 gigawatt reactors on the map, but they're not going to be coming online until the early-to-mid 2030s. So the question is, how do we build out an additional 90 gigawatts in the next four years, right? Where is that going to come from? >> Yeah, I think the deal structure behind this is worth understanding too, because it generalizes to solar and to fusion and everything else. So what's happening here: Westinghouse got bought by Toshiba. This is part of America de-industrializing, very stupidly, for decades. Toshiba buys Westinghouse. Westinghouse tries to build nuclear facilities. The government is so bureaucratic and so onerous that it goes bankrupt. So then in 2017 the private equity guys come in, Brookfield-led, and say, okay, we have very smart business school majors here, we'll try to revive this thing. And the timing is 2017, right when the transformer comes
out. So the timing turns out to be perfect. So now what's happening is the private equity firms and the econ majors from all these schools are going to the government and saying, give us 10, 20, 30 billion in guaranteed loans, and we'll use that to build these facilities. Then if they're successful, we make a huge profit, and if they fail, you know, we write off the loans, so there's not a lot of downside. Many, many econ majors and business people should be shifting into this area, because [snorts] the government is open for business now. But that's the structure. You go to the government, you get the loan, you build the next big thing, and if it succeeds, you get the profit, you get the margin, and the government subsidizes it. So it's a golden era, because a lot of people, you know, when I'm lecturing on campus, all the AI people, all the computer science people, they know exactly what they want to do. But then all the econ majors and business majors are like, how do I get in on this? This is how you get in on this. The flow of capital is going to be $1.2 trillion a
year by 2030 coming just into data center construction and the power for it. There's nothing even close in the history of the world to that scale of money movement. Building out the railroads, building out the telecom networks, those were significant, just not these dollar figures. >> I have a clarifying question to ask here. On our last pod, we talked about the fact the US is building 5,000 data centers, 10x more than anybody else. Is that inconsistent with the amount of energy available, or... >> We have 5,000? >> It's the data centers we have today of all types, not just AI data centers, 5,000, cumulatively more than the rest of the world combined. >> To me, this is one of the few things that's easy to predict. Everything is changing so quickly, but the chip fabs are exactly what they are. We're building them at a certain rate. Every chip is going to get used. The chips have a certain power consumption. That's very calculable. You can assume that they're all
going to get flat-out sold out, but we can't make more of them than 20 million GPUs this year, and then it'll expand at some rate. So working back from that, you can exactly predict the flow of capital required to build out this entire infrastructure. A golden age of category 3 nuclear reactors. >> Three plus. >> Three plus, fair enough. And for what it's worth, for those who are looking at this on YouTube, the visual form factor of these plants actually looks like some sort of hybrid between the conventional older-generation cooling towers and the newer SMRs. You could be forgiven for mistaking it for a normal building. >> One of the issues here is that the US public, with the original Gen 1 and Gen 2 plants, Three Mile Island, Fukushima, is worried about these plants, not-in-my-backyard. But the generation 3+ designs are fail-safe nuclear systems that, again,
[01:00:00] I’m happy to have in my backyard. Uh we we’ve got to change the narrative and we’ve got to accelerate this. Even companies that are are bringing back online previous nuclear power plants, it’s taking 5 years plus to get them online. The timelines are just too long. >> And Dave, the point you’re making is even with this, we’re like onetenth of the rate that we’re really needing. Therefore, this is a guaranteed boom. >> This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands [music] of specialized AI agents that think for hours to understand enterprise scale code bases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-ompiles code for each task. Blitzy delivers 80% or more of the development work autonomously while
providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding co-pilot of choice to bring an AI-native SDLC [music] into their org. Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today. >> All right, let's jump into a conversation on robotics, and in particular robot drones. So, China just broke the world record for the number of flying drones: 16,000 AI-powered drones flying together, controlled by a smart AI system. Let's take a look at the video. >> [music] [01:02:02] >> So, if you were watching, you saw a beautiful drone show in the sky. If you were listening, you heard some music. But here's one interesting concept: imagine being able to have 16,000 drones up in the sky. You can actually create a giant TV screen and watch a television program or movie across an entire city. What's the significance of this for you, Dave? >> Oh, big time. I think that people visualize robots in human form, you know, just because that's what's in the movies, constructing buildings and cleaning your yard and whatever. But I think the swarm version of it is actually just as big a deal, if not bigger. And it's been very hard for the Hollywood studios to create swarm visual effects, so they don't use them much in sci-fi, and therefore people don't really think about them. And then when you look at a flock of birds or a bunch of bees, they're actually not that coordinated. You know, they're a little coordinated, but they're not really coordinated. The AI version of it, as
you saw in that video, is coordinated perfectly, down to the millimeter. And it's a very effective way to do things like construction, yard work, cleaning your gutters, whatever, because you can put 50 drones in if you need to pick up something heavy, or two if it's light. So it's very, very flexible. So I [snorts] think it's going to be a big, big part of taking AI and making it affect the physical world, much more than people are currently predicting. Because if you look at our podcast, we must have had 50 videos of different humanoids dancing and fighting and whatever, but I really feel like the drone part of it, the interactive thousands-of-drones part of it, is way underappreciated as a real-world thing to do right now. >> I still loved the episode last week about a mosquito-killing drone just flying around and zapping mosquitoes in your backyard. You know, drones are becoming the front line of the warfare in Ukraine.
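A toy sketch of the flexibility Dave describes: sizing a drone team to the payload instead of fielding one fixed robot. Everything here, the per-drone lift, the safety margin, and the task list, is a hypothetical illustration, not how any real swarm controller works:

```python
import math

DRONE_LIFT_KG = 2.0  # hypothetical payload capacity per drone

def drones_needed(payload_kg: float, margin: float = 1.25) -> int:
    """Smallest team that lifts the payload with a 25% safety margin."""
    return max(1, math.ceil(payload_kg * margin / DRONE_LIFT_KG))

# "Put 50 drones in if it's heavy, two in if it's light":
for task, kg in [("gutter brush", 1.5), ("ladder", 18.0), ("roof beam", 75.0)]:
    print(f"{task}: {drones_needed(kg)} drone(s)")
```

The point is the elasticity: the same fleet serves a one-drone job and a fifty-drone job, which a single fixed-size humanoid cannot.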
[01:04:00] >> I mean, it is already. That war is being prosecuted by half a million drones on either side. It's not a man-to-man thing. It's just drones killing men. >> And those drones are to a large degree being manufactured in Ukraine. And one of the things that's interesting is when the war is finally over, you know, God and the leadership of countries willing, Ukraine as a nation is going to be sort of Europe's drone manufacturing capital. They'll have the world's best army. >> Yeah, because they've had to be. And just to bring it back to the US, there are 10,000 drones a month crossing the Mexico-US border. And I keep telling government people this: drone technology beats wall technology. What the hell are we doing? [laughter] >> Oh my god, there's probably a good slogan there someplace. Okay, here's a fun article: Elon Musk, Tesla might unveil a flying car. So, we've been
[01:05:01] talking about this, the next-generation Roadster. God knows what they're going to charge for it, and this is not a Model Y mass-production vehicle. I remember being backstage with Elon when he had funded the Global Learning XPRIZE, and we're backstage talking about Tesla before going out, and talk about the guy's level of intensity, right? He's worried. There was a Falcon 9 launch that night with one of its largest payloads. He was concerned about center-of-mass issues and vibration issues. And then he was concerned about whether Tesla could in fact survive the next quarter. This is like 2017, 2018. And I said, "Well, are you going to put out the next Roadster?" He goes, "Man, oh man, nothing matters other than the Model Y and the Model 3." He said, "That's our mass-production car." And so, he was always looking at that. But here we are. And I think the
[01:06:01] interesting conversation here is, will this Roadster include, you know, jet or rocket propulsion from SpaceX that will give it the ability to hover or hop, right? That's what people believe will be materializing. Any thoughts here? >> I'd like to point out a historical irony, if I may. There was a lot of hand-wringing about 8 years ago, circa 2017, about how we were promised flying cars and instead only got 140 characters. But if you play the tape forward, as it were, the 140 characters was Twitter, that became X, that became xAI, and that's now the integrated technical and capital structure that's poised to give us a flying car. So, a bit of historic irony that maybe, in some sense, the 140 characters actually gave us a flying car after all. >> That's such a great connect. >> Um, you know, Elon's gone on record saying by the end of the
[01:07:00] year there will be an unforgettable demo. So, excited to see what that looks like. I mean, the current gossip is that SpaceX is going to provide cold-gas thrusters, right? Propulsion systems that might allow the car to sort of hop or hover. >> Are you serious? >> Yes. Yeah. Seriously. So, imagine you have, like, 30 seconds of hovering time [laughter] until you recharge the thrusters. I mean, interesting, you could be recharging these thrusters by just compressing air as you're driving along. Who knows? Well, I'll tell you, the highways were originally designed to go 120 mph, but that turned out to be way too dangerous. >> But if you added that cold thruster to the car and it has accident avoidance built in, its ability to jump and hover over an accident scene is incredibly valuable. >> Well, I'm curious how high it can hover. I'm guessing we're talking about like a foot. I think once you get
[01:08:02] out of ground effect, you're not going to have much hovering capability. You guys have heard me talk about this before, but I just spent the last 4 days commuting from cities into JFK or Guarulhos in São Paulo. This cannot arrive fast enough for my taste. And if Tesla is able to unlock this world the way it unlocked electric cars, massive. >> Okay. Let's point out right now, these are not eVTOLs. These are not, quote unquote, flying cars. These are, at best, short-hopping, hovering cars, right? So we still have Joby and Archer Aviation, EHang, and all kinds of other companies out of China that will be multicopter electric transport vehicles, right? Archer has the contract here in LA for the 2028 Olympics. So that'll be fun. But I think this is more of the kid with all the toys wanting a flying, a hovering car. >> Well, part of the
[01:09:01] brilliance of Elon too is that car companies will spend something like 7% of revenue on marketing and he spends zero, but he takes that money and does cool things that are far more valuable than marketing. And so it actually is net profitable for the underlying companies to pursue these crazy, interesting, you know, science projects, some of which turn into real products, and it keeps the market cap high, but it also replaces the marketing budget brilliantly. And a lot of people should be copying that: what can I do to be inspirational and cool and use that to drive people to my product, to my company, to my team, to my mission? >> It also motivates the team, right? People want to come work at the coolest places. >> Helps recruiting. You get the best people. It all works. Elon has invented the formula and perfected it. I think Steve Jobs kind of invented it, but Elon has taken it to the next level, and everyone should be studying it. Whether you like Elon or not, study it. It clearly works. It's the right plan. All right, I love this. I guess it was a tweet. So
[01:10:01] Elon, how to prevent global warming. Quote: "A large solar-powered AI satellite constellation would be able to prevent global warming by making tiny adjustments in how much solar energy reaches the Earth." So, Salim, we've talked about this at XPRIZE for ages. I call this a solar sunshade: being able to have something out maybe at the Lagrange point that's able to reflect a quarter of 1% of the sunlight impinging on the Earth, basically a thermostat to titrate solar flux on the planet. The challenge with this is that it's a tragedy of the commons. There are going to be some countries, like Russia, that want global warming because it opens up the waterways north of their country; others where it's decimating their agriculture, like in Africa and parts of Europe; and no one can take action. Salim, what are your thoughts here? >> Yeah, there's an old story called the
[01:11:01] Pinatubo effect: when Mount Pinatubo in the Philippines erupted in the early '90s, the ash covered the whole atmosphere for a while and it dropped global temperature by 2°. And one of the thoughts I've been thinking over the years is that there are about seven majorly threatened areas with sea levels rising: the Washington DC river delta, the little country of Bangladesh, Florida, etc. And I thought they would actually just start launching up rockets without telling anybody to do something like this, because they have an existential threat. They're going to do it, and the cost is not that heavy. But this gives you a computable capability, and you can calibrate it much more effectively. This is reversible, which is the most important thing, right? >> It's geoscale engineering. Alex, you've probably thought of this. >> [clears throat] And by the way, just to point that out, people complain, "Oh my god, you can't go around geoengineering." And the response is, "We have been geoengineering by default by throwing up all this carbon. We have to figure out technological ways of doing it." COP23 and all that
[01:12:00] stuff. Nation-states will not solve this. >> Mhm. Good point. >> Yeah, this idea of a global weather grid is one of the biggest ideas I'd like to push forward that I haven't seen either the political will or the technical will to push forward. It doesn't even need to necessarily be a sunshade per se. It could be as simple as satellites, via microwave heating or some other mechanism, increasing the local cloud cover in some areas and reducing it in others. That could be enough to solve a broader problem than just global warming, which is weather control. Wouldn't it be wonderful if we could steer hurricanes in one direction versus another, or mitigate storms? I think this is tractable from a technical perspective, doubly so in the era of AI, when we can have planetary-scale weather models, including more recent strong ones out of DeepMind, that can solve this problem. It's more, I think, a political problem of just deciding that we want as a planet to do it. >> Well, it's an insurance problem, too. Oops, we steered the hurricane in
[01:13:01] the wrong direction. >> Right. It'll be a social problem, less a technical problem. >> Yes. >> It'll be a heyday for conspiracy theorists, [laughter] >> but it's inevitable. I mean, a topic like that, you have to have global consensus, and that's always been impossible. But it's inevitable that we need to get over that hurdle in the next few years, not 20 years, because there are many of these topics coming up concurrently. This is a really good one to force the agenda, but these things are all global. And so if there's no mechanism for global consensus, we're screwed as a world. So we have to get over that hurdle. >> Yeah. Well, we don't get to Kardashev Type I civilization status without a global weather control grid. It's as simple as that. >> All right, here's the next article. This is a fun one: Blue Origin lands New Glenn rocket booster for the first time. We see a video here of what we're used to seeing SpaceX do with Falcon 9, but this is a Blue Origin
[01:14:00] vehicle. So, Blue Origin had launched a mission to Mars called ESCAPADE. Congratulations to Jeff Bezos. And the booster touched down on the recovery ship called Jacklyn, which Jeff named after his mom. How's that? Hey, Mom, I got a gift for you: I'm naming the recovery ship after you. So this is a big deal for me. This is doubling our chances of getting humanity out into space and not being overly dependent on SpaceX, which is still, by the way, launching over 90% of America's spacecraft and probably something like 70% of the world's launches right now. Any thoughts on this one, guys? >> I just think it's great that we have a second capability aside from SpaceX. I think it's good for the world. >> Yeah. And Jeff's been spending about a billion dollars of his Amazon stock per year to fund this. It moved a lot slower. I used to bug him about why
[01:15:00] isn't he going faster, but hey, he's here now, which is great. And of course, Blue Origin's going to be using their own booster now to launch their competition to Starlink, which is already being deployed. Alex, anything you want to add here? >> Yeah, I think having multiple reusable railroads, if you will, to orbit is exactly the sort of space race we want to find ourselves in. And if we're going to colonize and develop the solar system, we're going to need multiple routes to orbit. >> Yeah, it was nice to see Elon congratulate Jeff on this. Of course, Starship puts all of these other launch vehicles to shame. You know, Elon very famously said once Starship is up and operating, he'll shut down the Falcon 9 line. And it will outcompete Blue Origin and Rocket Lab and everything else. It'll be the big sucking sound. >> Can I double down on that just for a second? >> Sure. >> I think it's so awesome that he tweeted
[01:16:00] the congrats because it just shows that they're all focused on the bigger picture. This is not about competition. This is about solving the problem. And I think that's just fantastic. >> Yeah, agreed. All right, talking about the opposite end of the spectrum, this just made me mad. But, you know, it's the conversation we had earlier coming out of FII: people are concerned. So labor unions in Boston are fighting Waymo. The Boston unions formed a coalition, Labor United Against Waymo. And the approach here is they're going to force Waymo to put a human safety driver in the car, right seat or left seat, God knows. We've seen this before, right? When France made Uber illegal, lots of places were fighting to keep these unions in place. Dave and Alex, you live in Boston. How do you feel about this? >> Well, it's a little disconcerting that our tech hubs, you know, the best
[01:17:01] and biggest tech hubs in the country are also the most dysfunctional governmentally in terms of things like this. This is utterly insane, right? And it's obvious to anyone involved in it, but the populist uprisings are going to be all over the place on all kinds of topics. We've seen the picketers out front of OpenAI, you know, and so this is going to happen all over the place. But if the governments of those regions don't get on top of it and put some kind of rational system together, then people are just going to leave. Waymo will go elsewhere, and it already is. >> And that's just going to be really bad for Silicon Valley and for Boston and for New York. They've got to figure it out. Alex? >> One of the things that keeps me up at night, as it were, is this sort of regressionist approach where people, unions, organizations that are worried about employment fight the advance of technology that will save lives, increase economic wealth, and just make
[01:18:00] quality of life radically better. And so I think this is almost a meta-technology that we need to develop: a way to maintain social cohesion while at the same time radically accelerating technologies. We haven't cracked that yet. Maybe social cohesion tech needs its own benchmark. And if we solve that, there's almost an optimal trajectory where we get our acceleration and our social cohesion at the same time. But we haven't cracked the social cohesion part of that. And I'd love to solve that. >> Sure. I mean, we can talk about that at the summit if it comes together. But here's the deal: people are worried for their jobs. Number one, that's it. It's survival. I need to feed my kids. I need to be able to afford my home. And this is going to take it away from me. How can you possibly do that? And until we uplevel our capability to provide people that safety net, whether it's universal basic income, though I'm much more interested in universal basic
[01:19:01] services, right? Anyway, Salim, you were going to say? >> Yeah, I want to grandstand just for a little bit. One of the things we noticed after the ExO book came out was you're going to see this massive Luddite-type reaction against new technology, right? Because people would much rather be comfortable than happy. And we actually focused on this. So, we cracked what I call the immune-system problem. We created a 10-week engagement with big companies that cracks this in big companies. We've done it a hundred times. We even have a nonprofit that does this in the public sector, where what you need to change is regulatory, and that type of construct is the immune system. It takes 16 weeks, but it works. We've done it a bunch of times. So, anybody facing this, just give us a call. We'll show you how to do it. We found a way of hacking culture. >> Where do they reach you? >> Just ping me at sele.com and we'll show you how to do it. We've open-sourced the methodology, because, you know, a few years ago when the book came out, it was clear that with all of this technology, if we don't solve the cultural resistance to it, it doesn't
[01:20:01] matter what the breakthroughs are. We're still going to be fighting this political problem. And the next level we were going after is how do you solve the immune-system problem in an institution like healthcare or journalism or education? They each have their unique immune systems. >> And I mean, we're going to see this across every industry as white-collar AI, you know, superintelligence comes in, humanoid robots come in. This is just a small peek at what's going to be coming. We've got to solve it now. All right, let's go into our final segment here, which for me is one of the more important and exciting ones, which is what's going on in the world of science. And I'm going to start this conversation with a video clip from Sam Altman on his thoughts about GPT-6 and the science leap that's coming. >> If GPT-3 was like the first moment where you saw a glimmer of something that felt like the spiritual Turing test getting passed, GPT-5 is the first moment where you see a glimmer of AI doing new science. It's
like very tiny things, but you know, here and there someone's posting like, oh, it figured this thing out, or oh, it came up with this new idea, or oh, it was a useful collaborator on this paper. And there is a chance that GPT-6 will be a GPT-3-to-4-like leap, the kind that happened for Turing-test-like stuff, but for science, where 5 has these tiny glimmers and 6 can really do it. >> All right. Alex, let's open up with you. >> Yeah, I've gone on record as saying I think we're going to see many if not most grand challenges in math, science, engineering, and medicine start to fall to AI over the next three years, maximum. So, I think this is very much on my anticipated trajectory. Science is going to get solved, all of its disciplines are going to get solved, and AI is going to do it. And I for one am super excited about finding myself in a near-term Star Trek-type future where it turns out that centuries of human capital, or the equivalent of
[01:22:01] centuries of human capital, just get solved overnight, at bulk, at scale, by AI. >> Yeah. Love it. I think the number of patents being filed, the amount of Nobel Prize-winning science being done, is going to skyrocket. You can actually see it: there's an interesting chart I saw where if you look at patent filings post-ChatGPT, there's just exponential growth immediately thereafter, where it's an aid to humans. But all of a sudden, if it's autonomously doing the science in sort of closed-loop cycles, it's amazing. Dave? >> Yeah, those are two things really worth tracking: the AI-generated patents and also the agent-to-agent transactions, part of which are licensing the patents. That whole agent-to-agent intellectual exchange world is starting to really take off, and you can just track it by transaction count and see the shape of the exponent. That'll be something we'll track really closely.
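[Editor's note: "track it by transaction count and see the shape of the exponent" amounts to fitting a line to the log of the counts. A minimal sketch follows; the monthly counts are invented purely for illustration, not real data.]

```python
import math

# Hypothetical monthly agent-to-agent transaction counts (illustrative only).
counts = [120, 260, 510, 1040, 2100, 4150]

# If growth is exponential, log(count) is linear in time; a least-squares
# line through (month, log(count)) recovers the growth rate.
n = len(counts)
xs = list(range(n))
ys = [math.log(c) for c in counts]
x_mean = sum(xs) / n
y_mean = sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)

monthly_growth = math.exp(slope)
print(round(monthly_growth, 2))  # roughly 2: these counts double monthly
```

The "shape of the exponent" is then just the fitted growth factor; plotting the counts on a log scale and checking for a straight line is the quickest eyeball version of the same test.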
[01:23:00] >> Salim? >> This justifies why I didn't pay attention during my physics degree. The AI will do it for me. [snorts and laughter] >> Oh my god. Well, look, if history is consistent, GPT-6 and Gemini 3 will be about the same. You know, they're just leapfrogging each other, and Gemini 3 from Google is within a week, we think. >> Yeah. >> So, we have to carve out probably a big chunk, maybe a full day, just studying its capabilities. So, we're ready. >> When GPT-3 comes, I'm sorry, when Gemini 3 comes out, expect us to go live with an analysis of it as soon thereafter as possible. We're seeing a lot, right? We just saw OpenAI's GPT-5.1 come out. Mira Murati's company has just gone from like a nine or ten billion valuation to a fifty billion valuation. There's a lot of froth right now. All right, let's move on to the next one. Zuckerberg and Chan bet AI can cure all diseases.
Zuckerberg believes AI could make cures come much sooner while empowering scientists to take risks. So, the Chan Zuckerberg Initiative is going to boost compute 10x by 2028, shifting all science work under their Biohub brand. So this is great. I love that. I mean, they've had an interest in medicine and biology for some time, but now they're doubling down and focusing. Alex, let's go to you first. >> Yeah. So you'll remember when CZI launched in 2016, the goal was to cure all disease, or most disease, by the end of the 21st century. And now the messaging has radically changed. Now the messaging is: we're going to have generative-AI-based virtual cells, and presumably virtual organs and virtual organisms built on top of those, enabling AIs to search intervention space for cures to all disease. So now I think the subtext is you don't have to wait until the end of the 21st century to cure all disease. This
[01:25:01] could happen in the next 5 years. Call it 2030. And I think all of the timelines, not just CZI's, but those of other nonprofits that are working on AI for broad-spectrum, sort of generalist cure-all-diseases work, are similar. You see similar messaging out of Anthropic as well: 2030, cure all disease with AI. >> Yeah, we saw that from Demis Hassabis: within a decade, cure all disease, right? So, we're seeing a huge amount of talent, compute, and capital going towards that goal, which is good news for everybody. >> But what I loved about this is that, to Alex's inner-loop point, instead of working on specific cures, they're focusing on just generating more compute and making it available to everybody. I think that's great. >> Mhm. >> Yeah. [clears throat] Sometimes, in my experience, it's easier to solve the more general problem than the more specific problem. It may perversely end up being the case that it's easier to just cure all diseases with AI than to cure diseases artisanally, one by one.
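[Editor's note: "searching intervention space" with a virtual cell can be made concrete with a toy: a stand-in model scores candidate perturbations, and the search just ranks them. Everything below, the scoring function, the gene names, the candidates, is invented for illustration; real virtual-cell models are learned from perturbation and omics data, not hand-written.]

```python
# Toy "virtual cell": maps an intervention (nudge a gene's activity up or
# down) to a predicted health score. Entirely invented for illustration.

def virtual_cell_health(intervention):
    gene, delta = intervention
    # Pretend the diseased cell over-expresses "GENE_A" and
    # under-expresses "GENE_B"; a perfectly healthy cell scores 1.0.
    baseline = {"GENE_A": +0.8, "GENE_B": -0.5, "GENE_C": 0.0}
    state = dict(baseline)
    state[gene] = state.get(gene, 0.0) + delta
    return 1.0 - sum(abs(v) for v in state.values())

# Brute-force search over a small intervention space.
candidates = [(g, d) for g in ("GENE_A", "GENE_B", "GENE_C")
              for d in (-0.8, -0.5, 0.0, +0.5, +0.8)]
best = max(candidates, key=virtual_cell_health)
print(best)  # ('GENE_A', -0.8): suppress the over-expressed gene
```

The interesting scaling story is that once the model is good enough, the search side is embarrassingly parallel: you can score millions of candidate interventions in silico before touching a wet lab.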
>> Yeah. I mean, that's the concept around age reversal. If you didn't have the disease when you were in your 20s and 30s, but it develops in your 40s or 50s, how do you turn back your epigenetic clock so that your cells are younger and thereby not expressing the disease, since you didn't express it in an earlier state of your biology? All right. Next one in this area, and this is a conversation about one of the first real, what people are calling, longevity therapies. So the US government has slashed the price on GLP-1 drugs, and we're finding that GLP-1 drugs are lowering the risk of repeat strokes. So, you know, one of the challenges has been that GLP-1 drugs have been expensive, and they are sort of a go-to for most physicians if someone has a particular, especially obesity-related or, you know, diet-
[01:27:00] related issues. Here we see TrumpRx.gov looking at bringing this down to 149 bucks per month, which would be pretty amazing. We also find that the GLP-1 drugs in particular are able to cut the incidence of strokes by as much as half over a 3-month period of time. Who wants to jump in? >> I'll comment on this one. I mean, it's so exciting. I guess the elephant in the room is the outstanding question in biology: why are GLP-1-class drugs so seemingly miraculous? Why are they able to treat so many different forms of biological dysfunction, not just the metabolic issues that they were originally intended for? Putting the question of biology and mechanism aside, I think when we talk about universal basic services and abundance of healthcare, this is the beginning of that. I think offering
for $150-odd per month, to US persons who need it, GLP-1-class drugs, that starts to look like universally basic, abundant healthspan drugs. And I think this is a major step in the right direction. >> I want to put out the warning again, just because I'm in this world, right? GLP-1 drugs are not a panacea. If you are obese and using these drugs, it's important to use them as a means to change the way you eat, to change your diet, change your habits. Because during this period of time, while you're losing weight, you're also losing muscle. And you need to be exercising throughout this process. If you stop taking the GLP-1, what happens is you gain the fat back, but you don't gain the muscle back. And that's a problem. Sarcopenia is a true issue as we're getting older. You know, your muscle is your longevity organ. Super
important to have that realization. Make sure you keep exercising, you know, muscle-building, while you're using a GLP-1. >> I just love all the side-effect benefits we're seeing without even realizing it. I think that's so great. >> Edison launches Kosmos, the AI scientist. So, this seemed like a really big deal to me. Alex, do you want to walk us through it? >> Sure. So, this is another scaffolding-based approach to agentic science. This came out of Edison, as mentioned, and I think this is almost a preview of the age we're about to find ourselves in; maybe we're a few months in at this point, of bulk discovery. It'll look a little bit like, if folks remember, AlphaFold 3, where essentially almost overnight a large chunk of structural biology was more or less solved. It's going to look a little bit like that, except much, much broader, where with this particular agentic AI researcher,
[01:30:00] there were discoveries across a number of different subfields of biology, not just structural biology. So we'll see, as was published in this paper, discoveries relating to factors potentially helpful for Alzheimer's, among others. But the core technical advance here, the one that's claimed, is effective increases in context length. That's the key. The frontier models right now have context lengths usually in the millions of tokens. But if you wanted to develop the world's strongest AI scientist, naively you'd want a model that has a context length in maybe the trillions of tokens, so that you could in principle feed it the entire internet and every paper ever published and then just [clears throat] ask it: what's the solution? What's the solution to Alzheimer's? So the approach that Edison adopted here was a little bit more practical than some sort of algorithmic advance that expanded the
[01:31:00] context window to trillions of tokens from millions; it focused more on knowledge graphs and other scaffolding techniques to achieve effective context lengths that are much larger. But the end result is still essentially the same. You put as much information, as much scientific literature, into the context window as you practically can, and then you crank it and ask for discoveries, and discoveries and innovations pop out. And I think one can imagine a near-term future where we can just scale our way, scaling-law style, to major discoveries across all of the important biological subfields. >> Yeah. Here's one of the metrics they threw out: completes four to six months of expert human research in 12 hours, and can read 1,500 papers and run 42,000 lines of code per experiment. >> Yeah, it's almost brute-force research. >> And as we were saying before, what happens when you take all the millions of research papers that have been written in the past, where people missed
[01:32:01] findings, and now run them through and find incredible things? >> Yeah. Not just the papers, but the raw test results that are in digital form. Just the incredible amount of information this can assimilate. Because when I look at my biology friends, talking to them last night, actually, they're all like, well, you know, these things always take a lot longer than you think. Like, how do you get so cynical at a young age? This is a completely new approach. It's brand-new greenfield territory. And if I look at what they actually have been doing for the last 3 years, they try to tease apart a single chemical reaction or a single test, and then they run it through MATLAB or Mathematica to try to tease it apart, and then they draw these plots that say, well, you know, we have statistical p-tests here that just barely reach significance. And it's like, what a waste of time, man. All these things are interacting. And if you take the neural-net approach and you just bombard it with raw information, it's really good at these multiple things going on concurrently. Try to find the conclusion without having to tease apart every single
[01:33:00] element. It's a brand-new way to do things, and it could do anything. It could be mind-blowingly capable and quick. You don't know, because it's a new thing in the world. So, you know, put all that cynicism behind you and think about the rate of improvement that might be possible. Just embrace it. >> Yeah. [snorts] All right. Our final article here, one for a fun conversation: genetically engineered babies are banned, but tech titans are making one anyway. All right, so, you know, this is worth the conversation. There are a few companies being funded now that are building CRISPR capability for embryo editing. The Wall Street Journal reported on one out of San Francisco called Preventive that's backed by Sam. I mean, Sam is backing an incredible number of companies, right? A CRISPR company, a brain-computer interface company, amongst probably dozens of others. There's another company called Manhattan
[01:34:01] Genomics that was just covered in Wired. And so, in my mind, this is a regime change, right? We're going from selection to alteration. IVF clinics already allow you to screen your embryos, right? You can fertilize a number of embryos and then do single-cell sequencing and find out which of those embryos are safest to implant. But you can't edit your embryo. That is verboten under FDA rules. The FDA is blocking any of that. They won't even support any research in that area, let alone allow it to be done commercially. So these companies are beginning to look outside the US: where can they go and do it? And there's some conversation that this is happening, or will be happening, in the UAE. What do you guys think? >> I remember Raymond McCauley getting up and saying, "Look, the human genome is essentially software, and we have, you know, 50 trillion cells in the human
[01:35:01] body. Essentially, the human being is now a software engineering problem." And when you can edit the embryo, you're basically starting it from scratch. It seems inevitable to me. >> And again we get the normal thing. The question is, what do you want to design for? That's the big question. >> Yeah. I mean, we give our babies the best we can, right? You start genetic engineering when you pick your spouse, right? Are they successful? Are they intelligent? Do they look good? That's the first step you take. >> Can I just go back? This is a really important point. We used to talk about the shift from film photography to digital photography and all the implications of that. Essentially, we've gone from breeding and genetic evolution to a digital model, which just accelerates the whole thing. >> Yeah. And then when the baby's born, just one other thing, you know, you give it the best healthcare you can, the best education you can, the best clothing you can. You're giving your child the best you possibly can. So the question is, why
not start with the best genetic stack? And, you know, the fear is the whole eugenics conversation. Alex, over to you. >> A couple of comments. The first: one of the elephants in the room, the movie Gattaca, arguably one of the best cinematic depictions of germline editing of human babies, or at least germline selection, I should say. >> I have to watch that one again. >> Yeah, it's an amazing movie. Many view it as a dystopian future. I think if you look at it the right way, it's arguably a more utopian future, in the sense that >> I'm with you. >> Yeah, we get space colonization. There's a SpaceX-type company that is named Gattaca in the movie, and also we get healthy babies. But the second point is more historical: the Asilomar guidelines that were arguably the inception of many of these bans, soft and hard, against germline editing, those were in 1975. There's a historic argument
[01:37:00] that Asilomar was actually triggered by the Watergate scandal. Uh, yeah, there was a concern at the time. Some historians argue that the Asilomar guidelines were originally proposed, or at least motivated in part, because Watergate was fresh in everyone’s mind, and there was a concern by scientists that if there was recombinant DNA experimentation that was not very well advertised, or not forthright according to some sort of public guidelines, that something bad would happen to the scientific community in whatever form. That’s one historic argument that Watergate helped to precipitate the 1975 guidelines. >> I remember I was in grad school. I was doing my joint MIT med school degree at the Whitehead Institute, uh, working on recombinant DNA, right? The first restriction enzymes had come out that allowed you to edit, you know, DNA in a somewhat precise fashion. Nothing compared to what we have now
[01:38:00] with CRISPR. And the headlines of the magazines, you know, the cover stories, were designer babies, and there was so much fear, and that was, God knows, 40 years ago. Um >> We’re 50 years on from Asilomar. >> Yeah. >> 50 years this year on from Asilomar. And I actually had to go back after I saw the story and check to see, is there even, at least in the US, a single federal statute that bans germline editing? I couldn’t find one, which is a little bit surprising. There’s a patchwork of federal and state laws and regs that certainly deter germline editing, but not a single one that actually bans it. So I wouldn’t be surprised if in the near-term future we find a generational conversation about whether germline editing should in fact be allowed. >> And we have to remember, right, in 2018, there was a Chinese scientist, uh, He Jiankui, uh, who did this kind of CRISPR editing. Uh, he was trying to target the CCR5 cell surface receptor, which would
[01:39:03] prevent a child from getting HIV, and the guy was just decimated in China, condemned in the press by the world, jailed. >> Yeah, they arrested him. >> So that put a kibosh on this idea. >> Seriously, you know. >> Well, these are really complicated topics, and they all need thought leaders. Like, just in this podcast alone, between fusion and AI breakthroughs and driverless cars in Boston and now this, they all need thought leaders. The number of people that need to rise to the occasion and say, here’s what we should do, here’s an idea, you know, ethical, trustworthy people who know what they’re talking about, the need for that is backlogged so deep now, and this is just one of those. >> A shout-out here to Hank Greely, who’s been working on bioethics for a very long time. Um, you know, the long-term effects of this are really, really consequential. >> One of my biggest concerns is that Hollywood is decimating our future,
[01:40:01] >> right? I mean, this is my pet rant. You know, every movie out there is dystopian genetics, dystopian killer robots and AI systems. I mean, no wonder people fear the future. If in fact the only futures they see in TV series and movies are ones they don’t want for themselves and their kids, how could you not see this and immediately go to this negative vision of the future, uh, which is pervasive in society? We need to retool that. We need to reset that. We need more Star Trek in our lives. >> Or the following: you’re giving a call to action to yourself to start a new Hollywood studio, this time powered by AI, that paints a much more optimistic future. >> Be the change agent. >> There is, uh, well, I can’t say much about it yet, but there is a project in the works that >> The fundamental problem is human nature is so fear-based, per my earlier comment today, and so
[01:41:00] you’re fighting against that. The way to solve for this is to give those embryos psychedelics and solve it right. [laughter] That’s the way to solve for this. >> No, we’re going to reduce the size of their amygdalas. >> I’m serious. You edit it out. You don’t need an amygdala in today’s world. [laughter] >> You know what’s funny about this whole storyline is that the birth rate in Korea now is 0.7 children per couple. >> Crazy. >> Like, one topic is editing your baby. The other one is, well, no one’s having any babies at all. So doesn’t that seem like a more urgent issue? >> Harder problem. Higher-order-bit problem. >> Oh my god. All right. So we’re going to wrap this episode. I’ll remind folks, if you’re interested in the idea of the Moonshot Mates pulling together a couple-day amazing event in the fall of next year, uh, send us an email, moonshots@diamandis.com. Let us know you’re interested. Uh, and we’re going to try and get to a thousand people interested. If we can get there,
[01:42:01] uh, then we’ll pull the trigger and we’ll make that happen. Um, Salim, you have something you wanted to mention? >> We have our next 10X Shift workshop happening on November 19th. Um, we’ll be talking about immune systems in organizations and giving people direction on how to build an ExO. So come join for that. It’s a hundred bucks. People love it. Seats are limited, so come and join. >> Amazing. Uh, Dave and Alex, any closing thoughts today before I play our outro music from Adam 822, which is amazing? I love the fact that our subscribers are sending us their music. It’s so good, it’s blowing my mind. >> Yeah, so I have a closing thought, which is, you know, we’re closing out our best venture fund year by far. I mean, just crazy what’s happened this year. You know, the portfolio companies gained about $12 billion of value within the year. Most of those ideas come from Alex. So I wanted to throw a shout-out
[01:43:01] to Alex for the vision. But, you know, one of the themes in this podcast has been that, with all these topics, if you look at what Elon’s been able to do, like, should we have a satellite at the Lagrange point, you know, shadowing the Earth by 0.01% or something like that? If your track record of being right is perfect or very, very good, then you actually get to say, “Here’s the answer, guys,” and people will flock to it just because your track record is right. I think Alex is right on that cusp now. And that’s why he’s a little cautious on the podcast sometimes, too, because when he doesn’t know, he actually says he doesn’t know. Unlike me, I just say something anyway. [laughter] But Alex, you know, his vision for what will and won’t work is just becoming so honed and so beautiful. So I really wanted to thank you for the gangbuster year we’ve had. >> Yeah, thanks, Alex. >> Yeah. Um, in addition to thank you, Dave, for the very kind comments, I’ll just say I spend substantially all of my time thinking about how we solve the hardest problems on Earth with AI. So,
[01:44:02] if folks listening are interested in connecting to talk about problems they have, or the hardest problems on Earth they want to see solved with AI, definitely feel free to reach out. >> Amazing. All right, guys. This was an >> unbelievable episode. I’m going to have to listen to this at least three times. >> I know, me too. And listen, we read your comments. So, uh, first off, please subscribe, tell your friends. One of the things I was so heartened by when I was in Mexico City, when I was in Italy, when I was in Spain, when Salim was in Brazil, were all of our fans there telling us that they share the episode. So thank you for listening. We do love the comments. Uh, if you have questions, I really want to do some AMA episodes with our subscribers. So drop them in the comments, and our team will aggregate them. And, uh, this is a piece of music by Adam 822 labeled “From the Moonshots to Math, the WTF Just Happened Crew.” All
[01:45:03] right, let’s enjoy. >> [music] Across the time they’re riding high, the Moonshots crew reaching for the sky. Peter’s yelling “Amazing!” every single show; we all laugh ’cause, man, we know. Salim’s still shouting “WTF is AGI,” trying to pin down that reason why. Dave’s bouncing like a kid with toys, filling the mic with starry noise. It’s the Moonshots crew changing the game; XPRIZE dreamers know their name. From AI rockets and math unsolved, they’re building futures deeply involved. Yeah, Peter shouts “Amazing!”, that’s our cue. That’s the WTF Just Happened crew. [music]
[01:46:05] >> And that is amazing. All right. Thank you, Adam. Uh, if you’ve got a piece of outro music, send it over to us, and if it’s as amazing as that, we’ll go ahead and play it. Gentlemen, um, happy Saturday, and, uh, wishing you guys >> What’s left of it, I mean, Jesus. >> Yeah. Listen, I was up at 3:30 this morning prepping for this episode. Just so much to cover. Um, and we still have another episode we’ll record. We essentially need to make this a full-time thing, because the world is happening so fast. I mean, just trying to process this episode is going to take hours and hours and [laughter] hours. >> Uh, all right, guys, love my time with you, as with each other. Be well. >> Every week my team and I study the top 10 technology metatrends that will transform industries over the decade ahead. I cover trends ranging from humanoid robotics, AGI, and quantum
[01:47:01] computing to transport, energy, longevity, and more. There’s no fluff, only the most important stuff that matters, that impacts our lives, our companies, and our careers. If you want me to share these metatrends with you, I write a newsletter twice a week, sending it out as a short two-minute read via email. And if you want to discover the most important metatrends 10 years before anyone else, this report’s for you. Readers include founders and CEOs from the world’s most disruptive companies and entrepreneurs building the world’s most disruptive tech. It’s not for you if you don’t want to be informed about what’s coming, why it matters, and how you can benefit from it. To subscribe for free, go to diamandis.com/metatrends to gain access to the trends 10 years before anyone else. All right, now back to this episode. [music]