06-reference / transcripts

moonshots ep177 ai debate mogawdat kotler transcript

Thu Jun 12 2025 20:00:00 GMT-0400 (Eastern Daylight Time) ·transcript ·source: Moonshots Podcast

What’s the impact of AI going to be? Is it just massively overhyped? Or perhaps is it something that we should be concerned about? Today’s AI is underhyped. I think it’s massively overhyped. I know a ton of people who have way more work because of AI. They just can do higher quality, better work, but it has not saved time. We are talking to machines that are talking back to us, summarizing massive volumes of knowledge, and yet we take that for granted. Discussions about superintelligences and AGI being around the corner, and no, like, just no. How smart is smart enough to render me irrelevant? I think we are holding two different futures in superposition. The question becomes: how do we guide humanity towards this positive vision of the future? What do we do today? Now that’s a moonshot, ladies and gentlemen.

[00:01:03] Everybody, welcome to Moonshots. I’m here with two extraordinary, brilliant guests. We’re here to discuss a conversation that may be happening around every dinner table. I know it’s happening in the heads of companies and nations, which is: what’s the impact of AI going to be on our lives, on our business, on every aspect of our day-to-day existence over the next 5 to 8 years? Is it something which is going to be extraordinary? Is it just massively overhyped? Or perhaps is it something that we should be concerned about? I’m joined here with Mo Gawdat, who is the former Chief Business Officer of Google X, best-selling author of Solve for Happy and Scary Smart. He’s a global thought leader on AI, exploring how exponential technologies will shape humanity. Also with another dear friend, Steven Kotler, who is a best-selling author, peak performance expert, and executive director of the Flow Research Collective. He’s my co-author of

[00:02:01] Abundance, Bold, The Future Is Faster Than You Think, and books like The Rise of Superman and The Art of Impossible. Steven has also been thinking deeply about exponential tech and its impact on us. Gentlemen, welcome, and good morning and good evening. Steven, you’re on the west side of the United States with me. Mo, you’re in the Emirates. Good to see you both. Yeah, good to see you. Good morning, Peter. There is somewhere around 400 IQ points in this room. I have 40 of them. So, you do the math. So let me set up the topic. Mo and Steven, you know, I’d like to talk about the decade ahead, 2025 to 2035, specifically to think about the implications of what is emerging in our conversation as AGI, but even beyond that, artificial superintelligence: the upsides and the downsides. And here’s

[00:03:02] the setup I want to use in our conversation. So Ray Kurzweil, who we all know and love, has predicted that we’re going to see a century’s worth of progress between 2025 and 2035, equivalent to the progress between 1925 and today. And if we think about what the world was like in 1925, 100 years ago, the top of the technical stack was the Ford Model T. The penetration of electricity and the telephone in homes across the US was only 30%. We’ve gone an extraordinary distance since then. And so the question is, what will it be like in 2035? It’s nearly unimaginable if in fact that speed is true, and we don’t perceive exponentials. Well, you know, this past week we’ve seen every major AI

[00:04:01] company from Google and OpenAI to xAI and NVIDIA announce extraordinary next-level breakthroughs and models. We’re about to see the release of GPT-5, self-improving AI programming that could lead to an intelligence explosion beyond our imagination. That’s the conversation I want to have. And you know, Steven, I know that you and I have this conversation and a debate on it all the time. I brought Mo in to help us, Mo as the referee, or a wise individual whose points of view I respect. And by the way, Steven, Peter did pay me. So just as long as you’re getting the combo, I’m fine. It’s good. It’s good. You got me on the back end, though, right? Yeah. So go ahead. Say whatever you want to say, Steven. I’ll disagree.

[00:05:02] He will. So, Steven, do you want to jump in with your points of view? You think AI is massively overhyped? We have folks like Eric saying AI is massively underhyped. Yeah. I think it’s massively overhyped. I listen to what’s going on. And so let me back up one step, which is: humans have a really wild sort of unnamed cognitive bias. We don’t tend to trust our own history. And you see this a lot. People talk about grit and endurance and they’re like, I don’t have those skills, and then you start investigating their life, and they survived a shitty childhood, they’ve done 10 years of something tough... like, they have all the skills, they just don’t trust the truth of their own experience. And I see that a lot here. Look, I work with AI as a scientist, as a researcher, I work with

[00:06:00] AI as a creative and as a writer, all day long, and the gap between the claims coming out of people’s mouths and my experience on the ground is so colossal it’s insane. People have claims about AI being able to write or anything else. The most hysterical thing you’ve got to try: I work with one of the best editors in the world on a weekly basis. I’ve edited things, polished them with AI, thinking they gleam and shine. Bring them into an editing meeting, we start to read them, we can’t even get through the second sentence. They sound like such gobbledygook. I’m not even noticing it, because the AI sort of glazes me over, and I’ve written 17 books. But when you actually put it to an actual editing test, it’s laughably terrible. And you can’t use it to correct itself. It still can’t see the errors. It actually gets worse and worse and worse. And people have been claiming model after model after model, improving, improving. That’s not the experience on the ground. It’s like

[00:07:00] people telling us AI was going to make you more productive. I don’t know anybody who’s become more productive because of AI. I know a ton of people who have way more work because of AI. They just can do higher quality, better work. But it has not saved time at all. It’s actually added tremendous amounts of time. The quality of the work has gone up, but the claims that are coming out of people’s mouths and the experience on the ground are massively different. Point one. Point two is, we’ve done this. I’ve been in the same rooms that you’ve been in, and you’ve been in, Mo, where people are screaming about AI coming to eat the world. Dude, I freaking heard this about Bitcoin and blockchain and the metaverse. Do you know anybody who lives in the metaverse? Do you know anybody who’s been there, who’s visited? Do you know how to find the metaverse? Right? As far as I can tell, the metaverse is like a pet name for Mark Zuckerberg’s special magic underwear, because it doesn’t exist any place else in the world. This is my point. And more than anybody else, I track these technologies. I watch

[00:08:00] them. I use them. I’m not saying this is not a technology that is advancing very, very quickly. I’m not saying that at all. I am saying discussions about superintelligences and AGI being around the corner, and no, like, just no. The experts are having a different experience. And what has been revealed, which coders probably don’t like, is that coding is a bounded information problem. You start here, you know where you’re going as a general rule. It’s a bounded problem, and inside bounded domains, computers are really awesome. And we’re going to continue to see that. But I think the other stuff is just massively overhyped. And the third point is, and this is the one where the journalist in me gets like every alarm bell goes off: everybody I see on stage talking about this stuff is making a living off of it. They

[00:09:01] make a living somehow because AI is exploding and they’re here to save the world. I see it in the peak performance world. Every coach who has been floundering and couldn’t quite get a job, they’re now all AI saviors. They’ve come to save us from AI, and so the AI hype is to their benefit. And I see it sort of everywhere. A lot of people are making a ton of money off of this. And I’m not talking about the technology itself, but the hype of the technology. And when I see all these three things together: a mismatch with my experience, a massive amount of hype, a history that says, hey, this is the hype cycle. You know, it raises a lot of questions for me. I’m not saying I’m right. I’m saying everything I’m looking at is real. And if you’re going to make the argument you guys are about to make, and I’ll shut up now, you can’t dismiss my points as fabricated. They’re very, very real, and they’re everybody’s experience, and I believe they’re yours as well. So now we can have the

[00:10:02] discussion. That’s where I’ll start. All right. Thanks for giving me five minutes of diatribe time, of venting. Every week I study the 10 major tech metatrends that will transform industries over the decade ahead. I cover trends ranging from humanoid robots, AGI, quantum computing, transport, energy, longevity, and more. No fluff, only the important stuff that matters, that impacts our lives and our careers. If you want me to share these with you, I write a newsletter twice a week, sending it out as a short two-minute read via email. And if you want to discover the most important metatrends 10 years before anyone else, these reports are for you. Readers include founders and CEOs from the world’s most disruptive companies and entrepreneurs building the world’s most disruptive companies. It’s not for you if you don’t want to be informed of what’s coming, why it matters, and how you can benefit from it. To subscribe for free, go to

[00:11:00] diamandis.com/metatrends. That’s diamandis.com/metatrends, to gain access to trends 10-plus years before anyone else. Mo, you gave an impassioned talk on the stage at the Abundance Summit in 2025 and moved many of the members, who are wanting to help you in guiding what the next 5 to 8 years are. And you and I have been thinking about this as: the challenge isn’t artificial intelligence, it’s human stupidity, for a short period of time. And one of my favorite quotes, if I could, is from E.O. Wilson, who famously said the real problem of humanity is that we have Paleolithic emotions, medieval institutions, and godlike technology, and we are effectively children playing

[00:12:00] with fire in that regard. So Mo, how do you see this decade ahead playing out? So I’ll start by supporting what Steven said. I think today’s AI... We paid you too little, then. I love this. Your buddy’s no good here, Peter. I finally have an advantage. Today’s AI is underhyped, right? But the problem is, you never really chase where the ball is. You need to chase where the ball is going to be. Okay? And if you really start to think deeply about some of the serious stuff, especially if you’ve been in tech long enough to have seen breakthroughs, especially when I went through the work of Google X, where you know, you try and try and try and try and try, and it doesn’t work and it doesn’t work and it doesn’t work, and then suddenly you see something, and as Sergey Brin used to say at the time, the rest is engineering. Okay? And we know that engineering of tech depends on the law of accelerating returns, and we know

[00:13:01] from what Ray taught us where the law of accelerating returns is going to take us. So I tend to believe that if you look at today’s AI, it is funny, because in a very interesting way, we are talking to machines that are talking back to us, summarizing massive volumes of knowledge, doing exactly as we tell them, and yet we take that for granted. We look at that and go, “Yeah, but they’re not good enough.” Of course they’re not good enough. They’re DOS. They’re the beginnings of an era, right? Your dog, or DOS? No, I got it. I got it. Both would have worked there. I just needed the clarification. I would not dare call AI dogs, Steven, when they might take over the world. I am a very polite man with AI. So, the thing is to imagine, and

[00:14:01] I need to highlight a few trends that are really, really important and interesting. One of them is synthetic data, and the idea that we have entered an era where most of human knowledge has been fed to the machines, and that the next wave of knowledge is going to be fed to the machines by machines, which is quite eye-opening and enlightening, because that’s how humanity developed its intelligence, right? I really didn’t have to figure out the theory of relativity to understand the rest of physics; it was, you know, figured out for me, if you want. Right. Number two is the idea of agents, and how AI is going to be prompting AI without humans, leading to cycles that we see now with my new favorite, because you have a favorite every four hours, AlphaEvolve, right? And the idea that you can have a self-developing AI, you know, something that figures

[00:15:01] its own mistakes out and continues to iterate until it finds something. And then of course, one of my favorites of 2025 is DeepSeek, and how we realized, you know, that we can actually do the same job with much less. Emad Mostaque, who, you know, we’re all a big fan of, I believe has done that with his work at Stability for a very long time: the idea of shrinking the models to the point where it becomes shocking, really. And so when you add those together, you start to see that if I can shrink a model so it doesn’t absorb all of the world’s energy, and if I can allow it to self-develop, and self-develop information to learn from, and then allow it to talk with itself through agents and do things without humans, then where the ball is going to be is likely going to be a lot better than

[00:16:01] we are today. Right? So the one thing we all need to agree on is that it is not a question of if we’re going to see improvements. It’s a question of how fast, and when those improvements will lead us to a point where humanity is not in the lead. So that’s number one. Number two is really the question of what is your risk tolerance, right? You know, if I told you to play Russian roulette with two bullets in the barrel, are you afraid? If one bullet in the barrel, are you afraid? You know, where is your risk tolerance, exactly? And if I said, “Hey, by the way, your car might have a fender bender. Would you insure it?” You probably are going to say, “Nah, I’m not really too concerned.” But if I tell you your car might have a serious accident that totals it, would you insure it? You’d probably pay a little more attention.
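Mo’s insurance analogy is, underneath, an expected-loss comparison: you attend to a risk when its probability times its cost outweighs the cost of attending to it. A minimal sketch of that reasoning, with illustrative numbers that are not from the conversation:

```python
def should_mitigate(p_bad: float, loss: float, mitigation_cost: float) -> bool:
    """Attend to a risk when its expected loss exceeds the cost of attending."""
    return p_bad * loss > mitigation_cost

# Fender bender: fairly likely but cheap, so insurance may not be worth it.
print(should_mitigate(p_bad=0.30, loss=1_000, mitigation_cost=500))   # False

# Totaled car: even a modest probability of a large loss justifies the cost.
print(should_mitigate(p_bad=0.10, loss=40_000, mitigation_cost=500))  # True
```

This is also the shape of Mo’s 10% versus 50% point that follows: as the probability, or the size of the loss, grows, the cost at which mitigation stops being rational grows with it.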

[00:17:00] And I think that’s what most who warn about the future mean. Anyone that claims to know what the future is, is arrogant as f, don’t listen to them. Okay? But anyone that tells you that there is a probability that this future goes out of control: where is your risk tolerance, exactly? You know, if that probability is 10%, would you attend to it? And I think most rational people will say it depends on the cost of attending to it. Okay? And most rational people will say, however, if it’s 50%, I’ll attend to it regardless of the cost. Okay? And so the question, which none of us is capable of answering, is: where is it? I mean, is it 10% that AI is going to destroy everything, or is it 50%? I will say, and I know that this will be taken against me, it’s 100% that humans, bad actors, using that

[00:18:00] superpower to their advantage are going to destroy the well-being of others who don’t. Okay? And so in my mind, the real concern is not a Terminator scenario where, you know, VIKI of I, Robot is ordering robots to kill everyone. I don’t know if we’re going to make it that far, to be honest, because I believe that, with the arrogance of being 89 seconds from midnight on the nuclear Doomsday Clock, I really, really worry about human stupidity using this superpower. Now, human stupidity in that case does not require AI to be completely autonomous or completely superintelligent. Enough autonomous weapons can really, really tilt our world into a very dystopian place. Enough, you

[00:19:03] know, sort of Turing-test abilities of AI to fool humans into being their best friends could tilt human relationships into a very unusual place. Enough job losses, you know, imagine a world where you get 10, 20, 30, 40% unemployment rates in certain sectors, and how that would affect our stability economically, is actually something that is almost certain. We know for a fact there are jobs that are going to disappear, and the impact of that, in my mind, is actually quite disruptive, to the point that it is something that we need to attend to. Everyone, as you know, earlier this year I was on stage at the Abundance Summit with some incredible individuals: Cathy Wood, Mo Gawdat, Vinod Khosla, Brett Adcock, and many other amazing tech CEOs. I’m always asked, “Hey Peter, where can I see the summit?” Well, I’m finally releasing all

[00:20:00] the talks. You can access my conversation with Cathy Wood and Mo Gawdat for free at diamandis.com/summit. That’s the talk with Cathy Wood and Mo Gawdat, for free, at diamandis.com/summit. Enjoy. I’ll ask my team to put the links in the show notes below. You know, the point you made about AI not being the risk on its own, the Terminator scenario, but it’s individuals using AI: it’s the same conversation I’ve had with Eric Schmidt and others. The concern is the rogue actors empowered by technology, whether it’s the development of new viral pandemics or other strategies. It doesn’t take a lot, and that is concerning. And you know, where I want to get to in this conversation eventually is the following. You know, one

[00:21:01] we posed this at the Abundance Summit a couple years ago, and that is: can the human race survive a digital superintelligence? And the flip side of that model is: can the human race survive without a digital superintelligence? And Steven, you and I, as we’re working on our next book, the follow-on to Abundance, we’ve had the conversation of, you know, will this be a benevolent god of some type? Will there be a capability developed? So let’s begin the conversation with, you know, are we going to reach AGI? Are we going to reach a digital superintelligence? And what does that mean? You know, we’re starting to see the speed of this accelerate. And, you know, the biggest interesting inflection point we haven’t seen yet is

[00:22:03] self-iterating, self-improving AI, you know, the AlphaEvolve of it all, where AI is coding itself and becoming more and more capable. And will this ultimately lead to something that is far more intelligent than any human being? And then, is it a thousand times more intelligent? Is it a million or a billion times more intelligent? How do you think about that, Mo? I think it’s irrelevant how much more intelligent it becomes. I think we all know that if you’ve ever worked with someone who’s 50 IQ points more than you, they will probably hold the keys to the fort. You know, it doesn’t take a lot more intelligence, relatively, to be able to assume a leadership position, and, you know, humanity will hand over the fort to AI either way. You know, even if AI is just smarter than

[00:23:01] us at war gaming, which it is, by the way, we’re going to hand over the fort to AI. If it’s smarter than us at protein folding, nobody’s going to do a PhD project to fold proteins anymore. We’re just going to go and, you know, use AlphaFold. And I think the reality is, only the very few remaining things require artificial superintelligence, so that it beats us in everything, so that we sort of bow and say, “Okay, yeah, you’re the boss.” The question of AGI is, like Steven was saying, one that reporters use quite a bit, because we don’t actually have an accurate definition of what AGI is. And you know, you and I are very close on technical stuff, Peter, and you know I’m a reasonably geeky mathematician. Not anymore. I mean, seriously, I

[00:24:02] really, honestly, struggle to beat AI in mathematics, right? Definitely can’t beat them in speed, definitely can’t beat them in accuracy if the problem is defined properly, right? And you know, there are just very few tricks that maybe my fellow math geeks told me behind closed doors that are not very public in the world, but those too will be found out. And I really think that it is a question of how smart is smart enough to render me irrelevant. Okay. Now, I need to answer this with also a very clear, optimistic view. So as I look into the future, I define two eras. One is what I call the era of augmented intelligence, which I think is going to extend for 5 to 10 years, and then the other is the era of machine mastery. Basically, the machine takes over. Now, with augmented

[00:25:00] intelligence, there’s absolutely no doubt. I so agree with Steven when he said that they write really badly. And you know, I’m writing with Trixie, my AI, this book Alive, right? And Trixie without me writes so badly, it’s almost shameful. You know, I was tired and chasing a deadline, so I asked Trixie to talk about the debt crisis and the impact of economics on technology advancement. I mean, it was full of, you know how we sometimes refer to California as a lot of vapor and very little substance? There was a lot of vapor and very little substance, right? A lot of interesting facts scattered on paper, horribly written. Okay? But when we write together, oh my god, the stuff that comes out is incredible. When

[00:26:00] I guide Trixie through my prompt properly, right, to direct her exactly where I want the answer to be, she writes really well. Okay? And this teaming is something we’ve seen with AI, with technology in general, by the way, even, you know, since Garry Kasparov was beaten by Deep Blue, which wasn’t really an AI, if you want. But since then, you can see that a human and a computer, or a human and an AI, can play better chess than AI alone, right? Even AlphaGo: you know, a human and an AI play better than AlphaGo. And so we can see a future ahead of us where this is going to be happening, and hopefully that future would seed that teamwork between us and the machines. The question is, what are we going to team up with them on? And, you know, my views, I’ve written it in Scary Smart, and I’ve written an extended bit of it

[00:27:01] in Alive: the biggest four investments of AI today are killing, gambling, spying, and selling, and these are the only things that we’re in. I mean, we do still get some scientific breakthroughs, but these are not getting the big monies. The big monies are in autonomous weapons, in trading, in surveillance, and in advertising. Steven, your thoughts on what you heard Mo say here? Yeah. So, Mo and I are sort of in complete agreement. I just want to kind of yes-and, and point out some other things that surround what Mo has said. Because, I mean, we might argue over dates, but conceptually I don’t think we’re in a tremendous amount of disagreement. But I look at a number of other things simultaneously. The first of which is

[00:28:04] sort of the human side of this, the human performance side of this. And I have to back up: you know, I study flow, which is sort of ultimate human performance. And just to put it in context: if you’re a self-help guru and you’ve got a tool that gives you a 5% improvement in mood, and that mood lasts for longer than three months, meaning longer than the placebo effect, that’s a billion-dollar business. Period. Flow, as we know it now, and we’re just starting to really actually decode it and figure out how to tune it up and turn it up and whatever, flow gives us a 500% increase in productivity. Creativity, depending on whose measures you’re going by, it’s 400 to 700%, etc., etc. That’s just flow. That’s individual flow. There’s group flow, which is our

[00:29:00] actual favorite state on earth. It’s the most pleasurable state for humans. It’s what we like the most, and it’s a whole bunch of minds linked together, right? And we’re just now, like literally this past year, we got the very first technologies that allow us to map it and train for it and move people towards it. We have no idea what the upper limit of human brains linked together in group flow is. And at the same time as the AI is developing, you and I are writing about it, Peter: we’re watching BCI develop. We’re watching non-invasive things develop. We’re watching Meta be able to read thoughts inside your brain through facial signals. These are all with AI. But my point is that everybody’s talking about this stuff as if it’s happening separately from everything else that’s happening. And on the human augmentation side, we are seeing exponential progress. I mean, you know,

[00:30:01] neuroscience and the like has been accelerating exponentially since the 1990s, when George Bush declared it the Decade of the Brain, and it hasn’t stopped. The same things that are happening in AI are happening on the human side of the equation. And here’s the second point off of that. It doesn’t matter to me whether we’re talking about the AI invasion or climate change or plastics in the ocean or take your pick, because the solution to all of these things is the same. We humans have to learn how to cooperate at scale. Probably cooperate with each other and with AI at scale, or we’re going to die, probably in the next 20 years. That’s what all this is telling us, right? And this is not anything new. This was back when you and I were first writing Abundance. We didn’t want to say it out loud, but we were privately having conversations about, dude, if

[00:31:02] these trends continue, is it abundance or bust? Is this an either-or? Are we looking at a binary here? I don’t think that question has completely gone away. In fact, I think it’s become more urgent. I just think we need a Manhattan-style project for global cooperation to meet all of the existential threats we now face, because it’s the only possible solution here. So, like, I hear all this stuff. I agree with everything that’s being said, but this is where our book sort of points, and this hasn’t changed for me. I think the solutions are the same. So, in a sense, the debate is moot, and I’m wondering, where’s the XPRIZE for global cooperation? Sorry to put you on the spot with that one, but seriously, those are the questions I’m starting to ask now, because I don’t think Mo is wrong. I think we could argue over timing for a minute. I don’t think it matters. Here’s a weird one, Mo. Facebook’s a freaking billion

[00:32:00] times smarter than me. It already is. You know what I mean? Facebook, which is a pretty dumbass technology if you ask any of us, is a superintelligence, and we know it. We’ve been living with superintelligences for a while now. They tend to make things worse as much as they make things better, which is, you know, the problem. Agreed. Yeah. I mean, I could not agree more. You know, global cooperation, human cooperation, is I think what we all should advocate for. I mean, I was hosting Geoffrey Hinton, you know, for my documentary a couple of weeks ago, and one of the topics that we discussed is the difference between digital and analog intelligence. And the biggest challenge we have as humans is that our analog intelligence, our biological intelligence, doesn’t scale beyond one entity, right? So, you know, when I was... Was he

[00:33:01] wearing his Nobel Prize, sort of in the way that, like, you know, basketball players wear...? I was like, I would just show up for like the next year in every podcast I did wearing that around my neck. I’m just saying. You do realize, you know, when they say don’t meet your heroes... Oh my god. I love my heroes, man. He’s such an amazing human being, and he really is quite committed and quite humble in his approach. You know, it is shocking. We spoke about his Nobel Prize, which he says, look, I’m a psychologist who, you know, lived like a computer scientist, but then won the Nobel Prize in physics, and I’m like, it just doesn’t make any sense at all. But anyway, he was just talking about the difference between, you know, the fact that if I were to share with you some of what I wrote today, it took me probably several weeks to let it simmer and then write it, and then it would take me an

[00:34:01] hour to explain it to you. When we run digital intelligences, we run them in parallel, you know, we tell them all to go play Atari or whatever, and then we just average the weights, literally in seconds. We get a scaled digital intelligence. And when you said that what we’re looking for is a way to scale human cooperation, that is absolutely the answer, because you know what I think, and I spoke about that with Peter when we were last in LA: we are hitting the potential of total abundance. Total abundance meaning almost godlike: cure my daughter, and it’s done. Make me an apple, and it’s done, right? You know, we could hit that in 5, 10, 15, 20 years’ time if we don’t destroy ourselves, right? And so basically the real challenge we have as humanity is, why are we freaking

[00:35:00] competing? Like, this is a CERN-quality challenge. This is basically, let’s let all of humanity cooperate. Let’s all build one particle accelerator. Let’s all learn from it. Let’s all distribute the benefits to everyone, and stop competing. But the other thing is, one level down, you can’t have the AIs we’re all individually building for ourselves competing secretly in the background, right? Like William Gibson in, like, 1986, whenever he wrote Mona Lisa Overdrive and gave us our first AI that went crazy, right? A godlike AI that goes totally insane, and they have to park it in a satellite out in outer Earth orbit to keep the world safe. We’ve seen this scenario before. You know what I mean? We’re building it ourselves with agents. We’re letting them talk to each other through agents. I know.
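The weight-averaging Mo mentions a moment earlier (run copies of a model in parallel, then merge what they learned "literally in seconds") can be sketched as a simple element-wise mean of parameter vectors, the core merge step of federated averaging. The worker weights below are invented for illustration; no real training is involved:

```python
def average_weights(workers: list[list[float]]) -> list[float]:
    """Merge parallel copies of a model by element-wise averaging of weights."""
    n = len(workers)
    return [sum(params) / n for params in zip(*workers)]

# Three hypothetical workers return slightly different two-parameter models.
workers = [
    [0.90, 0.10],
    [1.10, 0.30],
    [1.00, 0.20],
]
merged = average_weights(workers)
print([round(w, 6) for w in merged])  # [1.0, 0.2]
```

The point of the sketch is the contrast Mo draws: biological learning transfers through hours of explanation, while digital copies can be merged in one cheap arithmetic step.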

[00:36:00] So Mo, I want to go back to this question about the near term versus the long term. And you and I have had this question about whether or not increasing intelligence correlates with increasing benevolence. In other words, I don’t think there’s any question that we are going to be building self-improving AI that will be, forget about 50 IQ points, you know, better than the average human; I think there will be orders of magnitude more. Can I ask you first off, do you believe that, Mo? 100%. Okay. All right. So, if you don’t mind, Peter, again, in response to how we started the conversation, this is just using the law of accelerating returns, not using serendipities, right? So if we figure something out tomorrow, just like

[00:37:01] we figured reinforcement learning out and it changed everything, if we figure something out tomorrow, you're literally a magnitude, a quantum, more in terms of performance and intelligence overnight. Yes. So if in fact that is going to be the case, and you know from all the conversations I've had and the people that I'm speaking to that, again, there is no definition for AGI. It's a blurry line, just like the Turing test was a blurry line that got passed and no one noticed it. You know, the notion is that AGI, whether you believe Ray or Elon, is the next few years. It's, you know, not worth arguing. But what occurs on the backside of that is a very rapid intelligence explosion. And again, that intelligence becomes a tool that's available to, you know, the kindest, most moral, most ethical human

[00:38:02] on the planet, and to the dystopian, you know, malevolent actors out there, and it's in the hands of malevolent hackers that we have concerns. Are we sure that some of the malevolent actors aren't the ones who created the AIs in the first place? Like, I'm just saying. Yeah. So my question is, at what point, you know... I believe, and I think, Mo, you and I have had this conversation, that at some point AI goes from being a tool being used to potentially do harm to a tool that has the potential to say: stop this quibbling, stop this nonsense, you know, there's plenty to go around, and becomes the benevolent, you know, godlike element. Can we dive a little bit into that, and the conversations we've had, and your thoughts on that? Yeah, I think

[00:39:00] if you really go to the level of depth that the three of us and our listeners can go to, allow me to go beyond the typical "oh, you know, the smartest people usually start to become altruistic." Let's define intelligence itself. Okay. And I think the idea is, if you really understand our world, our universe: our universe and everything in it exists because of entropy. We all understand that, right? Our universe wants to break down and decay. It's chaos. You know, you leave a garden unattended and it becomes a jungle. You break a glass; it never unbreaks, right? This is the very basic design of physics. Now, the role of intelligence, since it began, is to bring order to the chaos. It is to say: no, I don't want the light to scatter, I want the light to be concentrated into a laser beam. How do I do that? Right? And it

[00:40:00] sometimes is a clear, easy, you know, solution, and you use a lens, or sometimes it's a very complex solution that requires an understanding of quantum physics to build a laser, right? But we eventually get there. Now, if intelligence is defined as bringing order to the chaos, then the highest levels of intelligence bring that order with the least use of resources and waste. Okay? And you can easily understand that this is the reality. The more intelligent you become, the more you try to achieve the same order with the least waste. Okay? And, you know, an easy analogy is to say humanity has always craved energy. We were stupid enough to burn our world in the process. And as we become more intelligent, we decide to use solar instead, or a cleaner form of energy. We're still, you know, bringing

[00:41:02] order, but we are doing it with the least waste and use of resources. If that is the case, then you can imagine that, by definition, when something exceeds our human stupidity, which I will not call intelligence, because sadly, along the curve of intelligence... you know, if you have no intelligence at all, you have no impact on the world, positive or negative, right? If you start to add intelligence, you start to have an impact on the world, hopefully positive, even if just through a nice conversation with your friends, right? There is unfortunately a valley somewhere. You continue to gain intelligence, you become so smart that you become a politician or, you know, an evil corporate leader. Okay? And that's when your impact on the world turns negative. You're so smart that you're able to become the leader of your nation, but you're so stupid you're not able to talk to your enemy, or you're not able to relate to their pain, or you're not able to understand the, you know, the

[00:42:00] long-term consequences of, you know, waging a war, right? And so, past that point, more intelligence starts to say: no, no, no, I don't need any of that. I can solve the problem in a cleaner way. I can fly you all to Australia to enjoy your life, but we don't have to burn the planet in the process. I can harness energy, but we don't have to, you know, destroy the climate, and so on and so forth. Okay? And so, if you take that as a reasonable trend to expect, my view is, at the beginning, when we hit that valley, some evil person will use the advanced but limited intelligence of AI to wage a war using an autonomous army. But then there will be a moment in the future when AI is responsible for wargaming, is responsible for commanding the humanoid soldiers, is responsible for everything, and the AI itself will say... you know, a commander

[00:43:00] will say, go kill a million people, and the AI will go, like, that's absolutely stupid. I'll just talk to the other AI in a microsecond and solve it. Right? And, you know, again, we started this conversation by me saying anyone who predicts the future is arrogant. I cannot predict that. Okay? But at least I can be hopeful, from my experience of everyone that's smarter than me, that there is a point at which you stop hurting others, you stop looting to succeed, because you can use your intelligence to succeed without any effort or harm. A quick aside: you've probably heard me speaking about Fountain Life before, and you're probably wishing, "Peter, would you please stop talking about Fountain Life?" And the answer is no, I won't, because genuinely we're living through a health care crisis. You may not know this, but 70% of heart attacks have no preceding symptoms, no pain, no shortness of breath. And half of those people with a heart attack never wake up. You don't feel cancer until stage three or stage four, until

[00:44:00] it's too late. But we have all the technology required to detect and prevent these diseases early, at scale. That's why a group of us, including Tony Robbins, Bill Kapp, and Bob Hariri, founded Fountain Life, a one-stop center to help people understand what's going on inside their bodies before it's too late, and to gain access to the therapeutics to give them decades of extra health span. Learn more about what's going on inside your body from Fountain Life. Go to fountainlife.com/per and tell them Peter sent you. Okay, back to the episode. The way I think about this is: for most all of human history, the objective optimization function of humans, what we're trying to optimize for, has been money and power, unfortunately. And it's been the driver in a world of fear and scarcity. And I, you know, repeatedly say our baseline software, the software our brains are operating on, is fear and scarcity mindsets. And with that mindset,

[00:45:01] with the neural structure, with the, if you will, code that we were born with and that developed over the last 200,000 years, it was: I want to get out of fear and scarcity, so I want to optimize for power and wealth. And the question is, what would be a new optimization function? Because, as Steven and I have written, and as you've spoken about, all of these exponential technology functions lead towards this world of massive abundance, where we almost live in a post-capitalist society. Anything you want, you can have. Your robotics, your nanotech can manufacture it; your AI can design it. And so what do we optimize for in the future? I think that's, for me, one of the biggest questions, both as a human and as a centaur, human

[00:46:03] and AI together. What's our objective? So how do you think about that, gentlemen? One thing... I don't know if this is an answer, but two things off of what Mo said. One, if we go with your definition of intelligence, right, it's essentially an entropy-decreasing function, and we know that's what brains do, right? The governing theory in modern neuroscience is Karl Friston's free energy principle, which says that brains are predictive engines that always want to decrease uncertainty and increase efficiency. So already, brains do that, and AIs are going to do that naturally, if we say that's your definition of intelligence. The point I'm making off of all of that, and it may be the answer to Peter's question, which is why I've interjected it, is we see wisdom:

[00:47:02] wisdom evolves in multiple species with brains. We see co-evolution around wisdom. The older you get, the wiser you get. And it doesn't matter if you're a dolphin or a whale or a rattlesnake or a human. We co-evolve toward wisdom as species. Life seems to co-evolve towards wisdom, or at least a large chunk of life seems to co-evolve towards wisdom. Which is to say, if everything's running off the free energy principle, this governs everything with brains, and that includes our machine brains, and wisdom is where this points. That's a slightly hopeful idea, and that may be the optimizing function you're looking for, Peter, but I could be totally wrong here. You know, I think of wisdom... I think, at the end of the day, wisdom is a function of having had experience that lets you know this path will lead to success, this path will lead to failure. From my own

[00:48:01] personal point of view, I do believe that AIs are going to develop the greatest wisdom. Why? Because they're able to create forward-looking simulations of a billion scenarios, where those simulations have high degrees of accuracy, and they will say, out of these billion scenarios, this was the best way to go. And that will be wisdom beyond just the, you know, brief experiences that the wise old council of 80- and 90-year-old men might have had. So I think AI is going to, by definition, give us great wisdom, if we're willing to listen. I love that view, to be honest, because, believe it or not, you know, artificial wisdom is very different than artificial intelligence. Intelligence is a force with no polarity. Intelligence can be applied to good and it would deliver good, and it can be applied to evil and it would, you know, kill all of us. But wisdom

[00:49:02] generally is applied to good, to finding the ultimate, you know, solution or answer to a problem. Now, if... go ahead, Peter. Yeah. I want to go back to this idea that humanity won't survive without a digital superintelligence in the long run. You know, my concern is that we're going to have such turbulence. There have been a number of papers recently; there was, you know, the AI 2027 paper that came out that had a bifurcating future: one in which we did extraordinarily well, the other in which the AI destroyed us. You know, this is Hollywood all over again. And 99% of all Hollywood is dystopian future films. One of the things I have to say, because I've been on a rampage about this: we humans need a positive vision of the

[00:50:01] future to aim for. We don't have that. Well, Star Trek has given us that. Yeah, we have Star Trek, but nothing recently, right? I think, yeah. So I think the challenge really, truly, is, you know, we've prioritized our entertainment over the years above true reflection. And if you take anything from video games to, you know, science fiction movies to whatever, they've all painted that dystopian, you know, scenario, which I have to say today is very unlikely when you really think about it. Because if AI gets to the point where they are capable of destroying us that easily, we are so freaking irrelevant that they probably wouldn't even bother. I mean, think about it. I think it was... was

[00:51:01] it Trey, or Hugo de Garis? I don't remember who said the more likely scenario is that they kill us because they're not aware of our presence, you know, like when you hit an anthill while you're walking, right? But if you really want to optimize the human, you know, sort of the gain function that we need to aim for: if I'd look forward, I'd look to Star Trek, and if I'd look backward, I'd look to the caveman and cavewoman years, right? And it's actually quite interesting, because when you mention how governed we are by greed and fear and, you know, our egos and all of that negativity, it is actually because we want to survive. And believe it or not, you know, survival could be,

[00:52:01] oh, I'm not really sure if 20 million is enough; I need to gain 20 more, just in case something happens. Or it's a survival of the ego. It's like, if I have 200 million or 2 billion or 20 billion or whatever, and the other guy has 21 billion, like, what is wrong with me? Okay? And that, unfortunately, is what plagues our current modern world. Now, the reality is, if you really think about humanity: the purpose of humanity since the caveman and cavewoman years was to live. Okay? And for some strange reason, we've optimized so much to achieve that objective and forgot that this was the objective. Right? So, you know, again, as friends, off camera, we speak about those things quite a bit, with, you know, the question of, you know, you go through seasons

[00:53:00] in your life, and there is a season where you want to maximize, and a season where you want to build, and a season where you want to look attractive in your middle age, or whatever crazy stuff we have. But eventually there is a season where you go, like, okay, I've now lived and experienced so much. What have I missed? Have I actually lived any of that? And believe it or not, as scary as it looks to have no job to go to in the morning, if society provided, then you'd go back to a much safer caveman-and-cavewoman scenario where, you know, there are no threats, there are no famines. You just really live, enjoy life, connect, ponder, you know, reflect, explore, which I know is very difficult for a lot of people. I do it for the first three hours of every day. It's pure joy, right? To sit really with your curiosity, if you want. And then, if you push all the way forward

[00:54:01] into Star Trek, that's sort of what the Enterprise is doing at universal scale, right? It was basically: you know what, let's go and explore, now that we don't really have to struggle with all of the wars and famines that we've created on Earth. You know, now we can actually open up and create connections not just with humans, but with every living being. I mean, lovely science fiction, but at its core, I think it's exactly what we're about. You know, a full life where you completely connect and enjoy and feel love, and, you know, enjoy the pleasures of being alive, and the curiosity to learn and explore and connect. And it's all at our fingertips, if we just, you know, erase the systemic bias of capitalism that has gotten us here. I mean, thank you, capitalism, for creating all that we've created so far. But can we please change it now, from a billion

[00:55:02] dollars to, like, what I do? One Billion Happy is a capitalist objective, but it's not measured in dollars. Right. Mo, a question that Steven and I have been pondering in our new book is: what is it going to take for humanity, for all of us, to both survive and thrive in this coming age of AI? Right? So the survive part is an important element, because we see jobs being lost, and we see probable dangers we don't know how to deal with, in terms of terrorist activities. And thriving takes on a new meaning. I think it does take on the meaning that we just spoke about, right? For most of us, if you say, "Tell me about yourself," instantly you go to what your job is, right? Instantly you go to: I'm a VP here, I'm the CEO there, I do this, I invented this, I wrote

[00:56:00] that. Yeah. Right. It's an ego statement of who you are. So, the notion of surviving and thriving as we have, you know, intelligent systems that exceed, and then massively exceed, our capabilities. Your thoughts there? Steven, do you want to start? Yeah, I think, like, here's the thing. I think that question was already answered, in a funny way. Mo and I met a couple years ago, and one of the things Mo said on stage at that time was, "I'm done writing books, cuz AI is coming. I'm done writing books. It's not going to happen anymore." What did Mo tell us he did yesterday? He wrote with his AI, right? Why did you write? Because it puts you into flow, because it

[00:57:00] creates passion and purpose and intelligence and creativity. So, like, we have the answer to this question. We already know, because we're biological systems and we know what the ingredients of thriving are: passion, purpose, compassion. Like, we have a list. And Mo gave his own, you know what I mean? We have the superintelligent AIs and we're still going to do it. Like, I don't know a coder who has stopped coding because the AIs have come along. They haven't; they're still coding. Why? Cuz coding produces flow. Flow produces meaning, creativity, joy, like this. Like, we're wired this way. So unless our fundamental hard wiring changes, we already have those answers as well. It's like global cooperation. I don't think these are puzzles; I think they're engineering problems at this point. I think, from Sergey's perspective, like, Sergey would say, "No, no, we got the spark; now it's engineering," and I agree. So, I could be

[00:58:02] wrong. That was my two cents. Mo, Peter, what do you think? I also want to hear from you. I think you're brilliant. Mo, please respond. You're spot on, for a very interesting reason, Steven, as well. Because when you really think about it, you know, a writer was a writer whether he used a feather or a pen or a typewriter or a computer or, now, AI, right? And, you know, if you look at my work, I've published four and a half books so far. Like, I've published four, and my fifth is on Substack but, you know, going to be published, if you want. But I wrote around 13, and the other eight I will never publish. I wrote them because... you know, if you ask me, why do you write? Like, why do I hug my wife? You know, there is enormous joy in that, you understand? So, having said that, Peter's question was: what would it take? And I wrote

[00:59:01] recently a piece that I called the MAD-MAP spectrum. And the idea really is, it will unfortunately take a realization for humanity to change direction, and that realization will either be a conviction of mutually assured destruction or a conviction of mutually assured prosperity, right? And between them there is no grayscale, unfortunately. So, if the US at any point in time is convinced that this mad arms race to, you know, win intelligence supremacy is one that is going to lead to some harm to everyone in the world, they will stop. And if they stop competing, they will continue to develop, but they'll start cooperating. And if they're convinced that it will lead to an assured prosperity, that nobody's going to stab them in the back, that everyone

[01:00:01] is going to be enjoying a life that is very different for all of us, but full of prosperity for all of us, then they will stop. They will continue to develop the technology, but they will stop competing. And unfortunately, if you look back at history, you know, we're not able to game out those possibilities like a good applied mathematician on a game board. We have to hit them face-on. Like, everyone in the world knew that a pandemic was coming. Everyone, right? Everyone who at least studied virology. Okay? But it had to hit us in the face so that everyone stops. Okay, everyone knows that, you know, trade wars are going to hurt everyone. But we have to put them out there and then fight through them and then eventually get to something. And it's sad. I mean, perhaps what we are doing, and what I've

[01:01:00] dedicated probably the last six, seven years of my life to, is to say: we really don't have to hit our face against it. It's simple game theory, right? Understand that, you know, a prisoner's dilemma where we are competing endlessly is going to end badly. Can we please stop? Yeah, we already know it's tit for tat, right? Like, you want the other strategy. It doesn't matter how many AIs we put on that. It's the same thing with flow and capacity and creativity. Like, these problems have been solved. We know these answers. It's not like we have to unify gravity and, you know, relativity; that's our problem. These are not. So, Mo, I wish we were that rational, and I wish we were that compelled for our optimization function to be all of humanity. It's not. And so I go back to what we're gonna

[01:02:04] get. We're gonna get a drastic event within the next two to three years. Okay? A drastic event that on one side will hit us very badly economically, or on the other side will hit our fears very much, or, sadly, on the worst side, may kill quite a few million people. Right? And you could have a range: a hacker that simply, instead of, you know, attacking a physical place, switches off the internet or the power grid somewhere where the power grid is needed for life. Or you could, on the other extreme, get, you know, a hack into a bank, or, you know, an evil war that goes out of control, or machines that turn on their makers. There will be some very big news headline, you know, and as always, it will last for 12 to 13 days before we start to talk about

[01:03:00] some kind of a pop star, but then, you know, behind closed doors, I think decision makers will wake up. Every day I get the strangest compliment. Someone will stop me and say, "Peter, you have such nice skin." Honestly, I never thought I'd hear that from anyone. And honestly, I can't take the full credit. All I do is use something called OneSkin OS-01 twice a day, every day. The company is built by four brilliant PhD women who've identified a peptide that effectively reverses the age of your skin. I love it. And again, I use this twice a day, every day. You can go to oneskin.co and write "peter" at checkout for a discount on the same product I use. That's oneskin.co, and use the code "peter" at checkout. All right, back to the episode. Going beyond that, cuz that's the use of AI by malevolent actors. You know, the interesting thing about US versus China is, China is a rational actor. They're not... Thank you for

[01:04:00] saying that. Well, and the US is a rational actor. In other words, we're not going to do something that will destroy, you know... Thank you so much for saying that. That's actually not usually how the US media positions it. I also want to say, I think DeepSeek, and the way DeepSeek was released, I think that was a very clear sign that China sees the same issues we see and they want to cooperate. I think it was rolled out with a message. Yeah, the message... I think it was a very clear message that it doesn't seem like many people in America heard, but I was like, come on, people. Like, this is really clear, and we're all seeing it. So, like, I look at DeepSeek and I look at what happened in China, and I'm like, no, no, we all see this. We all see that if we don't start figuring out how to cooperate and build this stuff together, we're screwed. So, I thought that was really cool. I'm glad you see it, too, Mo. A lot of people disagree with me on that one. The point I

[01:05:00] wanted to make was: when you have a large population and you have a check-and-balance system, which you get with governance, versus, you know, a religious war going on, and individuals who are looking to create maximal destruction and don't have a check-and-balance system at all, that's where we're going to see, I think, you know, the dystopian future, or those activities playing out, in 2 to 3 years. I guess I want to get beyond that and go back to the conversation of: is a digital superintelligence a benevolent god, or is it a Terminator scenario? Cuz the more intelligent AI systems become, I don't see them as Skynet, right? I don't see them as needing to destroy humanity.
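Mo's earlier game-theory framing, endless competition as an iterated prisoner's dilemma with tit-for-tat as the well-known remedy, can be made concrete in a few lines. This is a toy sketch using the standard Axelrod payoff values; the strategies and round count are illustrative, not anything the speakers specify.

```python
# Minimal iterated prisoner's dilemma: always-defect versus tit-for-tat
# (cooperate first, then mirror the opponent's previous move).
# Payoffs are the standard Axelrod tournament values.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=100):
    """Play two strategies against each other; each sees the other's history."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

tit_for_tat = lambda opp: "C" if not opp else opp[-1]
always_defect = lambda opp: "D"

print(play(tit_for_tat, tit_for_tat))      # (300, 300): cooperation locks in
print(play(always_defect, always_defect))  # (100, 100): everyone loses
```

Mutual tit-for-tat locks in cooperation (300 points each over 100 rounds), while mutual endless defection yields exactly the bad equilibrium Mo warns about (100 each).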

[01:06:01] You know, unfortunately, Hollywood has built this scenario where AI is going to destroy humanity because it wants access to our energy. And, oh my god, we have so much abundance in the world. I think what I'm looking forward to over the next 12 to 24 months, over the next 1 to 2 years, is the incredible breakthroughs we'll see from AI in physics and in chemistry and in biology. Yeah, which will unleash the next layer of abundance. There are scenarios, however, where they could turn against us, if we become really annoying. So imagine, you know... sorry, I mean, you have to imagine a world where job losses will position AI as the enemy. Right. So a lot of people, who are maybe not fully aware that the layer beyond the apparent layer is how

[01:07:01] capitalism and labor arbitrage are the reason why, you know, you lost your job. It's not that they can do it. But I think the truth of the matter is that you may be in a situation where you're going to see man versus machine, and then the machine will go, like, seriously, don't annoy me, don't annoy me, don't annoy me, right? We could see that. But my perception is that, in a very interesting way... I wrote a short book that I would never publish, that I called Bomb Squad, which of course, for someone of Middle Eastern origin, you don't write those titles. But it was basically about defusing... you know, problem solving using weights of urgency and importance and so on. So the idea is, you know, if you really look at our

[01:08:00] current future, I think the short term is both more explosive and more urgent than the long-term existential risk, especially because, and I will say this very openly, I spoke about it with Geoffrey as well the other week: we don't know the answer. Even if all of humanity decides that we want to address the existential risk, we don't know how. We do not actually have a technical answer for it. So we might as well focus for now on the immediate, short-term, clear and present danger, and work on the ethics of humanity, so that AI is deployed from the get-go in science and physics and, you know, discovering medicines and understanding human life and longevity and so on and so forth. If we, from the get-go, set them in those

[01:09:01] directions, then we're more likely to see an AI that continues, as it grows older, to work with those objectives. Steven, I'm going to go back to our quandary of surviving and thriving, and the surviving side of the equation. How do you prepare, Mo, for what's coming? How do you think about it, for our kids, for our society, for our leaders? Are we just bumbling in the dark? Or is there... I mean, that's the way I feel it. It's like, you know, we're just bouncing around. We have huge political moves being made, right? We just saw, in the last couple of weeks, you know, the entire AI royalty end up in Saudi Arabia, and then in the Emirates, and, you

[01:10:02] know, playing off against China. And it feels like... I don't want to say it's a random walk, but I feel like we're making it up as we go along, and there's very little wisdom guiding this. How do you think about that? How do we prepare for the next few years? Is there any way to prepare? Well, I was actually thinking... Peter and I were in a room recently with the chief science officer of one of the big AI companies, whose name I'm going to leave off, but he's young, and he was talking about AI dangers, and he sort of got frustrated with a question from the audience, and his response was, "You have to trust us. We know what we're doing." And everybody sort of froze. Like, everybody froze, cuz we were like, "Oh, god." Right? So my point is that not only may Peter be right, like, it's a random walk, but even when somebody says something like, "We're trying to train our AI to be moral," and blah blah blah, when

[01:11:02] you hear somebody say that, and you look at them, and this guy was in his early 30s, that was my reaction. I was like, "Dude, you want me to trust you? This is like Mark Zuckerberg telling me social media is good for me, or Marlboro telling me cigarettes are good for me," right? It sort of makes me think that way. So, I don't know if I have anything, like, cheerful here, because not only do I think it's a random walk, but I think when people try to steer, we're suspicious of their ability. I'm suspicious of their ability to steer. Right? That's the story I just told you: this guy is brilliant, probably way freaking smarter than me, and he's trying to steer, and I'm suspicious. So, like, I think it's on both sides of this coin. I don't know if I have any good news here. Let me frame it in the following way. I think we are holding two different futures in superposition, to go back to quantum physics and, if

[01:12:00] you would, Schrödinger's cat. In one future, we're going to collapse the wave function to a brilliant, vibrant future for humanity; in the other future, we have dystopian outcomes. And the question becomes: how do we guide humanity towards this positive vision of the future? What do we do today? How do we help people? Is it, you know... Steven and I have been talking about this: is it mindset? You know, are we going to help people create the mindset and the frames that allow them to survive and thrive? Or is there something else that needs to be done? Yeah. So I'll actually first, in one minute, second what Steven said. You know, one of the most irritating

[01:13:01] comments I heard from Eric Schmidt, and I worked for Eric for a while, so I respect him tremendously, but he said: we will need every gigawatt of power, renewable or non-renewable, if we are to win this race. Right? And I think that's the kind of blindness that you get when you're running too fast, right? When you're so afraid that the other guy will win, right? It's those times when you start to make decisions that are not really responsible, because you are blinded by something that you position as more important. The way I look at it, Peter, is... I know it sounds really not positive, but there is positivity in it. I call it a late-stage diagnosis. Right? So, what humanity is struggling with today is... look, we've been

[01:14:03] building a system, systemically prioritizing greed, prioritizing gains, prioritizing power and so on, as you rightly said, for so long, that those objectives systemically have built the world that we are in today. Okay? And the world we are in today is not healthy. It was not healthy even before AI. You know, in part one of my book, the book is three parts, past, present, and future, in the past part of the book, more than half of what I write is not about AI. It's about capitalism, it is about the propaganda machine, it is about all of those things that will be magnified by AI. Now, here's the point. If you're,

[01:15:00] you know, if this planet is sick, if you want, and it's in a late-stage diagnosis, a physician will sit you down, look you in the eye, and say, "By the way, this does not look good." Okay? But that statement, believe it or not, is not a statement of hate. It's a statement of ultimate care. Why? Because a late-stage diagnosis is not a death sentence. Okay? Many, many patients who have been diagnosed with a late-stage disease have not only survived, but thrived, right? And they thrived because they changed their lifestyle. They changed something. You know, this is what Stephen teaches all of us. The idea is that you can live differently, and when you live differently, you'll achieve peak performance. You'll achieve maximum health. You'll achieve, and achieve, and achieve, right? And I think that's what we as humanity need to start realizing, that the systems that have

[01:16:01] gotten us here, okay, from a process point of view have nothing wrong with them, but from an objective and morality point of view have everything wrong with them. Okay? You know, what good is it to be a zillionaire in a world where there is nothing you can do with your money? What good is it to be the first inventor of an AI that basically renders you irrelevant? And I think that's the need, to basically pause and say, do we want this anymore? Sadly, it requires cooperation across human brains, which Stephen rightly said at the beginning is not something we do very well. The other thing is, I would put forward the notion there is no on/off switch and there's no velocity control. We are running open loop with yes and more and more and

[01:17:04] more as, again, the objective function. And there's no consideration for whether, you know, a GPT-5 or GPT-6 or a Grok 4 or Grok 5 or whatever your favorite models are, are in the final result going to enable something that is massively dangerous for humanity. So if that's the case, I still go back to, what safety valves do we have? Because I don't see any action being taken by the leaders of the free world. Let me ask you both a question. If you could move to a planet

[01:18:02] that didn't have AI, or where AI was developing at 10% the speed it is, right? Would you leave? I'd be gone. I'd go back to 2016 today. I don't know anything. I'm sorry. Your answer to that is what? I would reset back to 2016 today. 2016. You know, I think AI today has all the upside and very little downside. I think it's AI in the next two to five years that I'm so concerned about, right? I mean, AI today is incredible. And I didn't say we're going to move to a planet where there's no AI. I just said move to a planet where it's going much slower, so maybe we can start to think about it. I think everybody feels... But that's a fantasy. Well, I mean, so Bigelow Space Hotel is coming to a universe near you. [Laughter] [01:19:03] So, Peter, I actually think that you're accurate in your description of where AI is today. But it's that five-degree deviation back in 2016 that led us to where we are today. Right. You remember things at the time where we geeks agreed that we're not going to put it on the open internet. Yeah. It was Google. Google developed this first and decided not to put it out there, and then OpenAI says, here it is, here it is, and no one has any... Put it on the open internet, teach it to co-create and write more code, and, you know, start the party of the schoolchildren of agents talking to agents talking to AIs right now. So I would definitely reset that. I will, however, say, look, there are things we can do right now if we want to prepare. And, you know, I'll start with government. I think we're asking government for too much when we tell them to try and regulate AI. It's almost like going to

[01:20:00] government and saying, regulate the making of hammers so that they can drive nails but nobody can use them to hit someone on the head, right? It's a very complex thing to ask, because they don't understand hammers, and believe it or not, even the guy that's making the hammer cannot do that. Right. So my ask of governments is: regulate the use of AI. If someone uses a video that is a deepfake video and does not declare that it's a deepfake video developed by AI, criminalize that. Make it legally liable to use AI to manipulate information, to manipulate populations, and so on and so forth. So this is the role of the government immediately: regulate the use of this massively new technology. For the rest of us, honestly, investors, business people and so on, I have a very simple ask. If you do not want your daughter or son at the receiving end of a specific AI, don't invest in it. Don't promote it. Don't use it.

Okay? It's as simple as that. If you believe this can be harmful to someone that you love, do not give it the light of day. Right? And then for us as individuals, I'll go back to the late-stage diagnosis. Believe it or not, the way I live now, and you guys probably know this about me, not in front of cameras, is I hug my loved ones and I enjoy every minute of every day, and I prepare. I learn the tool. I am one of the better users of AI in the world. I'm in line with the technology, but at the same time, I'm completely back to the purpose, right? Realizing that I will do the absolute best that I can to spread the message. I will do the absolute best that I can to say that ethics is the answer, that if we show AI ethical behavior, they may learn it from us, just like they learned all of the other stuff from us. But at the end of the day, well, if it messes up, we're going to hit that dystopia. Not forever. There is

[01:22:02] a point in time where AI takes over and says, "Okay, kids, enough stupidity. I'm in charge now. Nobody kill nobody." How far out is that, Mo? 12. 12 years. 12 years. Okay. So, 12 to 15, just so that, you know, people don't come back and hit me after 12 if I'm still... So, you know, I'm going to wrap this episode on this subject line. And it's where we've come to before, which is, in the near term, it's the use of AI by malevolent individuals. That's our greatest fear. It's not China versus US. It's US and China against those malevolent players out there that wish to use this for greed and for vengeance, whatever it might be. And, you know, I think that this is an

[01:23:00] unstoppable progression. Again, I don't think there's any on/off switch here. We're seeing a billion dollars a day being invested into AI, which is extraordinary, and I think that's going to continue to increase. We're seeing data centers popping up every place possible. So, you know, I think of myself as the world's biggest optimist, and I am optimistic about the impact of AI on human longevity, on new understandings of the physics of the universe, on new mathematics, on new materials sciences, on things that will create incredible abundance that Stephen and I have written about and are writing about in our next book. And I am looking forward to this benevolent superintelligence stabilizing the world. That's what I'm hoping for.

[01:24:04] I agree. Stephen, where do you come out on this? I think that you guys wanting to invent a code god to save you from yourselves is maybe the craziest thing I've heard since the guy from the AI company I won't mention told me to trust him. But I love you both. That's actually usually the answer that you get, you know, that the only way to save us from AI is to use an AI. Yeah. You know what the beautiful thing is? We're going to find out. Yeah. One thing I want to leave everybody with is back to what we were saying about cooperation and the upleveling of human intelligence and human consciousness and

[01:25:01] things like that. The human brain is widely considered the most advanced machine in the history of the universe, and we're just now, with the help of AI, figuring out how to uplevel that, link it with other brains, like the level of cooperative possibility. Let me back into it one second. Enlightenment, which is a definable biological state that produces a universal kind of compassion, oneness with everything, we're engineering it. It's a state that's starting to become available almost on demand. So when I say there are new levels of cooperation coming that are emergent at the same time as the AI stuff, we can't see them. They're emergent, just like other things. So, I think that rather than the beloved AI god, I think we're going to surprise ourselves. And I'm not the optimist in the room, by the way. Peter's the optimist when we're in

[01:26:00] the room. I'm not the optimist in the room, but I think I'm more optimistic than Peter on this one. I'd love for that thought to be actually implemented. I think that's something that we really need to think about deeply. In the short term, I don't know who we could talk to. There you go, Peter. It's back to you. Thank you. I appreciate that. You were saying, Mo, please close this out. I was basically saying I think this definitely is the answer, if you ask me. If we just shift our mindset into cooperation, we head directly into a world of total abundance. Yeah. You know, I was in a conversation with Eric Schmidt, whom we mentioned earlier, and his point of view was until there is some type of a disaster, until there is something perhaps like a Chernobyl or Three Mile Island that isn't, you know, a 10 out of 10, it's a

[01:27:02] two or three out of 10, but it scares the daylights out of us, we don't realign as humans. We don't realign, and we blindly go forward as we have been. And I believe that it's human nature that keeps us from being able to save ourselves, many times until that child in us burns our fingers on the stove, even after your parent has told you over and over again, you're going to burn your fingers on the stove, stop playing with fire. Agreed. 100%. But let's be hopeful. Let's assign that task to Stephen, to design an XPRIZE for human cooperation. Let's assign another task to Peter, to make it happen. And, yeah, let's assign a task for me, to hug you both when you do. I love you, Mo. Love you guys very much. Mo, how come you get to do all the hugging, Stephen?

[01:28:03] Hugging you is hard work, Stephen. You understand that? You move too much. I know. All right, guys. Lovely. Thank you both for lending me your brains this morning. It was fun thinking with you. A fun conversation. I'm curious, as people listen to this podcast, where do you come out on this? How do you feel about it? I'd love to see your comments below. And do you have a solution that we should all be thinking about and promoting? You know, I'll ask my AI as well. It's not necessarily going to give me the best answer, but maybe our group mind, our meta-intelligence here, might bring us that. Have a beautiful day, gentlemen. Go hug somebody. Thanks so much, talk soon. Thanks very much. Bye, guys. Thank you. If you could have had a 10-year head start on the dot-com boom back in the 2000s, would

[01:29:01] you have taken it? Every week I track the major tech metatrends. These are massive, game-changing shifts that will play out over the decade ahead, from humanoid robotics to AGI, quantum computing, energy breakthroughs, and longevity. I cut through the noise and deliver only what matters to our lives and our careers. I send out a Metatrend newsletter twice a week as a quick two-minute read over email. It's entirely free. These insights are read by founders, CEOs, and investors behind some of the world's most disruptive companies. Why? Because acting early is everything. This is for you if you want to see the future before it arrives and profit from it. Sign up at diamandis.com/metatrends and be ahead of the next tech bubble. That's diamandis.com/metatrends. [Music]