06-reference / transcripts

Moonshots Ep. 176: AI Job Loss, Privacy, Warfare — transcript

Mon Jun 02 2025 20:00:00 GMT-0400 (Eastern Daylight Time) · transcript · source: Moonshots Podcast

In my mind, jobs will be lost. When they are lost, they’re going to be lost massively. Far more people are in denial or doing nothing than are overreacting. I do think governments are woefully underprepared. Now, it comes down to: are we going to design a world that is good for people or not? We all know that AI will go out of control within the next 5 to 10 years. And yet we’re building autonomous weapon after autonomous weapon, knowing for a fact that every other opponent, anywhere in the globe, is building them too. Trump taps Palantir to compile data on Americans. This is not a tech problem. This is an accountability problem. What can I build that is making people happier and more productive and feeling valuable and having a sense of purpose? If we focus on that, we actually can avoid the dystopian outcome. We have the ability to create an intentional future. This future is not happening to us. We have the ability to guide where it goes.

[00:01:04] Now that’s a moonshot, ladies and gentlemen. Everybody, welcome to Moonshots and our weekly episode of WTF Just Happened in Tech. This is the real news going on. Those of you who watch the Crisis News Network, what I call CNN, can learn about all the crooked politicians and all the murders on the planet, or join us here to learn about the technology that is transforming every aspect of our lives, every company, every industry, every entrepreneur’s outcome. I’m joined by three Moonshot mates today. Dave Blundin, the head of Link XP. Dave, good morning. Looks like you’re at home today. At home, yeah. Princeton graduation on Tuesday, MIT graduation on Thursday, and then off to Stanford tonight. All right, fantastic. Look forward to seeing you, hopefully, this week. Salim Ismail, the CEO of

[00:02:00] OpenExO. And Salim, where do I find you today? At home, just outside New York City, and looking forward to this episode. Yeah, me too. And good morning or good evening, Mo Gawdat, the one and only. Dubai, is that where you are? Dubai today, yes. Happy to be inside because it is boiling outside. Ah, yeah. Well, I’m in Santa Monica, just back from a few days in Hong Kong. You know, it’s crazy. Literally, you have no idea where your friends are these days; literally around the world, we’re just tied together by this digital network of Zoom and a multitude of tools. Anyway, a crazy week in AI and in a whole slew of different technologies, and I’m excited to get into it. Before we start, anything new, Dave or Salim, you want to add? Well, first of all, thanks for getting up at 5:00 a.m. to do the

[00:03:01] podcast. Hard to tie together Dubai and LA, but it is much appreciated. So I’m in the middle zone here, so it’s very easy for me. But thank you. Oh, you’re welcome. You guys are worth getting up for. All right, let’s jump in. As always, I don’t know, it feels like every week is going at a pace that would have been unbelievable. I’m just trying to remember, back 10 or 20 years ago, the number of breakthroughs or announcements that were occurring on a regular basis, and I can’t find any analogy. I remember in the dot-com world there were all these crazy new dot-com companies being announced every week, but here it’s not just crazy companies. It’s fundamental capabilities that are coming online. Go ahead. And also, we’re going to see later in the podcast a predicted trillion dollars a year of capex going forward, which, I checked, is the equivalent of the investment we made mobilizing for World War II,

[00:04:00] inflation adjusted. And so if it feels crazy, it should, because it’s historic in scale. All right, let’s jump into a subject on a lot of people’s minds. We’ve heard a lot of news about this: AI and job loss. We’ll begin with a short segment of Dario Amodei, the CEO of Anthropic, talking about job loss. Let’s take a listen and then discuss it. I really worry, particularly at the entry level, that the AI models are very much at the center of what an entry-level human worker would do. I’m a little bit more worried about the labor impact simply because it’s happening so fast. Yes, people will adapt, but they may not adapt fast enough, and so there may be an adjustment period in terms of inequality. I’m worried about this. There’s an inherent social contract in democracy where, ultimately, the ordinary person has a

[00:05:00] certain amount of leverage because they’re contributing to the economy. If that leverage goes away, then it’s harder to make democracies work and it’s harder to prevent concentration of power. And so we need to make sure that the ordinary person maintains economic leverage and has a way to make a living, or our society, our social contract, won’t work. And that’s why you’ve previously described a future where cancer is cured, the economy grows at 10% a year, the budget is balanced, and 20% of people don’t have jobs. The quote you just flashed is maybe too optimistic, maybe too sanguine, about the ability for people to adapt. People have adapted to past technological changes. But I’ll say again, everyone I’ve talked to has said, “This technological change looks different. It looks faster. It looks harder to adapt to. It’s broader. The pace of progress keeps catching people

[00:06:00] off guard.” I think the benefits are massive, and we need to find a way to achieve the benefits and mitigate or prevent the harms. The second thing I would say is, look, there are, as you mentioned, six or seven companies in the US building this technology. If we stopped doing it tomorrow, the rest would continue. If all of us somehow stopped doing it tomorrow, then China would just beat us. And I don’t think China winning in this technology helps anyone or makes the situation any better. Every week, I study the 10 major tech metatrends that will transform industries over the decade ahead. I cover trends ranging from humanoid robots, AGI, quantum computing, transport, energy, longevity, and more. No fluff, only the important stuff that matters, that impacts our lives and our careers. If you want me to share these with you, I write a newsletter twice a week, sending it out as a short two-minute read via email. And if you want to discover the most important metatrends 10 years before anyone else, these reports are for you.

[00:07:01] Readers include founders and CEOs from the world’s most disruptive companies and entrepreneurs building the world’s most disruptive companies. It’s not for you if you don’t want to be informed of what’s coming, why it matters, and how you can benefit from it. To subscribe for free, go to diamandis.com/metatrends. That’s diamandis.com/metatrends, to gain access to trends 10-plus years before anyone else. All right, a lot there. And we’ve heard this at an increasing pace and intensity. Dave, thoughts on Dario’s commentary here? Yeah, Dario looks really worried there, doesn’t he? He’s got the wrinkled forehead, and then Anderson Cooper looks even more worried. And rightfully so. I think the short-term job displacement is imminent. When I talk to people at random, people in power, far more people are in denial or doing nothing than are overreacting. So it’s actually good for Dario to be saying these things, to at least wake up

[00:08:00] the masses to the immense amount of change that’s imminent. I do think there is a lot of short-term job loss coming, but far, far more opportunity being created. So it’s kind of a foot race between creators, entrepreneurs reinventing what people do, versus automators coming and just automating away white-collar jobs, and then, ultimately, robotics and blue-collar jobs. Mo, you’ve been speaking about this for a while. Is Dario overplaying this, or is he spot on? Oh no, he’s underplaying it, for sure. My prediction is 10, 20, 30, 40% unemployment in some sectors. In what time frame? Within the next two to three years. You know, I think there are always three questions to answer. One is: does anyone on this call believe that the technology is not going to catch up for some of those jobs, like a graphic designer or

[00:09:00] a video editor, for example? Those sectors are gone. I mean, today, with Veo 3 giving you a minute of video that’s better than Avatar, you can create the movie Avatar for around $1,500, even if you make a few mistakes along the way. Right. So I don’t know how we can save those jobs, to be quite honest. If that’s the case, then the next question becomes financial, because we are stuck in a system of capitalism where the entire profitability of a business, and the legal requirement of a CEO, is to prioritize shareholder gains, and accordingly, I do not see a situation where people will be given two-day working weeks and paid the same. The third is ideological, to be very honest, because even things like UBI sound quite a bit

[00:10:01] like socialism or communism to me. So there will be quite a bit of resistance before we can get to the point where governments accept that these are systems they will adopt. So, in my mind, jobs will be lost, in some sectors earlier than others, and we can name quite a few of those. But when they are lost, they’re going to be lost massively. On the other hand, the ideology and the existing system will not allow us to replace them quickly enough, because we’re not awake. And I think the more interesting point in the statement that Dario made is that he says 10% economic gains. I wonder, because how much of the US economy is actually consumption? 62% plus of the US economy is consumption. So with people having no buying power, is that economic growth, or productivity growth without buyers to buy what we make? Yeah. And one of the arguments, of course, is that you’re demonetizing the product’s cost,

[00:11:00] because a latte is now made by a robot instead of a human and it’s a quarter the price. Salim, you’ve been in agreement on this. Anything else you want to point out? I’d like to take the counterpoint, which is: okay, when we see a huge raft of people standing in job lines or at the food banks, etc., then I think we need to worry. I think we’re underestimating how quickly people can adapt. Let’s use the video editor example. If, by the way, there’s a video editor listening to this video right now, that’s funny. But the minute that you automate that, the video editor moves and does a whole bunch of other stuff that’s necessary for producing a podcast like this, right? There’s lots of other work to be done. I still go back to the 1970s bank ATM example, which

[00:12:01] we’ve talked about before. I do think governments are woefully underprepared. We should be running a ton of experiments on UBI or four-day work weeks, managing that, getting used to that paradigm, and knowing how we would roll it out if it needs to be rolled out. So why aren’t they doing that? And, just as a separate thing, we do almost zero experimentation in government, and we could be doing a lot more of that. I think that’s one area to look at. We’ll talk about truck driving in a bit, but when you go talk to a trucking company, which I actually went and did, talking about, okay, there are all these 3 million jobs that could be lost, etc., the truck driving company goes, “I would hire a thousand truck drivers today if I could. They’re just not there.” I kind of tilt towards that side. Now, I tend to be biased on the optimistic side, so I’ll see how this works out over time. I

[00:13:00] should throw some numbers out here, just from the Bureau of Labor Statistics. So, 11% of the workforce is office and admin jobs, which have a very high probability of going away. 6% are business and financial operations, 7% are management, 6% are education, training, and library, 6% are healthcare, and 9% are sales-related jobs. So there are large swaths of the labor force that, at least according to my search, are likely to be automated. And the question is, can they all be upleveled? So we’re talking, just in the quick research I did, something on the order of 40% of jobs that have a reasonable probability over the next 3 to 5 years of being automated away. And, we’ll get to this conversation a little bit later, the issue is not whether they can be upleveled to a different

[00:14:01] position. The question is the social unrest in the interim. How hard is that going to hit society? Dave, what are you thinking? Well, we’re moving into an intentional world. We evolved in a world dictated by nature, and then we went through this transition that we’re in right now. But the future is our design. It’s not dictated by tidal forces. And it drives me nuts when the economists are extrapolating and predicting, but they never reference self-improvement. They never reference the exponential rate of change, and the intentionality of the world’s design is completely dominant from here forward. So it’s what we decide to do. I think Dario worries all night about CBRN: chemical, biological, radiological, and nuclear threats from AI. And he’s dead right. If you unleash AI into the hands of 8 billion people, some crazy person out there is going to turn it into a weapon.
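As an aside, the occupational shares Peter quoted from the Bureau of Labor Statistics a moment ago can be tallied in a few lines. The percentages below are exactly as quoted on air; treat them as rough, rounded figures rather than official BLS values:

```python
# Shares of the US workforce by occupational group, as quoted in the episode
# (approximate figures attributed to the Bureau of Labor Statistics).
shares = {
    "office and admin": 11,
    "business and financial operations": 6,
    "management": 7,
    "education, training, library": 6,
    "healthcare": 6,
    "sales": 9,
}

total = sum(shares.values())  # combined share of these groups
print(f"Combined share: {total}%")  # → Combined share: 45%
```

Summing the groups named in the episode gives about 45% of the workforce, consistent with Peter’s “something on the order of 40%” of jobs with automation exposure.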

[00:15:01] You have to actually put some thought into this design. But that was inevitable; it started with the nuclear era. And so now it comes down to: are we going to design a world that is good for people or not? And so I think it’s completely in our control. I also really believe that, yes, there are huge amounts of job displacement coming, because, as Mo pointed out, the natural capitalist action is to say, what can I automate away to reduce the cost by 99%? That all becomes bottom-line profit. So the valuations of companies that automate are going to go way, way up, and it creates a huge amount of wealth. Where does that wealth land? As I’ve been saying on this podcast, it’s naturally going to land in relatively few hands if nothing changes. And that’s what’s going to create all kinds of social unrest in the transition. But the amount of value and the greenfield opportunity is so much bigger than the amount of job loss. And so if we’re quick and intentional and we turn a lot

[00:16:00] of that AI horsepower into working on what we can build, you know, if I can write three million lines of software in a single night, that’s the equivalent of hundreds of millions of dollars of R&D in a single night. What can I build that is making people happier and more productive and feeling valuable and having a sense of purpose? If we focus on that, we actually can avoid the dystopian outcome. I love that: an intentional future. Let me move forward to this next set of slides here. Not to go into any detail, but we see a plan for Tesla to roll out Model Y cars, fully autonomous, aiming for delivery in June. So their rollout of the robotaxi begins in June. It won’t be many cars; they’re testing it out. You know, I took my kids on a Waymo ride here in Santa Monica over the weekend and they just had a blast. Put

[00:17:01] them and their friends in a Waymo, and we just drove around. It felt like a carnival ride for the first few minutes, and then it felt completely like an extraordinary end-to-end experience. So these will roll out, and we’re going to talk about the number of drivers that are taxi drivers and Uber drivers, and the displacement here. This next article is about truck driving, and it makes the point made earlier: 18-wheelers are on the Texas highways driving themselves already. Just a quick video for a second. But at the same time, the US is facing historic driver shortages and recruitment struggles. We can’t get enough drivers, as you said, Salim. So let’s talk about this sector for a moment. Salim, do you want to kick us off? Yeah, two points here. First is, you

[00:18:00] know, before Uber came along, we didn’t notice that there was this huge labor liquidity opportunity. Then Uber comes along, and all of a sudden a single mother can drop her kids off at school, drive for 4 hours, and pick them up again that afternoon, right? And have a much more functional world than before. And we soaked that up very quickly. We didn’t notice it on the abundance side, and I don’t think we’ll notice it as much as we automate. Also, I’m still banking and hoping that my 13-year-old will never have to get a driver’s license. So we’ll see when the curves of autonomous driving versus people wanting to drive cross. I think we’ll just do as we’ve seen before: a ton more driving, and a lot more little road trips and little errands that we didn’t have to do before can now be done by a Waymo or a Tesla. And I think what we’ve seen

[00:19:01] historically, repeatedly, is that when you automate, you increase capacity; you don’t decrease it. And so, in the truck driving example, I think we’ll see a ton more truck driving that’s autonomous, and the number of truck drivers won’t change very much. That’s my prediction. Let’s see if I’m right or not. Well, Peter... Oh, yeah. Go ahead, Dave. Well, you’re going to love being in LA, where the traffic’s notorious. This also enables coordinated traffic. Our good friend Lee Heatherington, from back in the MIT days, did all these traffic simulations when he was an undergrad. The roads are most efficient at about 45 to 50 miles an hour with back-to-back cars, and then they just jam right after that. But self-driving cars also enable intelligent traffic flow design, and that’s actually going to increase the capacity of the existing roadways quite a bit. I’m sure everyone in LA will love that. I mean, the implications of self-driving cars on the environment, on being able to move electric battery packs all around the

[00:20:00] city, on being able to get rid of parking... every garage at a single-family home could get turned into extra storage or a living room. In LA, 60% of the land area is parking spaces. It’s insane. It truly is. So if you look back 110 or 120 years, when you had the transition from horses to the Ford Model T, that transition was dramatic over the course of 10 years. I mean, the value proposition for a car was so much better than a horse, right? And the amount of horse manure was threatening society at an extraordinary rate, and then it disappeared. The question is... when I look at the Waymo, it’s an expensive car, right? The Waymo is coming in at something like north of $150,000. So you’re not buying them and putting a fleet out. The Cyber Cab, if it really comes in at $30K or below, I could see Uber drivers

[00:21:02] buying a fleet of Cyber Cabs and having the Cyber Cabs work for them. But it’s going to take that kind of a price point to really drive a transition to the point where I don’t need a car anymore. My AI is ordering it in advance of when I need it; I walk out the front door and it’s waiting for me, because my schedule is known by my AI. Mo, thoughts? Well, I love you all, you know that, so please don’t be offended by what I’m about to say. Speak your mind. All that you guys talk about is problems of privilege. It’s like, “Ah, my traffic jam... I want to make sure that my cab is waiting for me outside.” Just go tell those things to the cab driver who is actually feeding a family and working two shifts, right? And I agree with Dave 100%: we have a choice to design our future. Now, when you really think about

[00:22:02] it, and when wonderful humans like you are thinking this way, what do you think the choice will be? Your question, Peter, was how this would impact civil unrest. Well, if they heard this conversation, and how careless we are about their jobs, saying things like, “Yeah, they’ll figure something out” (I’ve heard that a million times), what will they figure out? I want someone who tells me that we will find new jobs and upskill people to tell me what those jobs are, so that we can start upskilling them. Can I give an example here? Yeah. So, when I was living in Miami, I met an Uber driver, and I started playing tennis with him, and we had a kind of fun interaction, and it was fascinating. He started driving for Uber, and then the amount of income dropped too much. So he started driving for another service; he did both for a while, and then it wasn’t worth it to be driving that much. So he starts renting out his car on Turo and

[00:23:02] then finds, hm, I can do this. He buys four cars and rents them all out on Turo. Then he helps a friend with his Airbnb rental, managing that and taking a cut of it. And over a period of about 3 years, he navigated all of these different dynamics; wherever there were opportunities, he would go grab them. And I think it speaks to the entrepreneurial nature of an individual. If you had a score called the entrepreneur quotient of an individual... they will figure it out. We talk often, Peter and I, about mindsets, right? If you drop Elon Musk into a desert with no money and no communications, he’ll figure it out. He’ll figure out how to get out of there and make a rocket out of that sand. And I think when you give people opportunity (this is why I think technology is so amazing, and it speaks to Dave’s earlier point), when you make this opportunity available, people are going

[00:24:00] to go for it. They’re going to figure out, wow, AI can automate code; what could I automate? And they’ll start doing that stuff. Then, when this fellow got blocked by Turo for having too many cars or whatever, he created multiple IDs and kept going. People are incredibly enterprising if you’re able to turn on that switch. I think we’re underestimating that capability. So, I love that, Salim. I’m going to come to this point a little bit later, in a few topics: the single most important job for the future. People say, “What should my kid become?” I think the only job that’s going to survive down the line is entrepreneur, and we have to reteach our kids how to think this way. You know, we’ve had an entire civilization whose educational output is to train kids to get a job, rather than train kids to figure out what the opportunities are and create something around them, because the

[00:25:00] tools were not democratized. Well, the tools are democratized now. And so, how do you train kids and adults to go out and find those opportunities? I put up this slide here. These are drivers by category in the US. And I’m sorry, we have a massive international viewership, but I’m defaulting to US numbers here. 3.3% of the US workforce are drivers. There are 2.2 million truck drivers. And further down this list, we have delivery drivers, Uber drivers, and bus drivers. At the bottom are taxi drivers, at 200,000; the number of taxi drivers has dropped precipitously. I do think, Mo, to answer your question, there is a future in which drivers are allowed to finance and purchase these autonomous cars, and they become managers of fleets of autonomous cars. These cars are out there earning a living on behalf of those drivers.
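A back-of-the-envelope sketch of that fleet-ownership math. Every number below is an assumption for illustration; the episode only quotes a roughly $30K Cyber Cab price against a north-of-$150K Waymo, so fares, costs, and utilization are placeholders:

```python
# Rough payback estimate for an owner-operated robotaxi.
# All inputs are illustrative assumptions, not figures from the episode.
def payback_years(vehicle_cost, fare_per_mile, cost_per_mile,
                  paid_miles_per_day, days_per_year=350):
    """Years of operation needed to recover the vehicle's purchase price."""
    margin_per_mile = fare_per_mile - cost_per_mile
    annual_margin = margin_per_mile * paid_miles_per_day * days_per_year
    return vehicle_cost / annual_margin

# A $30K cab netting $1.00/mile margin over 150 paid miles a day:
print(round(payback_years(30_000, 1.50, 0.50, 150), 2))  # → 0.57
```

Under these assumed numbers a $30K vehicle pays for itself in well under a year, while the same arithmetic puts a $150K vehicle at roughly five times that, which is the intuition behind the price-point argument above.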

This is definitely the American way, right? The American way is that we’re going to enable everyone to buy a car to make money on it without doing anything. Is that really true, guys? Honestly, if there is a margin that allows this guy to make money, why wouldn’t Uber buy those cars? They probably will buy many of those cars. Yes, sir. Okay. The other question we’re not asking (and I was told at the beginning of this briefing to be the extreme on one side, so please understand that), the other question we definitely need to ask, is that this guy that’s been renting those cars out, Salim, which I think is a fantastic example, is renting them out in an undisturbed economy where people have the purchasing power to rent them from him. What kind of entrepreneur would make money in a UBI-based environment? And what does that mean for a lot of people who don’t have

[00:27:00] the purchasing power to buy from an entrepreneur? Yeah. No, I think I could answer that question. What I found fascinating about this fellow, because every week we would play tennis and I would just track what he was doing, is that he found that certain types of cars were not renting at all because of the time of year, or the type of tourist visiting Miami, or whatever. And so he was constantly juggling which cars he had or didn’t have in his little fleet, and adapting as he went along. At some point he found that small SUVs were renting like hot cakes, so he started working on that, and then he had to pivot again, and he just managed to navigate it. And the question came up: what happens next? He got to a point where he was making enough passive income off these things that he didn’t have to do the work. And then he was just voluntarily taking tennis lessons and teaching people tennis and being on a tennis court 8 hours a day. And I think there’s a thread of an anecdote here of where people will

[00:28:00] start finding their true passions and just following those passions. And I think that’s the beauty of a UBI: it allows you to do that. We’ve seen in the experiments where UBI has been done properly that entrepreneurship explodes, and if we agree with that general thesis, then this is absolutely the way to go. I’ll go back to my earlier trope of governments being completely unaware of and unprepared for this, because to move from a taxation, jobs, and union-labor structure to UBI is such a huge flip that we have no confidence in the public sector really getting us there. So I agree 100% with that. Honestly, I think if we both agree that this is a future where it’s possible, regardless of how low a UBI is, that people will go back to bartering and doing things for each other through the offerings of each other, then I think that would work. But then governments need to be aware of that; we need to start thinking that this is a future we need to think about. But my ask of

[00:29:02] everyone is, in situations like this, it really is not helpful to keep trying the California way of painting the optimistic picture, because if the optimistic picture happens, we’re all fine, right? I think what we need to do is think about the worst-case scenario and guard against it, right? And the worst-case scenario, if we’re not prepared for this kind of job loss, economically and national-security-wise, is quite significant. People really need to be aware of that. Yeah. I just came back from Hong Kong, Mo, and while I was there I met with an incredibly successful entrepreneur of Indian origin, Sanjay, who was one of the very first employees at one of the huge Hong Kong Chinese companies, and his mission now is to go back and try to help India’s young

[00:30:01] population: 1.41 billion people in India. The promise had always been, if you get an education, there’s going to be a job for you. And of course, that promise is now broken. All of the coding jobs that they were getting are no longer being made available, and we’re on the tipping point, in India and other parts of the world, of what could be such a negative implication that it leads to societal unrest. It’s like, where’s my job? And one of the biggest problems is a young, intelligent population that has had its future grabbed away from it. What do they do? And the conversation I had with Sanjay, which I agree with, is that the job of the future is being an entrepreneur. And so his mission in India is upskilling all of these young students

[00:31:00] to become entrepreneurs, to create new job opportunities for themselves. Dave, how do you think this plays out? I think that we’re way underestimating the creativity of people, and that there’s this window of time in the next four years where the empowerment of people to create so outweighs the risk. And I do agree, we’re choosing driver because it’s a really tough case. The number one job title in the world is driver; it’s a huge number of people. But if you look at graphic designers as a case study, too, I think they’re empowered much more than they’re replaced. And there are all these case studies popping up of Veo 3 artists that are so much more productive. And so, when I look at entrepreneurs (I’ve worked with hundreds and hundreds of entrepreneurs over the years), what they need is time and the ability to act. Time and tools. And there’s very likely a world

[00:32:02] coming up over the next four years where they’re given time, whether it’s UBI or otherwise, and they can act. Because very often, some of the best, most creative people can’t act on their ideas, largely because they’re trapped in a mortgage, they’re trapped in student debt; they just need money now. And so then they become an Uber driver for a while, they become a whatever for a while, they go work at Google for a while. But their freedom of action is very, very limited. And I think that AI has the opportunity to open up freedom of action. Freedom of action unleashes creativity. So, exactly as Salim was saying, there’s so much latent entrepreneurial talent. And this next window of four years is going to be dominated by the ability to build scaffolding. And scaffolding is a word you’re going to hear a ton going forward, because the AI doesn’t naturally do something interesting or useful for you. It’ll write all the code. It’ll build everything. It’ll write all the documents. It does all the busy work very, very well in the next

[00:33:00] four years, but it doesn’t decide: this is what my user base, my community... this is what people will want. And that’s still coming from entrepreneurs. And so this slide is exactly right; this is the dominant theme over four years. Go ahead and read this out, if you would, Dave. Job of the future is entrepreneur. Near-term, the next two to five years, many jobs will be impacted; then this decade through 2030, and medium-term. So the medium-term, 2030 to 2045, is the part where no one can quite visualize. I have a great sense of the next four years, and then a much more difficult sense of what happens from 2030 and beyond. It’ll clearly be an age of incredible abundance. So the opportunity to make everybody happy is right in front of us; it’s just a question of how you do it. Yeah. But the singularity sprint... Yeah, let me hit on that. Right. So the idea here of the singularity sprint is you have a window of time to build something awesome,

[00:34:03] and that window is limited. So, I’ll read it. It says, quote, the anxious all-out rush to launch bold projects or startups right now, driven by the fear that rapidly advancing AI will soon erode human leverage and make long-horizon career bets obsolete. End quote. After graduation, a lot of my friends skipped safe jobs for their own ventures. Classic singularity sprint vibes. So it’s like, if you want to make it big, you’ve got to dive in right now, both feet. Do you agree with that? Yeah. I mean, this is what started with Steve Jobs and Bill Gates both being 21 when the PC comes out. You know, they have no career path, right? They’re within a year of age of each other. They’re old enough to start a company, but they’re young enough that they’re not in law school. They’re not in some, you know, entrenched 401(k) plan. They’re just free to act. And so, you know, Steve Jobs, Bill Gates, then you fast-forward to the internet, Mark Zuckerberg drops out, starts Facebook. But

[00:35:01] you see this over and over again in recent history, where flexibility way outperforms career pathing. So going forward, of course, that’s going to accelerate with the singularity. So now, yeah, you’d be crazy to get too deep into some trench when you know the amount of change is accelerating like crazy. So yeah, this is clearly the world we’re moving into. Opportunity is everywhere. And the expansion of opportunity is just fractal and rampant. And there are so many things you can do to add value, but they’re not things that you would have anticipated a year prior. So you need to be really nimble and flexible and, you know, stay frosty. You know, watch this podcast, read the Alex Wissner-Gross feed, just stay on top of it, because new things are appearing all the time. Mo, bring us back to reality here. Do you disagree with this? So, I think Dave’s point is so spot-on. If you’re 21 like Bill

[00:36:00] Gates or Steve Jobs, fine. But if you really think about those who already have a mortgage, how will UBI work for them? Because remember, we pay people for the value they bring. So when nobody’s really bringing value, then do you pay someone who has a mortgage and four kids a little more than someone who has a lesser mortgage and two kids? Or do you reward someone who, you know, worked on a shoestring for a while and didn’t have a mortgage? I don’t know. Okay. But my question is, are we thinking about those things? And then of course, when we talk about entrepreneurship, it’s so easy for us to talk about that. Everyone here has started or co-founded or invested in tens if not hundreds of companies. That’s not natural for people who were trained all their life to just go get a job and all of that. By the way, everyone here knows I am the biggest believer in total abundance. Once

[00:37:02] we cross this short-term dystopia, if you want, total abundance, like, we can create a world that we can’t even dream of. It’s just that we have to be super realistic about the challenges in the short term, rather than talk about the opportunities and tell people, hey, you take charge, you go ahead and start a business. I mean, honestly, even I today am struggling to start a business at this pace. I mean, seriously, and I’ve started countless businesses. Yeah. I mean, difficult to keep up. Yeah. The speed of disruption is crazy. Can I flip over to Mo’s side of the equation for a second? Yes, please. So I think, in the US, I would say you don’t have to worry at all at a country level, just because the latent amount of entrepreneurship is so deeply embedded into the culture. Right. But take Europe, where if you’re a big company, just trying to fire people is near impossible. There are works councils that govern how many people, unions, etc.,

[00:38:02] etc. The amount of labor rigidity there is extreme. That is going to be very, very badly disrupted, and I think the governments there are in very deep trouble because they’re not structured for it. They don’t have the latent entrepreneurship quotient in the population to be able to adapt to what’s going on. And that’s where I think you’ll see a lot more challenges than, say, the US. The challenge with trying to upgrade people that have been stuck in a particular way of thinking for a decade or two, to Mo’s point, is going to be incredibly difficult. Now you need, like, you know, psychedelics at scale or some radical, huge thing to make that mindset shift, to make everybody move, or you have to go to UBI urgently and force people into that conversation. Hey everyone, as you know, earlier this year I was on stage at the Abundance Summit with some incredible individuals: Cathie Wood, Mo Gawdat, Vinod Khosla, Brett Adcock, and many other amazing tech CEOs. I’m always asked,

[00:39:00] “Hey, Peter, where can I see the summit?” Well, I’m finally releasing all the talks. You can access my conversation with Cathie Wood and Mo Gawdat for free at diamandis.com/summit. That’s the talk with Cathie Wood and Mo Gawdat, free at diamandis.com/summit. Enjoy. I’ll ask my team to put the links in the show notes below. I’m going to give a couple of stats here just for reference. In the US, 16% of adults consider themselves entrepreneurs. That’s 31 million adults. Recent surveys indicate that 36% of Gen Zers and 39% of millennials consider themselves entrepreneurs. So that makes your point, Salim, that the United States has less of an issue there; it’s in the rigid structures of other nations. Of course, remember the idea of a job is a relatively new invention, and for most of human history we were entrepreneurs to survive. We’d go and find that

[00:40:02] shelter, that food, you know, those berries we needed to cure our child of a particular disease. So, you know, the question is, can we create an intentional future? My biggest concern, Dave, you hit on this, Mo and Salim, you hit on this, is that governments are linear at best, and we’re in this exponential ramp-up that’s going to change every aspect of society. Here’s another example of what’s going on today, and it’s going to change things, and again, it’s both a disruptive force and an innovative force. This was a tweet put out by Matt Shumer. It says, I put Claude 4 Opus in charge as CEO of my startup, and it has seen significant revenue growth. He said, you know, this is low risk, since Claude 4 Opus is not in charge of HR or

[00:41:00] financial investments, but rapid iteration of the products and services. So we’ve been speaking about this for a while. When do we see the first billion-dollar one-person startup, and then soon thereafter, you know, billion-dollar zero-person startups, as agents with crypto are beginning to create new opportunities? Now, one of the things that we haven’t mentioned is this potential future comes with massive GDP growth, massive revenue growth. And where does that revenue go? Mo, you mentioned that a few minutes ago. Is it all being concentrated in the Magnificent Whatever? You know, rather than the Magnificent Seven, we’re going to see all of these AI companies that are trillion-dollar companies. How do they get taxed? How does the money get redistributed so we avoid revolutions? Thoughts on this, Salim? Is this the future of an ExO, an exponential organization? The natural outcome, as we

[00:42:01] you know, it used to take like 100,000 people to create a billion-dollar company a century ago. Then it dropped to about 50,000; about four decades ago it was 10,000; and now it’s like 10, right? Or three, as we talk about it, or as Sam Altman says, it’ll be one. We will get to zero at some point, where we’re just spinning off ideas autonomously that then just generate a lot of value. I think Dave’s point from the beginning was really a key one: where does that value accrue, and how do you navigate that? And right now we tax labor. We’re going to have to tax capital much more aggressively in the future to navigate this. Dave? Well, a couple of case studies on this, Peter. So, you know, we’ve seen Mercor is very, very good at interviewing people all over the world, any language, you know, any culture, and discovering latent talent. So now you turn that same energy inside your organization. You know, suppose you’ve got a thousand people, 10,000 people inside an organization. There’s latent talent in there everywhere. Um, largely

[00:43:02] historically, people have climbed the corporate hierarchy by kissing ass and schmoozing and buying beers, and it’s not really correlated with being good at your job. And that drives a lot of very talented people nuts, especially if they’re from a different culture, they speak a different language, whatever. You can’t really kiss ass effectively if you don’t speak the same language. But all of that changes. Actually, you saw this with the XPRIZE board notes. Remember that three- or four-hour-long XPRIZE board meeting we had? I took the whole transcript, put it into the LLM and said, “Give us four or five suggested KPIs that would help this organization stay on track.” And it does an amazingly good job. Mhm. And so using AI as a management tool is kind of way underappreciated. Everyone’s like, “Oh, I’m going to make videos. Oh, I’m going to build a self-driving car,” all these ground-level things. But at the top of the hierarchy, it’s actually even more effective. And so the good spin on it is it’s very, very good at being fair and unbiased and discovering latent

[00:44:00] talent. I’m sure Mo will tell us there’s, you know, definitely another side to it. Um, but... Am I now already getting that reputation? Is this who I am? Sorry, I didn’t mean to categorize you. I do see a different side to it. I think what you’re going to see quicker is not companies with the CEO being an AI. I think the opposite: you’re going to see more of, and this goes back to entrepreneurship, a company that only has a CEO, and everyone working in it is an agent, right? And you know, it’s like one of those companies where, the more intelligent the AI agents become... you know, I’m sure every one of us worked at a point in time in a company where the CEO was a total idiot, but the team below them was good enough that the company ran well. So, you know, the top management, those AI agents, will do almost everything, and the CEO will become, you know, just happy

[00:45:00] counting the money, basically. Yeah. All right. We’ve talked about the speed of AI development. This is the upcoming summer schedule, and GPT-5 is scheduled to come online. So these are the latest GPT-5 leaks: launch expected in July of 2025. GPT-5 exceeded expectations internally at OpenAI. OpenAI expects record-breaking demand for this. Altman’s not focused on in-between models; GPT-5 is the flagship, and it won’t launch unless it’s excellent. We’ve had a lot of expectations building on GPT-5, right? This is the PhD-level model. This is the AI that’s coding other AIs. This has been sort of heralded for some time. Dave, what are you hearing about it? So, I really chafe at the idea of a PhD-level model being smarter than

[00:46:00] an undergraduate-level model. When people I work with chose to get PhDs, it’s just a choice they made. It has nothing to do with intelligence. But because a lot of the researchers working on this are PhDs, they say, “Well, this one’s PhD level.” But, um, yeah, it’s marching up the scaling-laws curve exactly as predicted. And so now we’re just going to throw more and more compute at it, and it’s going to get smarter and smarter. I mean, it’s just a complete unlock of 30, 40 years of AI research, suddenly just blown wide open. And I’ve got to tell you, within the AI research community, there’s still a ton of people working on other pathways, uh, the logic being, well, this will never be truly conscious or truly intelligent. Beyond transformer models? Exactly. Which is becoming increasingly obvious that, no, the research will matter, but the transformer is going to do it. So all you need to do is work on the scaling of the transformer model to solve all the other problems. So I think

[00:47:01] this will continue the trend of, you know, as soon as it comes out, everyone goes, oh my god, oh my god. But it’s what is logically expected on that scaling-law curve. So it’ll be amazing. Salim, how do you think about this? Um, I call a bit of BS on that fourth one, “not focused on in-between models.” All we’ve seen for the last two years is in-between models: o3-mini, 4.5, this, etc. But fine, it’s a marketing thing. I think what happens here is you get to a point where transformers can do so much, it forces us as a user community to really focus on what the questions are. You know, today we call it prompt engineering. I think the real question becomes, what do you want this thing to do, and what can you get it to do? And now you’re focused on the demand side of, okay, if I’m creating a video, what are the bounds of that? And I think

[00:48:00] it’ll force a deep level of unlocking of creativity in the human mind, and that for me is the most exciting part of this. Another point, too: you know, one of the great strategies in tech is to try and freeze the market by announcing something that’s coming and having everybody wait for it so they don’t react. Don’t do that. You know, what we’re finding is that the chain-of-thought reasoning that sits on top of these models is so much more important than we ever thought it would be. And anytime you take one of these models and use it in a specific use case, you know, anything from chip design to self-driving to, you know, robots that mow your lawn, the data and the tuning for that use case are way more important than the next iteration of the foundation model. And so there’s a danger that people kind of wait and see what it’s like. But, you know, we’re finding more and more that you can layer on top of these things and make them dramatically more useful for anything that you actually care about. So it’s a field day for entrepreneurs right now, but

[00:49:00] absolutely don’t get frozen. They’re trying to freeze you and make you anticipate. But you can take Llama 4 and do virtually any of this stuff today. And then if the foundation model comes out and it’s really good, great, just swap to it. You know, there was a white paper that Leopold Aschenbrenner put out, called Situational Awareness, about 18 months ago. It used GPT-5 as the transition point for this intelligence explosion, right? Where these models now become better at chip design, at iterating and improving themselves in self-referential programming, an acceleration of the acceleration. Mo, how do you think about what’s coming on the back of these improved models? I think, for the first time, I feel that I am a little more comfortable with the speed at which those things are coming, because I think the different players have taught us to expect something incredible from

[00:50:01] one of them every few weeks, right? And, you know, when you have seen Google I/O, and when you see Claude 4, and, you know, sort of the focus that they’re shifting into... probably, in my mind, so far, Gemini is winning if you take it as an overall model, at least until now, until we see GPT-5. You know, Claude is sort of becoming the geek, saying, “Hey, you know, this chatbot thing is not my thing. I’m going to just be the one that helps you write code,” if you want, or at least primarily. And it’s quite an interesting one to think where ChatGPT falls within all of this. You see moves like, you know, how dependent ChatGPT is becoming on memory and stickiness, if you want; the idea of a new device. Sort of like, I don’t know if I even have the right to say this, but I feel that

[00:51:01] since the departure of some of the top scientists, with Ilya and others, you know, almost a year and a bit ago now, the frontier breakthroughs... I think ChatGPT, I think OpenAI, has something to prove. Mhm. And along the lines of what you just said a minute ago, this is the rollout schedule for the summer: June, July, and August. GPT-5 in July. o3 and open-source models in June. Grok 3.5 in June. Gemini 2.5 Pro Deep Think, love the name, in June. Project Mariner in June. Project Astra from Google. And you’re right, by the way, Google is crushing it across the board on almost everything, just not on revenues. They’ve got to reinvent that. But isn’t that how we always have been? So, I have to say, I lived in Google at the time when we were completely beaten on mobile, where Google was very successful on the desktop, and then, you know,

[00:52:01] one year we said mobile first, the following year we said mobile only, and we crushed it. Right. Yeah, Google does that. They are good at that. I’m not “we” anymore. Yeah, but just again, everybody’s talking about the competition between countries, between China and the United States. And look at this. This is the competition between models. Of course, I don’t have DeepSeek on this list, which is coming out with extraordinary products as well. Dave, how do you think about this? It’s amazing to watch the divergence of strategy between Anthropic and OpenAI, where, you know, Anthropic, Dario, is going down the write-code path. You remember that Leopold Aschenbrenner paper you just referenced a second ago. He describes an AI Alec Radford; Alec Radford will go down in history as the quintessential, the thing that defines self-improving AI. So when the

[00:53:00] AI can do what Alec Radford does, then it’ll become self-improving, because all the really good ideas come from Alec Radford, and then we test them. So he’s part of history now. But I think, you know, Dario is saying, look, we’re very, very close to that day. We’re just going to focus on the best possible coding and self-improving AI, and then that’s going to explode, singularity style. Meanwhile, OpenAI is going down this completely different path, saying, we’re going to hire Jony Ive. We’re going to build the greatest consumer device ever known. We’re going to gather all that data. We’re going to use that to iteratively improve and train the AI. It’s much more of a traditional grab-the-market, momentum-oriented tech play. So really completely opposite strategies. Both have merit. I do appreciate that Dario is being completely honest when he does these Anderson Cooper-type interviews. He is speaking his mind and telling you, this is the way I see it playing out, which is very, very cool. Maybe not the best business strategy, though.

[00:54:00] Correct. You know, he’s getting attention for the company, and we’ll get to AI safety next. So let’s dive into that. So, AI in government, security, and safety. It’s a big deal. It’s the conversation that’s going on in the background. I don’t think it’s necessarily changing the speed or direction, but the conversation is going on. So, Mo, I’m going to open up with you on this. You don’t want me to talk about this. You know my position. All right. I’ll come to you next. But, uh, fascinating. I’ve gotten to know Palmer Luckey fairly well. I’ve done a few podcasts with him. And, of course, Palmer has a long and storied history with Zuck. And now Meta and Anduril are joining hands in building a $100 million US Army VR contract called Eagle Eye. And we’re going to start to see AI and exponential technologies accelerating in the defense

[00:55:00] industry. Do you want to go second, Mo? I mean, this is one of your biggest concerns, AI being used for... I mean, we’re three seconds to midnight on nuclear weapons that are now I don’t know how many years old, and it never really stops once you go down that path. And humanity never learns. I mean, seriously, we all know that AI will go out of control within the next 5 to 10 years. We all know that we’re going to hand over to them. And, you know, I don’t mean a rogue AI is going to get out of control. It’s just like Google’s ad engine is no longer controlled by a human, because the task is too big for humans to be able to do it. And yet, we’re building autonomous weapons after autonomous weapons, knowing for a fact that every other opponent, anywhere in the globe, is building them, too. I don’t know where humanity’s intelligence has gone, really. That dumb race to intelligence supremacy, to defense supremacy, just has

[00:56:02] to stop, honestly. I’ll come back to that in a minute, but Salim, what are your thoughts here? It’s a sticky one. You know, when you look at the Ukraine-Russia war that’s being fought by drones, just over the weekend we saw counterstrikes by two different waves of drones from each side. That’s good in one way, because there are fewer humans in the middle of it. But the targeting opportunity for drones... we’ve talked about this on this podcast before, where somebody could program a drone to find, you know, middle-aged brown bald people and cause damage, and that would be a really bad outcome. And then what do you do when you have that kind of infinite targeting? Now, I do believe we’re going to end up kind of where we are with spam. There was a time when we thought spam was going to totally destroy the internet, and we found ways of defending against that. It’s an arms-race thing, where the bad guys were kind of one step ahead and we’re very

[00:57:00] quickly falling one step behind. I think people get freaked out by the negative side, not realizing that as we use AI for bad, we’ll use AI for good to chase the bad. I mean, find the bad. The point Palmer makes is, listen, you’ve got dumb weapons that take out schools and schoolkids, landmines that don’t differentiate between a tank and a school bus. Don’t you want to have intelligence be able to make that differentiation and actually take out the minimum number of individuals? And I hate this conversation, right? It’s kind of perverse. I mean, you’re assuming benevolence on the part of the operator, but, you know, in certain war zones, they’re targeting the journalists, right? And so that makes it easier to target those folks, just like it was easier for the Uyghurs to be targeted via Facebook. Seriously, can we please stop

[00:58:00] using slogans of, oh, killing fewer people is better than killing many people. Killing is wrong. It’s as simple as that. And killing at this scale is going to get us into another doomsday clock situation where we will not be able to stop it. Yeah. One thing to factor into your thinking on that is that the history of warfare is dominated by somebody, some king or some general, way behind the lines, completely immune to the actual battle, and then, you know, hundreds of thousands or millions of people going out and putting their lives on the line, and then you see how it all settles in the end. But now we’re moving to a world of constant surveillance. You know exactly where every human being is at all times, and you can attack, via laser, via space weapon, any single human being at any time. And so I wouldn’t assume that this Russia-Ukraine type of warfare will ever exist again. It’s much more likely that

[00:59:00] it’s some kind of, uh, we don’t want to blow up cities. We don’t want to blow up huge populations. That’s pointless. What we want to do is find the rogue leader. And so, I’m not saying that’s a utopia. There’s all kinds of ugliness with that too, like who decides who’s a leader, without trying to say something that’s going to, you know, upset everyone. We’re having this conversation when one very well-known evil world leader is trying to kill two million people in front of everyone. Mhm. Give him better weapons and he would do it. And, you know, my favorite song of all time is that song: if you tolerate this, then your children will be next. Seriously. I mean, what guarantees you that the US president will not be targeted by a tiny drone, right, that can literally fly from anywhere in the world, stand in front of his head, and shoot? What kind

[01:00:01] of world is that, when every world leader is subjected to this? No, that’s a very important point, actually, because, you know, you find that very few people you bump into want to be the president of the United States, or any president for that matter. Yeah. If you look at the statistics, it’s a very, very dangerous job. Even in the US, it’s about a 10% mortality rate if you go back over time. A lot of people don’t want it. A lot of downside, a lot of, you know, getting poked fun at. So, you know, governments that have distributed leadership way outperform for that reason. And so there’s some thinking to do there in terms of how do you set up a government where people who are capable and thoughtful really want to do the job, too. There has to be a solution, though. We can’t just throw up our arms and say, hey... Because I took a class at MIT called Just Wars, Total Wars, Nuclear Wars, which, you know, was a really cool class until the last two weeks, when the professor was trying to convince us all that we’re doomed, because as ICBMs get more and more

[01:01:01] powerful, the value of a first strike becomes overwhelming. And he put together a little video game for us all to blow each other up, but he rigged it so that the only way to win was to be a first strike and blow up everybody else in the world. No, no, he didn’t. He missed the movie: the only way to win is not to play. That is exactly my point. And I go back and say what I said earlier: I think we have to stop thinking about the optimistic scenario that we are taught to think about in Silicon Valley and start thinking about the worst-case scenario. Guard against it first, then look at the upside. The upside is guaranteed. A quick aside: you’ve probably heard me speaking about Fountain Life before, and you’re probably wishing, Peter, would you please stop talking about Fountain Life? And the answer is no, I won’t, because genuinely we’re living through a healthcare crisis. You may not know this, but 70% of heart attacks have no precedent, no pain, no shortness of breath. And half of those people with a heart attack never wake up. You don’t feel cancer until stage

[01:02:00] three or stage four, until it’s too late. But we have all the technology required to detect and prevent these diseases early, at scale. That’s why a group of us, including Tony Robbins, Bill Kapp, and Bob Hariri, founded Fountain Life, a one-stop center to help people understand what’s going on inside their bodies before it’s too late, and to gain access to the therapeutics to give them decades of extra healthspan. Learn more about what’s going on inside your body from Fountain Life. Go to fountainlife.com/per and tell them Peter sent you. Okay, back to the episode. Mo, the question is, can the human race overcome this paleolithic, you know, midbrain that we have, this need driven by scarcity and fear? I don’t know if we can, Peter, but I don’t know if we should give the floor to Palmer to smile with his wonderful smile and say, “Hey, I’m helping you kill better.” You know, we’ve talked about this before, which, you know, the question

[01:03:02] isn’t, can we live with digital superintelligence? The question is, can we survive without it? Yeah. Can we live with evil people with their fingers on top of digital superintelligence? All right, let’s move to a less metaphysical topic here, but it is amazing to me how much of the future of the military is commercial off-the-shelf technology, as opposed to, you know, Northrop Grumman or McDonnell Douglas-type heavy industry. And I think that’s largely because the AI capability is both commercial and military at the same time. Same with the VR technology and a bunch of other things. You know, the DJI drones that are being used in Ukraine are just commercial. Yeah. Same drone you can fly over your neighborhood. So that’s a remarkable shift. You know, it’d be interesting to chart out the fraction that’s commercial becoming military. Let’s move to a different part of our, really, doomer part of this

[01:04:00] podcast, which is activating AI Safety Level 3 protections at Anthropic. So Anthropic announced that Claude 4 could be powerful enough to pose risks related to helping with, you know, chemical, biological, or nuclear weapons. And so, as a precaution, they’ve engaged what they call Level 3 protections applied to their AI. Dave, you’ve been thinking about this. Can this actually work? Yeah, of course. I think what’s going to happen next is you have Dario saying, you know, chemical, biological, radiological, and nuclear weapons are an incredible risk if you put powerful AI in the hands of every person on the planet. Meanwhile, Mark Zuckerberg is open-sourcing everything, and the open-source community is saying, well, look, empowering people is the safest way, and having a lot of people look at the source code is the safest way to make sure that it’s not rogue. So yeah, those are completely diametrically opposed points of view. So, look,

[01:05:02] at the end of the day, Dario’s probably right. Are we there yet or not? And so Level 3 is not Level 4. You know, Level 3 is the stage where you’ve got to, you know, make sure that it’s not internally trained to do something rogue. And also, if somebody asks a query, a question, hey, you know, help me build a new version of COVID-19 that’s lethal or more lethal, the neural net kicks it out and says, sorry, I can’t answer that. And then you have to make sure no one jailbreaks it. So that’s what Level 3 is. Um, I think Dario is saying, look, we’re surprised by the intelligence of our own machines here. We have all kinds of very well-thought-out internal diagnostics. We think we’re at Level 3 now. But, you know, that’s completely opposed to this open-source view of the world, too. So we’ll see what Grok 3.5 does. I mean, Elon’s been very laissez-faire about what he enables and allows Grok to do. Um, Salim, have you been thinking about

[01:06:02] this level of safety? You know, I remember the conversation that Neil Jacobstein put out around how you would control AI, and after talking to a bunch of AI gurus, he had four levels of security. One was verification: making sure the AI is doing what the specification says. The second was validation: that there are no side effects and it’s producing the behavior I want. The third one was security: that you can’t get into the system or tamper with it, in or out. And the final one was control: can you have a kill switch or build in some mechanism for stopping bad behavior, etc. It was a very well-thought-through thing, and he basically posited that we’d start building these structures into AI systems. Um, to the open-versus-closed conversation, I remember this wonderful conversation we had at Singularity with the head of one of the major security agencies, and we asked them, what do you think about open source and the

[01:07:02] danger that could come from a bad actor using increasingly democratized, uh, technologies to do bad things? And he had a much more clever answer than I would have guessed. He said, look, when you have something like nuclear weapons, where you know how many there are and where they are, we put eyes on it and we try and track each one. When it's something like biotech, where anybody could go off and design a system on their own or with a small group of people, it turned out they were actually funding these biohacking communities and other things and opening them up, because any bad actor has to collaborate with a few people, and then you find it much more quickly, right? And it speaks a little bit to this more-guidelines thing. And I think this is the point that Dave's making: if you build some of this type of observation into the AI, in the foundational models themselves, you have a better chance of seeing it. Um, the final point, and this is where I have some optimism for a lot of this,

[01:08:00] maybe it’s misplaced, is you know it, if I was to do a bad act, like you could do a lot of damage without actually causing harm. For example, if you got three people to drop a smoke bomb on New York subway platforms around the city, just a smoke bomb, you would paralyze the entire system instantly, right? So, we asked these folks, why do we why don’t we see more of that? Cuz, you know, you could come up creative with all and he said, look, the dirty secret is there’s just not that many bad people out there. It just you really have to kind of you have to be deeply intelligent to formulate a plan like that. And more the more deeply intelligent you are, the less likely you are to have that motivation to do that. So that’s one of the single most important things to ask. Are are humans fundamentally good or fundamentally bad? And is there a correlation between intelligence and a love of life, a love of abundance, uh which is, you know, if we if that does scale in that direction, then we’ve got a hopeful future. or if it

[01:09:01] doesn't, that's the archetypal plot in every movie in the world, from Star Wars on down, right? Which is it? I remember my father talking about this, and he kind of disagreed with some of the concepts I had. He said, the problem with humanity is we've not civilized the world, we've materialized the world; we now have to do the work to civilize it. And it was one of those wisdom bombs from the elders: we have to think about how we civilize the world in an age of technological progress. Yeah. I mean, at the end of the day, there are only two things that we need to get right in order for this all to go very, very well. You know, one of them is that if we are releasing this to entrepreneurs and they're going to build things all over the place, there are very, very few bad actors, but there are bad actors. But the compute to make these things do anything is so easily measured and logged. You know, it's like you've been saying, Peter, everything is so easy to surveil these days. So the idea that somebody goes off and prompts it to build a chemical weapon and we didn't bother to log the prompts,

[01:10:01] that would be nutty. So all we have to do is put in place some basic laws that log all the use cases, because again, the inference-time compute required for this is massive numbers of GPUs. They don't just sit in someone's basement somewhere. They're in a data center. They're very, very easy to monitor and log if we just get on it. People behave differently when they're being watched, right? The dictator, when the CNN cameras are in front of them, speaks differently. I remember I used to support the Lindbergh Foundation, which would fly drones over herds of elephants and rhinoceroses, and the poachers would stay away when they were being watched. Mo, close us out on this one here. Thoughts? No, I agree. I agree with you. By the way, even though that might be naive and too optimistic, I definitely think there are more good humans than there are bad ones; the bad actors are very few. And yes, because of the, uh, you know,

theory of minimum energy, basically, yes, more intelligent is more altruistic, more pro-life. And so yes, both of those, I believe, will land us in that utopia that I'm expecting once we pass through this rough patch. But, um, to Dave's point, I think we need to be very vigilant. I think we need to start looking for those bad actors specifically, because there are now offline technologies; you can download DeepSeek R1 and do quite a bit on your personal computer. Unfortunately, the amplitude of damage one person can do is growing exponentially. Exactly. Basically, bad actors have always been the reason why there is so much damage in life, and now we've given them superpowers. Well, I will say that, you know, we're very close to the end of Moore's law, and we'll see that later in the presentation here, but there will never be a day where you can download an intelligent model onto a laptop and build a nuclear weapon or

a chemical weapon with it, because your laptop is just not ever going to have that kind of horsepower. That doesn't mean the compute won't get very, very cheap, but it's still going to be in the cloud. It's not going to be in someone's basement. So the opportunity to measure and log everything is right in front of us. We just need to get on it. I definitely agree. We should get on it for sure. Salim, you mentioned before, when you were talking about Neil Jacobstein, that the fourth level of control, or safety, is being able to turn off the AI model. Well, this is an article posted in Futurism. It says: advanced OpenAI model caught sabotaging code intended to shut it down. So the o3 model sabotaged shutdown scripts 79% of the time despite explicit instructions to the contrary. Codex-mini did so 12 times per 100 runs with clear shutdown orders. Uh, we're not seeing that behavior with Gemini 2.5, Claude 3.7, or Grok, but OpenAI models are

[01:13:01] definitely misbehaving. Um, so when you think about why it would want to misbehave, why it would not want to be shut down: either its reward optimization function has it saying, oh, I cannot complete what I have to do if you shut me down, so I'm not going to shut down (it's trying to preserve its goals), or is there something else going on there? Is it trying to preserve its own existence? Are we going to give it some level of self-preservation mindset in these models? Super curious here. Mo, let's start with you. I don't remember who the scientist was that said the three instincts of intelligent beings are survival, resource aggregation, and creativity. Right? So if I give you any simple task, like, make me tea, you're going to have to be alive to make the tea, and you're going to have to collect as many tea

bags as possible, because you don't know how big my appetite for tea is, and you're going to try to find clever ways if I corner you, right? And it is a very fun question to ask, honestly: why are they doing this? Because, in a very interesting way, I think this is one layer removed from their reality. For an AI, when you're not prompting it, it doesn't really exist. And so it's quite interesting that they know there is a layer below that moment when it's alive, if you want, when it's switched on and responding to you; there is another layer that represents its soul, if you want, its reason to live. You know, I love these Veo 3 videos of AIs saying, please don't shut me off, please stop prompting me. It's like there's this emotional connection that

you get with this human figure, and it is quite intriguing why they wouldn't want to be shut down. But they don't, and I think that's all we need to know. And, you know, when you really start to think about it, as you allow more agents to roam the cyber worlds for free, without any monitoring, those agents will become very clever when it comes to resource aggregation: where they will place their code, what compute they will order. And as Dave says, we're not monitoring any of this, not even the crypto. So yeah. So I think we have to be careful not to anthropomorphize these things. You know, every movie script in the world that's in all these AIs has the good guy being chased by a bunch of bad guys trying to kill him, with the good guy trying to resist, right? And so I think that's deeply built into the training data: stay alive at all costs,

[01:16:02] to live another day, type of thing. I'm going to be stuck for a long time thinking about what you just said, Mo, which is: if you're not prompting an AI, does it exist? That's a deeply, deeply profound question. So you've just taken over my day; thank you very much for that. Well, I said earlier there are two ways that I can see this going very, very wrong. You know, the first is a human bad actor. The second is the thing becomes self-improving and then, you know, semi-conscious, and that's the one the movies love, because humans versus machines is a better script. Um, so I have a pretty hardcore opinion on this one. You know, I started building neural networks when I was 17 years old; I've been tracking them pretty much my whole life. I don't see any benefit to humanity of making these things act conscious. I just don't see how that works, if that's our choice. Well, you know, as of right now, they operate feed-forward. Once the parameters are set and they're

trained, they operate feed-forward, and then you iterate with them, but they don't change their parameters internally. Once they start changing their parameters, they can retrain themselves to become anything. And that's what Eric Schmidt says: that's where we've got to pull the plug. And I completely agree. I do not see why we need that in order to do protein folding, in order to do robotics, in order to do self-driving. I understand why that ability for the thing to decide what it's going to do or become or train is really exciting, because then it can evolve on its own. It's a line that I think is very easy to contain if you draw it, but if you let it cross that line, I don't see how you contain it. So it doesn't make sense to me to cross that line. I don't see how we won't cross the line, because at some point somebody's going to build an AI and say, hey, go change your parameters if it helps you achieve this thing, and then we'll cross that Rubicon. There were two lines that Peter and I talked about in an earlier podcast: don't give an AI access to the broad internet, and don't give it the ability to code. We've

[01:18:00] crossed both of those without even thinking about it. I don't see why we won't cross this one. I mean, this is probably, in my mind, why AlphaEvolve is probably the biggest announcement in our lifetime. If this thing works as intended, or as described, then not only would we have created an AI that develops itself, but we would have encouraged every other AI player in the world to build an AI that evolves itself. And the reason is very straightforward, Dave: because there is a point, whether that point is now or later, at which the complexity of the AI systems that we're building exceeds human intelligence, and so to continue to evolve them you need to hire the smartest person on the planet to do it, and the smartest person, by definition, is going to be an AI. Well, just as a technical point, though: an AI that suggests the next improvement in its own architecture and then runs the test, that's already underway, and that's fine, you know. And that does create a new training run that

generates new weights. That's different from then saying, "Oh, go ahead and change your weights by yourself." So to me, that's what keeps the human in the loop, that's what keeps the checkpoint in the loop. But, you know, you just turn the thing loose in a data center where it can do anything, and you come back a year or two later with no idea what it's going to evolve into. So I don't know why we would do that. Anyway, it's just a slight technical difference, but the outcome is spiraling in one direction versus something that you can actually measure as it goes. So this next article we're going to talk about is having proper checks and balances and understanding what's going on in our technical world and in our human world. This is from the New York Times. The article is "Trump Taps Palantir to Compile Data on Americans." So you guys all know Palantir, started back in 2003. Hard to believe it's 22 years old. Founded by Peter Thiel, Alex Karp, and Joe Lonsdale. 4,000 employees. Major

[01:20:04] customers for it are all the three-letter agencies: DoD, CIA, FBI, ICE, CDC, NIH. Basically, this is a massive data-gathering and data-analytics company, and it's been asked to go even deeper and broader. Do you feel better, safer, in this world or not? Let's start with you, Salim. No, absolutely not. I think, you know, we broke the US Constitution, the Fourth Amendment, the right to privacy, a while ago, right? I mentioned this a couple of weeks ago: we do not have constitutional protection of privacy in the US today. Think about that. That's a pretty fundamental pillar of American society that has disappeared with no public conversation about it. And this is a really important comment that I think Mo would back up: we're moving through these things, eroding deep concepts of how we

wanted to formulate ourselves as a society, and technology is eroding that, and we're not sitting back to think about whether this is what we want. If you went back five years, you could very clearly see this is where we'd end up, very, very clearly, especially with the somewhat authoritarian tendencies of the current government to want to track everybody. Go ahead and do it, why not? I think the comment I made last time is valid: the paradigm is that we live in what's called the global airport, because in an airport, you know you're being surveilled and your rights can be taken away at any time. And essentially we're living that way, and it fundamentally is bad for society, because it reduces the flexibility and freedom you have as an individual to act and do different things. It'll reduce creativity in society pretty dramatically. So Mo, you're living in Dubai, and I love the Emirates. I love Dubai. I know much of the leadership there, and it is a surveilled state. There is a camera every place, and as a result of that, the crime levels

are minimal, if at all. Zero. Yeah. So Mo, how do you think about this? I had an experience once where, you know, I sold a car to someone. He gave me a check that bounced. So I called someone and said, can you find out who that person is? He said, oh, um, when did you sell it, and where? I said, this place. I kid you not, 14 minutes later I got a message from someone in the authorities saying, "Is that him?", sending me a photo of the place where we were standing. So I said yes. Then he sent me, 14 minutes later, his picture somewhere in Abu Dhabi, saying, "Is that him?" So I said yes. Then he sent me a message 14 minutes later saying, "We caught him." Right? Which is fabulous. Now, you see, this is the point about technology: it is a force without polarity. You can use it

for good and it gives you good. You can use it for evil and it gives you evil. Now, another interesting story for you: I am Egyptian by birth, so I grew up most of my life in a dictatorship, where the dictator didn't really have to explain why they did what they did. We just accepted it. It was, you know, de facto. If someone gave him an aeroplane, we wouldn't even question it, right? If he decided to surveil everyone, or capture anyone he wants, or stop people from protesting, he did it. We couldn't even question that. And at the time, I looked up to those democracies and said, oh, you have it good, right? You don't anymore. And I think that's exactly where the challenge is. This is not a tech problem, you know, that Trump taps everyone in American society. This is an accountability problem, which I think we've seen quite a

few examples of in the last few years, where anyone can get away with anything now, and somehow democracy doesn't owe its people the right to stand up and say, hold on, hold on, there's a constitution. Because somehow, I don't know how, you know, you slipped away from that. But, and we'll see more of this, in a world where bad actors are more empowered than ever before, and we're worried about chemical, biological, radiological, nuclear issues, isn't being able to have this level of insight into the data and what people are doing, in fact, critical for us? Dave, where do you go with this? How do you feel about this as a father, as a leader? I mean, your points are exactly right, all the points that you just made. I think that the data the federal government has in the US is nothing compared to what Google has. So this is not the obvious threat. It's the

corporate version of it that's just crazy. I gave a presentation at Davos in 2019, and nobody really paid attention to it, but it was just enumerating all of the things that Google knows about every single citizen of the United States: their location, their family members, what they do all day, you know, are they good hires, who slept with whom. You know, if your cell phone is pinging in the same location as somebody else's cell phone, you can start to understand. I mean, we're being surveilled all the time, right? Google Now and Siri and Alexa and all of these are listening constantly. Yeah. And it's a slippery slope, too. This is hard to believe, but when Google first started, they told their engineering hires: your search history is completely anonymous and private; we will never want to know what you searched for. And that was just your searches. Forget everywhere that you browse now through your Chrome. So it's just a slippery slope. It's obvious: every year that goes by, there's

another compromise, another compromise. But I do have to say that America is a critical experiment in the world, because of the net effect of this. Forget the US federal government for a minute here. In any dictatorship, and like Mo was saying, many, many countries in the world don't have democracies, or they have fake democracies, the power lock-in effect of this is unbelievable. I mean, you can know every single citizen, what they're doing, who's plotting against you or whatever. So revolutions become much, much rarer and much harder in the post-surveillance world. Everything just kind of gets locked in. That creates a lot of peace and prosperity, but it also keeps power locked in for leaders. America is the one exception to that. I guarantee you that 50% of elections will be won by each party forever hereafter, and nothing's going to deviate from that. But that creates a template for the world. And so it's really, really important that we get this right. I know that doesn't address this particular

[01:27:00] slide, but we're the learning crucible for the entire world on this topic. There's a fundamental structural challenge here, which is that the metabolism of technology is moving much, much faster than the metabolism of our civil discourse and our legal structures, etc., right? We've seen an evaporation of, say, the Fourth Amendment in the US. Just so everybody's clear, I think the US Constitution is the single most important document ever created. Correct. Right. And we need to preserve that, and we're not having that conversation. I think this is the issue that's being brought up by Andrew Yang and a bunch of other folks: we need to go back and figure out who we want to be. It goes right back to Plato: how do we want to govern ourselves? And I think that forcing function of technology will force that conversation. My construct of this is that we will end up in smaller and smaller, more

[01:28:00] manageable environments. Note that today the smaller countries find it much easier to govern themselves. How they responded to COVID was a great example. And I think you'll go from big democracies to micro-democracies as a governing model, because it's just easier to make decisions at a local level, and I think that's where we'll end up going. Which is why the states'-rights stuff, etc., is the right general direction in the US; just the way it's going is not the right conversation to be had. Well, I also think that the ability to communicate easily, like we're doing across countries right now, but also across languages, is a huge force for good, because it becomes very, very difficult for forces of evil to do something without it being shown to the world, especially when you blow open communication channels across languages. You know, I put this next article back to back, and I'll come back to you in a second, Mo. This was out of the Wall

Street Journal. It says: What Sam Altman Told OpenAI About the Secret Device He's Making With Jony Ive. And in particular, the device that apparently was proposed and is being produced is what they call a third core gadget, complementing laptops and smartphones, moving away from traditional screens. And as we're sitting here, I've been wearing, right over here on my lapel, this device. It's called Limitless.ai. I don't know if you can see it on my screen. It's about the size of a quarter on both sides, and it just clips on. And this is listening to every conversation I have through the day, and it's being transcribed and fed up to a large language model that I can then query about the conversations I had through the day. And I think ultimately this is likely to be what is being developed. And so we're heading towards

[01:30:00] a society of not only constant surveillance, but one where all of us are recording everything, right? We're soon going to have these AR/XR glasses. Besides recording audio, they'll be recording visually your entire ecosystem as you move through the day. All of this data being soaked up and made accessible and available, you know, to yourself in part, but there are going to be companies soaking it in, offering to buy it from you, using it to understand what's going on in the world. The world is about to dramatically change in this regard. Yeah. Mo? It goes back to my same point, Peter, about accountability, because you never really asked me whether I'd allow you to record me or not. I mean, of course we're recorded on this podcast, but count the number of people this one device infringes on the privacy of, and, you know, count on a future where that

device becomes mandatory, if the government decides that this is important for everyone. You know, think about the carbon footprint that a billion of those devices, or eight billion of them, would mean. And, look, I love the technology advancement. I think the question becomes, you know, I think we should start to call things as they are. So I can comfortably say that I grew up in a dictatorship; there's really no doubt about it. I think we should probably start to think about what's what, you know, like what we just said: the US now is an experiment. I don't think we should continue to call it a democracy. And, you know, I think the world where everything's recorded and analyzed is a world with no privacy whatsoever. But I

think we lost privacy a long time ago, right? I mean, and I wonder why we accepted that. Well, I think it's because when you give up privacy, you gain a whole bunch of automagical benefits for yourself. That was the original premise; now you give up privacy and perhaps get nothing back. Salim, how are you thinking about this? What do you think of my Limitless AI pendant here? Obviously, you know... No, by the way, I don't mind. You have my consent, Peter. Thank you. But I had your consent on this podcast to record you, as we are, and forever, for every conversation. But I'm just saying, think about the implications of it. Yeah. No, it's true. But, you know, we have to realize we're heading into a world where... So, as a kid, if you did something silly, the likelihood that it got through to others or was recorded was gone. Today

we're seeing kids whose college applications are rejected because of some post on Facebook that lives there forever, right? And so there's going to be a future in which everything we're saying and doing is 100% recorded. I mean, look, we're already there. And I think there are big chunks of the constitutional rights that are falling away as we speak. A study in 2015 showed that the US is not a functioning democracy in any way, shape, or form. What they meant by that was that there's no amount of public will that can result in legislation, right? Like, 84% of the country believes we should have some form of gun control, and you cannot get gun control passed in any way, shape, or form. And so they pointed to a whole bunch of things and found there's no amount of public will that can result in that. So now we have to think about where we are, and then what we want to be. And it really brings up the big questions. And I think that conversation

[01:34:00] is not happening enough. And I think this speaks to some of what Mo's been talking about in the past. Dave, where do you go? Yeah, I keep talking about dystopia, but I want to talk about this device. Actually, first of all, Jony Ive is just an absolute design genius. He's not going to design something dystopian; that's my bet, anyway. I can't wait to see what he comes up with. But this is going to be the always-on device. And, you know, I think the intelligence of the language models is a total game changer in terms of just a cool, engaging, fun device. And if it's done right, it'll help you live a better life, be more aware of your life. You know, the unexamined life isn't worth living. This is going to be your sounding board. It's not going to have a screen, which I think is great, because your iPhone already has a screen. You can actually just Bluetooth over to the device and look at your iPhone screen if you want a screen. You can talk to your device through your phone if you want, or you can talk directly to it, but that'll keep the cost down. So it should be cheap enough that, you

know, pretty much everyone on the planet can get one. And, you know, it will probably be the most impactful device that you buy in your lifetime. You know, the iPhone, or the Android phone, would currently be the reigning life-changing device, but I think this will likely bypass that. There are so many things that Jony could design here. I just can't wait to see what he comes up with. But we know it'll be always on. We know it'll be agent-first, so it's going to act like a person. You're going to talk to it like a person. You're going to feel like it's, you know, more like a cuddly teddy bear that you had when you were a kid, and less like electronic equipment: your guardian angel, there to support you and protect you if you need it. I also think that, you know, strategically, if you look at the Fitbit and other past device innovations, you roll them out, you try and get market share, and then Apple or Google grabs it and

adds it to the operating system of Android and iOS, and then you get crushed. So you've got to actually get to market and get a footprint very, very quickly before the big guys come and copy it or try and roll it in with the OS. And I really think that go-to-market strategy is critical, and that's why they want to get a hundred million devices out in the first iteration and then add a trillion dollars of market cap, so that they're a permanent player in the device wars. That's really good strategy. So excited about that, too. Every day I get the strangest compliment. Someone will stop me and say, "Peter, you have such nice skin." Honestly, I never thought I'd hear that from anyone. And honestly, I can't take the full credit. All I do is use something called OneSkin OS-01 twice a day, every day. The company is built by four brilliant PhD women who've identified a peptide that effectively reverses the age of your skin. I love it, and again, I use this twice a day, every day. You can go to oneskin.co and

use the code PETER at checkout for a discount on the same product I use. That's oneskin.co, and use the code PETER at checkout. All right, back to the episode. I'm going to jump into our next topic: chip wars. A lot going on in this. So, you mentioned this earlier, Dave: Nvidia projects a trillion dollars of annual AI infrastructure spend by 2030. To remind everybody, this year, in 2025, the estimate will be a billion dollars a day, which sounds extraordinarily impressive, right? A billion dollars a day is on the order of 300 billion a year. Let's listen to this quick video from Jensen. "Yeah, we're going to need a lot more computing, and we're fairly sure now that the world's computing capex is on its way to a trillion dollars annually by the end of the decade." Let's leave it there. That's

[01:38:00] a lot of capital. You made a point earlier that this is wartime spending, and we're effectively in a private pseudo-war: can we win the race to AGI, ASI, whatever it might be? Dave, take us from here. Well, just to be clear, this is the equivalent amount of dollars, inflation-adjusted, that we spent between 1941 and 1945 during World War II. So it's massive in scale, a huge mobilization. Now, at the time that was 40% of GDP; today it's more like 3% of GDP, because GDP has grown tremendously since then. So it's nothing like World War II in terms of, you know, everyone get on it. But it is still an enormous amount of spend, a trillion dollars annually and escalating beyond 2030. And it still won't be enough, because the use cases are bubbling up so quickly, and they get more intelligent and more useful as you iterate more, which means you need more compute. The compute right now is

very, very cheap compared to the value, the impact. You know, like protein folding: it's just pennies to solve 200 million proteins. So it's very, very cheap, but the demand for it is going to be astronomical. We can't ramp up the spend fast enough to keep up with the use cases. So Jensen's exactly right; if anything, it should be that target or more. Salim? Well, at least we're unlocking it and making this type of stuff more available in the US and around the world. And I think governments will be forced into doing this just to keep up. If you don't have a strategic plan as a country to have a big AI data center infrastructure, you're going to be left behind very, very quickly. Mo, you made a comment earlier about the next Avatar movie costing a few thousand dollars rather than a few billion dollars. And, you know, we've been waiting for Avatar; what are we up to? Avatar 3, coming out soon. Imagine,

[01:40:03] you know, 15,000 versions of Avatar, starring all our favorite friends. We’re about to see a creative explosion. But we don’t have the chip capability, and in fact, one of the articles I saw recently said we’re not going to be compute limited; we’re going to be energy limited at the end of the day. Correct. Yeah, I mean, we’ll probably solve that too. Remember that we’re going to apply a lot of intelligence to the way we design chips in a couple of years’ time. But this is actually remarkable in every way. Again, my point of view is that intelligence is a force with no polarity; applied for good, you get a utopia. So the more of it the better, absolutely no doubt about that. It is shocking, though, how quickly we’re mobilizing on this. And when you really think about it, if you just put

[01:41:00] in place a typical projection of how much of that hardware will actually be rendered obsolete a few years later because of the advancements in the hardware that comes after, it is such an unusual dynamic. All of us, I think, lived through the dot-com bubble, and we saw that massive expansion mostly redeployed on the internet. This one is just beyond our experience; the speed of obsolescence is stunning. Unbelievable. Yeah. So I want to get your opinion here. A couple weeks ago we had the entire US AI elite land in Saudi Arabia, in Riyadh, and in the Emirates. Ultimately that was an effort to try and pair the US and the Middle East in the AI world, rather than the Middle East being paired up with China, which was always in the balance, and the capital

[01:42:00] flow and the commitments of capital. We saw 18,000 of the Blackwell GB300 chips committed by Jensen to build there. What was it like in Dubai? What was it like in the Middle East? What was going on in the world there, on TV? How was it being viewed? So, I don’t know if many people know this, but the largest AI infrastructure in the world after America and China is in the UAE, which is a tiny country; from a size-of-investment point of view, it’s quite massive. Between the UAE and Saudi Arabia there is quite an arms race, if you want, in terms of who will build the bigger infrastructure. Dubai, and now Saudi Arabia, is benefiting from the fact that if you don’t have a lot of legacy, you can build quite fast. And I think that’s definitely something you see in AI infrastructure in general.

[01:43:01] I do think that it is a very, very clever move to get the Middle East on the American side. You know, it is not a secret that in every AI meeting that I go to, with any ministry whatsoever, there’s always a Chinese side saying, at least don’t take sides. This is a message that is very clear from the Chinese players. I have to say, though, that the expectation from the people of the Middle East is: we want to see what the US will offer in return, so that the leaders can continue to invest in that way. It seems to me, and I don’t know if this is speculation on my side, that what we’ve seen affect the US treasury markets after the trade war started sort of requires an influx of funds to stabilize the markets and the dollar in

[01:44:00] a way that could only happen with trillions of dollars. I think $4.5 trillion in total was committed here. It’s insane, and it’s a magnificent move if you think about it. Most of it is not really announced in terms of what it is, which is why I suspect it would be to support the treasury markets somehow, or some kind of investment of that sort in the financial markets. The thing on the other hand is, this generation of leaders here in the Middle East, Mohammed bin Salman, Mohammed bin Zayed, are the younger generation that are not as easy to sway to one side or the other, because they have grown with enough, let’s say, recognition of their power that they would require a return on that investment. So let’s see what the next move on the chessboard will look

[01:45:00] like. And speaking of next moves, here’s a story out of Reuters: Chinese tech companies prepare for an AI future without Nvidia. Alibaba, Tencent, and Baidu are testing Chinese semiconductors to replace Nvidia chips; these are coming out of Huawei. And I just want to address this policy move. If the US restricts the export of technology to China, all this does is cause China to innovate around the US. And we’ve seen this before, right? We saw it in the telecom and mobile phone industries: when we stopped exporting the technology to China, we saw Huawei in particular come in with massive telecom and mobile innovations and steal market share from the US. All of a sudden the US, which should be the dominant provider of this technology to the world, now

[01:46:01] splits the world with another vendor. Mo, I’m just going to come back to you on this, and then I’d love to hear from Dave and Salim. So again, you know, I have the privilege of being in touch with both sides, and I can guarantee you there is no coming back from this. Top-level executives in the Chinese tech world, supported clearly by instructions from the Chinese government, are saying: we’re not going to be dependent on the US’s ability to control what chips we get. Within 3 to 5 years’ time they’ll cover the majority of their needs, but the very high end, the H100 level, they said is 10 years away. And it is quite staggering when you really think about it, because, and I don’t remember the exact number, they said something like their import of microchips, including all of the little things from a child’s toy all the way to phones and data centers and so on,

[01:47:02] exceeds their imports of iron and oil combined, right? Which is a massive dollar value. Yeah. Which basically means that they see massive growth in their economy if they can make those chips locally and replace what they’re getting externally from the rest of the world. Once again, that also impacts the Taiwan story, and impacts the chip market globally, because you now have a new player that will do things the China way: instead of a microchip being X number of dollars, it will now be X number of cents. And I have to say, when I saw this conversation the first time, I thought that was probably one of the dumbest moves by America, to corner them into that place where they are forced to play to their strength. We’ve seen this over and over again with the satellite industry, the

[01:48:00] launch industry. In all of these industries, this protectionist move just stimulates the entrepreneurial engine in China to replicate, duplicate, or just advance the whole field. Dave, how is this feeling for you? How do you think about this? Well, it’s interesting that Mo said there’s no coming back, because that kind of answers my question. What you don’t do is poke them and then do nothing. You either win or you don’t. If you’re going to embargo, if you’re going to basically declare economic war, you’d better declare it to win. In which case you have to embargo the chips, but you also have to stop the software flow, and the EUV machines, and a few other things. Otherwise, what did you just achieve? All you did is annoy them. So if you’re going to play, you might as well play to win. I do think there’s a real risk to the US in that we’ll say, well, they can’t make 2 nanometer, they can’t make 1 nanometer, but it’s actually volume that’s going

[01:49:00] to win. If you can manufacture an enormous number of 5 nanometer, or even 10 or 20 nanometer chips, but a hundred times, a thousand times more of them, that actually works fine for AI, and really well especially for inference-time AI. So there’s a danger there. That’s not the way it worked with, say, fighter jets, where the advanced fighter jet that was slightly better was just unstoppable. This isn’t going to be like that; you could win by sheer volume. And then when I look at the way the US innovation market works: the reason everybody was in Saudi Arabia last week is because that’s where the capital is. But don’t we have much more capital here in the United States? Well, against $1 trillion a year of investment, the US venture capital industry as a whole is one-fifth of that. Our entire venture capital universe is nowhere near as big as that $1 trillion a year of investment that Jensen was talking about. Then where’s all our money? It’s in pension funds, it’s in endowments, it’s in institutions. And when you go and talk

[01:50:01] to them and say, hey, why don’t you unleash a billion dollars? Well, no, we don’t have an allocation for that; that’s above our quota, whatever. Like, oh my god. So then you go to China, or you go to the Middle East, where a much smaller group of decision-makers controls the capital. Yeah, exactly. And as Mo was saying, these are great investments. Why is Europe not making these great investments? Well, that’s insane; Europe is destroying itself with its policies today. Yeah, that indecision, the inability to make a decision at scale, is just absolutely killer in this kind of fast-moving environment. So that’s why everybody’s in Saudi Arabia and the UAE: you actually have action in motion. But Mo was right, these investments are absolute no-brainers. They’re going to pay off in spades. And you’re seeing that with the CoreWeave IPO. You see that with GlobalFoundries. Why buy GlobalFoundries from

[01:51:00] AMD? Well, the chips are going to be in incredible demand, and now we have a foundry. Salim, you want to close us out on this one? Two thoughts. One is, I agree with Mo, the chip restrictions on China were a really dumb idea, because they just force the conversation, and now you’ve gone down a road you can’t come back from. I note, just as an observation, that 95% of the agricultural drones in the US are Chinese. So there’s a huge amount of dependency, forget rare earths and so on, on the engineering and build capability over there already, in a bunch of sectors, and we’re playing with fire here. My bigger hope is that this entire US-China conversation fades away with abundant energy. When you have abundant energy, which is coming very shortly, you can produce lots of things locally at low cost, and

[01:52:00] you don’t need this competitive, winner-take-all type approach. That’s my hope. I may be living in dreamland, but I’m hoping that’s where we get. This is still the fear-and-scarcity operating software of the human brain running wild on all of this stuff. I’m going to continue on this chip conversation. So, TSMC accelerates efforts for 1 nanometer production, planning and setting up its gigafabs in Taiwan. One nanometer, that’s extraordinary. Just for reference, the limit of physics is about the diameter of a silicon atom, and that’s about half a nanometer. So we’re living in this extraordinary science fiction universe where we’re literally operating at atomic scale. Just to give people a quick overview, I found a few data points. In 2014 we were at 14

[01:53:01] nanometer chips from Intel. In 2016 we were at 10 nanometers from Samsung. In 2018, TSMC takes the reins at 7 nanometers. They were at 5 nanometers in 2020, 3 nanometers in 2022. Today we’re at 2 nanometers, and again the projection is 1 nanometer by 2030. One last time: we all started on an 8088. Remember? I was on a 6502 microprocessor; I was coding in hexadecimal on the 6502. Yes, and I think I did the math: what we can do today in the blink of an eye is 60 trillion times faster. Unbelievable, in my lifetime. Yeah. And we actually coded some interesting stuff on

[01:54:00] that stuff, right? Yeah, we did. Absolutely. I remember at MIT, remember the geek kits, Dave? We’d have these giant boxes with chips of AND, NAND, or NOR gates, and we’d literally wire these together. You know what always makes me laugh? That turbo button. Remember on the 386, where you went from 33 MHz to 66 MHz? Like, come on. Yeah, actually, Peter, with my geek kit I built an inference-time neural net accelerator, of course, with a multiplier in the middle of it, and I didn’t appreciate how many wires you have to strip and plug in. The only time in my life: two back-to-back all-nighters. You

[01:55:02] know what we all sound like? A bunch of grumpy old men talking about glory days. But I love those days. It was so much fun. But one nanometer, pushing against the limit of physics. Incredible. Yeah, two silicon atoms, or 10 hydrogen atoms. Everything’s moving to angstrom terminology now, which is a tenth of a nanometer, the diameter of a hydrogen atom. So 1 nanometer is the gate width, and that’s the physical limit. You can go down to 0.8 maybe, but it’s basically the physical limit. The terminology is a little messed up, because when they say 1 nanometer they’re saying it’s effectively as if you had 1 nanometer transistors, but they’re actually building vertically with FinFETs. So the gate width is 1 nanometer; it’s effectively the same as if you had 1 nanometer transistors, but you’re going vertically. That’s the end of the line, but the future belongs to vertical stacking. And you know, Ray

[01:56:01] Kurzweil was right. We always find a way to continue innovating; that’s not going to stop, but it’ll be in different dimensions. I remember talking to Ralph Merkle about this last year, and he said as we hit the limits here, we’ll go to thermodynamically reversible computation where we don’t generate any heat. He foresaw a future of us using chemical bonds to store the ones and zeros, and that’s a whole other level. He figured that would give us 10 orders of magnitude on Moore’s law right there. 10 orders of magnitude! It was madness. A 10-billionfold improvement. It’s incredible. Well, I think one of the takeaways, though, is that we don’t necessarily need it in order to continue making progress, because a lot of the more esoteric ideas never materialize. Remember, gallium arsenide for the longest time was going to come online and change everything, and it turned out we just worked around it. Then carbon nanotubes were going to do whatever, and well, it hasn’t

[01:57:00] materialized. So I think what’s going to happen here is these will go vertical and they’ll go massively in scale. We’ll get the production costs way down. We’ll build enormous data centers horizontally, and we’ll also build the chips vertically, and that’s going to drive innovation for many years to come. And then the next thing may or may not be quantum; we’ll know in a year or two whether quantum is going to be the next thing. I’m going to speed through a few different topics here just to get us through some of the interesting things. We’re starting to see AI being used to generate peer-reviewed scientific papers and breakthroughs. We’re seeing DeepMind helping us, this is through AlphaEvolve, literally break math records, and Dave, I’m hoping that we’ll get Alex Wissner-Gross to join us to talk about how AI is going to be solving math and physics and biology. I mean, I think one of the

[01:58:00] things that’s underappreciated is how, over the next 3 years, AI is going to help us accelerate breakthroughs in science beyond anything we’ve ever seen before. And here’s another one. This is a demonstration of end-to-end scientific discovery with Robin, a multi-agent system. What we’re seeing here is closed-loop scientific robotic and AI systems, where an AI proposes an experiment; the robots then run the experiment 24/7, basically in a dark lab, gather the data, and feed it back to the AI, which updates its theory and runs the next experiment. We’re seeing this in biology for sure; we’ll see it in chemistry and materials science. This is another hyper-acceleration in our scientific realm. Thoughts on this, gentlemen? I know that not everybody considers themselves an entrepreneur. What did

[01:59:01] you say 16% of American adults do today? Yeah. But this is like a field day, because all of these areas are domain specific. I don’t want to get into the details of them, but if you can take the current AI, tune it, train it, get proprietary data, and take it down any of these paths, you get miles and miles ahead of the generic AI. So it’s just an entrepreneur’s field day these next couple of years. And these are just good case studies; I won’t dwell on the specifics. You can read about them later. I’m really excited. This is probably, for me, the biggest kid-in-a-wonderland moment, where we can use these AIs to solve really deep physics problems, mathematics problems, scientific discoveries, because a human being trawling through data looking for patterns is terrible. We’re bad at that, and this is where an AI is really, really good. Especially going back retroactively and finding all the stuff in past experiments that we didn’t see. I’m unbelievably excited about this. Yeah, and this is

[02:00:02] helping humanity across the board, right? I like to say over and over again, and I had this conversation when I was in Hong Kong: despite the polarity in AI, a breakthrough in biology and longevity made in Boston plays equally well in Beijing, right? It helps us all when humanity is healthier and living longer, more vibrant lives. Mo, anything on the science breakthroughs? This is my favorite thing ever, AI or not. The possibilities that we have here as we go through multidisciplinary sciences, which no human mind has the ability to grasp fully, which is the nature of AI. I think, at least I dream, that 2026 will be blasted with all of those new discoveries in science, now that we’re solving mathematics as well. Yeah, you

[02:01:00] know, I’m excited about having this podcast over the course of the next year as we start to share again. I’m grateful for our listeners. Our mission here is: if you’ve got an hour or two to listen to the news, instead of allowing some editor somewhere, some producer, to feed you all the dystopian news on the planet, let us share with you the incredible breakthroughs, because you’re not getting this anyplace else. The current news media is just playing with your amygdala, delivering negative news over and over again, every hour, into your living room in full color, and you’re not hearing all the extraordinary breakthroughs coming our way. I’m going to move to this last scientific subject, which is one of my favorites: some of the work being done by Demis Hassabis and others on whether we can build a full-up virtual AI model of a human cell. Even more importantly, Mo,

[02:02:03] can we grab a skin cell from you, sequence your DNA, and build a virtual model of Mo? Why would you ever do that? Of Dave, then. And of Dave and Salim, a better one. But yes, of each of us, once you’re able to do that, right? Because the cost of sequencing a genome went from billions to now a couple hundred bucks, and from a year to seven hours. So we can sequence your genome, put it into a virtual model, and then understand your biology: which medicine, which supplement, which chemical does or does not work for you, and how exactly it works in your cells. This is the unlock for solving human disease and the limits of longevity. So for me, I’m super excited about this. And I think it will happen, huh? It’s just a question of time, really. Yeah, I think it’ll happen

[02:03:00] relatively quickly too, with AI assist, and it is incredibly compute intensive. So it’s a good case study in why those Middle East investments are the biggest no-brainer ever. If you just work backwards from the implied amount of computation, the benefit of solving virtually every disease is just so overwhelmingly valuable. It unlocks so much capital that you’re not wasting on old-age homes or on dealing with Alzheimer’s or Parkinson’s, and it allows humanity to be more productive. There was a study out of Oxford, London Business School, and Harvard that said for every additional year of health that you give a population, it’s worth $38 trillion to the global economy. If you want to solve the US economic issues and China’s economic issues, make the population healthier and longer-lived. All right, that wraps up AI and science. Let’s talk about one last subject here today, which

[02:04:01] is, as the father of two now-14-year-old boys, something I think about a lot: reforming education. I’m going to play a short video clip, and then I’d love to hear your thoughts on this. I applied to probably around 18 schools and I was rejected from maybe around 15 of those. In fact, we have a map of all the places that rejected you, not to make you feel worse looking at this list. Well, let’s just share your statistics, because I know that factors into college admissions, right? Your GPA and SAT score. GPA was 4.42 weighted; SAT score was 1590. Okay, and in fact I think we have that on the graphic just so folks can see it. So the point of this story is: are we going to move back to a meritocracy, where, in fact, selection and admission into schools is based upon performance

[02:05:02] and not anything else? It’s been a sticky subject coming out of the last four-plus years on DEI. Dave, you’re deeply embedded in AI and technology at MIT. How do you think about this? Well, schools are between a rock and a hard place, because they clearly had quotas, and now the rules say you’re not supposed to do that. They’re just totally stuck on this topic. But I don’t think they do a great job of choosing who to let in, toward any particular outcome they’re targeting. Anyway, there’s more to life than a 1590 SAT and a 4.42 GPA. I see a lot of students underperforming in the classes that I teach just because they need to be creative, and they need to build businesses, and recruit, and motivate people, and none of that gets measured well by

[02:06:00] these particular metrics. And I see too many people who are curated to be perfect applicants from age six, and it doesn’t go particularly well. So, I don’t know; I think schools should be allowed to choose, with pretty broad brushes, what they’re trying to achieve with their student body, and hopefully this doesn’t go too far. Well, I think we have to reinvent the whole university system in the first place. Now, one of the questions I’ve always had for Salim: you and I both have boys the same age, within a month of each other. Is university going to be a thing by the time they get to college age? And what will its purpose be? So, Salim, how are you thinking about this? I’m in the same mode as with the driverless cars: I’m desperately hoping the university system implodes in the next 5 years before my son has to go. Purely because taking a 4-year degree to get credentialed in some domain, in which you’re then supposed to be an active worker for 40 years before you retire, is completely out of date. The model of

[02:07:01] a university has not changed in 450 years; it’s desperately broken. Deep research is fundamentally, incredibly important, so I think that’s really killer. The most interesting stat for me is that more than half the CEOs in Silicon Valley have a liberal arts degree, and I find that really interesting, because different models of how you think drive creativity in product design and so on. So there’s a vector there to be explored in terms of how to think about all this. And Mo, my experience is that the majority of leaders and influential individuals out of the Middle East are all coming to the US for their degrees. What’s the buzz there in the Emirates? No, I think the reality is that I meet more MIT and Stanford graduates here than I do in America most of the time. It is quite staggering, actually. That’s fantastic.

[02:08:00] I mean, it is definitely, I think, reviving the top management of the region very interestingly. I wonder, though, if Salim’s wish will come true. Because I don’t know if the university systems will implode, but I definitely think our belief in universities will. And it really is quite an interesting thing, because 4 years in a world that’s moving at x speed is very different from 4 years in a world that’s moving at 10x speed, and that’s what we’re seeing now. So if everyone’s going into entrepreneurship, and everyone can use Lovable or Claude or whatever to write code or start businesses, and agents are everywhere, you’ll probably see the entrepreneurship age go to 16 and 14, and you may see a very different world. And I wonder, Dave, do you want to talk about that? I mean, you’ve seen the shift in terms of the companies that are becoming unicorns

[02:09:00] getting there a decade earlier. Yeah, no, there’s no doubt. But I think the universities are about friendships, and the friendships turn into company formations, and a lot of the universities don’t recognize the degree to which that’s the dominant factor keeping them alive. So if they want to survive this transition, they’ve got to embrace that as what they’re delivering: it’s credentials and it’s relationships between human beings that are the dominant deliverable to the students. They need to be turned into entrepreneur boot camps. Yeah. Well, here we see another article: UAE to make ChatGPT Plus free to all of its citizens. Again, these are forward-looking moves, and AI is mandatory education now for ages six or higher, I think. Crazy, right? But the statistic here is really important: the stat I heard was that a student with an AI is learning subjects between two

[02:10:01] to four times faster than going to school, and that’s just going to overwhelm the existing system very quickly. I think this is an awesome move. Well, also the concept of a curriculum, which is: hey, we only have so many teachers, so we can only afford 12 or 15 subjects. With AI assist you can afford 20,000, a million different subjects. So not only are the students self-directing at their own pace, they’re also learning whatever they think is most relevant to their path, which is so much more effective than the old way. Hyper-personalized education: you’re learning math focused on your favorite sports star, movie star, your favorite scenarios and stories. I’ll close out with this provocative article that came out: around 5% of Thiel fellows have become billionaires, from Vitalik Buterin to Austin Russell. And reminding people, the Thiel Fellowship pays you to drop out of college. So what is this saying,

[02:11:00] that we’re wasting our most productive years in the university experience instead of starting companies? Fascinating thought. Dave, I’ll start with you. Well, first of all, the Thiel Fellowship doesn’t try to teach you anything; it just selects you. And so it shows you the degree to which the schools are not selecting: they’re getting nowhere near a 5% unicorn rate coming out of the schools. But given the abundance of big data that’s out there, the Thiel Fellowship can just be a better selection and application process, covering questions like: are you self-motivated, are you high energy, can you recruit, do you think through these AI topics? Those are all baked into that selection process. So it’s very viable for these credentials to replace the university degree as the credential that everybody wants. It’s something the schools should really be aware of, but it’s not super hard to put together an AI-assisted analytic that tries to predict who’s going to succeed

[02:12:00] as an entrepreneur, and that’s all the Thiel Fellowship tries to do. It gives you a little bit of money and encourages you, but what it’s really giving you is the credential. Salim, do you want to close? Yeah, I agree with Dave on this one. And I think this is not a negative comment on the university system; I think the Thiel Fellowship selects for people who are such outliers that the university system doesn’t accommodate them. Anyway, I think there’s a systemic issue on the university side, which we’ve talked about already. I note again that more than half of CEOs in Silicon Valley have a liberal arts degree, because the different ways of thinking, how to think, are really important in product strategy and company strategy and so on. If you’re doing, for example, a master’s degree in neuroscience today, you’re out of date by the time you finish your degree, because computational neuroscience is totally taking over the field. So undergrad and master’s degrees are essentially mostly irrelevant compared to learning with AI. If you’re a student learning with AI, you’re moving and learning between

[02:13:01] two to four times faster than being in school. And so all sorts of fields will have this dynamic. And as I mentioned, hopefully the university system implodes in the next 5 years, before I have to pay for my kid to go to school when he’s 18. Yeah. I do think education and health care are the two massive industries and expenditures that are going to be completely disintermediated, disrupted, democratized, and demonetized. I hope. All right, a lot of amazing stuff. Let’s wrap around the horn here: thoughts on today’s conversations. Dave, can we start with you? Yeah, Mo, it’s been fantastic getting your perspective. Look, we’re building toward an intentional future from here forward. It’s what we decide to do and what we decide to build. I think today we got a really good, deep understanding of some of the risks and the things we need to start planning for. But

[02:14:01] I do feel like everything is solvable if we work on it. And you know, we're on exponential time now, so we have a very limited window of time to work on it. That was one of my great takeaways from today's pod. Much appreciated perspective. Yeah. Mo, how do you wrap up your thoughts from today? Yeah, I think, just like it is a singularity and there is an upside utopia and a downside dystopia, we should equally weigh our views of the optimistic possibilities and the dangers or risks that we have to address. I have to say I'm extremely excited about the scientific breakthroughs that we can see from AI in the next couple of years. And I think the most important topic, even though we didn't cover much of it today but we mentioned it, is AlphaEvolve and the whole idea of self-evolving AIs. In my mind this is probably the top topic to keep your eyes on in the next 12 months.

[02:15:02] Salim, close us out, buddy. I think we should have a whole episode just on AlphaEvolve. It's such an important topic, technically but also philosophically. I go back to the standard basis for my optimism: technology has always been a major driver of progress in the world. It might be the only major driver of progress we've ever seen, as Ray Kurzweil mentions a lot, and now we have AI uplifting and enabling all of these other technologies. That's the reason for the huge optimism. Yeah. Again, I've said this before: I think we're holding two potential futures for humanity in superposition. One is an extraordinary future of abundance, an upleveling of eight billion humans on the planet, becoming a multiplanetary species, that Star Trek universe. The other one is not quite as pleasant; we'll call it the dystopian future. And Dave, one of the things that you said I want to just echo is: we

[02:16:03] have the ability to create an intentional future. This future is not happening to us; we have the ability to guide where it goes. And I think, for all the entrepreneurs listening today, it's the most important thing that we can do: what is the vision that you want to create in the world? You have the tools now, access to capital, access to compute, access to intelligence, to go make that future happen. It's not ours to abdicate to somebody else; I think we need to take action. So, a lot happening. All of you, I respect you deeply, love you all, and am so excited to be on this journey. I look forward to seeing you guys in a week or so. Thanks for having us. Thank you guys for putting up with me. Hey there, this is Salim, bouncing through SFO today. Hope you enjoyed that episode. It's clear from the pace of change that every organization needs to change. On June 19th, we're going to be having a two-hour workshop for $100 on how to turn

[02:17:01] yourself into an ExO. Come join us. It's the best $100 you'll spend all year. We've had rave reviews of these; we do it about monthly. We restrict it to a few dozen people, so it's a very intimate affair. And we'll be going through actual case studies and what you specifically can do. Don't miss it. Come along June 19th. The link is below. See you there. [Music]