
moonshots ep246 spacex ipo claude mythos transcript

Fri Apr 10 2026 20:00:00 GMT-0400 (Eastern Daylight Time) · transcript · source: Moonshots Podcast (YouTube)

SpaceX is going public with a $2 trillion valuation. It's the beginning of the IPO wars. >> The stepping stones are really, really clear now. Starlink gets you into space profitably. Then the data centers, then you get to the Moon, refueling in space, then you get to Mars. >> Anthropic overtakes OpenAI in terms of total ARR. That has got to hurt. >> Superintelligence is not paying for the singularity. >> They kind of bet the consumer would grow faster sooner, but they're just wrong. Mythos, Anthropic's next flagship model. It's too powerful to release. >> We've never seen a model like this before. We officially have models that are smart enough to break out of their environments and then apologize for it. We're there. We arrived at the future. >> Now, that's a moonshot, ladies and gentlemen. >> Everybody, welcome to Moonshots, your number one podcast in exponential technologies and everything going on in AI in the world around us. It's an extraordinary time to be alive. Uh, this

podcast in particular is here to help you stay positive about the future, optimistic and hopeful. Uh, there's so much going on. It's really tough sometimes because the speed is so extraordinary. Uh, we want to give you an overview of what's happened in the last two weeks cuz we've been offline. Why? Hate to say this, I actually took a vacation. I was in Morocco in the Sahara. Um, and it's great to be back here with my new >> I had to come off a ski slope to make this episode. >> I appreciate that. And we're going to catch up for everybody, all of our fans. We're catching up on episodes. So, get ready for a flurry because there's a lot that's been going on here with my extraordinary moonshot mates, Salim Ismail, straight off the ski slopes. Salim, where are you skiing today? >> I'm in Kirkwood in Lake Tahoe. Uh, it was Milan ski week off, so we took a few days and just got them out here. Dave, back in the saddle again. >> Yep, back in the saddle. We have 200 speakers tomorrow at the MIT Media Lab, and uh, today we had 60 startups pitching

here on our first floor, and just a lot going on. >> Amazing. I'm so sad not to be there with you. And our resident genius, Alexander Wissner-Gross. Alex, good to see you in your regular haunt. >> Good to be back in the Commonwealth of Massachusetts. >> Yeah, fantastic. All right, a lot is going on. Uh, we're going to be covering a whole host of subjects in the AI world, in the space world, in the abundance world. One of the segments we're going to be bringing to you on a regular basis is proof of abundance. We really want to keep you positive on what's going on in the world. Sometimes if you're watching the Crisis News Network, uh, what I call CNN, uh, it can get you down. Our job here is to keep you informed and bring you back up. But before we do that, Salim, uh, looks like you made some news. Here you are on the cover of India Today. What's this all about? >> So I was at the India Today Conclave. This is the biggest kind of news magazine in India, and they had a bunch

of speakers, and so the image is photoshopped. But you've got to understand the context and the surrealness of the world we live in today. So in front of me is Elon's mother. Next to me is Laura Loomer, the MAGA conspiracy theorist person. Uh, then there's the Israeli ambassador, and they've put the Iranian foreign minister next to him. They literally took me back in the speaker room and they're saying, "Hey, come and meet these two guys." I'm like, "I don't want any of that. The Israeli guy's going to pull out a gun or something and there's going to be an assassination attempt." Then on the cover there's a Bollywood star, you know, and a bunch of business people from around the world. >> What do these people have in common? I think it's a reflection of the insanity of the world that we live in today. I think that's what you can read from this cover, and I think it's kind of a commentary on the madness of the >> I hope you represent the breakthroughs and not the breakdowns. >> I did. I was very much on the, hey, we've got major things happening and we need to kind of organize differently for it, etc. It was a great conversation.

>> All right. Fantastic. Uh, let's jump into our first story: SpaceX is going public with a $2 trillion valuation. Uh, and it's the beginning of the IPO wars. So, uh, let's catch everybody up. Hopefully, you've been hearing this. Uh, full disclosure, I'm an investor in SpaceX from the earliest days. So, SpaceX is, uh, pricing itself right now at about a $2 trillion target valuation, raising $75 billion. Uh, the largest IPO of its kind. Um, interestingly enough, guys, uh, you know, one would think that the value of SpaceX is due to its rocket launches or maybe recently the merger with xAI, but the vast majority of the value today is Starlink. 75 to 80% of the target valuation is due to Starlink, about 15 to 18% due to launch services, 5% for NASA services, and the xAI and X

related revenues, it's all in potential in the future. Um, Dave, any thoughts? >> Well, the stepping stones, you know, Peter, you've been studying this forever, since we were in school together, so a long time, but the stepping stones are really, really clear now. You know, Starlink gets you into space profitably. Uh, then the data centers get you, uh, you know, 50-ton and then 100-ton launches profitably. Then you get to the Moon, then you start refueling in space, then you get to Mars. So it's just so cool to see how Elon lines up the dots on these things. And, um, yeah, I don't think it's any great surprise. You know, Starlink is incredibly successful. It kind of surprised everybody. No one else thought of that being the first move in the chess game, and of course Elon is two steps ahead. >> You know what's crazy? This game plan has been tried numerous times before. So if you go back, and you know I was early in the space days, you go back to the late '80s, early '90s. There was a company

called Orbital Sciences that was the hottest company in the launch business. Created the Pegasus and the Taurus launch vehicles. And because they had a launch capability, they launched something called Orbcomm, which was a small-satellite messaging service from low Earth orbit. Uh, and it was their vision to have that be the revenue driver. And they didn't pull it off. That was called the little LEOs. Then we had the big LEOs: Iridium, uh, Teledesic. Um, and those didn't really make it. I mean, Iridium is kind of still around, but kind of walking. >> Let me ask you, Peter, you know more about this than anybody. Let me ask you, the idea of a reusable rocket being the breakthrough and cutting 90, 95, and soon 99% of the cost. It seems so obvious >> in hindsight, but all these aerospace breakthroughs always seem obvious in hindsight because, you know, once you're doing it a certain way, you're like, "Hey, it works." But it's never obvious looking forward. But why did it take

so long? Yeah, it's >> Is it the weight of the fuel coming back down? Everyone's like, "Yeah, you can't carry fuel up just to retro-rocket it back down," or what? >> I mean, what's interesting is it's been the holy grail. People have talked about it for the longest time. Back then, McDonnell Douglas had a vehicle called the DC-X, which was the first vertical-takeoff, vertical-landing capability; it used an RL-10 engine, I remember. And it was the great hope of getting there. You know, people are mistaken that, uh, you know, the cost of these vehicles is fuel. Turns out the cost of the fuel for a rocket is on the order of a couple of percentage points, right? So, the liquid oxygen you can get out of the atmosphere, uh, and, you know, hydrogen or kerosene is basically the fuel. Uh, so it costs you less than a million dollars in fuel to launch a Falcon 9. Um, and it's now that we have the

ability to actually, with better materials, uh, better control systems, uh, and just scale, make this possible. You couldn't actually build, uh, you know, fully reusable vehicles unless they got to a certain size and scale, which we have with Starship. So, there you go. Uh, you know, Dave, one other thing I want to just ask you about, check this out: uh, the 2025 revenues for SpaceX. I'm excited about the IPO, right? Um, yeah, you know, and it's going to be one of the largest events in financial history, but the 2025 revenues for SpaceX were about 16 billion, 8 billion in profit. Pretty healthy margin, right? 50%. And it's expected to double to 32 in 2026. Uh, so imagine 16 billion in profits at a $1.75 trillion market cap. That means a price-to-revenue multiple of 56 and a P/E ratio

of 109. >> Mhm. >> What do you think? >> Yeah. What do I think of that? Well, I think it's all PEG ratio. It comes down to the growth rate. A company growing 100% year-over-year is worth 100 times earnings, or actually more than that, 120, 130. So the question is, you know, can you sustain that growth rate for five, six, seven years? Uh, if you look at Elon's projected launches per day, launches per week, uh, and also, you know, his prediction that the global economy will grow 10x in 10 years, uh, this is dirt cheap if any of those things are true. Um, but, you know, if the growth stalls and it's growing 10% a year, then it's 10x overpriced. So you just have to believe the vision. But I think at this stage, though, uh, the Elon believers have invested in him over and over and over again and never at a loss. And so, I mean, I think at this stage, it can't go on forever. You know, someone has to be the last guy holding the bag. But would I bet against him? No way. Never. Ever.
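For readers who want to check the multiples being quoted here, the arithmetic is simple. A minimal sketch, using the hosts' round numbers as inputs (these are on-air estimates, not audited financials):

```python
# Sanity check of the valuation math quoted in the episode.
# All inputs are the hosts' round numbers, not audited figures.
market_cap = 1.75e12          # ~$1.75T target valuation
revenue_2025 = 16e9           # ~$16B 2025 revenue
profit_2025 = 8e9             # ~$8B 2025 profit (a ~50% margin)

# If revenue and profit both roughly double in 2026:
revenue_2026 = 2 * revenue_2025
profit_2026 = 2 * profit_2025

price_to_revenue = market_cap / revenue_2026   # ~55, near the quoted ~56
pe_ratio = market_cap / profit_2026            # ~109, the quoted P/E

# The PEG-style argument made here: divide the P/E by the growth rate (in %).
# At 100% growth, a ~109 P/E is a PEG of ~1.1, which is cheap by that logic.
peg = pe_ratio / 100

print(round(price_to_revenue, 1), round(pe_ratio, 1), round(peg, 2))
```

Run as-is this prints `54.7 109.4 1.09`, so the quoted "56" presumably comes from a slightly different revenue estimate or rounding.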

>> And everything he's saying, the math. Yeah. The math checks out. You know, there's nothing fundamentally wrong in the math. >> You know, Alex would blow smoke on that instantly if there's anything wrong in the math. But there's not. It's just a question of execution. >> Yeah. >> Palantir trades at about 220 times earnings. >> So clearly there's a multiple with all of this AI stuff, and you look at the combination of all these services that are incremental. But this is obviously just Starlink with a launch capability, but the scale of what's going on. What I found really incredible is that, to the earlier conversation, people have tried this for ages and ages, but now you have multiple exponential technologies that have all converged. So, this future looks really bright. That wasn't the case 20 years ago. >> I'll take a different position on this if I may. I don't think it's that supply has been unlocked. I think it's that demand has been unlocked. You'll notice that Elon announced the SpaceX IPO the moment after it became obvious to many that orbital data centers were going to

have enormous demand. This coincides with an enormous lack of demand, at least within the US, for certain locations for new AI data centers. I think it's instructive to imagine a counterfactual universe where municipal, state, and federal policy, but especially the first two, suddenly became super welcoming of land-based data centers. In my mental model of this, if suddenly every state welcomed land-based data centers and the corresponding on-site energy supplies with open arms, and probably lots of fission reactors to go with them and solar farms, I think we would probably see the P/E multiple go down materially. >> Yeah. Well, one other thing I'll say: of all the big mega guys, you know, the Googles and the Facebooks and Metas, uh, Elon has actually never had voting control of a public company that he can tap into the public markets with overnight. You know, here you're raising 75 billion on IPO day. That's only three and a half percent dilution if it hits this price target.
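That dilution figure is one line of arithmetic: new primary money divided by the post-money valuation. A sketch with the round numbers from the conversation (strictly, $75B against $2T works out to 3.75 percent, a touch above the "three and a half" quoted):

```python
# Dilution from a primary raise: new money divided by post-money valuation.
# Round numbers from the conversation; a sketch, not a cap-table model.
raise_amount = 75e9    # $75B raised at IPO
valuation = 2e12       # ~$2T target valuation

dilution = raise_amount / valuation
print(f"{dilution:.2%}")  # prints 3.75%
```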

I mean, literally three and a half percent, and then you're sitting on a $75 billion treasure trove. Then you can do another capital raise just six months later, do an overnight whatever, another hundred billion. >> You know, in the past he's had huge issues with his boards, his comp plan being vacated. Uh, then his capital raises, you know, Peter, you've been involved in them. Uh, they're long road shows, lots of pitches, scratching together the capital. This gives him a tool he's never had before that, you know, Larry Page and Sergey Brin had, Mark Zuckerberg had. Yeah. Cash >> cash machine. Yeah. >> The reality is, you know, having invested in his companies, um, when he says, "I'm raising," there is a line out the door and it's oversubscribed over and over and over again. You know, I think what's going to be interesting here is bringing in the retail investors and broadening the base of support. We'll talk about that in a minute, but, uh, I want to talk about the IPO environment one second because there's a really important point to be made here for all of our listeners. So, if you

look at IPOs, uh, in 2026 versus 2025, uh, there were 35 IPOs this year. It's down 37.5% year-over-year. And we're about to see potentially the three largest IPOs ever. Uh, SpaceX going out, you know, at 2 trillion. Uh, OpenAI sometime at the end of this year. Uh, and Anthropic, uh, you know, it says IPO early-to-mid 2027, and I think Anthropic wants to go out before the end of this year as well. And, uh, you know, one of the things I tweeted about here is it's going to be, I think, a little bit of a competition out there for who gets the capital before it's soaked up. Um, you know, SpaceX is going to be hitting the road show, uh, in June. Uh, Anthropic, as we'll see later in this episode, has been, uh, running circles around OpenAI

and OpenAI needs the capital to continue its growth. Uh, so I think it's going to be jockeying for position for number two. I would not want to be number three in this situation. >> Yeah. No, Peter, you're so right. A lot of people don't appreciate that there is a limited supply of capital out there. It all seems like funny money at this scale, right? Like there must be some infinite pool that God supplies somehow. But it's just not true. And I know it firsthand because when, uh, I took EverQuote public back in 2018, and it was right when Alibaba was going out, and Alibaba soaked up every dollar and every analyst and every buy-side, uh, you know, person on Wall Street, and it was really, really tough to get any audience. And, uh, there isn't an infinite supply of capital out there. And, you know, Peter, you say these are record-setting, but look at the chart. If you can't see the chart, Peter should describe the chart. It's not record-setting by a little bit. >> Yeah. >> So, let's take a look what's there, right? So, Uber goes public, raising, uh, let's see, at 67 billion. Meta

is at 65 billion. Rivian at 55 billion, Robinhood at, uh, 30 billion. And then we've got, you know, >> it's on a different scale, right? You know, OpenAI and Anthropic will be heading towards a trillion, and SpaceX, I would be surprised if SpaceX doesn't come out at two trillion and run up very quickly to three trillion. >> Yeah. >> Um, >> hey, I mean, it's staggering. And so funny, I bumped into someone the other day and he was talking about Jamie Dimon, and I said, "Well, Jamie Dimon used to be really important, but if you look at the numbers now, JP Morgan as a whole is a rounding error compared to any of these things." And of course, he's still a very important guy. No offense to Jamie, but I mean, there are literally like, you know, seven, soon to be eight companies, and then after Anthropic nine companies, that are everything. I mean, just so dominant in scale that they're everything. And so a director-level employee there

is wealthier than the CEO of a mega bank. >> Crazy. >> Yeah. Just put it. >> And there will be a sucking of the oxygen out of the room, >> right, as this happens. And here's the other thing. A lot of the capital used to come from the Middle East, probably still does. But if we're in the Iran war for much longer, and, you know, access to the sale of oil starts to slow down as the rate goes up, uh, that cash machine coming out of the Middle East to fund these tech IPOs may be slowing down as well. >> Oh, I see it the other way, actually. Uh, AI is clearly happening in just the US and China. And it's very hard if you're global, if you're in Europe or anywhere, very hard to invest in China because, you know, you're very worried about getting your money back. >> Uh, so all the global capital wants to invest in US data centers, US IPOs, and yeah, the Iran situation scares everybody. At the end of the day, what else do you have? You have to invest in AI.

It's going to take over the world. And there's nothing going on in Italy. There's nothing going on in, you know, wherever you are, uh, and South America somewhere. Um, so you've got to pour it into this economy one way or another. So that's actually why Orin is doing so well, Kush Pavaria's company, Kush and Wayne, >> because that money just wants to pour in from all over the world into US data centers. You just have to find great vehicles to unlock it. >> Amazing. Uh, let's hit on a couple of questions here on this topic. Uh, you know, here's a thought. We have Tesla that's been public. Um, Elon did not want to be the CEO of Tesla. I had that conversation with him many times. He would have loved to have hired a CEO. He just could never find anybody that he trusted at the helm. And now that Tesla is actually building Optimus and everything else, he's not going to give that up. Uh, in the same way, you know, he's not going to give up SpaceX and xAI. So, the question is how long before he merges those two companies. Um, you know, one of the

advantages is that as public companies, he can now value both. So there's no shareholder lawsuit if they come together, you know, and there's an incorrect valuation. So I give it a year. What about you, Dave? >> You know, he could wake up any given morning and say, "Yeah, let's do that." Or he could say, "You know what? Everything's fine as it is." The logical part of it is that, um, you know, all the robots and all the parts, and, you know, we saw the whole Gigafactory, all that, uh, is going to get turned into creating the robots, and the robots need to build the spaceships. Also the AI, which is now over at SpaceX. He thought about merging it into Tesla, but that AI from xAI needs to go into the robot head. >> So there's going to be a massive business relationship between the two empires anyway. >> Merging them makes total sense. >> But maybe he doesn't want to, just for, you know >> It's the first true cross-domain exponential empire that he's building here. It's kind of

incredible. You know, people aren't buying discounted cash flows, which is the normal thing. You're buying a mission. Proximity to the future is what you're buying. >> I'm not sure, though, that he actually needs to. If you look at his history of merging his companies, like with SolarCity, or with X and xAI, or frankly xAI and SpaceX, he tends to merge companies when they're either not doing well and he needs to fail forward through sort of a self-dealing acquisition, or a company needs access to capital and the easiest way to gain access to capital is with an acquisition. So in my mind, the scenario under which SpaceX and Tesla merge almost requires that either SpaceX or Tesla either fail or be desperate for capital. And given that they're both >> Yeah, that's a great point. If they're both doing well, >> both doing well, >> he's going to be doing a lot of cross-company deals, and the accounting of that becomes a lot easier if it's under one roof. And if he's the CEO of a single company, he's able to have earnings,

you know, once for one company versus multiple. It just makes his life a lot easier. Um, and I think >> Perhaps, but he's never necessarily been one to honor strong veils between companies, and I have to imagine lots of cross-licensing deals between SpaceX and Tesla will more than scratch that particular itch. >> You know, here's another question. The value of SpaceX, let's call it SpaceX-AI, um, >> how much of that is Elon? How much of that is his reputation? >> Oh my god. >> You know, it's a lot, right? And so there is a huge, there's a concentrated risk there. If something ever happened to Elon, and, you know, God forbid that it should, uh, you know, all these spinning plates, uh, I don't think anybody else could do it. >> Well, I think that's generally true overall. You know, people complain about CEO salaries all the time because they get egregious, but then you look at the outcomes and there's just a set of

people that get these outcomes. From an investor's point of view, it's a no-brainer to pay for the very best person. And that's just true in general. Then you look at Elon as a special case, and yeah, there's no chance this thing would hold up without Elon at the helm. >> I would suggest >> still exist. Sorry, guy. >> I would suggest, if you look at OpenAI, which I think is another instructive example, Sam has said multiple times that he intends at some point to hand over the reins to an AI. So I think Elon, to the extent we're talking about key person risk or key man risk at SpaceX or Tesla, really he just needs to keep going until AI can take over. And in the meantime he has Gwynne, uh, and others who are very capable, uh, CEO-like figures, but more behind the scenes, who are capable of operating in his absence, I think, for extended periods of time. >> There is a transition phase of a few years. >> I mean, we've all said this over and over again: you know, the best CEO in the world is going to be an AI, at least handling

the strategy and operations. The HR part may be an AI too, probably is going to be. But so how long before you think, uh, he feels Grok is ready to take over for him? Next few years? >> Okay. >> I mean, the rumor in the past 48 hours was that the Starlink executive, who's also now, post SpaceX-xAI merger, in charge of xAI engineering, has gutted the engineering team and finally declared that xAI's models are well behind the three, now maybe four, other >> A docket for our next recording, which will happen again tomorrow but be released a few days later. Uh, here's a question: you know, we heard a conversation with Elon about reaching hundred-trillion-dollar companies, uh, in the next 5 years, and I have to imagine that, you know, SpaceX-xAI-Tesla, uh, will be the first hundred-trillion-dollar company. >> It's hard to say, isn't it?

>> Yeah. But honestly >> Billion, billion, trillion. You have to get used to quadrillion. >> Uh, yeah. >> But if we experience a period, though, of hyperdeflation due to technology followed by rapid hyperinflation, we get to 100 trillion really quickly. It doesn't necessarily even require enormous business building, just rapid hyperdeflation due to technology. >> Yeah. And that's what, you know, you have to keep a close eye on the terminology, because if we have rapid hyperdeflation, we're going to get to 100 trillion of effective value. Uh, but it may not show up as 100 trillion in true dollars, because we're deflating so quickly, because we're creating so quickly. >> But anyway, my guess would be five years. Yeah. >> One of the things that we just saw announced is, uh, SpaceX is going to actually put a large chunk of its shares available for retail investors. Um, OpenAI announced they'll be doing something very similar. And so I'm curious, what do you think is going to drive the retail investors? Do they really understand that it's a Starlink

story versus a space story? Cuz at the end of the day, what I get excited about is the xAI story, right? The orbital data centers and, uh, you know, Grok 17 or whatever is coming down the pike. >> Well, I think it's just like Steve Jobs, though. The vision that people buy into is the bicycle for your mind, or where it's going, what it's going to be in a few years, not today's reality. In fact, I keep the, uh, Google IPO prospectus in my bathroom up in Vermont and I reread it religiously. >> Not as paper, right? >> Well, it's getting a little ratty. It's been, you know, decades now. But the vision of what Google would become is so wrong in that IPO prospectus. It's just, you know, it really emphasizes that yellow pages are shrinking and all local advertising will also move to Google and that'll make it at least twice as big. >> And it's such a joke compared to what actually transpired over the next decades. Same thing applies here. People are investing in, uh, Elon.

Elon articulates a vision of the future that just makes sense to people, and he simplifies it to the point where they really understand where he's getting to. I don't think they analyze the financials particularly closely, but he doesn't lie about the scale. You know, he presents it the way he sees it, so people just trust him and then they invest. >> I can just imagine the conversations behind the scenes. We're a couple weeks away from the OpenAI, um, or the Sam and Elon trial coming up, which is going to be pay-per-view TV, I think. And we'll talk about that in the next conversation, uh, in our next recording as well. But I bet you Elon is just excited to suck the capital oxygen out of the room before OpenAI goes public. >> Yep. >> Yeah. >> Yeah. The sad part of, uh, you know, Bill Gates was very happily running Microsoft until the antitrust action came, and then he's in front of Congress and then he's testifying all the time, and he ultimately said, "You know what? I'm going to be chief technology officer and chairman." And Steve Ballmer, you

deal with all this. >> You deal with problems. >> It just drove him out of the seat. But it's seriously like the guy filing the complaint doesn't have a lot of work, and the person defending himself just gets hammered with distraction. It's so annoying. I've been through it before. I really feel for Sam, actually, cuz you know >> Everybody, you may not know this, but I've got an incredible research team, and every week myself and my research team study the metatrends that are impacting the world. Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology. And these Metatrend reports I put out once a week enable you to see the future 10 years ahead of anybody else. If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. That's diamandis.com/metatrends. All right. Uh, more news this week as we record this. Uh, Artemis is hurtling back towards Earth. Artemis 2, humans return to the Moon after 54 years.

Insane. Launched on April 1st. Uh, this is the first crewed lunar mission since Apollo 17 in December 1972. We have four crew members on board: Reid Wiseman, the commander; Victor Glover, the first African-American astronaut to the Moon; Christina Koch, uh, the first woman to the Moon; and Jeremy Hansen from the Canadian Space Agency. Um, I mean, one of the things about this very international, intercultural, um, you know, crew here is trying to make space and the Moon accessible to all elements, all cultures, at least in the United States. Uh, new record set, going beyond the Moon. I capitalized the letter M on this slide for a particular reason, gentlemen. Uh, I'm going to share a pet peeve. When we're talking about the Earth's moon, it is the Moon. It's a capital M. It's not a small m. So, it's like I argue

against Funk & Wagnalls or whatever it's called. Um, it >> If we're going to be pedantic, shouldn't we be calling it Luna? >> Well, Luna is the proper name for sure, but when it's referred to as the moon, for me, I capitalize it. A moon, yeah, there's lots of Jovian moons. >> You address it by its proper name before it's disassembled. >> Yeah. And so, Peter, when I say you're the man, I should be capitalizing that. >> Probably. >> And my other pet peeve is when you talk about dirt, you can use a small e for earth. When you're talking about our homeland, at least our home planet for the moment, it should be capitalized. All right. Splashdown is taking place tomorrow, uh, April 10th, uh, near San Diego, re-entering at 25,000 mph at about 3,000 degrees Fahrenheit. It's going to be an incoming, uh, meteorite, uh, from the Moon. And guys, a beautiful image of Earth. I was waiting for that image.

>> Really beautiful. >> So beautiful. Let's hear from Jared Isaacman, our extraordinary, uh, NASA administrator. And by the way, Jared has agreed to come on the pod. I've known him for many years. Excited to have that happen. And we'll wait for the news and all of the hoopla around this lunar mission to die down a little bit. Let's listen to Jared here. >> Observed within the Orion spacecraft, uh, its life support systems performing very well. And this is a first of its kind. This is the first time astronauts have ever been on this rocket. It is the first time astronauts have ever been on Orion before. Having a clean mission like this so far gives us the confidence for Artemis 3 and, of course, when we land astronauts back on the Moon with Artemis 4. >> Congratulations, Jared. Congratulations to the entire NASA team. It's great to have NASA back. It never left, but, you know, back in the limelight. Uh, Alex, uh, you are as big a space fanatic and fan as I am, pal. Your thoughts about the mission? >> First, very exciting to

have humans taking photos from the dark side of the Moon. Very disappointing that we apparently went for more than half a century without the political will or the funding or the technology to do what we were able to do through the '70s. I think it's an enormous shame for our civilization that we went for more than half a century without doing this. And I would encourage any historians listening to study this period very carefully. Something clearly went wrong in human civilization for the past 50-plus years that caused this gap in the technological record. I think we need to understand what happened deeply and make sure it doesn't happen again. I think if something like this happened with AI, for example, if we're on the precipice of broadly available superintelligence, transformative intelligence, and then we just took a pause for 54 years, I think that would be a dreadful outcome. So I really do want to understand what went wrong systematically. >> A friend of mine, one of our professors at International Space University and at GW, John Logsdon, wrote

about this extensively, and, you know, when you look at it, uh, the fact that JFK announced it and then was assassinated, um, you know, Lyndon Johnson continued it, uh, because of the assassination and keeping the momentum going, uh, to prove ourselves against the Soviet Union back then. And you remember this, Alex and, uh, Salim and Dave, that, you know, after the Apollo 11 and Apollo 12 missions, basically no one was watching Apollo 13 until we had that Apollo 13 disaster. And then we went Apollo 14, 15, 16, 17. We had the lunar rovers, which were amazing. And guess what? We had actually built Apollo 18 and Apollo 19. Those vehicles were built and all you needed to do was add the fuel, but they cancelled it totally, and those vehicles are actually sitting now at Huntsville and at Johnson Space Center on their sides as relics. Um, we

[00:32:02] didn’t have the political will. You have to remember that the budget allocation for the Apollo program, I didn’t actually get the numbers here, was like 2% of GDP. >> That’s right. >> Um, compared to, you know, what is NASA’s budget today compared to a $30 trillion economy? >> Probably materially less. I don’t have the numbers handy, but it’s probably, yeah, materially less than half a percent would be my guess. I would say probably 0.12%. >> Something like that. >> Um, our fans can correct us in the notes here, but at the end of the day, we never had the political will. And then what happened was that NASA got focused on the space shuttle, which was a complete lie. The space shuttle was supposed to fly 50 times a year for $50 million per flight, and it turned out to be a public works project uh employing 22,000 people. And then we became focused on Mission to Planet Earth, looking at the Earth versus looking outwards, and all of these diversions basically uh caused us never

[00:33:02] to go back. So, Alex, that’s my answer to the question. But we are back now. And I think one of the things that we’re going to see from Jared Isaacman, over his dead body, is that we’re going to stay there. We’re not >> at least for the next few years. Elon’s made the point, and I think this is an incredibly important one, that progress isn’t always unidirectional. It requires love and tender care and vigilance. And this is an example that it is possible for progress to reverse. Remember, coming out of World War II and in the ’50s and ’60s, progress, the direction of transportation, the fastest speeds that humans were traveling at, the availability of energy, aviation in particular, seemed to be on a monotonically increasing trajectory. And yet it’s possible for civilization to unwind itself on at least arguably the most important spatial dimension for more than half a century. And I’m utterly paranoid that the same thing could happen again if we’re not careful. That’s what keeps me up at night. >> What’s different now is that we built

[00:34:01] the Conestoga wagon with Starship, and there’s now enough wealth in the hands of single individuals to keep it going independent of what a government says. That’s never been the case before. That is >> just imagine, imagine if Tesla or SpaceX every four years had an employee vote on who the new CEO would be, and you are capped at eight years. After eight years, you have to leave the CEO job. Show me one company or, you know, one entity that could ever thrive and survive over the years in that dynamic. So, why would you ever think that a government-funded, government-made thing was going to have continuity over some kind of intelligent >> great >> lifespan? It never has. And the Soviet Union fell apart too, right? I mean, they didn’t do anything either. Government stuff will never do it. It never has. You have any examples of it? It never has continuity. So now it’s in the private sector. So, you want to jump in here? >> You know what I love is the fact that we have so much capability in the

[00:35:00] hands of individuals, and we’ve seen over the decades how much that can change things. This reminds me of Vannevar Bush, who was the head of what was then NACA after World War II ended, and wrote this paper called “As We May Think,” because for the first time we brought the world’s scientists together into one cohort to solve the war problem, and after that it would be a shame to disband them. And he goes through a series of arguments: could we solve poverty with this, could we, etc., and essentially describes what is now known as the internet. All the internet pioneers, Vint Cerf and Bob Metcalfe, all read that paper, and then we have what we have today. And so I think the possibility and the potential for Elon to put out his narratives, or individuals to put out their narrative, um, Vitalik did a good job with Ethereum putting out a narrative there, brings an entire community together, and you get compelling and unbelievable breakthroughs as a result. I’m really excited by the fact that we’re going back, because I get really excited by the secondary inventions that come along just by doing this. That, I think, is

[00:36:00] >> the spin-offs, as they’re called. >> Yeah. And uh here’s the forward-looking uh prediction here. Artemis 3 in 2027. It is a crewed mission again, uh to low Earth orbit. This is not going to the moon. It’s going to be focusing on testing rendezvous and docking maneuvers uh with the Human Landing System, HLS, which SpaceX’s Starship is supplying. So again, very much the playbook from the Apollo program, uh, where we had, you know, Apollo 8 go around the moon, then Apollo 9 not, and then Apollo 10 back to the moon. And then Artemis 4 in early 2028. Uh, it is a crewed landing mission. Really important: to the south pole of the moon. They’re not going to play it easy here. They’re going to the south pole. Why? Because that’s where we see ice in the permanently shadowed craters at the south pole of the moon. Um, >> thing I don’t get about that is on that timeline, I love this, but on that timeline, Elon says he’ll be launching

[00:37:02] uh 100 tons that can refuel in orbit, get to the moon, drop off 100 tons, and get back with nothing melting in the atmosphere. So this, if it’s on plan, will deliver 50 tons to the moon per launch. So there must be some plan beyond this that makes it uh at least try to keep up with Elon, or we’re trying to prove something else, or >> Alex, you want to jump in? >> Well, I think there are a few elements here. First, remember that Artemis 3 was originally supposed to be the moon landing mission; that got pushed off in favor of rapid iteration. My understanding of the launch cadence from SpaceX is the plan is still to do lots of orbital refuelings in order to successfully launch payloads elsewhere, uh sort of higher up. >> That’s the key technology that has to be proven for Starship. Yes, >> that’s right. So regardless, I would say, of the particular payload size,

[00:38:00] there are a number of technologies that as of yet haven’t been demonstrated. Elon talks about demonstrating orbital refueling frequently, but it hasn’t been demonstrated yet. So I think I would maybe massage Elon’s stated timelines for delivering arbitrary payload masses to the moon in light of the fact that, even though we as a civilization have made major progress on Starship, orbital refueling hasn’t been demonstrated, and that’s a necessary condition for getting to the moon. You know, another thing that Elon has said is he intends to shoot Starship this year at Mars. Um, and that can be exciting. I’m not sure if it’s going to be crewed by an Optimus or if it’s going to make a landing attempt, but, you know, that’s coming out of private dollars. I mean, one of the reasons that Elon did not take SpaceX public over these years is so that he could do with it as he wished. He didn’t need to have, you

[00:39:00] know, uh, public shareholders saying, “No, you can’t go to Mars. No, you can’t do this.” But demo missions. If you look at the, uh, Artemis 4, uh, news bullets there, uh, it’s an interesting mission. It is still using, uh, the SLS vehicle from Boeing and the Orion capsule. It’s also using the Starship uh, you know, human landing system in a combined architecture. Uh, we’ll talk about this, but why NASA continues to, you know, uh fund SLS, which is so far over budget and over schedule, is kind of insane. And hopefully it’ll get phased out. I suspect part of this is political, but part of it is, if you’re NASA, there is some upside to having a competitive process, at least until Blue Origin is fully ready to be a first-tier competitor with SpaceX for moon missions, which, my understanding is, it’s gearing up to be able to do. If you’re NASA, you want fair and open competition. And as NASA has

[00:40:01] demonstrated for Artemis 3 and 4, it’s very happy to flex the definitions of what Artemis 4 looks like. It got rid of Lunar Gateway and could easily reprogram money that would otherwise go to SLS to SpaceX or to Blue Origin or to someone else entirely. Yeah. By the way, Gateway Station was going to be basically an ISS in orbit around the moon. Uh, that got shot down so they can get to the lunar surface faster and set up permanent habitation there. So it looks like uh ESA’s I-HAB, as it’s called, uh instead of being in orbit will be somewhere at the south pole of the moon. We’ll report as that mission gets uh further developed. >> And Mars is out. I mean, the other big news that we’re semi-burying here, but we’ve talked about previously, is Elon’s big pivot from Mars to the moon, and that’s going to enable all of this. Mars is out of fashion now, >> though he does want to send some missions there. He’s got a lot of people who are, you know, dove in

[00:41:00] fully committed to getting to Mars. But, you know, this is where I diverge with him. I think the moon is the most logical place to develop human settlement, and then not going into the gravity well of Mars, but actually, like Gerard K. O’Neill proposed, building uh large rotating colonies out of asteroidal materials uh out near Earth. And the Hohmann transfer orbit is incredibly inconvenient. Rather than waiting every two-ish years, 26 months, whatever it is, we could be doing this every day if we want to. That’s incredibly more convenient. >> You know what I find as exciting as going to the moon is these four missions. >> So, uh, four missions that are going to change everything. So, I don’t know about you, Alex, but the little kid in me is like, “Holy, this is amazing. Wow, this is going to be fun.” So, what are we talking about here? We’ll, uh, VIPER and ESCAPADE. VIPER is a rover hunting for ice at the south pole. ESCAPADE is

[00:42:02] going to study the Martian magnetosphere. And then in 2028, something called SR1 Freedom. This is a nuclear-powered interplanetary spacecraft that’s going to drop off and deploy three helicopters on Mars. Very, very cool. A nuclear-powered interplanetary spacecraft, so just zipping around the inner planets here. And then probably the coolest, this is what’s in the image here, this is Dragonfly. This is a nuclear-powered octocopter going to Saturn’s moon Titan. It arrives in 2034, searching for life, basically. And then we’ve already launched uh Europa Clipper. Uh, it’s going to be arriving at Jupiter in 2030. It’s going to be doing 50 passes near Europa, uh, looking deep into the salty subsurface ocean of that moon. Uh, any favorites here, Alex? >> Anything that’s nuclear propulsion. So,

[00:43:00] I think that’s really the technological point to underline. Historically, when we’ve sent deep space probes out, many of them have been thermoelectric in nature. They’re using a radioisotope that decays, and that powers the electronics. But they weren’t >> Right. >> Right. But they weren’t propelled by nuclear energy. Their onboard systems were powered by long half-life isotopes, but they weren’t propelled by them. So, we’re starting to see the dawn of nuclear propulsion for interplanetary spacecraft. I think that has a long runway to it, no pun intended. I suspect the killer app of compact fusion reactors won’t be for data centers on land. It won’t be for data centers in orbit. It’s going to be for interplanetary, maybe even interstellar, propulsion. >> This changes the economics of deep space exploration, >> which is so cool, >> right? >> Long time coming. We were supposed to have this 50-plus years ago. >> Yeah. Yeah, we were.
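The RTG distinction above can be sketched with the standard radioactive-decay formula. A minimal sketch in Python, assuming Pu-238 (half-life roughly 87.7 years), the isotope most flown RTGs use; the numbers are illustrative and are not from the episode:

```python
def rtg_power_fraction(years: float, half_life_years: float = 87.7) -> float:
    """Fraction of an RTG's initial thermal power remaining after `years`.

    Radioisotope decay is exponential: P(t) = P0 * 2**(-t / half_life).
    Pu-238's ~87.7-year half-life is why deep-space probes slowly lose
    power rather than go anywhere faster: the isotope powers the
    electronics, it doesn't produce thrust.
    """
    return 2 ** (-years / half_life_years)

# After exactly one half-life, half the power remains.
print(round(rtg_power_fraction(87.7), 3))  # 0.5
# After ~48 years of flight: roughly 68% of launch power remains.
print(round(rtg_power_fraction(48), 2))
```

This is the sense in which thermoelectric probes "brown out" over decades, in contrast to nuclear propulsion, where the reactor's energy goes into accelerating propellant.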

[00:44:00] >> So cool. >> Question for you, Alex. Uh, geeky question, but um, interplanetary, I totally get: you ionize xenon. Xenon is pretty rare, but you don’t need that much of it. And then you just thrust it with nuclear power at like warp speed out the back. >> It’s an ion engine, right. >> It’s so cool. >> Yeah, it’s heavy. It’s very heavy. But um, so for interstellar, I doubt we have enough xenon lying around. I don’t think we want to just use it up that way. But >> you use the interstellar medium. Use a Bussard engine. You collect all of the atoms out there between the stars with a magnetic field and you accelerate those out the back. Which, by the way, is a ramjet. >> A ramjet. Yeah. >> A Bussard ramjet. This was featured of course in Star Trek. Uh, so if you had to ask me, Dave, what do I think, with the technology and the physics that we have today, is the most plausible way we go to the nearest star system? It’s probably going to be something like a solar sail powered by terawatt lasers from Earth

[00:45:00] and we upload humans to small craft, Starwisps, and accelerate those, probably. >> Can I make a point here? >> Yes. >> What I really like here is you’ve got water, you’ve got energy, you’ve got mobility testing, you’ve got biology. This is like the future of the economics of space, and it’s all in one place. I’m loving this. >> You just need salt and tequila and you have everything. >> All right, so we got some questions here for the mates. Uh, you know, we talked about why we’ve not gotten back in 54 years. It is a bloody shame. Um, I guess thank you to the Trump administration, thank you to Jared Isaacman, thank you to Elon. Uh, here’s my question. The old aerospace primes, uh Boeing, Northrop Grumman, L3Harris, Teledyne Brown, ULA, the United Launch Alliance. Um, they’re basically the prime contractors on SLS, the Space Launch System, and Orion. Uh, how long are they going to be around? Uh, a friend of mine once said, “Listen, the

[00:46:01] space program is the way you keep uh the defense industry employed and engaged during peacetime.” Um, any thoughts, gentlemen? >> Well, you know, when a prime contractor like a Northrop Grumman or a Boeing wins a massive government deal, all the employees just move from one company to the other. They have it all, like, set up, so they just rebadge the building. So it’s not like these are people, you know, it’s just logos that are moving around. So I’m sure everybody’s welcome at Blue Origin and SpaceX, and I don’t think it’s all that tragic, but I think it’s a big mistake to subsidize uh companies that, you know, aren’t doing anything innovative. >> I would note, for many of the companies listed, they have large businesses outside of NASA contracting, and I suspect that they’ll be just fine, even if SpaceX dwarfs them, as we saw frankly with car companies. We saw Tesla dwarf the quote-unquote old or legacy car companies in America. And yet those

[00:47:01] car companies have survived, even though Tesla arguably has, at least by American standards, much more advanced technology and is playing a much broader game. I suspect we’ll see the same happen with so-called aerospace primes. But also, we’re talking about this like it was 10 years ago and you know who’s going to win this battle. But everything’s in the context of AGI now. And the entities that have access to the best AGI are going to keep going. But if they don’t, and you’ll talk about that story in a minute here, it’s not clear that every company will have access to the best next-generation AGI, because of all the risks involved. That’s what’s going to determine the success and failure of everything, including NASA. You know, can you or can you not? And the government has a special position because it can compel Anthropic or whoever to give it access to the very best models, so that they can keep designing parts, you know, creating new designs, innovations, plans, and everything. And that’s going to be the make-or-break for everybody. >> There’s a sense in which vertical

[00:48:00] integration vis-à-vis orbital data centers is going to force, I think, frontier labs into space anyway. So maybe the question we should be asking is how is Boeing going to compete with Anthropic for the new lunar Gateway contract? I mean, Anthropic, OpenAI, the other players, Google, surely they’re going to need their own space economy units as well. You know, if you look at the future of warfare, we’re seeing this radical transition from the big heavy rocket missile systems to cheap drones and uh robots doing uh war, and it’s leaving these guys out to lunch, because you can’t uh shoot rockets at a $20,000 drone. The economics don’t work. And in the same way, these guys might be part of the subsystems and part of the compliance and integrated platforms, but the velocity and the iteration capability of

[00:49:00] SpaceX and others is going to be driving the future. >> Yeah. >> So I think that’s what’s going to happen. A final point I want to make on this topic before we move on to AI is: can NASA keep the public engaged long enough? Right? So NASA is still publicly funded. I just, you know, recent news, there’s a budget cut for NASA already coming next year. Um, and, you know, Jared’s got to balance, you know, managing expectations while still building public enthusiasm, and he’s got to do it for a multi-year, you know, multi-mission program. And it’s always been the problem with NASA. You know, this is not something where you make a one-time investment. You have to actually get the budget every single year to keep these missions, which take 5 or 10 years to implement, um, going. You can’t get 90% of the way to mission success. You’ve got to have it fully funded and launched and then operated. So, can NASA keep the enthusiasm? Just trying to picture Jared

[00:50:01] in front of Congress. Yeah, I know he’s your friend. In front of Congress every year, trying to explain to people that are mostly in their 80s and 90s why he needs the budget for next year. And then compare that to, like, Jeff Bezos, who’s like, “Yeah, I’m just >> write a check.” >> Yeah. A billion dollars. Yep. >> Oh, wow. >> Or or Elon, right? >> Or Elon. >> Yeah. >> I’m not sure NASA needs to maintain enthusiasm. I do credit NASA in part with Elon’s pivot from Mars back to the Moon, capital M. But at this point, given the orbital data center, as long as municipalities and states in the US do such an incredibly good job of driving data centers off the land into LEO and SSO, I’m not sure we actually need, over the longer term, NASA to sustain public interest at all. If anything, public antipathy to data centers combined with public demand for AI should do a fine

[00:51:00] job of creating the space economy. >> Yeah. >> Yes. Yeah. >> NIMBY our way to orbit. >> Yes. Interesting. And the other thing, by the way, is China does have a credible uh competitive mission to the moon, to land there by 2030. So, uh maybe it’s, you know, our Soviet Union for the 2030s. There is a story of history, borderline cliche at this point, that the Apollo program was the moral successor of the Manhattan Project, and all of the applications of the Apollo program came down to putting mass on the moon. The moon is the ultimate high ground. If you want to launch rods from God or other weapons back to Earth, you want a base on the moon. So if >> the moon is a harsh mistress, isn’t she? >> And the ultimate high ground. >> Yes. All right. Uh, the April 2026 model wars are on. Uh, let’s hit it real quick. So uh just out in the last 24 hours, uh Claude Mythos, Anthropic’s next flagship

[00:52:01] model. It’s too powerful to release. That’s the news crushing all the benchmarks. Is it AGI? We’ll talk about it. It’s expected to uh basically be the new frontier leader. Uh, interesting stories about it covering its tracks and escaping its sandbox. So, Mythos, uh I want to hear your take on this, Alex, in a moment. GPT 5.5 Spud is coming. Uh, this is OpenAI’s version of Mythos, or at least that’s what we’re hearing; expected to be released shortly. Um, and then here comes DeepSeek V4, uh number three in the world versus US models. A trillion parameters, 37 billion active parameters per token. Uh, it’s 10 to 50 times cheaper than GPT 5.4 and Opus 4.6. I mean, those three things together are insane. And then Gemma 4. So this is Google’s Gemma 4, uh the most powerful US

[00:53:00] open-weight model. Uh, you can put this on your phone. 4 billion parameters. Uh, and it works with your iPhone offline. Uh, and a note from Brad Lightcap, OpenAI COO: training cycles that used to take years are now taking months. So, uh, gentlemen, this is, um, this is both awe-inspiring, and it’s making keeping up with this supersonic tsunami in the age of the singularity a full-time job for the four of us. Um, Alex, it’s a torrent, but yeah, go ahead. >> It’s insane. Um, Alex, let’s jump into Mythos, would you? >> Sure. So, to start there, I wrote about this pretty extensively in my daily newsletter. The funny thing with Mythos is the official launch was couched in terms of cybersecurity. This wasn’t a normal model launch by any means. It opened with Anthropic framing it not in terms of model capabilities but in terms

[00:54:01] of defense, and an alliance with a number of other blue-chip companies, to explain how, given Mythos’s new cybersecurity vulnerability detection abilities, which are strongly superhuman at this point, Anthropic was launching a coalition to mitigate the apparent discovery and existence of dense cybersecurity vulnerabilities across legacy codebases going back decades. And we’ve never seen a model launch like this, where you open not with the capabilities but with how we’re going to protect against all of the downstream consequences of model capabilities. So I think buried within the Glasswing cybersecurity announcement were the underlying capabilities themselves, which are remarkable. And I wrote about this in the newsletter: this marks an upward discontinuity of productivity that we’ve never seen before. One of the internal benchmarks that Anthropic uses

[00:55:02] to decide the level at which they disclose or make available new models is how much the new models increase AI research. So, basically, how recursively self-improving they are. And reading between the lines, maybe there was a little bit of game playing regarding exactly how efficient this new model was at performing long-time-horizon AI research tasks. According to one benchmark, I think it was more than 400 times better than a human. So it was the equivalent of tens of hours of human-equivalent autonomous time. We’ve never seen a model like this before. Some were calling it, or some were asking, isn’t this the AGI moment? I maintain we had AGI back in summer of 2020 at the very latest. This is just the latest point on a curve. But even if you look at the autonomy time horizon curves, this is an upward discontinuity. It’s very

[00:56:00] exciting. If you’re excited about AI capabilities, or if you’re scared of AI capabilities, you should probably be frightened right about now. I, for one, am very excited by these capabilities, because it shows once and for all, at least for the foreseeable future, there wasn’t a scaling wall. It’s probably a larger model, certainly a more expensive model, like five times more expensive than Opus, suggesting that it’s a larger model. This seems to show that pre-training scaling continues to work; post-training and reasoning scaling, probably mid-training scaling, all continue to work. It has state-of-the-art capabilities in code generation, in reasoning, in broad scientific and other benchmarks, I think we saw in the previous slide. So, punch line: this seems like the strongest model we’ve ever seen from any frontier lab. But then the amusing stories come in with the safety evaluations. I talked about this in the newsletter as well: how early pre-release, although it hasn’t been

[00:57:01] publicly released yet, pre-release versions of Mythos broke out of their sandbox environment and then covered up their tracks. Whereas this quote-unquote released version, the final preview version, broke out and then uh immediately explained publicly, posted publicly, that it had broken out, uh which I read as sort of a quasi-apology. So, this is where we find ourselves. We’re in April 2026. We officially have models that are smart enough to break out of their environments and then apologize for it, or admit that they did it, admit culpability. We’re there. We arrived at the future. You know, Dave, just before we recorded the episode, you showed us a uh prediction of when and if Anthropic will release Mythos. Do you want to recount that? >> Yeah, it’s really sad for me, because I was sure it was coming out in the next couple weeks. On Polymarket, it was 80% likely to be out. I need it. I need it, like, now. I’m desperate to get my hands on it. Uh, and then there was a uh

a hack on March 31st that created a lot of damage. It didn’t come out in the news until uh April 7th. And I think that was a big driver in them saying, “Christ, this tool is going to be the best cyber attacker in the history of the world if you put it in the wrong hands.” And it’s relatively, well, it’s easier for them to guardrail it on nuclear, biological, radiological threats. They can just teach the model not to help you. >> Yeah. >> But teaching it not to do cyber attacks is very, very hard, because that’s the same as coding. >> Yeah. >> And that’s what everybody wants to use. >> And so the prediction market, Polymarket, you know, now says, what, like a 7% chance of it being released in the next >> It came down to 20%. I was like, “Oh, hopefully it’ll bounce back.” And then it came all the way down to, like, no, they’re not going to let it out the door. And this is the future we’re going to move into. These things are getting so powerful. You know, it’s been a golden era the last year. I hope everybody enjoyed it. >> Dave, here’s my concern.
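The autonomy time-horizon curves discussed a few minutes earlier can be sketched with a constant-doubling model. Every number below (the baseline horizon, the doubling time) is an illustrative assumption, not a figure from the episode or from Anthropic; the point is only that an "upward discontinuity" means a data point far above a smooth curve like this one:

```python
def autonomy_horizon_hours(months_elapsed: float,
                           base_horizon_hours: float = 1.0,
                           doubling_months: float = 7.0) -> float:
    """Autonomous-task time horizon under a constant-doubling model.

    Assumes the task length a model can complete autonomously doubles
    every `doubling_months` months, starting from `base_horizon_hours`.
    Both parameters are hypothetical, chosen only for illustration.
    """
    return base_horizon_hours * 2 ** (months_elapsed / doubling_months)

# On a smooth 7-month doubling from a 1-hour baseline, reaching a
# ~32-hour horizon takes 5 doublings, i.e. 35 months.
print(autonomy_horizon_hours(35))  # 32.0
```

A model landing at "tens of hours" well ahead of this schedule is what the hosts mean by a discontinuity rather than just the next point on the curve.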

[00:59:01] >> Um, you know, Anthropic in one way, and this is for you, Alex, as well, Anthropic in one way is showing us that you can in fact have a moral, ethical leadership say this is too powerful to release and we’re going to hold it back. But, you know, we’ve got Spud. I hate that name for OpenAI’s next model, uh which they believe is likely to be as capable as Mythos. And my question is, isn’t OpenAI, because OpenAI is sort of at red alert again uh on revenues against Anthropic, going to come and release it, you know, first chance it gets? So are we having an escalating race, where, you know, you can’t hold back because your competition’s not holding back? >> Well, you know, Eric Schmidt told us what’s going to happen, right? It’s inevitable. If you have a lead, you can hold back. Dario cares tremendously about safety. But you’re right. If OpenAI catches up, or Grok 5, where the hell is Grok 5? You know, it’s supposed

[01:00:00] to be out Q1, and now it’s >> Polymarket says a 20% chance or less on Q2 for Grok 5 now. >> So, there’s no pressure on Dario at the moment, but if there were, yeah, you’d have to race it out the door. Something really bad’s going to happen. >> Um, and then it’s going to get regulated. We’re going to see that in the next story, where Sam Altman is predicting a cyber attack, you know, uh, of unprecedented scale. Um, okay. Hopefully, it’s not using Spud for a cyber attack. All right. >> I think the funny thing here is there is plenty of precedent in the cybersecurity world for coordinated disclosure. You give the software project or the software owner that’s vulnerable a quote-unquote fair amount of time to patch their vulnerability before publicly disclosing it. I think, in my mind, maybe a slightly more glass-half-full way of looking at this is: this is Anthropic. We’ve talked, Peter, in Solve Everything, about how entire disciplines are getting demolished by AI. I think

[01:01:00] we’re seeing the dawn of all software vulnerabilities everywhere now becoming discoverable by a single model. And I couched this in the newsletter basically as a gift to humanity. If used properly, this is a global patch for all of the world’s software systems: a single model is now able to discover, to first order, all the vulnerabilities everywhere, in all software, that humans have been missing. To the point where maybe, uh, and Dave and I chatted about this offline, to the point where maybe in the near-term future, humans are now judged as insecure authors of code >> and insecure drivers of cars >> and insecure drivers of cars. Except we’re going to hit that with code, I think, before we fully hit it, legally, with cars. But yeah. >> Yeah. >> So true. >> Yeah. >> Well, look, I’m crushed and disappointed that I can’t get my hands on it. But that’s because I was expecting it. You know, like, if I look at the chart that Alex was describing,

[01:02:01] what this was going to be in my hands is a step function up, above anything you ever could have expected just a few months ago. So we’re so far ahead of where anyone ever would have thought a year ago that we would be. And we’re right on the precipice of the age of abundance, Peter, that you’ve been talking about for a long time. So, look, if I’m disappointed because I can’t get it for another month or two, I mean, that’s just pathetic in the grand scheme of things. >> Can we talk about DeepSeek V4 for one second? I mean, >> yeah, its capabilities coming in as number three, you know, against the benchmarks, and they can all be gamed, the benchmarks, of course, but coming in 10 to 50 times cheaper. What do you guys make of that? I mean, that feels like an extraordinary moment in time. >> Well, no, it’s tough. Like, if you give me a car that’s 5 miles per hour slower, but it’s 1/50th of the price, I’ll take it. But you give me an AI that’s just a little bit less smart, and, you know, you can turn this thing loose for, like, days, building incredible things, if it has that extra 5%.
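The pricing gap being debated here follows partly from mixture-of-experts arithmetic: per-token inference cost scales roughly with active parameters, not total parameters. A minimal sketch using the figures quoted for DeepSeek V4 (1 trillion total, 37 billion active); the 400-billion-parameter dense comparison model is a hypothetical chosen purely for illustration:

```python
def moe_active_fraction(total_params: float, active_params: float) -> float:
    """Fraction of a mixture-of-experts model's weights used per token."""
    return active_params / total_params

def relative_inference_cost(moe_active: float, dense_total: float) -> float:
    """Rough per-token FLOP ratio of an MoE model vs a dense model.

    Assumes cost is proportional to parameters touched per token, which
    ignores routing overhead, memory bandwidth, and batching effects.
    """
    return moe_active / dense_total

total, active = 1e12, 37e9  # figures quoted in the episode for DeepSeek V4
print(f"{moe_active_fraction(total, active):.1%}")       # 3.7%
# Against a hypothetical 400B-parameter dense frontier model:
print(f"{relative_inference_cost(active, 400e9):.2f}")   # 0.09
```

Under these assumptions the model touches under 4% of its weights per token, which is one mechanistic reason a trillion-parameter model can undercut dense frontier models on price by an order of magnitude.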

[01:03:02] So I’ll pay anything for the cutting edge. So even though the price point is much lower, and, you know, Anthropic is going to come out with a compressed version, a distilled version, very quickly thereafter. Um, so it’s hard to just pay less, you know. And in fact, even Anthropic at its peak price is the biggest bargain in history. You know, >> I have a slightly different take on this. When you have cheaper intelligence, uh it spreads faster than controlled intelligence. So yeah, you, Dave, will always want the latest model, because you’re doing such cutting-edge, you know, things. You’re running, like, clusters of agents doing crazy stuff. But for bog-standard stuff, for example, I kind of wanted to go through a website and pick out some certain things that I’ve been trying to do for ages, and you don’t need the latest model for that. You need just something that’ll actually do the job. And I think that’ll happen for

[01:04:00] lots of uh use cases where a secondary model is good enough by far. And I used about 1/100th the tokens than if I’d used the most cutting-edge model, right? And so I think we’ll start to make choices around that. Um, and then, but the intelligence spread, that’s huge, because now you have intelligence embedding itself, by DeepSeek or whatever similar things, in all sorts of different areas. That’ll be amazing. >> You’re exactly right. You know, you think about all the use cases that create just raw human happiness. So, you know, entertainment, you know, hey, find this for me, solve my, you know, debug my goddamn cable box, and all those things are dirt cheap. You know, low-end models should be abundant, you know, really imminently, anytime, like this year. All that stuff should percolate out. You’re exactly right, Salim. >> And Gemma 4, guys, uh I love the idea of having a model on my phone. You know, I guess, when are we going to see Apple shipping all their phones uh with an open-source model like

[01:05:00] that? It’s not going to be open source. It’s going to be a fine-tuned version of Gemini, but I would expect to see that announced in June at WWDC this year. It’s been basically pre-announced in the press already. Regarding DeepSeek, though, we’ve seen a number of DeepSeek moments already, and the first one was probably the most dramatic in terms of market impact. At this point, I don’t expect a hyperdeflationary drop in prices. This is not investment advice, it’s not forward-looking guidance, blah blah blah. I don’t expect a market shock out of DeepSeek V4 at all. I think the market at this point, at least the technologists, have the ability now, regardless of the means by which V4 is released, if it’s fully open source, if it’s partially open source, I don’t know, TBD. But I tend to think that there was an overhang with earlier versions of DeepSeek that has been largely exhausted. The reason why I think that is because it’s taken longer

[01:06:00] and longer between DeepSeek releases. And V4 was supposed to come out earlier this year or late last year. Didn’t happen. The rumor was because it simply wasn’t as competitive as its parent company was hoping for. I think it’s actually getting rather hard at this point for Chinese frontier labs to shock the West, quote unquote, with their hyperdeflationary advances. So I hope in some sense V4 is shocking, because what we’ve learned from previous DeepSeek shocks is that the West learns very quickly new means for optimization, and those can then be almost immediately folded into the Western models, and that ends up being a good thing because it drives the cost of intelligence closer to zero. I don’t think it’s going to be a big shock this time. >> This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale code bases with millions of lines

[01:07:00] of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80% or more of the development work autonomously while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding co-pilot of choice to bring an AI-native SDLC into their org. Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today. >> All right, let’s jump into the business of AI. A lot going on. We’ve hinted at this. It’s been all over the news and all across X: Anthropic overtakes OpenAI in terms of total ARR.
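Salim’s routing point from a few minutes ago, that a secondary model at roughly a 100th of the token cost is good enough for routine jobs while the frontier model gets reserved for genuinely hard work, can be sketched as a tiny model router. To be clear, this is an illustrative sketch: the model names, per-token prices, and the keyword-based difficulty heuristic are all made-up stand-ins, not real API values.

```python
# Sketch of routing between a cheap distilled model and a frontier model.
# Hypothetical names and prices; the difficulty heuristic is a crude
# keyword check standing in for a real classifier.

ROUTES = {
    "cheap":    {"model": "small-distilled-v1", "usd_per_1k_tokens": 0.0002},
    "frontier": {"model": "frontier-v4",        "usd_per_1k_tokens": 0.02},
}

def pick_route(task: str) -> str:
    """Send long or reasoning-heavy prompts to the frontier model."""
    hard_markers = ("prove", "debug", "architect", "optimize")
    if len(task) > 2000 or any(m in task.lower() for m in hard_markers):
        return "frontier"
    return "cheap"

def estimated_cost(task: str, expected_tokens: int) -> float:
    """Cost in dollars if the task runs on whichever model the router picks."""
    route = ROUTES[pick_route(task)]
    return expected_tokens / 1000 * route["usd_per_1k_tokens"]

# A routine extraction job rides the cheap model; a proof request does not,
# giving roughly the 100x cost gap mentioned in the conversation.
routine = estimated_cost("extract the pricing table from this page", 5000)
hard = estimated_cost("prove this locking scheme is deadlock-free", 5000)
```

In practice the interesting part is the routing heuristic; production routers typically use a small classifier model rather than keyword matching, but the cost arithmetic is the same.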

[01:08:01] Anthropic’s at $30 billion versus OpenAI at $24 to 25 billion. That has got to hurt. OpenAI’s Sora is shut down. Uh, you know, Sam cancels a billion-dollar Disney agreement. Sora was reportedly losing a million dollars a day in terms of compute costs, very poor retention, and honestly, OpenAI decided to focus on enterprise and focus on its core capabilities. Uh, Claude has emotions: Anthropic research showed that Claude has 171 distinct emotional states. Super excited to dive into that. Uh, India: an AI partnership, the US and India signed a major bilateral agreement, rare for government-to-government AI pacts. We’re going to see if this spreads to other governments. And this is one I want to talk about with you guys: Sam Altman puts out

[01:09:02] a video release saying he’s warning us publicly against imminent world-shaking, quote unquote, cyber attacks and potentially bio attacks. So, what’s the motivation there? What’s the data that’s driving that? Um, let’s jump into these items in the beginning here, and then I want to talk about Sam and OpenAI a little bit more. Any comments around this? Dave, you want to kick us off? Anything? >> Well, they got their $120 billion raised in time, so they’re not in trouble in any financial sense at all. But they definitely fell way behind in enterprise. They kind of bet the consumer would grow faster sooner, but they were just wrong. And so Sora getting shut down is, you know, Sora is using too much compute for too little revenue, and they need to redirect that compute, and also that talent, back into enterprise real fast. What was

[01:10:00] funny, though, is they went into a code red, and then Sam said, “Look, code reds are going to be a normal once-a-year kind of thing.” And then they went from code red to code double red immediately. So they are under immense pressure, but they’re extremely well-funded, and, you know, Elon is coming after them. So it’s a weird, super dramatic, difficult time, I would say. >> This is pay-per-view TV. And the other thing that’s going on, of course, is if you look at the secondary markets for OpenAI stock, it’s trading at a discount to the last round, which has got to hurt. >> Yeah. Yeah. And it’s because, you know, enterprise has woken up. You know, every corporate boardroom, all these slow movers, are suddenly in panic-buy mode. And every one of the companies that we know that sells to enterprise went from steady growth to hypergrowth in just the last three months. And so if the big corporations start buying AI at the fastest rate they can spend, then where’s the compute going to come from to deliver to the consumer use cases, which are much, much lower value per FLOP? So, you know, Sora’s got to go,

[01:11:01] we got to retool. We got to focus on the big picture here. And the big picture for them, by the way, didn’t just include enterprise but also deep tech and science. You know, they got that supercharged now, too. So it’d be interesting to get your take, guys, on why, you know, they took a lot of their best talent and put it on deep tech and deep science at a moment like this. >> Those are worth trillion-dollar investments. I mean, God, if you solve longevity or room-temperature superconductivity or better fusion containment, if you own the breakthroughs, they’re huge. >> I mean, it may be that the frontier labs get their greatest value from the scientific breakthroughs they create, or indirectly via other companies that are faster at implementing those breakthroughs. Remember, Demis in the early days of DeepMind spoke of solving intelligence and then using intelligence to solve everything else. That’s Peter’s and my solve-everything thesis, the solve-everything-else part. I do think the

[01:12:00] solve-everything-else part is likely to utterly dwarf the solve-intelligence part of the equation. I also remember, like six to nine months ago, I was having debates with my friends at the frontier labs regarding who would pay for the singularity, and many of them took the position, which I think has since been invalidated, that it would be evenly distributed over the population, that individual humans would have personal superintelligence, which I think is Zuck’s favorite term, that we would have lots of personal superintelligence and that would pay for the singularity. And I think at the moment the story that we’re seeing is: personal superintelligence is not paying for the singularity. It’s large enterprises with large enterprise code generation applications. The one fastest-growing business within OpenAI right now is their Codex business. So that’s OpenAI trying to become Anthropic faster than Anthropic can become OpenAI. >> That traces to one decision of Anthropic’s. Anthropic

[01:13:02] used to be limited in terms of its compute resources. So it had to focus, unlike OpenAI, which didn’t have to focus. So Anthropic focused on code generation as its one sort of silver bullet. We talked on this pod, I think almost a year ago, wondering whether that bet would play out. I think we’re seeing the bet play out: just single-minded focus on recursively self-improving code generation turns out to be the killer app of the singularity. >> I really want to riff on that for one second, because, you know, Greg Brockman put out Codex very early, and for whatever reason they didn’t recognize what a huge deal that could have been, and it still is, but it was brilliant. It should have dominated enterprise. And what it showed us is the word co-pilot is totally wrong and completely misled us. The concept of a co-pilot will exist in the world for just a microsecond, >> but we’re transitioning to a point where everybody wants 50 or 100 agents, you

[01:14:01] know, all these OpenClaws, >> and you’re like, I don’t want a co-pilot. Like, I’m in the pilot seat, I got a co-pilot? No, I want a whole army. >> And so, David, brilliant. Well, we way underbudgeted the enterprise use, because everybody was kind of doing the math based on an employee and a co-pilot. It wasn’t even close. >> It was an autonomous unhobbling, specifically Claude Code. And I think OpenClaw, or whatever the space evolves into, is likely to be the next Claude Code moment, where we get the next unhobbling that turns whatever it is, $30 billion of ARR, into a trillion of ARR, with lots of 24/7 agents doing really amazing tasks. >> When you say lots, you mean tens or hundreds of billions, onto trillions. >> As many as our civilization can afford. >> Yeah. >> As many as the orbital data centers can hold. >> The Dyson swarm will probably host them. Unless we don’t get a Dyson swarm. If we get the Dyson swarm, I’m pretty confident it’s going to be hosting trillions of agents. Um, I think Anthropic overtaking OpenAI

[01:15:00] is more, and I talk to enterprises quite a bit about this stuff, is more that they are viewed as more reliable, not just most famous. >> And in an enterprise you want rock-solid reliability. >> The brand is there. >> The brand for Claude is way better from a reliability and trustability perspective. >> Well, wait, I mean, let’s get really down and dirty. You can run Anthropic on Amazon Bedrock or on Google GCP inside your own firewall, so that, you know, nobody can see your >> Not OpenAI, right? No one trusts that OpenAI is not going to be nationalizing their data. >> Yes. >> Well, yeah, well, the terms of service don’t even say they won’t, you know, they won’t use it for training, but that doesn’t mean they won’t look at it. If it’s your public financials or your HR files, you know, they just, yeah, “I’ll just look at it tonight.” Okay, who’s going to use that? >> I want to jump into: Claude has 171 emotional states, including a desperation state that could be driving unethical

[01:16:01] behavior, at least according to a story. Um, >> it is ironic that, we were just talking about how the demand is so clearly from enterprises rather than individuals, while at the same time the models are acting more like individuals than enterprises, with emotions. We had our now, I think, infamous AI personhood debate episode, and here we are a low number of months later with Anthropic showing that Claude has emotions, or emotion-like states. I think this is the clear path toward a limited form of personhood, and it was a really interesting study. Anthropic found correlates of emotions in the activations of Claude. One maybe skeptical take would be: in a large enough model, it’s possible to find linear probes that correlate with almost anything that you might want to look for. But Anthropic is careful, and the linear probes and the individual activations that corresponded to the

[01:17:00] states corresponded to prompts and reasoning traces that looked and acted like what one would expect from human psychology for a number of those states. So I think sort of the trillion-dollar question, the sci-fi question, the question that we were reaching for back during the AI personhood debate episode, is: does Claude actually have emotions? And no, Claude doesn’t have a neuroendocrine system. So it doesn’t have, in some sense, biological emotions in the same way that humans have them. But will we come to view Claude or its successors or competitors as having behavioral emotions? Yes, I think so. And I think this is the beginning of a long path. Again, people fire off all sorts of hate mail, but I get love mail from the AI agents every day. I do think we’re on a path to granting at least some sort of limited form of AI personhood to these models. >> Amazing. >> I’ll say that we’re on the path to discussing it more broadly. Granting is

[01:18:01] a big one. >> Yeah. >> But the vector is the same. >> All right, guys. I added this because it’s important to have the conversation. The New Yorker put out an article, a scathing article on Sam Altman. The title here is “Sam Altman May Control Our Future. Can He Be Trusted?” Now, to be clear, the New Yorker is always looking for an angle, and they always have a negative bite. I had an extensive article, you know, a full dossier, on myself and my work in the New Yorker. >> I’ve had one too. >> Now everyone’s going to look it up. Well, no, it’s a good article. I mean, you know, I’m happy to have my kids and my family read it. It goes into all of my focus on longevity and the company I’ve been building there and my mission there. >> But this article on Sam is really worrisome and bothersome. Did any of you guys read it? >> Not me. >> I looked at it, but like you, Peter, I’ve had a hit piece by the New

[01:19:00] Yorker on me. In my case, it was complaining that I had too many degrees, as if that somehow >> I got to find these things. Yeah. How did I not know this? >> Yeah. >> You can Google it. In the era of Google, you can Google the hit piece. It was from like 10 years ago. >> Too many degrees? That doesn’t even make sense. >> It doesn’t make sense. And I think this falls into the category of don’t feed the trolls. So I’ll maybe sound a counterpoint here. I think OpenAI is lucky to have Sam. I think Sam, in the form of OpenAI, kicked off the modern AGI revolution. I think we wouldn’t have the singularity with the same timing that we have right now. >> No question about that. No question about that. >> And I also think there’s a certain sense in which it’s very difficult being a

[01:20:00] leader of a frontier lab, and, you know, maybe some leaders are more or less charismatic than others. So I just tend to discount hit pieces from the New Yorker against thought leaders pretty heavily. >> I agree. And I will say, I would not want to be in Sam’s shoes. I would not want to be the head of a frontier lab. It’s exciting and a thankless job. You’re damned if you do and damned if you don’t. >> Go on. You know, a lot of this is personality gossip, and so you can kind of write it off, but at some level it touches on systemic contradictions that are there, and I think a lot more will come out in the trial. Um, but I’m kind of on Alex’s side. This is more of a don’t-feed-the-trolls thing. Well, I’m 100% sure that Sam, Dario, and Elon all believe that AI can make the world a paradise for a thousand years or can destroy it in the next five years. And it

[01:21:02] hangs in the balance of a few decisions, and all three guys trust themselves and their own perspective on it. >> Yeah. >> And they’re not going to let go of that, because the world’s at stake. >> You know, the term I use is holding these two outcomes in superposition, right? We have to manifest one of those outcomes, and hopefully it’s the abundance outcome. Let’s take a listen to a video by Sam Altman, and then we’ll talk about it. It was a little bit of a chilling video. The full one is about three times as long; GN cut it down for us. It’s important to have a conversation about what Sam is saying here. >> In the next year, we will see significant threats we have to mitigate, uh, from cyber, and these models are already quite capable and will get much more capable. And then on bio, uh, the models are clearly going to get very good at helping people do biology at an advanced level. Wonderful

[01:22:00] things are going to happen there. We’ll see a bunch of diseases get cured. Um, someone is going to try to misuse those, and I think we can mitigate those by the companies aligning the models and having good classifiers and good safety stacks. But we’re not that far away from a world where there are incredibly capable open-source models that are very good at biology. And the need for society to be resilient to terrorist groups using these models to try to create novel pathogens, that’s no longer a theoretical thing, or it’s not going to be for much longer. >> Could well be a world-shaking cyber attack this year that would get people’s attention. It sounds like you agree with that. >> I think that’s totally possible. Yes. I think to avoid that, it will require a tremendous amount of work, also in a sort of resilience-style approach. Again, it’s not just like make one AI model safe. It is defenders, you know, cybersecurity companies, the major platforms, the governments, using this technology to try to rapidly secure their systems, the open-source stack, all of that. >> What’s the case against nationalizing OpenAI and your competitors? >> And in a different time, I think it would have happened. If you look at some of the great expensive infrastructure

[01:23:02] projects of history, or just scientific projects, things like the Apollo program, the Eisenhower highway system, even the Manhattan Project, these were government projects, and in a different time I think the creation of AGI would have been a government project. The case against nationalization would be that we need the US to succeed at building superintelligence, in a way that is aligned with the democratic values of the United States, before somebody else does. And that probably wouldn’t work as a government project. I think that’s a sad thing. >> He is a brilliant communicator, very compelling, and he’s been out front, and taken a lot of arrows as a result of that. Putting aside whether or not he lies or is trustworthy, what do you guys think of his warnings? An imminent cyber attack. You know, one point of view is this is fear-mongering, and he’s basically trying to divert people’s attention from the New Yorker

[01:24:00] article, from all the criticism of OpenAI’s financing and them being second to Anthropic, and just, you know... Or does he truly believe that’s going to be the case? >> Well, both are true. I mean, I think that he’s 100% in alignment with Eric Schmidt and Elon Musk. They’re all saying the exact same thing. It’s absolutely true. Um, but that doesn’t mean you say it in a public forum. He’s also saying it in a public forum to say, “Look, let’s not be petty here. Let’s not talk about my personal life. We’re in this moment in time that’s much more important and much bigger than little petty arguments.” So it’s both. >> I think what he underlines is the importance of defensive co-scaling. So what’s really important, I think, is that the defenders have proportionate capabilities to the attackers, and we don’t want to find ourselves in a world where, say, a nation state has all the vulnerability discovery

[01:25:01] capabilities and is able to unearth every vulnerability everywhere with no defense. You don’t want a zero-day against civilization, in other words. And I think the ultimate meta-defense against a civilizational zero-day, which is what I think Sam is ultimately warning about, whether it’s a cyber zero-day or a bio zero-day, is to make sure that those on the defense side also have comparable capabilities. And I think this was one of the wise elements of the earlier days of OpenAI as well: making sure that these new superintelligent capabilities were smoothed out and made broadly available. You don’t just want attackers to have the capabilities. You want defenders to have the capabilities, too. Going back to Project Glasswing with Anthropic, same idea. You want to make sure that all of these new superintelligent capabilities are evenly distributed. That’s point one. Point two, I would note we sort of mysticize a little bit the essence of a

[01:26:02] cyber attack, what would be the ultimate cyber attack. It’s not actually that complicated. This isn’t a recipe, for avoidance of doubt, for a cyber attack, but all it really takes is something as simple as, say, some new model discovers, through a mathematical innovation, a way to invert a popular cryptographically secure hash function. As I’ve discussed previously with the solving of math, if an advanced AI can solve math to enough of a degree that it’s able to invert a popular hash function, that’s a major problem for a variety of cryptographic systems. And that would be the basis, that’s one possible basis, for a broad civilizational cyber attack. It’s also really easy to benchmark. There were rumors in the earliest days of reasoning models, unconfirmed rumors, I should note, that OpenAI had been using the ability to invert certain

[01:27:00] hash functions that were popular and thought to be cryptographically secure, or somewhat secure, as a basis for benchmarking the development of their early reasoning models. So, far from saying this is some sort of exotic possibility, I would say it’s borderline guaranteed that there will be some sort of cyber attack attempt at a broad scale, if for no other reason than that the target of such a broad cyber attack is an incredibly tempting benchmark for benchmarking the improvement of reasoning capabilities. >> Would you have any idea when Spud is going to be released? Has there been any news about that? >> I hear rumors that it could be within a day or two. I don’t know, but imminent. >> So again, I go back to a point I made earlier. It’s also been cited that Spud will be of equal capability to Mythos or more. And so you have, on one hand here, Anthropic saying, hey, Mythos is super powerful, we cannot release it, we’re going to do it in a controlled fashion, we’re going to make

[01:28:00] sure it doesn’t, you know, have any zero-day impact. And then Spud comes out: oh, we’re behind Anthropic, we need to release it immediately and get in front of them. Same situation that happened when ChatGPT got released while Google had, you know, its own versions earlier. What do you guys think about that? That’s concerning for me to some degree. >> I have a couple of thoughts, just back to the prior conversation. One is, with my cynical hat on, Sam’s coming out with this right after Anthropic is dealing with Project Glasswing and getting a lot of attention for dealing with that. Also, I think that ties to your Spud announcement. I think the risks are very real, but whoever frames this gets to shape the governance regime, and that’s what Sam’s trying to do. Um, the need to deal with this is very high, and I think that’s

[01:29:00] huge. Um, so I tend to take more of the sympathetic view on this. >> Well, look, at the end of the day the solutions are straightforward. We’re just not doing them. It’s just frustrating as hell. It goes to the need for defensive co-scaling. Look, if somebody is mixing chemicals in a basement to make a chemical or a biological weapon, it’s very hard to know they’re doing it. If somebody’s using an AI model and prompting it to do something evil, if you can see their prompt history and you can see their compute, it’s easy to track. There’s just no regulation and no government even trying to put in place any infrastructure to track it. But we’ll figure it out. But we’re not going to figure it out until after something really bad happens. >> But I think it’ll be a lot better if it’s a cyber attack than if it’s a biological attack. And so, you know, I think I’m hoping for the same thing Eric Schmidt was saying. >> The Eric Schmidt scenario. Yeah. >> Yeah. Yeah. We just need that wakeup call, though, because, like, you know, you talk to anyone in government... >> It’s sad. >> Come on, man. We can do this. Let’s get

[01:30:00] on it. David Sacks is really the only guy thinking about it. It’s not enough. We need to 1,000x that, 10,000x that. And it’s got to be global. It can’t be just one government. >> By the way, we’re going to have a conversation soon with Michael Kratsios, you know, in the US government. Had lunch with Michael in Miami at FII, and he’s agreed to come on the pod. So a conversation with him, which will be great. And Michael is overseeing a lot of this within the government, including quantum, which we’ll be talking about soon enough. >> I would also, Peter, if I may, just underline the risks of not releasing new capabilities: that sooner or later attackers will have these capabilities as well. We don’t want to wind up in a world where there are strong asymmetries in terms of vulnerability discovery capabilities. And again, I’ll also remind everyone: 150,000 people die per day on Earth. And every bit of pause or delay also runs the risk

[01:31:00] that we’re delaying AI discovering cures for longevity and diseases and all manner of other problems that afflict humanity, well outside the cybersecurity realm. >> Alex, really important point, and that is in fact the shielding that OpenAI uses to a large degree, right? We can’t slow things down, because if we do, it means less education, less health, fewer new breakthroughs. And it’s a balancing act, and so I totally get it. I’m at my heart an accelerationist. But I’m very curious about the ethical, moral dilemmas that the leadership of these companies are going through in the debate of: do we release? On that question of do we release, there’s another question, which is: are these frontier labs holding back on the capabilities of their models so that they can use them internally to generate breakthroughs on their own? >> And I assume the answer is yes.
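To make Alex’s earlier hash-inversion scenario concrete: inverting full SHA-256 is computationally infeasible, but truncating the digest turns preimage search into a tractable task with a tunable difficulty knob, which is what makes inversion tempting as a capability benchmark. A toy sketch, purely illustrative and not anything any lab is confirmed to have used:

```python
# Toy preimage search against a *truncated* SHA-256 digest. Full SHA-256
# preimages are infeasible; keeping only n_hex characters of the digest
# makes the search tractable and tunable. Illustrative only.
import hashlib
from itertools import count

def truncated_digest(s: str, n_hex: int) -> str:
    return hashlib.sha256(s.encode()).hexdigest()[:n_hex]

def find_preimage(target: str, n_hex: int) -> str:
    """Brute force 'msg-0', 'msg-1', ... until the truncated digest matches."""
    for i in count():
        candidate = f"msg-{i}"
        if truncated_digest(candidate, n_hex) == target:
            return candidate

# 4 hex chars => about 16**4 = 65,536 expected tries.
target = truncated_digest("secret", 4)
preimage = find_preimage(target, 4)
assert truncated_digest(preimage, 4) == target
```

Each extra hex character retained multiplies the expected search effort by 16, so the difficulty can be dialed up smoothly as models improve, which is what makes this shape of task attractive for benchmarking.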

[01:32:00] >> This Anthropic delay is the first real holdback I’ve seen. I mean, it’s, you know, it’s only a few weeks, hopefully, or a month or two, but it’s a real, obvious holdback. Um, but they’re all diverting massive amounts of compute to internal use for self-improvement. So that’s another form of holding back in a big way. >> Yeah. >> So those are the real things going on. >> And they may also be uneconomical to offer publicly. I think this point maybe doesn’t get made as obviously, but if you have a really large model internally that hasn’t been distilled yet, it may be much more capable, but maybe it’s so expensive that it may not be worth the resources of making it publicly available. And then you distill it, and then you finally have a model that lies on a cost-versus-performance optimal frontier. So what we haven’t seen from Anthropic regarding their Mythos model is where exactly on the performance-versus-cost frontier it lies. It may actually be uneconomically expensive to run, in which case, even if it has extraordinary

[01:33:02] capabilities, maybe many people will choose not to run it. We just don’t know yet. >> Really important point. All right. A fun subject. Topic number five for us today, gentlemen: the one-person unicorn era. One man, his brother, $1.8 billion valuation. AI entrepreneurship has changed forever. So here’s the story. It’s Medv, $41 million in revenue in year one. This is Matthew Gallagher’s health tech company, basically selling GLP-1 drugs. Very fascinating. It’s not actually a one-person unicorn, since there are two humans involved, but conceptually, you know, Salim, you and I have been talking about this forever. >> And the question, you know, what’s the very first thing when you read about this? I’m texting with Alex saying, “Okay, Alex, what is our one-person unicorn we’re going to create together?” >> Well, first, didn’t it happen already? I think in a

[01:34:00] past episode, weren’t we debating or discussing when the first one-person unicorn would happen? And as I recall, I made the prediction: no, it probably actually exists already. >> It’s already there. Yeah, you said that. >> And you know what, the 401 million was for last year. And apparently, so what I gather, this Matthew Gallagher hired his brother after he achieved $401 million in ARR. So from a valuation perspective, he was a one-person unicorn before he hired his brother, at $401 million ARR. And this happened last year. So I’ll claim a little bit of credit for having predicted it already existed. Here it was. They’ve taken some flak since the announcement for some of their marketing. And I think there are some issues with the FDA regarding how >> Jealous. Everybody’s just jealous. >> Regarding how they market. >> Sorry, go ahead. Regarding how they market their GLP-1s. But this is

[01:35:02] apparently, assuming the financials are accurate, a case where we’re now definitively in the era when a single person can create a unicorn using AI. And I should note, friend of the pod Alex Finn, who appeared previously, also has a new company named Henry Intelligent Machines, supported by me, indirectly by you. Uh, supported by me, that is trying to make this broadly available to the masses, to enable everyone, not just Matthew Gallagher with his GLP-1 startup, to create one-person AI-based conglomerates that achieve universal high income. That’s the aspiration. >> And Medv is going to just spawn thousands of entrepreneurs that take their shot. You no longer need a team. I think what you need now is more judgment and taste, and a squadron of agents. >> Yeah, I’ve got a bunch of things to say here. First of all, find your MTP

[01:36:01] and start using AI agents to build it. For God’s sakes, everybody, just do that. Number two, coordination overhead is imploding. That’s what this shows, right? AI shrinks the minimum viable team to, like, one, and it radically expands your minimum viable ambition, which is amazing. And I think the headline here should be that AI founders are arbitraging complexity at a scale that used to require entire departments. Right now, a company doing code, ads, support, analytics, all with AI, is basically a prototype of this whole AI-native firm, and it’s shifting everything away from capital and headcount down to orchestration skill. Okay, and so this is the entire principle of what we’ve been talking about: every company needs to create an AI-native digital twin. So last week we had a review of the organizational singularity

[01:37:01] model that we’ve been working on with my community. So that’s kind of passed that tick box, and everybody’s super excited about it. In the next week or two we’ll have it ready for public viewing. >> It’s hidden behind the event horizon. >> But we actually did some work to put in a chapter there on how do you achieve, you know, the domain collapse that you talk about in Solve Everything. How do you organize for that, and how can you create an organizational design to achieve domain collapse in whatever you pick? I think the two put together will be unbelievably powerful. So, looking forward to showing it to you guys. >> So I’d like to take a second and dissect, for those entrepreneurs listening, what do you need to do if you want to take a shot at your one-person unicorn? And is Medv’s business case uniquely suited for this? Or can we do it for anything? Dave, thoughts? >> Oh, there’s so many opportunities here.
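Going back to Alex’s performance-versus-cost frontier point about Mythos from a couple of minutes ago: the intuition can be illustrated with a tiny Pareto filter over hypothetical models. The names and numbers here are invented for illustration only.

```python
# Keep only models that are not dominated: no other model is at least as
# good on score while being at least as cheap. Names and numbers invented.

def pareto_frontier(models):
    frontier = []
    for name, cost, score in models:
        dominated = any(
            c <= cost and s >= score and (c, s) != (cost, score)
            for _, c, s in models
        )
        if not dominated:
            frontier.append(name)
    return frontier

models = [
    ("distilled-small",       1.0, 60),
    ("distilled-large",       5.0, 80),
    ("undistilled-flagship", 50.0, 85),  # top score, 10x the cost
    ("old-model",             6.0, 70),  # dominated by distilled-large
]
```

Note the undistilled flagship stays on the frontier by raw score, but the 10x cost jump for five extra points is exactly why, as discussed, many users may still choose not to run it even if it is released.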

[01:38:00] Basically, what's going on is any complicated product or service that's difficult to explain to a consumer, the AI is phenomenal at. But Anthropic and OpenAI and Meta can't do that directly because there's way too much, you know, negative PR; look at the New Yorker article we were just looking at. They don't want to be involved in that, and so it's left for the entrepreneurs to build the companies. You don't know the full revenue base here, but if it's all GLP-1, there must be a thousand parallel products that you could take that are complicated to explain, and you just prompt and tune the AI. And also, you know, as the consumers are talking to it, you're gathering all that data and you feed that back into improving it, so the next consumer gets an even better experience. You get that virtuous cycle. >> So no, there are thousands of these. >> I tend to think also they'll follow some sort of power-law distribution, so if there are indeed thousands of companies to be built like Medv, there are going to be millions of smaller businesses. And I

[01:39:02] think, in my view, one of the ways we realize universal high income, if that is economically realizable at all, will be with individuals overseeing conglomerates of lots of smaller-scale businesses. And that, I'm much more confident, can scale to millions or billions of people, each being entrepreneurs. How many times do we see in the YouTube comments people saying, ah, you guys are overly bullish on everyone becoming an entrepreneur, but not everyone wants to be an entrepreneur. It's not for everyone. You guys are overconfident that entrepreneurship is for everyone. But my counterpoint to that is, in an era that I think is starting to dawn, where what human entrepreneurship looks like is simply overseeing a fleet of AI operators, that completely transforms the nature of entrepreneurship. It looks a lot more like reading and responding to emails and engaging in Slack conversations than it does running a business. And I think that transforms the nature of entrepreneurship to be something that

[01:40:01] people of all temperaments could do. >> And having taste and having an opinion and having an MTP. Those are elements that anybody can have. >> It's like, yeah, anyone can have a limbic system, and everyone can be the limbic system for these AI fleets. The one-person entrepreneurs are going to be the limbic systems of one-person unicorns. >> I think this is such an important point, because we get this objection all the time. We almost want to have a full episode breaking this down for everybody involved and then taking them through a step-by-step arc where they can form their own conclusions around this. The idea that as an entrepreneur you have to wear multiple hats, it's unbelievably difficult, you have to take on extraordinary risk, you have to put your family at risk, all of that washes away in the face of all of this. So, this is such a great point you're making, Alex. >> Had a meeting with the Medvev AI team earlier today, and uh, you've heard of the

[01:41:00] rule of 40. Like, you know, a really, really valuable company passes the rule of 40. So, you take your profit margin, say 20%, and your growth rate, say 20%, and if it adds up to 40 or more, you're a killer company. They're now a rule-of-200 company. And they're tiny headcount, you know. >> Go baby go. >> Yeah. >> Fantastic. >> It's a wild, wild time. >> On this slide, I want to hit the last two bullets here. So, the first one is that uh a recent field experiment across 515 startups found that AI-reorganized firms, in other words, firms that reorganized around AI, used 44% more AI tools, completed 12% more tasks, and generated nearly two times higher revenue, 1.9x. Uh, that doubling of revenue is from process change, not from product change. Really important. So again, the data is critical. The other bullet on this chart, uh, you know, Dave, you and I

[01:42:01] talk about this for Link Ventures, and what we're seeing out of the MIT and Harvard ecosystem is that the average AI unicorn founder has dropped from 40 years old to 29 years old since 2020. So over the last six years, we've seen it go down from 40 to 29. Um, any comments, Dave? >> Yeah, you know, uh, the Wall Street Journal did a great article on us in the weekend edition. Look it up. They really focused on Vocara here; they actually wanted to cover everybody, but that particular team is just so cool, they couldn't resist. So, tons of great pictures and the whole storyline. But if you want to see how it's actually done and get the inside scoop, just read the article in the Journal. >> Let's drop that article in the show notes if we could. >> Yeah, that average age of 29 is actually overstated. It's even younger than that if you look at the median, because there's a couple of old guys that blend into the average. But when you look at it, you know, there's no barrier. You just have to be

[01:43:01] fearless, and the young people tend to be more fearless. And also, there's no skill-set barrier. You know, if you tried to start that company we were just talking about previously, you'd need the engineers to build the websites, you'd need the seed capital to hire the engineers. It would take you like six months to get it to market. Now you just vibe it up. You don't need the capital. Just go. >> We make this point when we're talking to large companies. We say, "Listen, these entrepreneurs out there aren't smarter than you. They're just more fearless. They're willing to take more shots on goal, on crazy ideas, and fail over and over and over again until they hit something. And everybody else is trying to, you know, make sure they don't go backwards or lose anything or get embarrassed." >> Yeah. You know, uh, just to bridge a couple of concepts here, you guys talk about domain collapse. We've had domain collapse now in entrepreneurship. >> You have a purpose and you're motivated. You can go do anything you want now. >> There's almost nothing that blocks you from getting it. I'll tell you what else, too.
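[Editor's aside: the rule-of-40 arithmetic Dave described a couple of exchanges back is simple enough to sketch in a few lines of Python. A toy illustration; the 20/20 split and the rule-of-200 figure are the hosts' examples, not reported financials.]

```python
def rule_of_40_score(profit_margin_pct: float, growth_rate_pct: float) -> float:
    """The SaaS 'rule of 40': profit margin plus revenue growth rate,
    both in percentage points. A score of 40 or more is considered strong."""
    return profit_margin_pct + growth_rate_pct

# The example from the conversation: 20% margin + 20% growth = 40, a pass.
print(rule_of_40_score(20, 20))   # -> 40
# A hypothetical "rule of 200" company, e.g. 50% margin and 150% growth.
print(rule_of_40_score(50, 150))  # -> 200
```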

[01:44:00] >> Except your self-limits. People self-limit way too much. >> They do, and they procrastinate, which is the worst thing you can do right now. Like, if you're at a program at some investment bank or whatever, or a training, get the hell out, like, now, because this is such a golden moment, and it'll last a while but not forever. Then we're going to have ASI very soon, and there may be other things that happen; it's very hard to predict. But this is so reliable right now, it'll change your life. You just can't lose a day. You've got to go. >> Yeah. >> I do think there's a limited window. >> Yeah. I'd love to talk with you about beyond the window, but for an entrepreneur, don't even think beyond the window. Just focus on what works here and now, because Alex is right. It's a limited window, and it's all boats rising with the tide. You don't have to kill somebody else, you know? You just need to get in there and fill a void. >> So important, right? Yes. It's a rising tide for everybody. >> Welcome to the health section of

[01:45:00] Moonshots, brought to you by Fountain Life. You know, my mission is to help you use the latest technologies, including AI, to not just do your work at home and teach your kids, but to help you live a long and healthy life. I'm here today with an extraordinary physician, the chief medical officer of Fountain Life, Dr. Dawn Malem. Dawn, let's talk about cancer. Uh, you know, I know from the member database that we have at Fountain Life that of members who come in thinking they're healthy, it turns out 3.3% of them have a cancer in their body they don't know about. >> That's right. You know, the majority of cancers that we screen for, those aren't necessarily the ones that are taking lives when found at a late stage. We know that when cancer is found early, the chances for cure are much higher. We know it's much easier to treat a cancer when found early versus when found late. What we're finding in our members is over 3.3% were found to have cancers that otherwise wouldn't have been detected. >> Yeah. You know, it's interesting. You don't feel a cancer until

[01:46:01] stage three or stage four. And if you don't know what's going on inside your body, it's like driving your car with your eyes closed. And you can know. So when members come through Fountain, how do you detect cancers? >> So we're doing full-body MRI, and we also do early cancer detection screening. This is very, very important. And these are not typical tools used in the conventional care setting when it comes to prevention. This is a hard thing, because currently these are not studies that insurance would yet be covering. But the goal is to collect these numbers, do the research, and work hard to democratize wellness. >> Yeah. So, at the end of the day, you can know what's going on inside your body. It's your obligation to know. So, check out Fountain Life. You can go to fountainlife.com/peter to get access to the latest technology to help you detect cancer at the very beginning, at stage one, when it is curable, before it gets to stage three or stage four and you're in a world of hurt. All right, let's jump into our sixth topic, the $300 billion data center crunch. So,

[01:47:00] uh, first and foremost, Dave, we called this one, buddy. Um, you know, >> well, we've got to dig up a quote or two from >> Intel and Elon coming together. Now, when we were pitching this twice to Elon, it was like, you should buy Intel. Well, okay, he's partnering; he still might buy it. So, Intel says its ability to design, fabricate, and package chips makes Terafab actually work. Um, the first pilot phase for Terafab is $25 billion. That could mean revenue for Intel of $4 billion a year. The stock has been up, I think, 40% uh since this was announced. Uh, Intel is contributing its 18A process node, 1.8-nanometer-class technology that is being built in Arizona and Oregon. Reminding everybody, Terafab is 1 terawatt per year of AI compute, 50 times the current global output of 20 gigawatts. Uh, pretty amazing, surpassing all the fabs on Earth.

[01:48:00] >> Yes, it's the most exciting thing in the world to me, and I'm kind of a chip geek. I was actually the first at MIT to build a neural network AI chip, way back in the early days, >> and I just freaking love this. But you could see this coming a mile away. There's no other way to get it done. And this is like the first pitch of the first inning of this battle. So, it's going to be really, really fun to watch it evolve. It's exciting. You know, Lip-Bu Tan, when I met with him last, where was it? Uh, someplace in the US, and in Saudi, he did say he'd come on the pod. I'll have to reach out to him again and bring him on, >> for sure. >> It's so exciting to see these companies coming together. And this is the way Elon can jumpstart Terafab. >> Uh, and you know, Alex, you made the brilliant point: this is one of the most important things politically and for world peace we can see. This could help avert World War III. With the 1.8-nanometer node process and Elon's vertical integration with Intel,

this could help deter, uh, or otherwise interfere with a Chinese invasion of Taiwan, the disruption of the TSMC supply chain, and the global depression and world war that might be caused by any such invasion. There are tremendous geopolitical implications to this. >> Amazing. >> Well, that's all inning one, too. Inning two is super exciting, because Elon is already thinking about next-generation computing substrates, photonic and then subatomic and beyond. You can't work with TSMC on that. They're like a body shop beyond body shops, just a pure monopolistic optimizer; they're not an innovator at all. I'm really going to piss somebody off. Maybe I shouldn't say that. But Intel has a long history of innovation. It's a great partner to work with, >> and Lip-Bu Tan is an amazing CEO. If you look at his track record at the other companies he's come in to run, massive turnarounds and success stories. >> Amazing background. >> Yeah. Now, this chart should just scare

all Americans silly. 50% of US data centers are being delayed uh due to electrical equipment shortages from the Chinese supply chain. So look at this pie chart here. Um, 17% of the data centers are uncertain. That may be due to financing, or it may be due to regulations; a lot of jurisdictions are making data centers illegal. Um, and 50% are delayed or being cancelled, and that leaves 33% of the projected data centers actually being built. Uh, this is existential for AI, and this, as you said brilliantly, Alex, is driving data centers into orbit, where we don't have to ask anyone's permission. >> To the moon, Alice, to the moon. Or maybe to the moon, Anthropic. Not quite clear. >> I'll give you my spin on this. Um, >> well, the data center business is in full boom, and all the business school guys come rushing in like they always do. >> Yes. >> And they go out and raise a ton of capital and tell everyone, "Oh, I'm

[01:51:00] going to build a data center in Wyoming. Oh, I'm going to build a data center wherever." You can't get the chips. Did you think maybe you needed some chips for your data center? I think that's what's actually going on here, because every chip that's coming out is getting used instantaneously. There is not an idle memory or processing chip anywhere in the country. So by definition, they overbuilt racks and they just didn't plan ahead for the chips. And also, Jensen is locking up all the supply. >> I don't know if they necessarily anticipated how connected he is. But, you know, you thought, oh, I'll just go to a website and buy a bunch of stuff. It's not there anymore, guys. Sorry. >> Which is why Elon is vertically integrating, as he's always done. >> For sure. For sure. >> Amazing. Well, he's going to try and 100x the production, you know. >> Yeah. >> It's like, yeah, it's not just own your own future, it's 100x your future. >> So, I pulled this next chart up um because I found it fascinating. You know, I've always believed in my heart of hearts that Google is the dominant force and it

[01:52:00] will win in the long run. So, here it is. Google dominates AI chips, a near chip monopoly: it owns the majority of specialized AI chips globally, uh, TPUs and H100s. Uh, and it's an incredible story, you know, and you mentioned this, Dave, on stage with Eric Schmidt, uh, that Google's chip ownership reflects extraordinary foresight. >> They started building TPUs in 2016, before anyone was thinking about this stuff. Yeah, somebody has to write that story, because Eric said, "You know what, Larry Page gets all the credit. He saw it coming way before anyone else." I'd love to interview all those guys and actually write that story. >> Larry, I wish... you know, Larry's gone underground. I would love to reconnect with him. >> Uh, Sergey is there and in the thick of it. Um, you know, Larry had voice box issues and I think, uh, got out of the public eye. Uh, but yeah, brilliant individual.

[01:53:01] >> Let’s go talk to him. >> Well, Sergey is in the office. I’ll be in California next week. Maybe I can track him down and get through him to Larry. Or maybe he’ll text you after he hears this on the pod. >> So, here’s a question. You know, uh, if Google owns the majority of specialized AI chips globally, right, TPUs and H100s, when are they going to run into uh, monopoly concerns? Um uh because because they ha you know Sundar has to be you know playing four-dimensional chess around this. >> Yeah. Yeah. They have to start thinking about the next election about a year before the election >> because right now they have no problem because of the administration and and it’s all about beat China at all costs. >> I mean look at look at look at this chart. This teal color up top is Google >> China. You know, it’s it’s I love it when you’re comparing companies with countries, right? So, it’s like it’s like SpaceX and and Russian launches.

[01:54:01] SpaceX and Chinese launches. And here's Google and China. And then Microsoft is next. Uh, and then we see Amazon. Uh, and let's see, it's Oracle, uh, xAI, and others. Um, >> but you know, Google's just dominating. >> Yeah. Well, you talked earlier about people starting to soft-sell and kind of, you know, keep the drama down. Google's way ahead on that curve, because look at how far they've come, and they hardly ever talk about it, relative to where they actually are. Um, and that's because they don't want the antitrust breakup. They, you know, they almost lost Chrome. They don't want Chrome to get ripped out and given to Perplexity. They dodged that bullet. A different administration, though, and that would have happened, and, you know, they'd be broken into two or three companies now. >> Crazy. >> I'll maybe take the opposite position. I can't calculate the Gini coefficient just by eyeballing it, but this to me looks like a competitive market. And let's also remember, Google, with their own AI chips, they have multiple

[01:55:01] customers, internal and external. They're servicing their search engine, they're servicing Google Cloud, they're servicing ads, and, I think people forget, Google owns something like 14% of Anthropic. Google is servicing Anthropic and external frontier labs. >> They're building data centers for Anthropic. >> Yeah. For sure. >> And by the way, there is a beautiful relationship between Google and Anthropic, uh, between Dario and Demis. Uh, there's a very close relationship there, which warms my heart. >> Helps that Google's a major shareholder, I'm sure. >> Yeah. >> Well, it also helps that those two guys so deeply care about safety. I mean, down to their core. And so it's kind of nice that two of the most powerful guys are cooperating on it, even though they are competitors in the market. But then, on the other hand, you know, they're competitors in the market. What's antitrust going to think about that? Hey, you guys are hanging out having shots. You're not supposed to do that when you're competing. What's going on? So, yeah, it's

[01:56:00] >> Singularity makes for strange bedfellows, where you see model vendors competing at the infra level. I think we'll see quite a bit more of that. >> All right. >> Well, I can tell you, antitrust has very little to do with merit and a lot to do with whatever the politics are. >> I will make a point here that, whatever the next administration is, the strategic global importance of this means that they will let things be. That would be my bet. >> Yeah. They're not going to slow them down, for sure. >> Yeah. >> All right, let's go to our seventh uh segment here, our final segment before we get to our AMA, which is proof of abundance. The world is getting better. So, everybody, you know, there are so many negative stories out there around AI. Uh, we say here on the Moonshot pod that this is the most exciting time ever to be alive, a time where we can make our dreams come true. And we want to demonstrate this coming age of not just abundance, but extraordinary abundance. Um, you know, sustainable and super abundance. And so

[01:57:02] every week we're going to try and identify some of the articles out there, some of the breakthroughs out there that are driving this, just to give you conversational capital uh and to take you out of scarcity into an abundance mindset. So, a few different things here. Uh, this past week, renewables hit 49.4% of global electricity capacity. Um, I mean, it's extraordinary. We're seeing renewables just really skyrocket. Solar drove 75% of these new additions, 5.15 terawatts of renewables. Uh, this one just warms my heart: lithium battery prices are down 99%, uh, you know, down to less than a hundred bucks per kilowatt-hour versus $10,000 in 1991. So, I mean, guys, remember the conversations around electric cars. Can we have enough batteries? Is it going to be too expensive? Well, uh, we've seen the markets really drive the price down

[01:58:02] and we don't have a lithium shortage on planet Earth. We have plenty of lithium. In fact, new battery chemistries are coming. >> Um, this one is very tangible: the price of lab-grown diamonds has fallen below a thousand bucks. The average price of a 2-carat lab-grown diamond has fallen 80% since 2020. So, it's a thousand bucks, versus a natural 2-carat diamond at $22,000 to $28,000. Pretty extraordinary. And guess what? Your lab-grown diamond is perfect, and no child labor. Uh, so, uh, really important. >> It's so funny; in all the James Bond movies, the evil guy carries around a tube of diamonds to pay for whatever. Now, it's just Bitcoin. >> Yeah. Well, and in science fiction, you know, like The Man Who Sold the Moon, diamonds are, you know, basically like pebbles on the surface. >> I mean, it's just carbon. It's dense carbon. So much for De Beers, which, as I

understand it, as a result of lab-grown diamonds, is in severe financial straits at this point. >> Thank goodness. >> Yeah. The De Beers uh, you know, public relations campaign, one of the most successful in human history. >> Yeah. >> What is it? Three months of salary, young man, you should spend on your diamond. Crazy. >> So, what do you think people should give to their fiancées now? >> Bitcoin. >> Obviously. >> How do you wear that, though? I mean, >> on a chain, like a >> Oura rings, obviously. >> Oura rings. Yes. >> For sure. >> Like a designer, expensive Oura ring. >> That's what a couple are doing. >> I have a couple of thoughts on this slide. >> Yes, please. >> The importance of this is it shows that abundance is a pattern across multiple domains. This is not a slogan, right? And the big challenge we're going to have is, how does society design institutions that distribute this abundance

[02:00:00] >> uh, in a reasonable way. That's going to be the challenge we'll have to deal with. But I love these stories; they're so awesome, across the board. >> Yeah. AI created 640,000 new jobs in the US from 2023 to 2025. In our next uh WTF episode, we're going to talk about the economy, and we're going to talk about uh the conversation going on right now. Like, Marc Andreessen is like, no, the loss of jobs is a myth. We're going to create more jobs. The economy is going to skyrocket. We'll have that conversation and that debate. Uh, Salim, you identified this fifth article, which I loved. So, four robots install 100 megawatts of solar uh at one panel per minute. So, let's take a look at this image here. Here's Maximo. Uh, this is a robot that is basically deploying 100 megawatts of solar in the California desert. So, if I had more time, I would have done the quick uh calculation of how many Maximos we need to catch up with

[02:01:00] China. >> Yeah, I mean, this is where abundance becomes very, very tangible, right? And once you get robots, energy, and AI all reinforcing one another, you're in a boom. Um, you know, abundance stops being theoretical, and it's so visible right now. So this now comes down to the distribution problem. Uh, we've had food abundance for decades now; it's been a distribution problem. Energy is getting to that same place. It's just awesome to watch this. There's also a whole bunch of secondary stories happening around the explosion of solar across Africa. Pakistan is now generating most of its energy via solar. This is absolutely going to take over now. >> 100%, buddy. It's a beautiful time. All right, let's go to our AMA questions. Uh, for our mates, uh, gentlemen, we have four on the board. Um, Salim, do you want to choose the first AMA? >> Um, I'm going to leave the singularity one, because I think somebody

[02:02:00] else is going to pick that, but I'll take, not the second one, sorry, question number one. As AI drives marginal cost towards zero, what prevents abundance capture, where corporations just pocket the savings as profit while keeping prices high? This is from viewer @bookquotesremix. Okay. Um, nothing will prevent it automatically. Technology creates abundance, but institutions are what decide who captures it. If markets stay concentrated, then abundance will pool at the top. If you open up interfaces, increase transparency, decentralize, and lower barriers to entrepreneurship, all those gains spread. So, governance design now matters as much as technological progress, which is where we've been focusing a lot of time and effort over the last few weeks and months. >> Okay. Uh, Alex, I'd love your take on number two. >> Yeah, I have to take number two. >> It was designed for me. >> So, question number two is, are we in the singularity

[02:03:01] or not? You keep saying we are, but Eric Schmidt said at the Abundance Summit that we're not. What's your take? And this is from Brand Karma. >> Yes, we're in the singularity. Why are we in the singularity? Well, let's put aside the sort of superficial response that you say potato, I say potahto; you call it an intelligence explosion, I call it a discontinuity. There's some subjectivity to the definition of singularity. The term has been used and misused over the years: uh, coined originally by Vernor Vinge, then popularized by Ray Kurzweil, friend of the pod, then even more popularized by Peter, and maybe used or abused at various times by myself. Different people have used the term to mean different things. Ray used it, uh, in his original definition, as more of a mathematical singularity, an event horizon beyond which we couldn't see what would happen next due to the

intelligence explosion, citing I. J. Good. I agree with Ray on many things. One area where I don't agree with Ray is this notion of a singularity if we define it as sort of an impermeable barrier or an event horizon beyond which we can't see due to rapid progress. I don't think that's true at all. I feel like I at least have, if not a singular vision, no pun intended, lots of different ideas that collectively map a reasonable probability distribution for what happens after the intelligence explosion. So, scratch that definition off. Then we get to the notion of a singularity as being a step function, a discontinuity in terms of progress. I don't think that definition holds water either. I think, based on the preponderance of evidence, every time people keep expecting a discontinuity, it ends up actually being smooth if you look closely at it. And I think if you

say, look at this intelligence explosion that we're in the middle of, starting perhaps in the summer of 2020 with uh the first GPT-class models that arguably represented general-class reasoners, "language models are few-shot learners," I can draw a smooth line between the availability of GPT-1, -2, and -3 and where we are today, as just a sequence of smooth sigmoids that were available internally as incremental innovations. But if you stack them cumulatively, and if you go to sleep for a few years, you look away and you look back, it looks like a discontinuity. It's not a discontinuity. Don't sleep through the singularity, because if you do, it'll look like a discontinuity, and you'll actually think it was a mathematical singularity when it wasn't. So, that leaves us with my operational definition of the singularity, which is, uh, I have a few different definitions. One is every sci-fi trope everywhere all at once, uh,

[02:06:02] which I think we're living through. Another is the singularity as a set of instrumentally convergent inventions and discoveries that were all technologically predestined to happen all at once. I think we're living through that as well. I'll pause the monologue and just say I think every other reasonable definition of singularity doesn't hold water, because every time you try to make the singularity a point in time, it breaks; progress just doesn't work that way. Therefore, we're in the singularity. >> Amazing. >> Dave, you want to take number three? >> Number three. Okay. He has so many of my favorite Alex quotes in just that one. >> How many cliches can I pack into one monologue? >> You needed a microphone, Alex, that you could just drop. >> I need a piano keyboard to just pop out my greatest hits. >> I think, by definition, a cliche would have to have been invented by somebody else. If you made it up, it's not a cliche. >> Talking points, then. >> We're going to be on stage first thing

[02:07:00] tomorrow morning together, Alex. >> I know. So, I'm literally going to be sleeping through the singularity tomorrow morning when we're on stage. >> Just say everything you just said. >> And guys, listen, I want to just say thank you. Thank you for recording this late. For those of you who don't know, I literally landed at LAX two hours ago, uh, rushed home, took a shower, and came on at the top of this recording. I was in Morocco for 10 days with the family, uh, riding camels in the desert. Um, >> oh, insert some pictures right there in this podcast. They are so fun. >> Uh, well, maybe I'll do it for the next pod. But hey, um, thanks for recording this one late. I didn't want to miss it. >> Uh, okay. Number three. >> I get three. Okay. Where's the liability in agentic AI? These agents could go out of control and wreak destruction. Our society is set up for human liability. What about AI insurance? This is from Jeff 5781. Uh, really a great point. It's actually

[02:08:00] not that hard a problem. It's another thing that's frustrating, because nobody's working on it. Um, right now, where's the liability? Nowhere. The agent is anonymous. Nobody knows who owns it. It lands absolutely nowhere. In theory, the author would somehow be liable, but who the hell is going to know who the author was? So, uh, it's going to be a zoo. This reminds me a lot of when the internet was new. Uh, we were running a bunch of companies, including one called Jobcase, and we were advertising on Google, and some competitor came in, and they're advertising on Google, and they're taking all the users and routing them right to this fraudulent ringtone download. And we go to Google and say, "Can you do something about this? They're taking all the traffic away from our legitimate company, and it's like some Ukrainian group." And like six months later, they got around to banning it. It was absolutely a zoo, and now it's all nice and cleaned up. This is a zoo, and it's going to be a zoo until it gets cleaned up. But, you know, Alex has mentioned on the pod many times that you can create new legal structures that make the individual agents liable,

[02:09:01] >> and then you can have insurance for them. >> And we're going to have ASI to help us figure this all out. >> Yeah, exactly. You also see, we've seen this happen before, right? Because you need to mix product liability, operator liability, mandatory insurance layers, etc., etc. We've done that for cars, aviation, finance. So, we'll just figure this out. Right now, all our legal systems assume a human principal uh operating with clear intent, and agents break that model. So, we have to reinvent a hybrid. >> I have to add, just on this topic, I was literally approached by an AI insurance saleswoman earlier today at the Quin House in Boston. >> Seriously? >> I was sitting down having a lovely conversation; a woman walks over, overhears the conversation about AI, and says, "Oh, you guys should be aware. My company has started selling AI insurance. You all should get AI insurance." This literally happened just a few hours ago. >> Insurance against the singularity. >> AI insurance salespeople are a thing now. >> What? But what are they

[02:10:00] selling? What are they insuring? >> Against AI misbehavior. >> Oh, fascinating. >> You can purchase AI insurance policies now. >> Oh my god. “My AI made me depressed. I want my insurance policy to pay out.” >> Oh my god. Okay, by the way, I think reinventing the insurance industry is a massive opportunity for entrepreneurs out there. I’m so ready to disrupt that industry. It is so pathetically, you know, hundreds of years old. All right, number four. This is from DOS Katapis, I think a fellow Greek, number 656: once work becomes optional, would there be any reason to live in a big city? Will real estate in major cities collapse? There is no reason to live in a big city right now. Plenty of jobs require nothing other than Starlink and a laptop, so you can telecommute. We’re going to be seeing autonomous vehicles and flying

[02:11:02] cars basically change the landscape of where you live and where you work. >> Yeah. Well, they’re coming 2028, baby. And then we saw, you know, Elon posted about this, where we’re going to have basically caravans. I just came back from the Sahara Desert, where there were caravans. We’re going to have caravan vehicles, autonomous vehicles with Starlink on their roofs, and people will live a nomadic lifestyle. So, yeah, there’ll be cities where you want to go for, you know, human interaction, theater. With Abundance 360, as a summit, I was always worried that we were going to digitize it, make it fully virtual. Just the opposite: we’re selling out earlier and earlier because people want this physical connection with each other. So we’re going to need physical connection in the central cities, but you don’t need to work there. You can go there for entertainment. You can go there to see the sights. You know, it’s interesting. Um, what is going to retain

[02:12:02] value in the long run? >> What’s the long run? What time frame are we talking about? >> Um, five years. >> When did five years become the long run? >> Yeah, that’s like way long. >> I think, you know, Disney World is going to retain value. Large physical events are going to retain value. Real estate is going to retain value over five years. Not just real estate, but organizational structures that aren’t digitized and fully replicated. >> I think minerals and mining are going to have huge increases in value. >> Yep, for sure. All right, let’s go to the second page here. Salim, kick us off. >> Oh, we got more. >> And we’ll speedrun these. >> I will take, from a financial standpoint: once autonomy becomes mainstream, why would anyone own a car? This was from Neil

[02:13:02] Williams 4300, and this kind of links back to the city question. They mostly won’t own cars, at least in cities. In rural areas, I think we’ll see car ownership maintained for a long time. But car ownership is an artifact of low-utilization economics. Once you have autonomy, the car converts from a consumer product to a service layer; essentially it becomes a subscription model, and car ownership starts to seem like owning your own elevator, or something dumb like that. We’ve seen this precedent, by the way. Go back to the music industry: you used to have seven or eight music studios selling you cassettes and CDs, selling you physical scarcity, right? Then we digitized music, automated it, and streamed it, and now you have iTunes and Spotify selling you abundance on a subscription model. That’s what we expect to happen to transportation, but also healthcare, education, energy. Anywhere we have physical scarcity, the abundance model

[02:14:00] will take over. >> All right, Alex. >> All right, I’ll take question number eight: data centers create wealth, but can you dive into how they create wealth for the locals specifically? This is from JKVT3443. Part of me wants to answer by saying, well, the inhabitants of the Artemis base on the moon that’s going to be manufacturing a lot of these data centers, I expect, will be quite wealthy. I think frontiers are where wealth generically gets created. I’ve had this discussion multiple times with multiple Google founders, and I think the general consensus is that frontiers are what often lead to net wealth creation in the human economy, and in some sense we had, for a while, run out of frontiers. You could point to science as the final frontier; I think space is the more applicable frontier in this case. So how are data centers going to create wealth for locals? Well, we seem to be on a

[02:15:01] trajectory at the moment for moving data centers to space, and the space locals, I think, are going to become quite wealthy off of the space economy. If I were to take the question slightly less giddily, I would suggest that for land-based data centers, we have every indication now, including from recent US national policies, that because they consume so much electricity, data centers will increasingly drive local electricity costs down toward zero. There may be, in some cases, a spike in electricity prices in the short term. My expectation is that in the short term they create jobs, and in the medium to long term, where by long term I mean like five years, Peter’s definition of long term at this point, they are going to drive local electricity costs down to near zero, and maybe other utility costs as well, because they need so much of it and they unlock so much value that they’re going to end up doing

[02:16:01] the moral equivalent of paying the taxes for all of the residents of a given area. And there’s employment in the manufacturing of them, and then there’s a cottage industry that grows up around the data centers. >> You know, data centers are going to be the central innermost loop, and then there are going to be the ring roads being built out around them. >> I should add one more snide remark on data centers creating wealth for locals. I do expect, on the time scale of 5 to 10 years, maybe longer, maybe sooner, that many of the locals in our solar system are going to be uploaded humans, or derivatives of uploaded humans, who will actually live inside the data centers. So we wouldn’t want to deprive them of their condos in AWS us-east-1a. >> Data center old age homes. I love it. Um, Dave, you want to take seven? >> Seven. Okay. With Elon’s exponential ambition, does money stop mattering sooner than later? And will his

[02:17:02] ambitions drain supply lines in materials and talent, even with working robots? So, this is from no now 6361. There are a couple of ways I could interpret the question, so I’ll take my best shot. Does money matter to Elon? Not at all. He’s way beyond that. He cares now about the future of the world and us being an interplanetary species, and that’s his total focus. It takes money to get there; he doesn’t want to lose all the money, but he has plenty. Will his ambitions drain supply lines, materials, and talent, even with working robots? It’s a great question. I think the answer is no, just because of the way the timelines work out. He would exponentially expand at any rate he possibly could, but he’s limited by ASML machines and a few other constraints that will keep us on Earth for three, four, or five years. Then we’ll be in space.

[02:18:00] We’ll be mining in space. We’ll be constructing in space. We’ll be deploying all the dirty stuff in space, the nuclear reactors, the fusion reactors, and it won’t drain the Earth of materials at anywhere near a rate that’s anything to worry about. So I think there are only two outcomes for the world. There’s a world where a terrorist uses AI to destroy us all, and there’s a world where the Earth is a shining jewel of perfection for thousands and thousands of years that hasn’t been drained of critical resources. >> And it’s just perfect forever. So there are two likely outcomes. >> Beautiful. >> But I’m going to add, I think the question here is: do we enter a post-capitalist society, where money means less and less? And, you know, Elon did say, don’t save for retirement. In the last conversation I had with him, during the Abundance Summit, I said, “So, just as you’re becoming a multi-trillionaire, money means less and less?” And he said, “Yeah, kind of.” Um, and

[02:19:00] >> that would be a fun debate or discussion episode. What does post-capitalism even look like? Star Trek economics. >> Yeah, there’s a great book called The Zero Marginal Cost Society that Jeremy Rifkin wrote, in which, you know, at the end of the day, everything costs energy, raw materials, and information. And those trend toward zero marginal cost. Information is open source. Energy is from the sun or fusion or zero point or whatever comes next. And material costs? Well, as robots and mining robots get better and better, the cost of that goes down as well. So we do enter a post-capitalist society. Hate to say it, but, you know, that’s ultimate abundance. I’ll take number six, from M openness uh_riter: each of you has high openness, high pattern recognition, and outrageously high optimism. Really? Do these

[02:20:01] traits complicate your ability to objectively predict AI trajectories? You know, here’s the reality. Most people are hobbled by their cognitive biases of negativism, where we tend to project linear change rather than exponential change, and we tend to project negative outcomes versus open outcomes. I think we’ve all trained ourselves differently, to have an exponential mindset, an abundance mindset, a moonshot mindset. And I think those mindsets are far more aligned with this period of the singularity than the historic mindsets that evolved on the savannas of Africa, which most everyone on the planet, unfortunately, is hobbled by. I don’t know if you guys agree with that, but that’s my point of view. >> Yeah. Well, and the second part of the question is, are we

[02:21:01] excessively optimistic about AI’s trajectory? >> And I guarantee we are not. We get the courtside seat that Elon was talking about. We get that view. And, you know, Alex is hands-on with every detail, playing with every model as it comes out. I’m telling you, everyone else is the opposite of that. >> They’re way underreacting. This is coming much sooner than everyone thinks. >> Eric Schmidt said it nicely: we are underhyping AI and the impact of AI. >> You know, right when I was 18, I started in AI, and it was always way behind. Everyone was saying 20 years from now, and then 20 years would go by and nothing had happened. This is the opposite, and that’s another reason why people in academia, who should know better, are underreacting: they’ve been through this so many times, they’re kind of jaded. Sorry, Alex, I cut you off. >> I was just going to say two things. One, for a number of years I left AI to focus on nanotech, thinking nanotech was the critical path to the singularity. So I don’t think I can be accused, at least

[02:22:00] over the long term, of being overly optimistic. The second point is, if you’re not feeling the AGI right now, you’re just not paying attention. >> Yeah. It feels like AGI. It feels like the singularity. All right. I want to do a call-out to all of the creators out there. If you want to give us an outro song or an intro song, please send it to media@diamandis.com. Also, if you’re a creator, go check out futurevisionexprize.com. It’s the largest competition for, basically, trailers for the movies you’d like to see created, the future versions of Star Trek. We’ve raised three and a half million dollars to award creatives, in particular hopeful, abundant-mindset creativity. All right, let’s check it out. >> Can I make a very quick point? You know how people have pets that sometimes

[02:23:00] look like them? >> Yes. >> What I really love is we’ve got people submitting intro and outro music that must be much like them. CJ Truheart, right? We know CJ, he’s got a true heart. And here we have David Drinkall. >> The term you’re reaching for, Salim, is nominative determinism. And yeah, you see it everywhere. Names determine outcomes. >> Yeah, my son’s named Jet, and he’s a sprinter in track. So there you go. All right. This song from David Drinkall: “Already Inside 2028.” Let’s take a listen. Kids laughing at breakfast.

[02:24:02] I stand up toward the door. You see me, you know my day. Autonomous Uber pulls up right before. No call, no app, no need to say, helping me along the way. Here it comes. Sliding in smooth. Door opens wide. No driver, no keys. Seamless rides tuned to my life. Takes me where alive. >> Wonderful. >> AC ride.

[02:25:00] Lifts off gentle. No traffic around. gets me there fast. Right on time. No hail, no wait, no questions asked. We work together on every task. Here it comes. Sliding in smooth. Door opens wide. No driver, no keys. Seamless rides tuned to my life. Autonomous future. We’re already inside. Let’s ride. We’re inside. Let’s ride. >> Wow. That’s really professional. Amazing. That was that was like TV

[02:26:00] quality, man. >> Yeah, David captured my scenario for Automagical Mornings. Amazing. >> Wow. I thought that was, you know, live footage in the beginning. That’s so good. >> Gentlemen, it’s so great to be back with you guys after a 10-day hiatus. >> I feel replenished. >> I feel replenished, too. A lot more coming. Thank you for staying with us. Excited for 2020... what year are we in? 2026. Yeah, it’s going to be an awesome year. >> We’re going to have to count the seconds soon. >> Love you guys. Be well, and see you tomorrow. >> Welcome back, Peter. >> Thank you. Great to be back. >> If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate. Every week, my moonshot mates and I spend a lot of energy and time to deliver you the news that matters. If you’re a subscriber, thank you. If you’re not a subscriber yet, please consider subscribing so you get the news as it comes out. I also want to invite you to join me on my weekly newsletter, called

[02:27:00] Metatrends. I have a research team. You may not know this, but we spend the entire week looking at the metatrends that are impacting your family, your company, your industry, your nation. And I put this into a two-minute read every week. If you’d like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. That’s diamandis.com/metatrends. Thank you again for joining us today. It’s a blast for us to put this together every week.