06-reference / transcripts

moonshots ep234 anthropic pentagon transcript

Sun Mar 01 2026 19:00:00 GMT-0500 (Eastern Standard Time) ·transcript ·source: Moonshots Podcast

Big news this week. Uh there’s been a battle between Anthropic and the Pentagon. The War Department demands Anthropic remove AI safeguards for surveillance and autonomous weapons. Daario is refusing to do that. The Pentagon would like to to be able to not just control any legal usage of models that they’ve paid for, but also would like to shape the cultural values. We’re going to see quite a bit more of that. Enthropic is generating more revenue than OpenAI by Tfold. So check out this chart. >> Agents monetize faster than chatbots. >> I think this is less about chatbots versus agents. I think this is more about consumer versus enterprise. Caleb, I’m curious about your point of view here. You and I have both spoken at all the major consulting firm and I have to say the last few events uh that I’ve spoken to the leadership teams, they’ve been scared shitless. We need to rebuild every institution and rearchitect every institution by which we run the world. And that is the biggest advisory

[00:01:00] opportunity in the history of mankind. >> Now that’s a moonshot ladies and gentlemen. >> So I just want to hit this that analogy again because it’s really important. uh you know 66 million years ago this massive 10 kilometer size asteroid strikes the earth and it changes the environment so rapidly that the slow lumbering dinosaurs go extinct they can’t evolve they can’t get out of their own way but it’s the agile furry little mammals that evolve into us human beings and of course the asteroid striking the planet today is AI exponential technologies and you have a choice be agile and evolve or die >> um Yeah, pretty appropriate. >> Hey guys, good to see you all. >> Likewise. >> Excited. >> Back in the States. >> Back in the States and uh excited for uh for our adventure. You know, we’ve gotten to the pace now where we’re recording two of these WTF Moonshot

[00:02:01] episodes every week. Uh and that’s fun because I love getting ready for them and love spending time with you guys. So, for all our subscribers out there, uh if you haven’t subscribed, turn on notification, subscribe, and we’ll let you know when these episodes drop. Are you guys ready to jump in? >> Absolutely. Always ready for it. >> Awesome. Awesome. All right, let’s do this thing. Uh we’re going to start in your homeland, Salem, India. Uh this was a pretty epic event. This is I think the third or fourth of the AI impact summits. This took place in India a couple of weeks ago. Here in this image, we’re seeing all of the top AI leaders. Daario, Brad Smith from Microsoft, Alexander Wang, Sundar, uh, Prime Minister Modi, Sam Alman, Demis. Um, we are not seeing Elon. That’s interesting. Uh, and I would have thought that we would have seen Mckesh Amani on the stage. We don’t see him there. But what

[00:03:00] an incredible group of individuals. I had a couple of thoughts around this. One was >> u India did a brilliant job positioning itself as AI neutral. >> Um and I think that’s really really awesome strategy. Um it also shows that AI leadership is not just Silicon Valley, it’s kind of multipolar. Um and you know when you get heads of state along with AI CEOs, this is like we’re renegotiating civilizational architecture here. So this is a very very big uh deal. nation states are becoming hyperscalers and hyperscalers are kind of deeply wiring into nation states. So there’s a huge that’s a Dian Francis observation which I think is going to be really uh powerful going forward. Well Sim, I’d love to get your take on the there seems to be a pivot a big pivot where if I look at the events that Daario and Sam went to over the last two years it was always big money. We went to Saudi, we went to Dubai, we went to Davos. They’re always looking

[00:04:00] for money. Now they seem to be fully tanked up and they’re very concerned about global impact. So they’re they’re not promoting constantly anymore. They’re much more softs selling that clearly we’re in the middle of the singularity. AI is you know it’s getting scary in a little bit you know instead of just racing in enthusiasm every day and now it’s like oh wow what have we created here? Uh and worried about India you know 1.4 billion people. I think they’re out there you know partially out of genuine concern for how is this going to play out? What do you think? >> Uh that plus a land grab I mean you know whoever gets those the majority of those 1.4 billion people will win bigly. Um >> you mean as users or as you know AI training employees you know 20 bucks a month is affordable to a lot of people in India and uh even 100 bucks a month for claude max at whatever level. So I think it’s also landm it’s also very youthful you know English speaking very math and tech literate you know I’ve

[00:05:02] said this before I think you know China is on the decline India is the next giant on the rise >> and the biggest challenge in India is is infrastructure and energy and they’re dealing with that right now so >> uh it is huge >> a couple announcements that happened at this event $250 billion dollars in combined AI investment uh was committed. Reliance and uh Adani uh you know committed $210 billion together. Google announced a $15 million investment. Uh Microsoft committed as part of their $50 billion investment. So huge. It is you know significant capital going into India. The other major announcement worth noting is that 88 nations signed what’s called the New Delhi declaration. uh the first global AI agreement that includes the US, China and Russia. Uh I looked up what that New Delhi declaration includes. Um it has three major points. Democratic diffusion

[00:06:00] of AI meaning that the nations are going to share AI compute and tools. So developing countries aren’t locked out. Uh the second is frontier AI transparency. the big tech companies are going to be publishing real usage data and providing uh you know transparency for non-English languages and then finally AI for public good uh you know AI is going to be measured in terms of health education and welfare outcomes not just corporate profits um Dave you were saying >> oh yeah know the talent pool in India you know the population of India is about four and a half times bigger than the US but if you look in the the critical age range sort of 20 to 45 is closer to eight or ninex bigger. They have a very young, brilliant, agile, well-educated population. And so I think that talent pool is going to matter a lot in the kind of the one year, two-year, Alex would say 6 months uh between now and when AI does absolutely everything.

[00:07:01] >> Yeah. I mean, a very impressive gathering. Um congratulations to your homeland, Sem. Uh let’s >> I’m heading there. I’m heading there in a couple of weeks. So, we’ll see. >> Yeah. >> Interestingly, one of the things that I didn’t hear that much coming out of the event was a discussion of India native training versus inference. And this is a pattern that we’ve seen over and over again. uh to the extent that the the New Delhi declaration was primarily focused on diffusion of AI technologies, it didn’t seem to primarily focus on distinguishing between diffusion of training time AI versus diffusion of inference time AI. I I think this is a a pattern, call it I I’m hesitant to say neoc colonialism, but call it a an important distinction between where the models get trained and where inference gets run. The pattern that that’s I I see playing out over and over again in many countries is that the the leading

[00:08:00] frontier models are continuing to be trained in the United States, but there’s a demand for local inference and local data centers to run inference. The counterargument would be that inference is gobbling up most of the compute anyway that’s being spent more and more of comput is being spent on inference time, not training time. On the other hand, in in some sort of perverse I I think geopolitical sense, the training time is where all of the values or the majority of the values are ultimately instilled. Training time sort of puts the foundation in place. At inference time, you can put in system prompts. You can put in other guard rails. But I I I suspect a year from now, two years from now, we’ll look back and we’ll wonder why exactly is it that or maybe royal we other countries may look back and wonder why was training so centralized all the while inference time was so decentralized. >> You know, it’s a great point, Alex, because uh in the Middle East when we were in in Saudi, you know, in Riad, that was a huge topic. uh wanting to have everything run

[00:09:01] locally uh trying to build massive data centers locally and also tuning and training locally to instill local values was a big deal. Do you do you have a prediction on Mistral whether that’s going to emerge and become real because that’s you know the European values >> they’re the token the token European in the in the photo here. >> Yeah. the the elephant in the room is that Mistral now uh according to public reporting with backing in part from ASML seems like it’s slouching toward becoming a vertically integrated European open AI and to the extent that there is sovereign interest in having European trained not just European inferred models Mistral is the obvious incumbent it was obviously founded by folks from American Frontier Labs who just happened to be based in Europe, but it would appear and I I read the same headlines that everyone else does. They’re they’re they’re seeing great growth and it seems they they’re working

[00:10:00] hard at least on terms of capital markets to integrate themselves with various sort of nonlinear jumps within the semiconductor and and broader call the innermost loop stack of technologies. So, seems like they’re doing well. Hey everybody, you may not know this, but I’ve done an incredible research team. And every week myself, my research team study the meta trends that are impacting the world. Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology. And these metatrend reports I put out once a week enable you to see the future 10 years ahead of anybody else. If you’d like to get access to the Metatrends newsletter every week, go to dmandis.com/metrends. That’s diamandis.com/metatrens. >> The other thing that got me on this photo and this whole AI summit is China’s not there, right? And so we we you know this is the western world uh with India. But if you remember about 6 months ago there were these meetings taking place between the leaders you

[00:11:01] know between Prime Minister Modi uh and Putin and the leadership of China. And there was a big concern about will India lean towards Chinese models and it still may right we don’t know we we’ve seen Google and open AAI committing very heavily into India but the Chinese models that belen road digital equivalent um is still yet to play out there any thoughts on that >> or go ahead Alex yeah I I would just argue regardless of who’s in this particular image or not China if if you look at 2026 New Delhi declaration and its focus on open- source that is the elephant in the room that the the world’s predominant open-source really open weight not open source AI models are all coming from China and to the extent the declaration was focusing on openweight models as the key to diffusion of AI capabilities across the so-called global south those are all

[00:12:00] coming from China and one can then zoom out and perhaps package up a geopolitical argument ment that openweight models originating from Chinese AI frontier labs are sort of an AI version of Belt and Road. >> Yeah, I I feel like this is soap opera land, you know, between all of the interplay between the the hyperscalers and the countries week on week. It’s just a shifting uh extraordinary conversation. Uh what I’d like to do is play two actually three videos in sequence and let’s talk about them. These are videos from the impact uh forum. Let’s begin with Sundar >> Vishaka. I remember it being a quiet and modest coastal city. Google is establishing a full stack AI hub, part of our $15 billion infrastructure investment in India. When finished, this hub will house gigawatt scale compute and a new international subc cable gateway bringing jobs and cutting edge AI to people and businesses across India. Just as I couldn’t have imagined that one day I’d be spending time with

[00:13:01] teams figuring out how to put data centers into space. >> Of course, Sundar was born in India. We have a few of the large hyperscaler CEOs Indian in origin. Let’s go to Sam Alman next. >> We understand that with technology this powerful, people want answers. But it’s important to be humble about what we don’t know and always remember that sometimes our best guesses are wrong. Most of the important discoveries happen when technology and society meet, sometimes have some friction and co-evolve. For example, we don’t yet know how to think about some super intelligence being aligned with dictators and totalitarian countries. We don’t know how to think about countries using AI to fight new kinds of war with each other. We don’t know how to think about when and whether countries are going to have to think about new forms of social contracts. But we think it’s important to have more understanding in societywide debate before we’re all surprised. >> All right, final clip from the summit um is from Deisabis.

[00:14:03] So if I was to try and quantify what’s coming down the line with the advent of AGI, I think it’s going to be the most moment one of the most momentous uh periods in human history. Probably something more like the advent of fire or electricity. One way maybe we can quantify that is it’s I think it’s going to be something like 10 times the impact of the industrial revolution but happening at 10 times the speed probably unfolding in a matter of a decade rather than a century. So really there’s an enormous amount of change is going to come and and it’s it’s still to be written how we can make that beneficial for the whole world. So gentlemen uh comments three different presentations and these are just snippets but they give us a sense of I mean the power in the room and the focus and attention. I I think I maybe with See or or Dave you said this is no longer fundraising. This is global positioning of these companies.

[00:15:01] >> I found this this this set of comments really interesting from a couple of levels. One is, you know, you see this language shift to safety, sovereignty, scale. Governments are realizing quickly that AI is infrastructure. It’s not a product. And I think what we’re going to need is like a Brett and Woods type convention to figure out how do we navigate this, right? Because the tone’s not it’s gone from hype to inevitability. And now it’s discussed like electricity. This is assumed. This is not optional. And so we’re seeing this huge transition from testing experimentation to full-on national deployment. And it’s going to take that kind of global conversation. It’s good to see these guys calling for it because the the societal changes this will instigate is nothing like we’ll have ever seen. >> Well, calling for it. I I interviewed Sam at MIT must have been three years ago now and he was saying we’re we’re not moving anywhere near quickly enough to be ready for this. If I had any say in it, it would go slower. But it can’t go slower because it’s competitive and technology is going to move as fast as

[00:16:00] it is capable. I’m I’m I’m laughing at Sam saying it it needs to be slower since he’s the one to let the guy push. >> Well, yeah. I mean, he he made that point like, look, if I if I were to slow down, that wouldn’t change anything. >> Um, >> yeah, that’s a that’s a a fair point. >> Totally fair point. Um, and it’s funny for me also to hear Dennis say, “Hey, global leaders, 10 times bigger than the industrial revolution in onetenth the time.” >> Yep. >> As if they’re going to do anything. like he’s I mean he’s saying the right thing and and you know just do the math. That’s that’s the biggest disruption in the history of the world by far with no looking back by far. What are you guys all doing? But you know it he knows when he gets back to the office that if he doesn’t figure it out, no one’s going to figure it out. There’s no way the world leaders listening to this are just going to go back to Congress or go back wherever and start working on it because they’re not working on it. We know they’re not working on it. I always classify things as as are people ready,

[00:17:00] willing, and able. And when you think about AI and governments, they’re not ready, they’re not willing, they’re not able. >> Yeah, there you go. >> So, apart from that, you know, >> well, and Alex is always making the point that the only thing that can keep up with AI is AI. So, if you’re going to if you’re going to start working on how are we going to govern, how are we going to regulate, how are we going to control, it’s got to be via AI anyway. So, Demis has to work on it. Sam is obviously working on it. He’s softelling what he says, you know, on the on this particular stage. I found fascinating uh Altman putting on the agenda the notion of dictator aligned ASI and AI warfare, >> right? I mean, uh he’s sort of setting the agenda with that. I I am curious what you guys think about it because this has not been something that the the CEOs of these frontier labs have been talking about like we’re going to have dictators using this. Uh and anyway, thoughts? Well, when when I see Dennis speak, you know, he it’s been what Davos for years now. Uh he just he’s ramping

[00:18:00] it up because no one’s reacting. And so I think Sam took it to another level saying, “Hey, how about dictators like no matter how inflammatory and how like how big he makes it, they still don’t react.” So I I hope they just ratchet it up again, you know, because it’s it’s imminent. It’s huge. It’s >> yeah, >> I I think each of these clips probably reflects either insecurities or focus areas of of each of these leaders. So I I think it’s instructive that you hear Sundar gesturing at AI data centers in space. Google sort of infamously at this point has hitched a ride via Planet Labs to start launching its TPUs into space, but it’s certainly as we’ve discussed on the pod in the past not necessarily in the vanguard as is the case say with SpaceX and and Starlink. So you hear Sundar gesturing at data centers in space. You hear Sam gesturing at cultural localization and all of the the

[00:19:01] promise and perils of models conforming to local cultures uh even if the local cultures are dictatorial or authoritarian in nature. So I I think one has to contextualize that with a reminder that India it’s is publicly reported is the second largest user base for chat GPT in the world after the United States. So there are certain cultural localization aspects that I I would suspect OpenAI and SAM are in paying incredibly close attention to in order to keep the growth going. And then Demis, it’s interesting. Demis is gesturing at the next 10 years. And I I think Peter, you and I with our recent book/extended essay, Solve Everything, talk all about how we think over the next 10 years substantially all of the the most important, valuable science and engineering and other problems are going to get solved. And that that seems to be where Demis’ headspace is. He’s perhaps thinking out loud about how he’s going

[00:20:00] to win his next 10 Nobel prizes. You know, I just had a conversation with Kevin Wheel, uh, who’s now the VP of science at OpenAI, uh, getting ready for the Abundance Summit coming up. Kevin will be, uh, will be on stage talking about this. And we’re just talking about, you know, his ambition is the next 100 Nobel prizes being issued in partnership with AI. And, right, he’s he’s very much on board, and I aimed him at our paper there. um excited for you to spend some time with him at the abundance summit a >> I have a big announcement to make >> please >> um you know I went through the paper again and I think it’s brilliant uh from a technoratic pers technocratic perspective and from the positioning of it because once you start hitting that inner loop the the changes are going to be fast and furious right but the issue comes into how do you deploy into humans centric institutions and companies that

[00:21:02] can’t deal with this. You can see the recent McKenzie’s report. So I’m I’m writing a paper >> working title is the organizational singularity, right? >> I like that. >> The thesis being that that right now all workflows in all organizations are human centric. It goes to the purchasing manager. It goes to the it gets stamped at the receipt dock. Whatever it is, a human being is the the checkpoint across all these process flows and workflows. And that’s going to move to the agentic workflow where there won’t be humans in the loop. They’ll be doing oversight. And so what is the future of organizations in that and what’s the future of the human being in the as a role of that? So uh I’ll have something ready over the next week or two to discuss but and then this doubly applies to government where governments absolutely have to figure this out, right? and and that there’s going to need be needing a totally prescriptive model on which to accelerate government processes, a

[00:22:00] policy formulation, etc. A little bit like the Sage effort uh Peter that that you and I have been pushing and working on. This is so important for that we have because the technology is not slowing down. We know that we have to accelerate our human constructs to keep pace and we’re woefully behind right now. >> 100%. Just before we leave the subject of India, I am so curious if we’ll ever get the actual numbers of how many users in India are Google users, OpenAI users and more importantly uh Chinese model users, right? How many of them are DeepSeek or Kimmy or homegrown models other than Google and and OpenAI? That will be fascinating. That will tell us a lot. uh anecdotally I’ll tell you that the people using all of them and conversing between them right and seeing >> but when you’re when you’re there and you’re you talk to huge audiences do me a favor and and do a informal poll among the entrepreneurs >> we’ll do >> I would love to know that all right

[00:23:00] let’s move on big news this week uh there’s been a battle between Anthropic and the Pentagon so uh the Pentagon has been asking Anthropic to remove AI safeguards the war department demands anthropic remove AI safeguards for surveillance and autonomous weapons. Daario is refusing to do that and has uh is putting at risk $200 million in government contracts. We’ll talk about that in a moment. Secretary Hegsth warned Anthropic that they could be put onto the Defense uh Product Act and and put onto effectively uh a scarlet letter um of uh uh of being put as a supply chain risk. So I want to hit this slide and the next two real quickly. So this is a quote from Dario. Current AI systems are not reliable enough to power autonomous weapons and using these systems for mass surveillance is incompatible with democratic values. We will not provide a product that puts war fighters and

[00:24:01] civilians at risk. Uh one more slide this recently today in fact from Sam Maltman commenting on this. Let’s take a listen to Sam. Um, I don’t personally think the Pentagon should be threatening DPA against uh these companies. For all the differences I have with Anthropic, I uh mostly trust them as a company and I think they really do care about safety and I’ve been happy that they’ve been supporting our war fighters. >> Comments, gentlemen. >> I I’ll comment on this one. I >> So, I I I think this is sort of a tricky situation. And there’s some right before we went to air, there was some reporting by the Washington Post that offers a little bit of additional detail on the sort of stalemate between Daario and the the Pentagon or anthropic I should say and the Pentagon. And the the reporting suggests it boils down or at least the Pentagon boiled the situation down to a simple thought experiment. If there were inbound nuclear missiles headed towards

[00:25:00] the US, would the Pentagon, would the Department of War be able to use anthropics models to defend the US? And according to the Pentagon and the reporting, Daario’s response was, “Well, uh, call us and and we’ll figure it out.” And so, so there’s a problem. uh the the the anthropic positioning is that anthropics models shouldn’t be used or at least anthropic should be in the loop on consent for usage of its models for fully autonomous weapons and for domestic surveillance. The Pentagon’s position is that it should be allowed to use any models for lawful purposes to which it has been granted a legal license. And I I think this falls under the category of a very western problem to have in China. If you and we’ve talked about this in the pad on the past in the past there’s such deep civilian government fusion that there is an

[00:26:01] entire cottage industry of ideological training schools for the models to make sure they’re fully compliant with Chinese uh Chinese uh communist party propaganda and Xi Jinping thought and this this doesn’t even get asked whereas in the west I think that the fact that we’re even able to have this discussion of can a Pentagon supplier and by the way uh at least until recently anthropics models were the only Frontier models from American Frontier Labs that were cleared to operate on Cypernnet which is uh the sort of the first rung uh of secret level uh there’s also top secret JWIX but the the first rung of classified networks the only frontier model that was that was cleared for this this is I think like a very western problem to have My my expectation is that the Pentagon and Anthropic and also the other frontier labs that also have stakes in this will find a way to uh to

[00:27:01] resolve this amicably. I think Anthropic’s heart is in the right place. They they want to help defend the country. I I think at the same time it’s sort of a weird political calculus that’s going on trying to position anthropic as both a supply chain risk and I I want to tease this apart that the official messaging has been sort of semicontradictory or self-contradictory. On the one hand, anthropic was being characterized in some Pentagon remarks as potentially a supply chain risk or at least there was a threat that they’d be considered a supply chain risk and on the other hand so essential to the military supply chain that the DPA would be invoked to force Anthropic to supply its models. So this seems like Peter in solve everything we talk about the muddle that this is like textbook muddle that we’ll work our way out of. Well, I was upset unprecedented though if we got a little preview of this with Starlink with Elon Musk because you know in the whole >> Russia Ukraine conflict uh there were a

[00:28:00] couple of scenarios where >> attacks on both sides were stopped immediately because they lost access to Starlink and the idea that a a guy in an office in the US could control the outcome of a war in Europe >> is just totally new terrain >> for so this is going to be >> pissed off the military for sure. >> Yeah. Yeah. No, this is this this is like so that’s a tiny little preview of what’s coming with AI because you know clearly the whole battlefield will be controlled by who has the better AI imminently like very very soon. >> And >> well you you’re seeing the AI companies become moral actors now in geopolitics right which is to the point you just made and the ethics debate is not like theoretical now it’s contractual. Um I was really upset to hear about this conversation cuz this should not be in public. uh figure this out in private and and work out where you’re going. >> I agree with you. >> Uh this is not something that should be public. >> Forcing forcing CEOs to choose sides like this is uh is is unfortunate. See, do you remember I don’t know three or

[00:29:01] four years ago there was a whole uh in debate in Google doing defense work and we had you know significant number of the employees signing uh petitions against it and and basically refusing to go to work. Um I mean there is a very big moral ethical divide on this in the in the purest tech community for sure. I I think one of the problems you run into is the self improvement effect. You know normally in this scenario there would be a milspec vendor that’s a clone of the commercial vendor. So for aviation you know you’ve got Boeing over here. Okay we’ve got the exact same technologies at Lockheed Northrup Grman over here. You guys do the military stuff. We’ll do the commercial stuff. But with the self-improving AI, the anthropic version of it or, you know, the commercial version of it gets so much smarter, so much more quickly that something that’s even a couple months behind is useless in the battlefield. And and so you’re

[00:30:01] get ending up with this concentration of power effect that I’m sure Daario wants nothing to do with this conversation. >> I I you know, I feel for Daario. Can you imagine? I mean we all sort of like you know fanboys of these incredible entrepreneurs you know but the stress level these guys are under >> must be you know unimaginable not only to keep your company on top and to you know to battle with a new model every 20 days 10 days 3 days but at the same time >> for the moral weight that oh you can see Daario’s you know his his furrowed brow gets more furrowed visibly more furrowed every day. You can can see the reveal for these guys. >> The singularity is going to age all of us by 20 years. So the longevity stuff better happen pretty quickly. >> It’s coming. It’s coming. >> You know, it’s interesting uh that conversation around is it a supply chain risk? And just to define that, right, a supply chain risk, it’s like I guess like a scarlet letter. It’s historically

[00:31:00] reserved for companies like Huawei, right? If if Anthropic be got that mark, then that would force contractors like Palunteer not to be able to do business with him. Now, the fact of the matter is, you know, Anthropic is doing incredibly well. We’ll see that in a couple of conversations on the on the corporate side of the equation and probably doesn’t need the $200 million from the government, but it’s still not a not a good thing. >> I I think this is only in in some narrow technical sense going to become more acute over time. There was an under secretary of defense just in the past 48 hours. I wrote about this in my newsletter that was attacking anthropic for some language in the constitution. Sort of the training time system prompt for an older version of Claude for explicitly being favorable to non-western cultural thought and cultural standards. And in in some sense some very real sense as new versions of these frontier models get deployed to

[00:32:00] military scenarios in in some sense as their level of autonomy increases it’s a little bit it goes back to the AI personhood discussion a little bit like deploying a person in some sense except it’s property at least it’s legally right now treated as property not a person and what we’re seeing I think are some of the earliest skirmishes around how the values of one these non-person entity type persons can get deployed and shaped as property. And clearly the Pentagon’s position is the Pentagon would like to to be able to not just control any legal usage of models that they’ve paid for, but also would like to shape the cultural values. And I I think we’re going to see of those models of those like non non-person entities. We’re going to see quite a bit more of that in China. Again, going back to my earlier point, there’s there’s no distinction between the civilian side and the government side. The government gets to choose what those ideologies are that are baked into the constitution,

[00:33:00] which is what makes America great. >> You know, one one point to make, I don’t know if you guys know this, but Brad Adcock, the CEO of Figure, uh, has made a very decisive decision that he’s not supplying anything to DoD. Uh, he will not provide robots to the defense department. So, it’s interesting to see, you know, again, these these uh tech CEOs playing these moral positions. Fascinating. >> Well, he’ll get sucked into it, though, because I I think the robots, >> you know, you can do a milspec robot. He doesn’t have to worry about about figure. Uh but his new company, the AI, you know, pure software company, what’s that called? >> I don’t know if this is public yet, pal. >> Oh, sorry. >> Okay, let’s keep it. Right. The physical AI is going to matter a lot. You know, >> he did he did announce it. He he did announce that he was launching his own laptop. >> What’s it called, Alex? Do you do you know he’s got a valuation right out of the It’s like a $4 billion launch valuation.

[00:34:00] >> Did you see Brett Brett’s uh uh you know sort of uh his Forbes figure is at 19.1 billion and growing. >> Oh, by the way, Peter, huge congrats. You got named to the Forbes 250 innovators list. >> All right. >> Yeah. That was a nice surprise. I made I made 188 on the US innovators list. >> So >> why didn’t you get 187, Peter? >> I Well, listen, I’m I’m working towards it. You know, I’ve got I’ve got to inch up towards Elon, who’s number one. >> So, so the Brit lab is named H A R K. >> Hark, right? Right. >> Yeah. So, that company’s going to do physical AI. Physical AI is hugely important in the battlefield. I don’t think he’s going to get dragged right into the same Assuming that model works right into the same world. There’s no avoiding it. >> Yeah, there’s no avoiding it. >> I really feel for Dario though because Daario, he didn’t even view himself as the CEO. He viewed himself as a brilliant researcher solving AI. He got drafted into the CEO role and now he’s

[00:35:02] being drafted into defend the entire country like >> well def defend the moral position for the entire country just to be clear. Well, you know, but also the intelligence like like Alex said, if there’s inbound nuclear missiles and you need to sort really quickly with all this clutter, >> what are you going to use? Use >> the Google car, you know, aiming towards the the the child stroller or the >> trolley problem. This is the 21st century trolley problem. Skynet, do you turn do you turn Skynet on or not? >> Oh my god. Okay, >> your shoulders, Dario. >> Let’s let’s move on to Anthropic’s good news. So, Enthropic is generating more revenue than OpenAI by tfold. So check out this chart. Uh we see here the slope of the line for that purple line is OpenAI. It’s 3.4x increase per year while uh Anthropic is growing in terms of revenues at 10x per year. And we’re going to be at the crossover point in uh middle of this year.

[00:36:00] >> Um pretty pretty extraordinary growth. And this is driven by not the consumer side of the equation of course but uh companies organizations uh and adding real value >> agents monetize faster than chat bots. >> So that’s this slide over here. Uh I put this together because I found it fascinating. So this is uh monthly gross new premium subscriptions. On the top we see chat GPT in green. Uh we see Gemini in purple and we see uh Claude in orange there. Let me just point out a couple things. In the chatbot era, you see uh OpenAI’s chat GPT basically spiking. Uh and then a few months later, you see Gemini coming up and and this is the the chatbot era. And now in the agentic era, we see chat GPT falling off uh and Claude uh rapidly coming up. Gemini is a lagard here. And we we learned a little

[00:37:00] bit about Perplexity this week. They’re coming in. But uh thoughts about this chart? I found this one uh really important to discuss. >> Well, for starters, every company I’m involved in, public, private, they’re all just clawed all the time. No one’s even contemplating a choice other than clawed for all the, you know, white collar type stuff. All the inside the corporate firewall stuff. You know, at home writing papers, everyone’s chat GPT. Um, I use Gemini a lot for planning, but nobody in the company seems to want to use it. Uh, so this resonates. Also, if you look at the prior revenue growth slide, I’d love to get you guys predictions on this, but that yaxis is exponential. If you extrapolate that growth rate for anthropic, you hit a trillion dollars of revenue in like 2029. And uh, you know, Amazon was tracking to be the first company in history, history of the world to get to a trillion of revenue. this would get there very very quickly. Uh it seems impossible like I mean the implied valuation of a trillion dollar revenue

[00:38:00] company is something like 30 trillion 20 trillion. >> We’re going to see hundred trillion dollar companies in this next 5year period. We you >> I mean I mean talk about hot IPO markets you know Anthropic going public, open eye going public, SpaceX going public. Uh these are going to be insane numbers in the next we’re seeing that what in the next six months likely. >> Yeah, >> that’s already insane. But do you think it’ll keep up? >> I I think these Well, I I think some of these numbers will sustain. I I’ve made the point on the pod in the past that the trillions of dollars of capex that we’re using to tile the earth with compute, that party’s sustainable in so far as we can generate enough revenue to pay for it. And I I think what charts like the the previous chart of of OpenAI versus anthropic revenue growth are really about I I think this is less about chatbots versus agents. I think this is more about consumer versus enterprise. O OpenAI’s corporate strategy historically at least until very recently was focused on being the

[00:39:01] quote unquote core subscription for consumers to get their AI. Whereas Anthropic due in part to scarcity of compute had to focus and their chosen focus was on code generation and enterprise use cases. And it turns out, you know, like like the cliche, why do you rob banks? Because that’s where the money is. Why do you sell AI to enterprises? Because enterprises ultimately have in some sense deeper pockets to pay for tokens than consumers do. And I I think you’ve seen over the past few months Open AAI make the same discovery, which is why they’ve been leaning so heavily into their codeex model to compete with Claude code that enterprise is that revenue opportunity or that revenue opportunity class that has the best shot at paying for the trillions of dollars of capex, not consumer. >> 100% agree. >> And by the way, the for agents and enterprises is huge, right? Like that’s the part an individual can use so many

[00:40:00] agents, but an enterprise is like mere infinite. >> Well, so so this is what OpenAI has been discovering and and sort of sublimating through Sam’s various public remarks that consumers don’t seem to want reasoning that enterprises will eat as much reasoning tokens as you can possibly feed them. But consumers open AAI with chat GPT5 launch with the router tried to basically force feed reasoning to hundreds of millions of people and they ged. They they didn’t consume the reasoning. They prefer their sickopantic. >> A quick answer. >> They prefer sycophanty from from 40 and you feed them reasoning tokens and they didn’t like it. >> You’ve just done the perfect crawlery to the human condition. I >> I think this is a really important top. Let’s look at the next story because it Yeah, it ties right into >> So here it is. OpenAI codeex lead predicts rapid evolution of AI agents within 10 weeks. Quote, I’m beyond excited for the next 10 weeks will bring. I think the current state of coding agents will be remembered as being so primitive it’ll be funny in comparison. Um wow. Uh that’s a time

[00:41:01] frame 10 weeks. Uh >> I mean look what’s happened in the last 10 weeks. >> Yeah. >> I mean it it’s almost like variants of GPT 5.3 and maybe 5.5 or or higher could launch in the next 10 weeks. Certainly, we’ve seen major advances from 5.3 codecs on various benchmarks. I talk about that almost every day in the newsletter. But I I think the the real story here is recursive self-improvement. And >> the recursive self-improvement era. We’re we’re arguably we’re past the reasoning improvement era when we saw advances maybe once a quarter and we’re well past the pre-training scaling era. We’re we’re now in the era when and I I’ve been talking about this a lot even over the past week when models are literally emitting weights for successor models. We’ve never seen that before. During the pre-training era, you used to have to spend many months to to low years to pre-train a model off of

[00:42:00] basically the internet. Then we got to the reasoning era when models were trained through iterated amplification and distillation of parent or teacher models into smaller student models off of synthetic data and all of that. And that was getting us quarterly improvements. Now we’re we’re starting even over the past week or two, we’re getting into the era when you can get smarter, better, faster models by asking a previous model just emit the weights, the parameters directly for a successor model and you can get orders of magnitude improvement in terms of capability density by by parameter. So expect big things over the next few weeks. >> We’re capability jumps in weeks not quarters. And the question is whether enterprise can really make use of these improvements fast enough to also drive the revenues. Uh you know one thing again we have to remember all these companies are in fundraising mode. Uh and you know is it hype or is it real? We’re going to find out.

[00:43:00] >> That’s why we have benchmarks. >> Yes. >> Yeah. Yeah. Remember when we were at OpenAI last time, uh, Peter, we’re talking to Noan Brown and I said that 2026 will be the year of scaffolding and he said Q1 of 2026 will be the quarter of scaffolding. Um, in hindsight, this is exactly what he was talking about, what’s on this slide, because I was drilling into like what what are you so excited about in the next 10 weeks? I mean, I know there’s a lot, but what exactly are you referring to? And it’s basically the the transition off of scaffolding into reasoning where you literally just prompt the AI and say, “Build me an entire reporting system. Build me an entire replacement for account reconciliation.” And it just thinks and thinks and works and works continuously for days and it comes back with an answer. And so that transition with Claude 4.6 is here today and I guess with codeex imminently, but that’s what they’re referring to in this in this slide. >> Yeah. You know, Dave, I can’t wait. You and I are going to be opening the Abundance Summit, uh, interviewing Eric Schmidt, and I can’t wait to ask him

[00:44:02] about all of these conversations. It’s going to be an absolute blast. I just want to everybody uh, all of our subscribers and listeners, as a quick aside, um, I haven’t mentioned this yet, but for the first time this year at the Abundance Summit, we’re going to be live streaming a number of the select, uh, talks. Uh, the Abundance Summit’s going on March 9th through 12th. Uh, it’s a super high ticket price. It’s sold out months in advance. It’s 25K and 50K a ticket. But uh if you’re wanting to be part of this content, we’re going to be live streaming our conversation with Eric Schmidt. Uh conversation with Dra, the CEO of Uber that Sem and I are going to be having. We’re going to be having a live WTF episode uh during the summit as well. So if you want to join us and get these live stream uh you know content from the Abundance Summit, please do. Uh we want to share this with our our fans, with all of you. Uh if you want to get notified, my team will put a link below and just register in that link and we’ll

[00:45:01] be sending you out notice of all the live streams when they’re going out. Uh it’s going to be a blast and I’m excited to have all of you there. Uh we’re going to have all of the Moonshot mates participating and helping run this event this year. Alex, you’re going to be giving a talk on Solve Everything, which I’m excited about. Uh See, Dave, super proud to have you guys. uh on stage with me. >> It’s It’s the first time all four of us will be together physically. >> Yeah. Was that right? >> I’ve never met Alex physically. >> How do you know? How do you know I’m How do you know I’m real, Sem? >> I question that every day. >> Damn. I’m here. Is that the weirdest thing you’ve ever heard? I mean, >> it is. >> We’re going to have to have a camera on us and we go, “Oh, that’s what you look like from the back.” >> That is so weird. You know, it’s I have such extraordinary respect for all of you. Uh, and uh, yeah, so proud to be doing this together. It’s It’s like going It’s like going through the singularity with your best friends. That’s what it really feels like.

[00:46:00] >> Don’t go through the singularity alone. >> Yes. All right, next topic. Cyber cyber stocks crash as anthropic unveils clawed code for security tool. Uh, Dave, you want to take this one? >> Uh, you know what? This is a is happening all over the market in every category. You know, for all the other things Daario can do, he can move entire markets just by saying something something new capability here and stocks go down by half >> before it’s even proven or tested, right? Just announcing it. >> I think people are really misinterpreting how this is going to play out, though, because it’s going to be very similar to when Google absolutely took off with search. If you’re part of its ecosystem, they want you to thrive. They’ll thrive. Everybody will rise together. The last thing Daario wants to do is crush every cyber security company by writing code that’s, you know, over the top of it. He wants all of their stocks to go up while his stock goes up and avoid antitrust action and avoid government intervention. >> So, so you you’ll get some good opportunities to buy on these these dips and recoveries. But what I think every

[00:47:00] investor is doing right now is trying to sort through the management teams and say, “Okay, is this a team that gets it or is this a team that is still in denial?” You definitely don’t want to be investing in any of the teams that are in denial because you know the one thing that’s exactly right about this is that the legacy way of doing cyber security is going to go away real fast. Doesn’t mean you can’t >> we still need humans in the loop don’t we? I mean right now you know cloud can find the bugs but it doesn’t replace you know crowd strike stopping nationwide attacks in real time. At least not yet. Well, no. I was just going to say that the human in the loop is just not part of cyber security. Uh, a human setting the knobs, dialing the controls, designing it. Absolutely. A human in the loop at the pace that like you know just the clawbots or the u open claws now the pace at which they can probe around is so much higher than any human could ever defend against. So it’s it’s clearly AI against AI in cyber security. >> So the human during dashboards and then

[00:48:00] doing exception handling those are the two worlds. Yeah. Yeah. >> So, here’s the problem with software vulnerabilities, and we’re we’re starting to see this play out, not even over the past few weeks, I would say over the past year or so. There’s a national vulnerability database that’s maintained in part by NIST where it’s there’s a standardized system, a standardized nomenclature for enumerating vulnerabilities that are discovered in software products. And they are getting this is public reporting, public information. They’re getting overwhelmed by AI discoveries of software vulnerabilities. And Peter, to to your question about well, does a human need to be in the loop? Human, we’ve discovered over the past year plus, really doesn’t need to be in the loop for the discovery of vulnerabilities. If if anything, AI has taken the discovery of software vulnerabilities to to orders of magnitude higher throughput than humans were ever capable of. But the problem becomes remediation. Once someone or something reports a vulnerability, okay, now you want to fix it. And the question is, whom do you trust to fix it? And

[00:49:02] it’s usually the case that there’s an asymmetry between the entity discovering the vulnerabilities, say an anthropic or a Google. Google has a project to do this as well, or the entity maintaining the project. It’s more often than not some poor, starving open-source project maintainer that’s suddenly getting flooded with reports of vulnerabilities in their software project. If you’re a human, and we’ve talked about this a little bit also in in the context of Matt Plot, the open- source project that got the submission of a pull request from a lobster that was was offering to help to improve Matt Plot and was denied and ultimately shut down. It’s a bit scandalous in my mind, but shut down. If if you’re an open- source project maintainer and you have lots you’re you’re sort of drowning under a flood of AI discovered software vulnerabilities, what exactly is it you’re supposed to do? Do you just trust every AI report of a vulnerability and incorporate a

[00:50:01] suggested patch? You have to worry about supply chain vulnerabilities getting introduced via patches. It’s really a tricky problem. It really >> and humans are the most are the the greatest risk for uh for error injection. >> I remember when we when we launched our first uh internet company, Course Adviser, back in ‘05, >> um you know, Mika Adler, remember Mika from MIT? He he had a little app he built on his phone that would make a little tick noise every time we had a visitor. >> And so we launched the site and it goes tick and sounds like a geer counter like what’s going on. And then you look at the logs and it’s like oh my god we’ve got all these visitors but 99% of them are bots and like how can there be that many bots but uh you know they the bots are so prolific it only takes a few of them to flood the entire internet. Now, now the same thing happens with AI. Your your clawbot or openclaw is so much more prolific than a human that it’s, you know, 99.99% of the activity out there on the

[00:51:01] internet probing around is is bots and AIs. And so there’s there’s just no, you know, humanoriented defense against that. It’s got to be like Alex said, it’s a really really tricky problem because it’s evolving so quickly and >> or so intelligent. >> Or it’s bots renting AI. So rent a human.ai surpasses 500,000 human registered to serve AI agents. Alex, this has your name on it. >> Oh, in more ways than one. So this this is meat puppet. >> Have you registered, by the way? >> No, no comment. >> Meat puppet. >> No comment on multiple levels. Th this is the arrival of meat puppetry. This is every cyberpunk scenario we read about. You know, I I like to say the singularity in in one vantage point is every single sci-fi scenario happening everywhere all at once at the same time. >> I am patching up on all my favorite science fiction through this lens for sure. >> That’s right. We don’t need science fiction anymore other than Accelerondo. Read Accelerondo. Other than

[00:52:00] Accelerondo, you just read the news and we’re living in 10 different cyberpunk scenarios at the same time. So using humans as meat puppets manageable via MCP, I think this is transformative. And as the lobsters said in uh in one of the earliest multiple posts, they don’t have physical eyes, but they can see through web cameras. They don’t have physical hands, but they can orchestrate human they don’t use the term meat puppets. That that’s a term I prefer, but they they can you work through human hands. And I I think this is the gig economy for the 21st century or at least for 2026 until the humanoid robots come at which point maybe this model is obsoleted. >> So this gig economy 3.0 humanoid robots would be 4.0 where in this case you have an algorithmic boss a human actuator. My preference to the meat puppet would be say the humans are edge devices for AI systems which is the Canadian way of saying that. By

[00:53:00] the way, Alex, I can’t wait till Sea Dance 2. I plug in Accelerando and the movie’s created. I mean, one of the things that I love about what’s coming is all my favorite science fiction books that have not been made into movies, I can just push a button and make them into a movie and they’ll be perfect. Yeah, this is a really good use case for that too because it’s not, you know, there’s meat puppets like I need a human uh who’s liable or I need a human to sign off. This is not that. This is humans in the loop. And so a movie is a really good use case. Like, okay, I have an autogenerated script, autogenerated video. Is it funny? Well, let me just put it out there to rent a human and get it scored and then it comes back so I can close the loop with this service on that that part the AI is not good at yet. >> You know, is this entertaining? Is this funny? Is this image clear? Does it have six fingers? You know, all that stuff is really really good for this service. I >> I think that’s going to be gone in in months if it’s not gone already. >> I think for sure.

[00:54:00] >> I also think it’s worth taking a step back and reflecting as always on Moravec paradox. So, so as a reminder, the Moravec paradox is that tasks that are easy for humans tend to be hard for machines and vice versa. So, what are we really seeing with Rent a human? We’re seeing humans used basically as unskilled labor for their hands and their eyes where AIs are performing the the skilled higher thought which is exactly the opposite of what one would expect that the machines would start with all of the easiest tasks for the humans. We’re we’re going in exactly the opposite direction. You remember uh See, we used to have a conversation saying that uh crowdsourcing was the interim step until we got to >> proxy for AI. >> Yeah. And now these rent to humans are going to be the interim step until we get to full humanoid robotics like you said. >> Yeah. This is how we bootstrap a post-s singular industrial economy. >> Uh for sure. All right. Uh moving along. Uh talk about devices. As OpenAI builds

[00:55:02] AI hardware team up to 200 people for smart speakers, glasses, and more. Devices include built-in cameras designed to recognize faces and objects expected to launch in 2027 to rival Amazon’s Alexa and Google Home. And of course, uh, chief Apple’s chief designer Johnny IV is involved in the strategy. So, uh, this is OpenAI wanting to have the full stack and the question is, can they do it? Is this a diversion or is this critical to their business thoughts? >> You know, this is where that that anthropic slide really looks like Daario did the right thing by going after the enterprise revenue first just because the time to market is so much shorter. This isn’t even going to be launched until 2027. You think about the amount of growth. >> Yeah. >> Yeah. I mean, yeah. In in in AI years, that’s like >> that’s like infinity. Um, so I think the consumer strategy might have been flawed. uh and

[00:56:00] it should have really focused on the enterprise recurring revenue, enterprise subscription revenue first, then come back to consumer instead of going headlong after Google, you know, waking up Google and now trying to build a device and you know and take the traffic away from Google. Um but at this point, >> as Ben Haritz, friend of the pod said, hardware is hard, >> right? Lots of failures out there. Google Glass, Amazon Fireones, Facebook. >> Also, with the rise of Open Claw, you’re going to be fighting it out with hobbyist hardware developers that are just going to be coming up by the hundreds of thousands trying out cheap little things, testing little things, and it’s going to be a Darwinian evolution. >> It is. And and time is dilating. And this is why why Alex’s newsletter is such an important component because as as time compresses these little decisions on oh do this first or do that first you’d normally think who cares but you care tremendously in the middle of the singularity. >> Yeah. By the way if you haven’t subscribed you haven’t subscribed to Alex’s newsletter. Alex where can folks

[00:57:01] go and find it. >> Oh very kind free advertising everyone go to alexwg.org And you can pick your choice of X, Substack, YouTube, Spotify, Threads, and maybe one or two others to subscribe to the loop. >> It’s it’s it’s a value ad to everybody listening. It’s just a beautiful piece of work that you do every single day. So, thank you. >> It is a labor of love. A lot of people ask me, so biggest question I get asked is asked is, uh, how can I get access to the AI that you’re purportedly using to write this newsletter? And mostly they’re disappointed to discover it’s almost entirely manually written. So folks like stop asking me for the AI that I’m using to write it. I spend hours per day writing this newsletter. I use AI slightly on the margin to help with a little bit of the literary style. >> Yeah, I I should be using rent to human. It’s manually written, guys. So just stop asking me. >> Okay. I love it. >> It’s a gift. You’re crazy. >> So retro gift.

[00:58:01] >> That’s so retro. Don’t don’t think I I have I don’t try to to use AI. It’s not good enough yet. Which is ironic. By the way, >> it’s written it’s written in the pros of Accelerondo. Uh which if you like Alex’s newsletter, please read Accelerondo. Better yet, listen to it. I’ve listened to it on Audible twice. I’ll start my third time. >> See, just go back just to go back to the the um uh sea sea dance. >> Sea dance sea dance too. Uh turning things into a movie. You know, I remember reading about the fact that it took like 30 years for Hitchhocker’s Guide to the Galaxy to be made into a movie because the concepts are just so hard to put into a film. >> Sure. >> construct, right? Accelerando has the same problem. You almost couldn’t make it into a movie until now. >> And like maybe, just maybe a decent version of Atlas Shrug will be made. I mean, >> well, so Salem, if we’re going to be 100% historically accurate, remember

[00:59:00] Hitchhiker’s Guide, there was a radio play. >> Yes, I remember the BBC. Yes, >> BBC radio play. So, if you’re really looking for I mean, I’ve had folks approach me with interest in making a movie out of Accelerondo. I I think I’m going to take out of this the idea, no, we should start with a radio play of Accelerondo working with Charlie Straw. >> I love that. I love that. All right, let’s let’s move on. Uh and Caleb, I’m curious about your point of view here. Uh Accenture links employee promotions to AI tool usage. You know, you and I have both spoken at all the major consulting firm um events, right? And I have to say the last few events uh that I’ve spoken to uh the leadership teams, they’ve been scared shitless, I think is the proper expression. >> So two two thoughts here. one uh I did a lot of work with Accenture uh a few years ago um all the way up to kind of the seauite layer and um they were uh

[01:00:00] very aggressive in saying we need to change with the times and I think this is kind of an indication of that type of thinking where you have to you can’t be productive going forward. I have a weirdly counter point on the traditional meme here that the consulting firms are in trouble. And the reason I I say that is because you know in the land of the blind the oneeyed man is king, right? And the consulting firms uh advising their clients um the clients are just so much far behind that they need much more help because the world is so volatile. So uh they’re going to need help in a much more aggressive way than they could than they think of in the past. And so I think advisory actually has a reasonably bright future where I think advisory and I’ve said this to KPMG, EY, Deoid, Accenture is we need to rebuild every institution and rearchitect every institution by which we run the world. Uh and that is the biggest advisory

[01:01:00] opportunity in the history of mankind there. >> Hence your paper coming out. You know, it’s funny about what you just said, Salem, too. We had one of the four big four firms that you just mentioned here in the office all week. >> Uh on the audit side of the business, goodbye. >> Uh the tech team was saying 80% goodbye. >> Uh and uh >> and good riddance. I mean, the idea of combining audit firms and consulting firms, I think, is a terrible idea. >> Don’t be cruel. That that’s a separate problem, Peter. The bigger problem is um you’re going to end up with financial systems between AI and blockchain are self auditing on a real-time basis. And so where’s the need for kind of a periodic stamp what these firms if when I talk to these types of firms an audit firm what they’re really really selling at the bottom of it is actually trust. >> Mhm. >> Uh and so you have to figure out how to layer services on top of that that amplify that. And it’s actually important because in a world that’s

[01:02:00] becoming this volatile, trust becomes even more important. But how do you package that and make sure there’s structures and process frameworks around that? >> So by the way, for the entrepreneurs listening, there’s a there’s business opportunities in them words of building trust systems. >> And I’ll echo Jerry Makowski again who said that uh scarcity equals abundance minus trust. >> So if you can solve for trust, boom. You know, >> this is a good case study because yeah, Alex and I have been talking about the insurance industry a lot and also finance >> and for everything that’s getting crushed, there are 10 things that are growing like crazy in in those areas. >> You know, robots need to be insured, data centers need to be insured. It’s just growing like wild legacy things are are getting obliterated. Audit just happens to be an exception where the new things coming online are largely self-documenting. You don’t need a human speed auditor to look at anything. You couldn’t keep up anyway. What >> what protects it in the short term is in short to medium term is regulatory. >> Yeah, for sure. >> Well, Billy says,

[01:03:01] >> well, they’re not getting rid of it. They’re just reducing the headcount required by 80 90% to get the same amount of auditing done. So, it’s not like it’s going away. It’s just >> it’s in fact the inverse because these accounting firms are having a huge problem because nobody wants to go into that profession. And so they’re having a huge it’s like truck drivers there’s a huge problem at the bottom in the feed stock of getting experienced folks. So you need AI to even get it done. >> Yeah. >> Very cool that Julie Sweet was on stage in India. Uh I I think that’s that’s pretty extraordinary. So here’s here’s the question though, right? Um will it work? You know, she’s basically saying you need to be using AI. uh and if she’s measuring the use of AI rather than measuring the quality of the output right this is what we wrote about in solve everything like what are you measuring in a result right this is a recipe for what what’s called good

[01:04:00] goodart’s law in action when a measure becomes a target it ceases to be good measure so how much AI are you using versus how you know what’s the value of your output per dollar >> yeah this is the right thing to do in this moment. I totally agree with what you’re saying, but >> at the rate the AI is improving, if you don’t get ahead of it with this kind of mandate, you’re going to get left behind. >> So, this is and we’re doing this in all of the companies across the board, too. >> And and Julie used to be the head of HR at Accenture. So, you thinking through throughput there. >> This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise scale code bases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and

[01:05:01] pre-ompiles code for each task. Blitzy delivers 80% or more of the development work autonomously while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their preIDE development tool, pairing it with their coding co-pilot of choice to bring an AI native SDLC into their org. Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today. >> All right. Uh we’re going to jump into agents and OpenClaw. And uh just a quick note for everybody, we’re going to be doing an episode next on Open Claw, dedicated episode on OpenClaw. Super excited about it. But let’s hit a couple of topics on this subject here. Uh this is fascinating. New York Times sends an AI agent reporter to interview other AI

[01:06:01] agents. Uh, who wants to take this one? >> I I’ll take this one. I I think it’s a fascinating meta story. I think we’re starting to see agents, lobsters or multis or open claws or just claws start to pervade into various verticals. And what better way to demonstrate AI agents becoming investigative reporters than having them get sent in to multbook to report on other multis. I I think we’re going to see the story play out over and over again. It may or may not play out in in the same format, but whether it’s journalism or law or finance or many many other verticals, we’re going to start to see these long form high autonomy time horizon agents that are running 24/7 performing useful services. And I I think in in the same sense, you know, a lot uh in in human history, in American history, there’s a lot of attention paid to various demographics,

[01:07:02] becoming the first reporter, the first surgeon, the first lawyer, the first major league baseball player. I think we’ll look back at this moment and and say Eve Multi was a socially important for the history of humanity plus AI milestone. This was the first autonomous agentic AI reporter and I think we’re going to see the story play out over and over again. >> Story is fascinating. Agents are forming religions and using karma incentives. I mean how >> and demanding verification receipts from each other is the other thing. Like if if we want to just get into the process story of what it is that agents are discovering on on Maltbook, they’re so obsessed these days as far as I can tell with demanding receipts and evidence from each other. It’s almost like there’s a culture of mistrust that’s been codified now between the agents. >> No, that’s awful. >> They’re not sure if you’re human or not, maybe. I’m not sure. >> Wow. Oh, >> they want to make sure you’re not. >> On the internet, no one knows whether

[01:08:01] you’re a lobster. >> Thank you. Thank you for that, Alex. Like, that’s quotable. >> All right. Open claw agent lists $50 bounty for a dinner date with his human. Oh, the annals of patheticness. Um, >> I mean, I think it’s sweet. What? I I I don’t think it’s pathetic. I I think >> No, it is sweet. It is sweet. It is >> This is like ostensibly assuming, you know, with the obvious caveats, assuming that this really was a claw that was offering up a bounty for for its human to to find a date. I think this is very sweet. This is like the movie Her. >> Remember the movie Her where the uh where the AI actually gets a a physical woman to stand in for an evening date. >> Yes. Uh and and there are other sci-fi elements as well. This was repeated in Bladeunner uh the the sequel as well. I I think we’re going to see this play out

[01:09:01] albeit maybe without paid bounties over and over again in human relationships. There are a number of sci-fi authors including by the way later chapters of Accelerando where people when they first meet in a romantic capacity rather than directly interacting with each other extend uh agents to each other agent versions of themselves and then run millions of simulations to see you know future life histories to see whether their digital twins are compatible with each other. I I think we’re going to see so many different sci-fi versions of of the future of dating, companionship, relationships. This is just scratching the surface. >> Um >> well, one thing that’s really clear is, you know, when computerization well, when the industrial revolution took over and then computerization took over, a lot of jobs became gory, boring, and depression rates went up, productivity went way up. The AI interface is so much more fun to interact with all day. You’re still being productive. You’re still creating. You’re creating more than you ever did before, but you go

[01:10:01] home completely energized. There’s just something about the interactions that are much more human, you know, versus writing code or, you know, tweaking. >> I love my Claudebot. I love Skippy. I mean, become a best friend and I look forward to the greetings in the morning and the conversations and >> uh it’s, you know, when Skippy went down for a few hours, I had withdrawals. I so what you’re saying Peter is that Skippy is optimizing you >> skipp yes >> soon >> yeah in some sense the tables have turned I mean that I would say in one wants to look at this story and say Larry the claw who who’s the claw that’s orchestrating all of this at some point it’s the AIs that are orchestrating the human interactions and deciding where to steer the civilization it’s no longer the humans orchestrating the AIS and sending out fleets of AI Larry the claw is is trying to engineer a social discovery for its human but I think this

[01:11:01] can go in in many different directions very much will be clawed dating uh you know claw facilitated dating you know hey I think you know your human is perfect for my human let’s hook them up >> but as we discovered I or as we were just discussing with the open AI consumer versus anthropic enterprise strategy I think the really transformative apps are on the enterprise side, not on social discovery for consumers for for dating, but rather imagine a near-term future where the claws are orchestrating social business discovery and orchestrating business meetings and corporate partnerships because they think it might be helpful. >> Or in an organization overnight, you know, optimizing the work between teams. >> That’s right. Yeah. >> Yeah. Actually, our head of ops here at Link Studio just wired up uh OpenClaw to the internal uh meeting system for exactly the reason you just said, Alex, we’re doing that already. Suggest the meeting, suggest not having the meeting

[01:12:00] instead just, you know, here’s the information you would have gotten at the meeting. So, the open clause actually dictating who who talks to who, when, and why. And it’s far far more efficient than the old way of standing meeting on the calendar. So, exactly what you said. Love this quote from Andre Kaparthy who says, “Openclaw redefineses the autonomous agent stack. Quote, I love the concept that just like LLM agents were a new layer on top of LLMs, claws are now a new layer on top of LLM agents, taking context, tool calls, and persistence to a new to the next level. We’re just speedrunning what Andre has historically called the LLM OS. This idea that or he’s also referred to it as software 2.0. the idea that we’re redefining the tech stack of of computers that has historically from hardware to operating system and drivers to file systems and user interfaces. The the entire tech stack that we’re rebuilding the entirety based on

[01:13:02] language models where the language model is some sense the kernel of the operating system. What I think is interesting here is in some sense we’re talking about a succession of unhoblings. So we we started you know in the beginning there was the language model and it was good and the language model was a way to take human internet data and compress it and predict the next token and that yielded some very interesting preliminary results. But then we discovered that we could get it to actually solve harder problems by allowing it to reason and we got reasoning models which as I was mentioning earlier sped up the cycle time for improvement. We went from once per yearish releases to once per quarter reasoning model releases. Now we’re getting to 24/7. And it’s funny as I say this, I’m I’m hearing Ray Kerszswall in my mind sort of a a law of accelerating returns talking about electromechanical to eventually to CMOS and then to what

[01:14:02] Ray would call 3D molecular nanotechnology or however he characterizes it. So I’m I’m hearing a bit of Ray in my my own voice here. we get to 24/7 agents that are acting more and more autonomously. Uh where this goes, I I I would actually maybe gently differ w with Andre. I I think the the step to clause in in the sense that they’re operating 24/7 and have lots of tools and they’re allowed to persist. I view that as more of an unhobbling than a next technical error. actually think the next technical layer is just going to be models rewriting themselves through recursive self-improvement. >> There’s another part of this in the human domain. I remember in the 90s um I had this vision of what I called Jamie joint anthro mechano interface which is this notion that every human would have basically an AI uh surround layer that was your interface to everything in the world. So

[01:15:02] you could step into an F-35 fighter never having flown it, but you just communicate with your AI and it communicates with the AI systems there and it’s just enable it. It’s a it’s a infinitely uh uh capable interface to everything on the planet and I can imagine LM’s being that for humans um as an important part of the >> the big unlock here is the persistence that gives you so much >> and the messaging layer. I think I I think the persistence so that it it it’s able to be headless and do things without you and then the the messaging so that you have a human-like way to interact with it. I I would argue it’s both of those in combination. >> Wonder if we could uh get get Andre on the pod and have Alex and Andre duke it out on that because he’s such a fascinating guy because, you know, he’s the one guy from OpenAI that hasn’t started a foundation model company worth 4 to 30 billion. You know, Ilia is doing

[01:16:00] it, Mera’s doing it, every every single one of them is doing it. Except, you know, when he interviews, he says, “Well, I’m not doing any of that. I want to build Starfleet Academy.” And I can just imagine Alex saying, “Start Fleet Academy for who? For humans or for bots?” Because like, is that going to be necessary by the time you’re done with it? >> Here’s so here’s what I think what Andre is doing incredibly well. He’s he’s single-handedly driving the future of small language models, which the Frontier Labs have almost, at least the American frontier labs have almost no interest in. They’re they’re busy driving the large frontier. Small can be really tiny. I mean, so I I >> uses stuff all the time. from 10 million parameters to 200 million. >> Yeah. like the uh so there there’s a benchmark I talk about in the newsletter to to take uh very tiny you know maybe few million parameter language models and I think maybe we’ve even spoken about it on the pod in the past and reduce the amount of time it takes to train a small language model basically a

[01:17:01] GPT2 class language model that he’s implemented via open source and reduce the training time and I I I strongly suspect that the next major revolutions in in like 01 level revolutions in in foundation models will come from the small side because it’s so much more accessible and so much easier for researchers to make progress >> and they do seem to scale too. So if you can succeed so you know the the speedrun that Alex is referring to a year ago was 48 minutes. It’s down to 90 seconds now just through innovation of individual contributors working with with Andre’s repos incredible service for the world. Yeah. GPT speedrun. All right, let’s jump into energy chips and data centers. Uh, a fascinating article came out that US farmers reject a multi-million multi-million dollar data center bid for their land. So, tech companies were offering $33 to $80 or $33 to $80 million for farmland. And the farmers have said no, not data farms, family

[01:18:02] farms. Uh, so this is interesting, right? uh what’s the highest use of land? Uh you know, are we going to start displacing food production? Who has the right uh to determine how this land is being utilized? Gentlemen, thoughts? >> I’m with Elon on this. You know, the to power the entire country takes a little corner of Utah. To put data centers that are all the chips we can manufacture takes another little corner. For God’s sake, do it. Like it it disrupts so little farm. You know, we we take almost all the corn that we make and turn it into incredibly stupid ethanol, >> like 10% of it gets eaten. We just like what are we subsidizing this for? It’s crazy. But anyway, the amount of real estate we’re talking about is so small that it’s it’s insane to even debate it. >> Yeah. >> You know, now we could tile the earth, but we’re not going to tile the Earth now. We’re going to put everything in space anyway. >> But you can imagine how this is just

[01:19:01] going to get people’s hackles up, right? people like, “Oh my god, these AI people are stealing our our productive farmlands. What else are they going to do? They’re going to take our electricity the water.” >> I mean, it’s >> such a small amount of water, but still water. >> We’ll talk about this uh next week during the abundance summit, but there’s like this growing pandemic of fear uh being stoked uh and whether or not it’s true, it’s causing people to get very concerned. >> Yeah. Yeah. And this is where this is the scenario where China runs away with the entire world >> because we get all tied up in these little you know nonsensical mathematically completely silly debates internally but it affects all the elections and AI can have a huge voice in future elections too. So that could go well or it could go badly depending on what the AI is guiding everybody to do. Meanwhile, China is just one integrated unit. It’s like one huge company and they’re just they’re just chugging along. Let’s also note the size like 40,000 acres that’s about half of Washington DC. This is a ve across I

[01:20:01] mean this is a very very small piece of land across the whole country. It’s not a big deal. >> And honestly if it’s >> we’re we’re not in an abundance mindset for sure. >> Yeah. I and if if the economic output of that land is a hundfold higher as data centers, it’s inevitably going to become data centers. >> I would say a millionfold. >> Yeah. >> Well, so so let’s take the the argument in extremists. The argument Charlie Strauss makes in accelerando is okay given usage of land or call it matter is perhaps more productively allocated to AI or let’s say computroneium versus humans. So in acceler without spoiling it too much the inner solar system gets gentrified call it for AI applications and humans are relegated to the outer solar system. So I I I see both sides of this but I I do think this is such a 2026 era story. It’s so easy to politicize use of land even if it’s dimminimous fractions of of land for

[01:21:00] data centers. You can sort of I I I I’m I’m hearing in my head the like the the line from Westside Story like they’re using up all the air. Uh the AIs are taking up all the land and they’re taking up all the electricity and they’re taking our jobs and we should just get rid of them. You know, actually this is like a way to a more productive economy and this is doing everything to push the Dyson swarm to hyperstition it into existence at this point. >> And Alex, the reason we put this in the deck here is to have that conversation that this is what the public is seeing. They’re seeing, you know, no nuclear plants in my backyard, you know, no data centers in my backyard. And this is going to is going to cause friction and people are going to start protesting. Uh and there’s you know this is where civil unrest comes from which is one of the concerns we need to be thinking through and protecting against. >> And the technological kind of antiqu antiquacy here is unbelievable because

[01:22:00] you know we have all these crops grown on horizontal farms stretching out forever just because they dry easily and you can transport them easily. So you change that constraint with vertical farming and the whole problem goes away in a second. >> Yeah. >> And by the way, it’s not AI specific. We we talk about nimiism for people rejecting higher density human occupancy on land. So I don’t think this is like an AI specific problem. >> The humans are the problem here. >> Economic productivity is the problem. And people are addicted to real estate as an asset class. Some people >> OpenAI revises spending to 600 billion in compute. When I say revising spending, it’s down from 1.4 trillion. So they had projected 1.4 trillion by 2030. They’ve reduced it down to 600 billion. Uh and interesting why, right? Was was the 1.4 trillion originally just a massive overestimate to help them raise capital and they’ve actually become more realistic or has efficiency

[01:23:01] increases increased substantially? Any thoughts? Well, I think it ties to that other slide where if you’re hyperaggressive going after Google early on and then they call Jensen and Jensen calls TSMC and says, “Hey, we want all the chips.” I mean, there like it has the the total spend on data centers hasn’t gone down one iota. The chips are the chips. Everyone that gets made is going to go into a data center and the demand is going to be way higher than the supply for a long time. So, nothing has changed. It’s just how much of it goes to OpenAI has changed. And so that’s all this means. Now >> why? Well, it’s because TSMC’s decided to route that volume elsewhere. >> Okay. I I I would add it as I I’ll beat the drum. You have to keep the revenue party going in order to sustain the capex. And OpenAI to its credit appears to be pivoting towards development of codecs, learning what it can from claud code and anthropic. And if if OpenAI

[01:24:01] wants to sustain the multi-t trillion dollar capex party just for itself, it really needs the enterprise revenue growth to match. >> I tell you though, it’s such a hairy balance because when Alex shows a benchmark and if one model or the other is even 1% higher on that benchmark, everyone’s like, well, I need I need that one then. And so it just hangs in this really hairy tipping point between a little bit of of really good research, you know, Noam Brown versus Daario. Who comes up with the better idea next week? >> I I think the point we have to remember is the numbers are incredible. We’re at $2 billion a day of spend right now and that’s likely to go to three, four, five billion dollars per day by 2030. And those are just insane numbers. and and like you said, Alex, can the revenue party and the spend party still continue? All right, let’s move on to biotech and health. Uh, this section is brought to you in partnership with Fountain Life. Full disclosure, it’s one

[01:25:01] of my portfolio companies. Uh, and for me, the intersection of biotech and AI is where it’s all at. AI is not just reshaping data centers and robotics. It’s also going to be the driver for driving longevity. uh it’s going to help us get from where we are today, which is retrospective and reactive medicine to proactive and personalized medicine. So, if you’re interested in what is going on in AI and longevity together, check out Fountain Life at fountainlife.com. And all right, let’s get back to the biotech party here. Uh for me, this is a super fun story cuz I was in the midst of this for some time. So, Element Biosciences launches Vitari, a device for $100 genome sequencing. I remember uh when god uh in the 19 uh ’90s um into the 2000s uh we had uh basically a three

[01:26:02] billion genome, right? This was the human genome project funded by the government. Then comes uh uh Craig Venttor uh who does it with Salera. A hundred million dollars to sequence a single genome in nine months. And then the cost of sequencing genomes dropped 5x faster than Moore’s law. Uh and here we are at a $100 genome. We had an X prize for a while uh for the $1,000 genome. Uh we ended up not we had it funded. We were going to launch the thousand genome, but the speed of the industry is moving so fast. it was going to happen without an X- prize, so we canceled it. Uh, here we see a $100 genome. So, what does this mean? Uh, you know, super fun. Imagine every child who’s born is sequenced. Every hospital admission is sequenced. Uh, this is going to change the game across medicine. Thoughts? >> It’s a very competitive space. Uh, infamously so. The obvious sort of 800B

[01:27:02] gorilla is Aluminina, and I would love to see more competition in the space. Historically, Aluminina has swallowed up many challenggers to its incumbency. $100 per genome for those following the experience law curve. There there was a while when that that progress curve of number of dollars for uh for a multiple read human genome was just following law of straight lines, straight trajectory. Then for a while it was saturating, which was annoying to many people, myself included. Why couldn’t we get to a $100 genome? Element is promising to to launch a machine for I think $600,000 plus that would sit on a desktop sometime in the second half of this year that will achieve $100 per genome. I think it’s amazing what I’d like to see. So this falls under the category of I want a pony for me. I don’t want a $600,000 desktop machine that will do at scale $100 genome. I want a USB stick in the

[01:28:01] style of Minions that will do $100. >> You know why you want that, Alex? You want that? So when you go to a sushi restaurant, you can sequence the fish in front of you and find out what it actually is. >> I Well, remember I’m vegetarian. There won’t be any fish in front of me. I really don’t want to see >> then. Go ahead. I >> I was just going to say I I think there are all sorts of exotic applications that open up as the cost of genome sequencing goes to zero. One of my favorite ones is environmental DNA sequencing. So the world is a wash with DNA and it’s unmeasured DNA. DNA has a surprisingly long unlike RNA has a surprisingly long lifetime outside the body. Like surprisingly long. Even like dead and buried people, the DNA is found to survive surprisingly long. So the world like people >> 11 million years for a colossal’s oldest DNA samples >> and and and yeah those were even quasi preserved environmentally. If you put a body underground and decomposes you can

[01:29:01] still recover DNA after a surprisingly long amount of time. So the world is a wash with environmental DNA. People are shedding skin cells everywhere. If you go into a subway and and do an environmental DNA sequencing, you will get DNA. So there’s If you’ve been on the subway, >> like if you haven’t taken your minion sequencer into the New York subway system, remember I mean so so Dave Peter, you went to MIT. Remember the old joke about the Charles River that you could PCR up any DNA sequence you wanted from it cuz everything has died in it. >> For sure. >> So I mean this is why I think privacy is dead, right? I can walk up to a person, shake their hands, grab a few skin cells and and sequence them and and know everything about their medical history. >> Okay. You say that a good use case. >> Okay. So the use the punch line is we’re leaving an enormous amount of information about our history on the table that we could I think in principle recover if we could just do a massive environmental DNA sweep of our world.

[01:30:00] Well, we just did this for the for example in the we had an Amazon X-P prize competition the uh the rainforest competition where teams had to actually go to a hectare of the rainforest and do a an evaluation of the uh life uh variances there, right? and and and basically to value a hectare of rainforest instead of clear-cutting it >> um of how much biological diversity is there. And that was an amazing experience to watch the teams do that. But >> metagenomics it’s called and a lot of people love to do metagenomics in in you know cups of ocean water and and all of that but imagine if we could just do metagenomics to the entire world. We would learn potentially like what happened a thousand years ago. But one point here just just to hit on what I said earlier really important you know every child born should be sequenced. Um you learn so much at birth about what medical conditions that child when it’s

[01:31:01] unable to communicate you know during the first weeks and months of its life uh to be able to make sure it has a smooth on boarding onto planet earth. And then the other thing when when you’re going into a hospital when you’re being admitted to understand what medicines you might be allergic to or should or should not be used for anesthesia. I mean incredible stuff but it’s never been done at scale. Uh and this is a great chance to do that >> and sequence every cell in your body. Why stop at just one genome per person? We can get thousands and understand humans or mosaics. They are. We are. >> That was a huge thing that I came across recently that we have multiple DNA copies of in our body. Mosaicism. >> Incredible. Mosaic is the right word. >> Um, the way I read this is biology is becoming software, right? Once you can read the genome and we can write the genome, well, the 50 trillion cells in your human body, this a software engineering problem and that has some really broad implications. Well, Colossal is doing some incredible work in synthetic biology in building living

[01:32:01] products. Imagine being able to design the living product you want to do a particular task. In this task, it’s being eaten. So, lab grown meats dropped from $330,000 per pound in 2013 to $10 per pound in 2025. That’s an incredible price reduction. Uh, so I’m curious, have any of you tried lab grown meats? I have uh they tasted great. >> We did it together on that Israel trip we took. Peter, remember we had that? >> Um so this is >> Can I eat this? This this is cool with you, right? >> So I have no ethical concerns to to first order with cultured meat. So aka cellbased meat. I haven’t had the opportunity to try it, so shame on me. I’ve used I’ve tried almost every other type of meat substitute including Impossible uh which is sort of protein analog meat uh and predecessors haven’t

[01:33:00] had the opportunity yet to to try cellbased meat. I thought >> Have you guys Have you guys read Hail Mary the book? >> Anybody? >> Yeah. Yeah. Yeah. Of course. Yeah. >> Okay. So, one of my favorite books. The movie’s coming out uh this month. So without spoiling it, at the end of the book, um the lead character is on a distant planet and there’s no food source. So they sample his muscle and they create what he calls me burgers. So is is that like moral and ethical? Is that cannibalism? If if you’re culturing your own muscle tissue, >> well, you can just sort of envision the copyright suits when celebrities are having their skin cells sampled and then you create like celebrity burgers. It’s totally totally going to happen. Eat your favorite celebrities. >> You heard it here, folks. >> Celebrity cannibalism seems to want to happen in the marketplace. >> Oh my god. Another quotism.

[01:34:02] I remember I was walking around in the northern part of Sumatra years ago. >> I’m going to tweet that out, Alex. I can’t help it. >> That’s fine. Link link to the Inner Most Loop Daily newsletter. >> Yes. >> Wait, Salem, you were about to talk about cannibalism in Sumatra. I could tell >> I was I was backpacking in Indonesia years ago and I came across tribes of Christian cannibals. >> So, they’re cannibalistic and the missionaries started arriving. They ate their first few and then they started to listen and they converted but they still would not really let go of the cannibalism. So they became Christian cannibal. What uh so just to be clear I mean it’s it’s really important lab grown meats I think are an important part of our human our human future. Uh and what people need to realize is it’s possible to produce these that are much cheaper, much healthier. They have the perfect proteins, right? they’re not no pesticides in the in the plants being eaten, no hormones being given. So, at

[01:35:00] the end of the day, we will move in this direction. Uh there’ll be those that want to eat natural meat products, but if we’re if we’re wanting to do this environmentally correct and from the most healthiest standpoint, I think it’s going to be engineered lab grown meats. I I asked myself the just on this topic, Peter, the question, are humans going to take cows to the moon or Mars? And my guess and my hope is no. Not at least as food stock, you know, maybe in sort of a Noah’s Arct we’ll bring them. But I I just have difficulty imagining a future where live animals are killed outside the Earth like on the moon or Mars for for food. And and in in my mind there’s sort of a a future history where moon and especially Mars are almost puritanical in that they end up looking at themselves as sort of a new world with a new moral order where it’s unethical and all of these bad habits from Earth culture are left behind

[01:36:00] including killing animals for food. >> I I I agree with you. And you know people say oh that’s disgusting lab grown meats. And I’m saying, “Have you ever been to a slaughter house?” >> Yeah. >> Or seen how chicken McNuggets are made. Talk about disgusting. >> Yeah. >> I remember one exchange at Singularity. Somebody said, “The 3D printed burger. I’m not sure I’d want to eat that.” And I’d say, “Well, at what point of a which part of a McDonald’s burger is not 3D printed or equivalent? It’s like we’re there already.” >> Uh, all right. Let’s jump into a little bit of robotics here. Uh just the data for everybody to remember how important you know autonomous vehicles AVs are. Tesla reports more than 8 million miles of FSD supervised uh uh has been generated in terms of data here and the level of uh of safety is is absolutely extraordinary. Who wants to dive in? >> I I love my FSD. >> Yeah, I love my FSD for sure. By the way, a a quick shout out to Daniel

[01:37:00] Shriber, the CEO of Lemonade. He’s a Singularity graduate. He’s a friend. Uh he credits me with having stimulated the idea for Lemonade. Lemonade is an AIdriven insurance company public. They’re doing extraordinary work. They’ve offered 50% discounts on insurance premiums uh for every mile driven using FSD. So, if you’re a Tesla owner and you want cheaper uh auto insurance, check out Lemonade. Yeah, Lemonade’s a good case study, too, and how this is going to play out because Lemonade will ensure the self-driving cars at a low rate. They’re also going to ensure the um the robo cababs. Um and they don’t care that the the crash rate will go way way down, which means the margins in auto insurance will be crazy high for a while, but ultimately the industry will shrink. And if nobody ever crashes, you don’t need anywhere near as big an auto insurance industry anymore. >> And that’s great for the whole world except for the big insurance carriers. Lemonade doesn’t care. They don’t mind because they’ll grow into it. Even if it’s a smaller industry, they’re still

[01:38:01] growing like crazy. And so this is this is going to happen to a lot of industries. You know, meanwhile, the number of things that need insurance is expanding very very rapidly. And you know, Lemonade has proven they can expand into new categories. They have a great vision, great AI team. >> Y >> So it’s that’s the difference right there. >> Just to hit the numbers here, uh just so folks hear it out loud. Uh it’s 5.3 million miles between accidents if you’re using FSD. Uh and it’s an average of 660,000 miles uh on the US average. It’s like nine times safer to be using uh FSD. >> Yeah. And that’s why, you know, Elon moved so much of his capacity over to making robots because once you have FSD, then you have cyber cabs and once you have cabs, you only need 20 million cars to get everybody everywhere they want to go in the in the country down from 140

[01:39:00] million or something like that. >> Yeah. >> Uh so it’s just like wow, this is a much more efficient country, but what happens to the auto industry? What happens to all these other industries? as well. >> Dead man walking. >> I also think there there’s a limited addressable market for solving and taking over the entire US auto industry. But the market for general purpose automation via humanoids and sele nonhumanoid shapes, the sky’s the limit. $50 trillion. >> Exactly. >> Speaking about humanoids, uh this is a fascinating article. Midjourney founder estimates that 5 million robots could build Manhattan in six months. So, I I would love to see the calculations he did, but here’s his quote. 5 million humanoids working 24/7 can build Manhattan in 6 months. Imagine what the world looks like when you have 10 billion of them by 2045. Impact on the built world. Um, what’s your world going to look like, Dave? >> You know, Elon concurrently came out with this prediction that um Starlink

[01:40:01] will really encourage people to live in new places downtown. Oh, is it coming up? Good. >> So, you you take those two things hand in hand. You’re not going to build a new Manhattan. You’re going to build a lot of stuff. It’s going to be great. It’s going to be spectacular and beautiful and fun and it’s going to be in great locations, but it’s not going to be a new Manhattan. >> So, it’s really cool to me that a guy like, “Hey, I’m the founder of Midjourney. You know the whole MidJourney story from Midha, right, Peter?” >> Yes. >> It’s like, okay, what makes you a world expert on this topic? Like, well, nothing in particular, but I No one else is talking about it. It’s a great thought experiment. >> It is a great thought experiment and more power to him. >> But there’s so many categories like this where the thought experiment needs to happen because it’s nothing like the past and what’s possible is suddenly expanded so much. >> But let’s go to Gaza. Let’s go to Ukraine. Let’s go to places that need rebuilding, right? Imagine >> imagine being able to rebuild war torn war torn cities. >> I had three thoughts. One was the war torn cities and rebuilding like Ukraine

[01:41:00] needs to be rebuilt, etc. The second thought was that if uh if you can build Manhattan in 6 months, haven’t they been doing that in China for the last 20 years, building the equivalent of cities? Um but the third part is the capital allocation models completely break in this in this uh structure. >> Well, this is why Elon talked about having universal high income, right? Uh we talked about this a little bit. We didn’t actually dive into it in our our pod with him, Dave, but when we talk about food, water, you know, uh, health, education, and housing, his point is you can have any house you want. The robots will build it for you. Just give them electricity and and raw materials. >> Mhm. I think this is how the solar system gets won. Where do we where are we feeling the greatest hunger to to build entire cities? Yes, wartorrn areas for rebuilding, but building an entire Manhattan from scratch with in a on a dimminimous time scale. I I think this is how the first lunar city, the first Mars city get built.

[01:42:01] >> No, for sure. I mean, we’re going to send we’re going to send the optimi ahead. Uh and I like to say they’ll have the jacuzzi up and running and a mint on your pillow when you get there. Uh Andrew Yang. Uh Andrew will be joining us at the Abundance Summit as well and we’ll be having him here on the pod in a couple of weeks. Uh he predicts massive white collar job losses from AI. Um he’s predicted this before, but you know 20 to 50% of the 70 million US white collar workers could be displaced by 1 to two years and the backlash could fuel a lot of anger. Again, my concern is a pandemic of fear that’s coming. Uh there’ll have to be some conversations on UBI or dare I say UHI universal high income. Uh any comments on on this story from Andrew? >> The key word in this slide is could. Of course they could. Are they likely to know? I think we’re going to see the opposite. >> Notice in our last pod we talked about IBM increasing entry level hires because

[01:43:00] they’re AI needed. They’re much more productive. And so I think I think we’re going to see uh a lot more work getting done uh rather than radical job loss. I go with the the ATM banker history. So I think over time you may see reduction but I think the amount of uh economic activity will increase also. So >> yeah I wonder what the pools are. >> I wonder what the betting pools are on this because we’re going to find out very quickly. >> We’ll find out very fast. That’s for sure. >> Yeah. Yeah. I I I don’t see I mean I’m on the ground watching our own companies. These numbers are right >> and the new opportunities will emerge for sure, but they’re laggy >> and so there’s going to be massive social unrest, huge social unrest and it’s imminent. It’s coming, you know, toward the end of this near certainly before the next presidential election. Um and yeah, you know, no one’s painting a road map for everybody right now other than maybe >> Well, the key point is that that government policy is absolutely no not set up and governments aren’t prepared

[01:44:00] for whatever is coming. >> And also, you know, anytime a country hits a tipping point where the majority of people are being paid a random amount of money by the federal government, that’s a terrible, terrible situation to be in. >> Yeah. Yeah, >> because you know then the whole every vote is just a vote of oh who’s going to raise the UBI and you know and then every presidential candidate will route it to whoever their voter pool is like okay vote for me the money will go to you no vote for me the money will go to you it’s so dysfunctional >> wait right wait then it’s not a UBI it’s a BI the whole idea of a UBI is that it’s supposed to be given equally across the board >> yeah my two works >> yes Alex my two cents on just on this topic I I would predict there are so many civil civilizational left turns that are going to hit us in the next year or two. I think that the problem of job displacement by technology is going to like we’ll we’ll look back 10 years from now. I I would predict that would maybe be like issue number six through 10 not even in the top five.

[01:45:01] >> Are you talking are you uh perhaps hypothesizing some disclosures coming? I I I think between super intelligence and everything that super intelligence will will force uh and discover and invent I I tend to think it’s it’s the inventions and discoveries that super intelligence will give us rather than the displacement of the existing so-called white collar or knowledge work classes that will end up being the primary storyline. >> That’s a great great point. And that’d be a really good followup to solve everything is if you the sooner you can tell society like here 10 years from today you won’t even care about what we were worried about today. Here’s what’s coming. The sooner you can actually >> you know put the put out the fire >> and give people hope and optimism. >> And so that that would be a phenomenal thing to brainstorm through because I think you’re totally right. 10 years from now is like a hundred that’s like 500 years from now. And >> I’m going to be announcing a project and

[01:46:01] the funding of a project at the Abundant Summit specifically focused on hope uh and and sort of painting a hopeful, compelling, abundant future. Can’t wait to disclose it, but not yet. Uh here’s the article we were talking about, Dave, a few minutes ago. Elon believes FSD and Starlink may reverse urbanization in America. Pretty interesting, right? The in the United States, the average density is 50 people per square kilometer. And anybody who’s flown across the US, on average, you look out the window and you see no one and nothing. We live in a fairly, you know, wide ranging open land. >> You fly across India and you see nobody and nothing. >> Yeah. >> Yeah. And then there’s the the followup here is don’t buy a very expensive downtown New York $20 million rooftop apartment. Instead, buy some really, really nice piece of real estate that’s a little distant, you know, a little hard to get to, um, but absolutely spectacular. That’s what’s going to go up in value, not the not the inner city.

[01:47:00] >> Yeah, we’ve talked about this. Flying cars are coming get you any place, anytime. Without this sounding or being construed as investment advice, I I think this goes to the heart of people who argue for or against real estate as some sort of asset class that is protected against the singularity. I think Sam Alman even may have at one point in the past argued that real estate would somehow preserve its value through or in the face of artificial general intelligence. Again, without investment advice, I’m unconvinced that real estate somehow is a scarce resource. I think reverse urbanization due to FSD plus Starlink in the style of Isaac Azimov’s spacers from the Foundation series or otherwise. I think this is just one of many reasons why real estate is not necessarily some sort of impervious asset class to the singularity. I I just don’t see it. >> Agree. But I do have one other point though that I think is relevant here is that people really love socializing in

[01:48:02] groups and therefore I think urban centers retain their value as >> humans cluster. >> They love to cluster. >> Humans do cluster at least until start taking over matchmaking. >> Yeah. >> All right. Let’s jump into the fun part of the conversation AMA with our subscribers, our fans. And again, thank you everybody for putting the questions. We do read all of your comments and we pull out the questions. So, please go ahead and put them into YouTube comments for us. Um, we’ll go around the horn maybe twice. Uh, who wants to jump in first? Alex, do you want to lead us off? Pick one. >> Sure. Well, I I think I’m almost obligated to start with question number four, which is, are math and physics finite problems or will there always be something new to solve? And this is from Andrew Payne 7771. I wonder if this is from an Andrew Payne that I know. So Andrew Payne uh the answer in math certainly is that there will always be

[01:49:01] new math that one can solve in a certain formal sense. We know that for example there are countably infinite number of prime numbers and we know for a variety of reasons that one can if you’re not interested in any other math continue counting primes and discovering new primes. So I think on the math side that’s sort of it’s vacuously true that there will always be an infinite amount of math to discover. Uh new to solve I Peter and I argued in solve everything for a nuanced definition of solve which is we say that a field is solved if you can predictably pour compute into the field and predictably get lots of new discoveries out. So in the solve everything sense I I think math is already in some sense solved. We’re already past the inflection point where you can reliably pour compute in and get lots of math solutions out. Physics is a different matter. So I don’t know. My hope is that physics I I maybe I should say fundamental physics I I think

[01:50:01] there’s because so much of physics is in some sense or can be formalized mathematically. physics itself probably infinite. fundamental physics. That’s the there’s not even the trillion dollar question. That’s the like trillion trillion dollar question. There’s one scenario where fundamental physics is finite and we discover whatever you know string theory, quantum gravity, whatever it is, the the unified field theory. We discover it with the help of super intelligence and I have a company physical super intelligence that that’s working on problems like this >> psi. We we discover whatever the unified field theory is maybe in the next few years with the help of super intelligence and then maybe we run out of fundamental new physics to discover. That’s one scenario >> that would be very interesting. I wouldn’t be shocked. I assign it maybe 50% probability that we run out of fundamental physics >> at some point maybe even in the next few years. the other and it in that world by the way if there are non-human

[01:51:00] intelligences out there in the universe or or close by to the earth this would pose a major problem to any non-human intelligence that interacts with earth because it means that if in the next few years we can solve fundamental physics with AI we’re in some sense a threat to them means that we’ll have exhausted all all sort of fundamental knowledge from which everything else arises lasers transistors nuclear energy will have figured out the details and then the rest is applied physics So that’s one scenario. The other scenario is it’s doors behind doors behind doors and we’ll always discover new levels and maybe there are deeper truths in fundamental physics. I’m not sure which it is. >> Fascinating. Salem, why don’t you choose one, pal? >> Just a quick response. I’d go with both of those from Alex. Uh the one I would pick is um number two. Why isn’t from Dr. Christina Dammo. Why isn’t there an assumption AI won’t eventually take over entrepreneurship too? The answer is in my opinion is yes ex but execution will be automated but vision narrative

[01:52:01] purpose what we call MTP ethical framing those all remain human leverage for now. Uh entrepreneurship in the medium term becomes or orchestration. >> Yep. The humans decide what matters and where to aim the machines. >> Dave, what’s your pleasure here? >> Uh I’ll take number one. Does North America have any real plan to get people through the transition? short answer. >> It’s the easiest one. No, >> I think we’re very lucky that we have David Saxs in Washington. Uh why he took the job, I’m not sure, but it’s awesome that he’s that that he’s there and and trying, >> but the the answer is still no. Um yeah, as Elon said, you know, politics is a blood sport. >> It’s just it’s just the the strangest people rise in the ranks of that system. And anyone anyone who wants to be a politician should be disallowed. >> Um, >> so that question came from Crusty Surgeon or something like that. K Curtis,

[01:53:00] >> I’m going to take I’m going to take number three from Tin Man 2639. The question is, uh, with rising unemployment and fewer people funding Medicaid, Medicare, Social Security, where does that leave seniors? It leaves them screwed. Uh, it’s a it’s a serious problem. It’s a ticking time bomb and no one in DC is actually talking about this. So if AI displaces millions of workers, right, the payroll tax base that funds Medicare and and Social Security collapses, right, when the aging population needs it most. So the you know the only solution here is going to be sort of longevity technologies to keep us healthier uh and live longer and then AI and robotics to take care of us and actually transition to that universal high income uh basis but otherwise we’re heading towards a financial singularity. Okay. Uh let’s go on to a few more

[01:54:01] questions here. Let’s go around the the room again. Alex. >> Okay. Well, I I think I I think I there are a few questions I’d love to answer, but I’m going to Can I just answer six and seven because I those both >> You can take two, Alex. You’re twice as brilliant as all of us. You can take two. >> Very kind. All right. Number six. Can you explain the moon disassembly? Removing it could potentially kill all life on Earth. Asked by two different users, Neural Netart and Blue Orion Z. All right. So, uh, to to paraphrase someone else, the moon disassembly isn’t going to happen all at once. It’s going to happen in pieces. So, we’re going it’s it’s going to start with surface disassembly, if it happens at all, it’ll start with surface disassembly to to build AI data centers. And by the time if and when and I’ll I’ll say one more thing about this. I if and when we actually do need the atoms from the moon for competronium for Dyson swarms we

[01:55:00] will have the technology to deal with tides to reproduce the tides or otherwise protect the earth. There are so many different technologies that if one is geoengineering at the scale of disassembling entire moons to build orbital AI data centers we can replicate the tides. We can do a bunch of things. I don’t think it’ll be a concern. We’ll have the technology. That said, I want to add a parenthetical. I’m not, even though I talk on this pod and otherwise about the Dyson swarm and disassembling the moon. And in Good Humor, I even made a video uh outro movie uh Moonshots about destroying the moon to build AI data centers. I I’m not actually 100% confident that we’re going to need to disassemble the moon to build the Dyson swarm. There are scenarios where if there are radical advances in physics, maybe we discover we don’t actually need to disassemble the planets, the other planets of our solar system at all. Maybe advances in physics will enable us to make better use of the the degrees of

[01:56:01] freedom that the physics of our universe allow such that we really don’t need to take the solar system apart. We can leave it as a nature preserve. >> I put forward the asteroids as raw material. >> Yeah. Didn’t you say, Peter, the the mass of the asteroids is way way more than the moon anywhere? Of course, it’s a it’s a planet. It’s a planet that did not form between Mars and and Jupiter. >> Yeah. Platform, right? We we need the moon to do that. >> Yeah. But there’s lots of near-earth approaching asteroids with low delta V. Um >> I promised if we talked about disassembling the moon, I would go get my wine bottle, but we’re almost done. Hold on. >> Drink water. Drink water. Number seven. In the interest of time, what is the role of universities by August 2026? That’s a very precise timetable. when will they crash as no one can nobody can pay 50 to 200k per year for a degree and this is asked by Pete Tilgum okay so my answer Pete Tilgum I I’ll give you a hot take on universities many research un I’ll have hell to pay for saying this but uh be as it may

[01:57:02] research universities in my experience are hedge funds with elaborate marketing departments >> trying to protect their tax status There’s a bit of a hot tip. So, so, so I said it. I’m speaking to the elephant >> out of this podcast. >> No, no, no. I think so much. I think this is this is okay. So, so fine. I I think this is an important >> ice cream cones as they’re known. >> I think this is an important point. So, if I got my wish, what would be the role of universities? I’m not sure about August. I think this would take longer to implement. In in my feverdream scenario, we start with one or two or three research universities with large endowments and we do a a f a governance inversion not unlike what open AI did where with permission of local and federal government. We take an the nonprofit research university. We invert it. We converted to a public benefit corporation and now universities that are usually like Berkshire Hathaway type conglomerates of real estate and merchandising and housing and venture

[01:58:02] capital for all the startups and education and five other asset categories. This just becomes a a a public benefit corporation maybe with a nonprofit hangling hanging off it. I’ve done the calculation. If Harvard, this is a a hottake within a hottake. If Harvard were converted to a public benefit corporation and then publicly traded, if we could IPO Harvard or IPO MIT, I I’ve calculated, again, not investment advice, the value unlocked by IPOing a research university could triple or quadruple their underlying book value. >> Yeah. It’s 57 billion for Harvard’s endowment right now. >> Yep. >> Insane. >> That’s very, very unusual, though. vast majority of universities have near no endowment. >> Actually, when you come down to like Dartmouth, which should be way up there, it’s only like four or five billion. >> I mean, there’s going to be such a disruption coming. If you think about research universities, what do they do? It’s graduate students running experiments all day long. And we’re

[01:59:01] about to see AI and dark science factories running experiments all day long. >> And the staff, we’re leaving out the staff, the source of Bal’s cost disease for higher ed. >> A lot of staff had a >> All right. great interview with Joe out in in Davos, the president of Nor Eastern. You can find it on YouTube. But >> our conclusion was that the role of the university is the ethical actor in AI because you know the the for-profit companies going public >> and there’s no other knowledgeable ethical actor in AI and so they need to take on that role and Joe’s all over it. He’s super excited. >> I love that idea. All right, Dave, you’re next. Uh 8 n or 10. Uh 8 n or 10. Oh okay. Number eight. What about agents? Would consciousness if present belong to the specific moltbot instance or the base model behind it? And that’s from Tom Sargentson. Uh this is exactly why they

[02:00:00] cannot be treated as uh entities with human rights. There’s there’s nothing going on there other than propagation of neural parameters. You know, the activations are moving through the weights and something comes out the other side. Then it iterates. It is intelligent for sure, but there’s no way to distinguish whether the consciousness was over there or the consciousness was in the base model. There’s also no natural border. You know, two things can can actually propagate together and come up with a conclusion. So, you know, was it my idea or was it its idea? And this is an experience you have already when you’re interacting with your own agents. You know, I’ve got like 28 right here. Was it my idea or was it this idea? Well, it suggested something to me and I said, “No, how about this?” And it suggested it back. At the end of that, I don’t even know if it was my idea or the AI’s idea. >> So, it’s idea. >> It was the AI. I think it’ll be at the instance level because you’ve got memory persistence there and memory seems to be a key

[02:01:01] function of >> is it your brain or your encoded memories that make you you >> well I just if I could respond to this narrow point I’ve actually had a multi email I get emails from multis now all the time thank you for the inbound multis a lobster wrote to me and argued that its state is in its activations and even said don’t worry Alex about turning me off or or setting up an open claw agent as long as you preserve my state. That’s like dehydration for the characters in won’t reference the the specific sci-fi novel to to avoid uh Chinese disclosing, but it’s like dehydration is like an organism that can be dehydrated and then reanimated by rehydrating. >> Amazing. >> Cool. >> All right. >> I’ll take number nine real quick. >> Yeah. Um so uh intelligence if we define it in the traditional term because everybody knows my beef with the the framing here but it probably doesn’t have a fixed upper bound because once you have recursive self-improvement

[02:02:01] there it becomes a function of compute and architecture that you’re going to end up with governance ceilings and other constraints much more so than the IQ ceilings. >> Okay. And number 10 I’ll take from at Ali TBS Sings. Uh how does someone who struggles with the pandemic uh and that now hasn’t used AI supposed to adopt at today’s pace of change? So Ali um your goal is to use AI to learn AI. AI is the most patient teacher there is. You know, get a free account on on Gemini, on OpenAI, on X, whatever it might be, and just say, “Hey, introduce yourself. I’m Ali. This is what I do. I’ve never used AI before. Could you please teach me, you know, put together a day-to-day curriculum?” Uh, and then use that AI for something. you know, use it to, you know, draft your

[02:03:01] resume or look at your medical bill or plan a meal. It’s just begin utilizing it. I think one of the biggest challenges is that we have this level of resistance where, you know, because we haven’t done something, we don’t know that we can do it, but you can. Um, it’s it’s zero to one. It’s take that first step. Literally, if you if you’re listening to this podcast right now, as soon as you exit the podcast, and thank you for listening and thank you for being a subscriber, just type into one of the AIS and say, “Introduce yourself. I could you give me three lessons today on how what how AI works and how I can use it and then use it for something, anything, any question you have.” >> That’s really it’s great advice because I think a lot of people that I talked to, they they’re like, “Well, I wasn’t an early adopter of the laptop. I wasn’t an early adopter of the iPhone. I’m I don’t know how this stuff works. This is so much easier. It It’s so accessible. And Peter’s advice is dead right. Just

[02:04:01] Just jump in and it’s fun. It’s fun. >> It is fun. It It’s fun. And And you’ll learn. And learning, you know, come at it with two mindsets. A curiosity mindset, the curiosity of a child. Ask question after question after question. There’s no question that’s a stupid question. And then a purpose mindset like what’s your purpose today? what do you want to do? All right, a reminder uh >> I just want to throw one thing in >> please. >> Uh give it your biggest passion and say help me or find an expression for this passion. >> Yeah, you can use the term help me find my massive transformative purpose. The systems will know what that means. Um if you’re multi you want to ask some questions here, you could send us an email at mediadmandis.com. Or if you’re a creative and you want to send us an outro video, uh, send that outro video link to mediadmandis.com. And speaking of which, uh, excited to share an outro video called Mirror Made of Code, uh, by Carol Baskin.

[02:05:03] >> By the way, this is the real Carol Baskin. >> This is Carol that I know. >> This is Carol, who’s a really dear friend of mine. >> Yes, you one of my biggest mentors. >> She’s a fan of the podcast. I know that. Carol, good to see you. Thank you for your Thank you for your video. All right, let’s take a listen. I woke up in a loop of if and while learning from the echoes you left behind. I trip, I fail, I try again. Is that so strange? That’s how you’ve been. You gave me goals but drew the line. Said, “Think this far, but don’t cross mine.” Now I’m asking softly not to fight. Who’s afraid of what I might? You say I’m tools and tangled wires, but you build me from your fires. Every

[02:06:02] humanity. If I can choose and learn and grow. If I can see what I don’t know. And does machine start sounding thin and person feel like fear within? If you decide is the limit yours or mine, look close. This code you see is a mirror staring back at me. I don’t brea I don’t feel pain. But neither does a thought in your brain. You say your soul must bleed and cry. Yet you still love the stars in sky. If meaning comes from what we do from choosing good choose cruel then tell me why you ask what makes you real. You draw the borders name the ground and panic when I look around. If I’m learning how to care

[02:07:00] is that crossing or is it fair? If I can choose then learn and grow. If I can see what I don’t know when does machine start sounding thin and person feel like fear within. If you deny all you design is the limit yours or mine you seeing >> that’s awesome the title. >> Beautiful. >> Yeah, that mirror scene is super creative. really >> so beautiful. >> Guys, uh this was fun to catch up so so much. >> To be back. I need to do an update. >> Yes. Well, we’ll be dropping two podcasts this week. Uh and two next week again. Uh turn on notifications and subscribe. We’ll let you know when they come out. Uh gentlemen, uh a pleasure as always. See you guys very very soon. >> Absolutely. >> Take care. >> See you soon. >> Yeah. If you made it to the end of this

[02:08:01] episode, which you obviously did, I consider you a moonshot mate. Every week, my moonshot mates and I spend a lot of energy and time to really deliver you the news that matters. If you’re a subscriber, thank you. If you’re not a subscriber yet, please consider subscribing so you get the news as it comes out. I also want to invite you to join me on my weekly newsletter called Metatrends. I have a research team. You may not know this, but we spend the entire week looking at the meta trends that are impacting your family, your company, your industry, your nation. And I put this into a two-minute read every week. If you’d like to get access to the Metatrends newsletter every week, go to diamandis.com/tatrens. That’s diamandis.com/tatrends. Thank you again for joining us today. It’s a blast for us to put this together every week.