Open source advocates [music] with transparency and trust. Every founder's journey [music] from the vision to the bust. Not just polished victories, but lessons from the fall. 'Cause the future needs the builders who will answer to the call. AI revolution, platforms shifting every [music] day. We connect the dots and light the innovators' way. [screaming] We're the Moonshot Mates, breaking through the noise. WTF Just Happened, with a clarifying voice. Innovation's messy. Disruption's never clean. We'll show you what it means. The story in between
[00:01:01] with moonshots, building today so tomorrow will shine. [music] We're the Moonshot Mates, breaking through the [music] noise. WTF Just Happened, with a clarifying voice. Innovation's messy, disruption's never clean. We'll show you what it means. The story in between. Moonshots, building today so tomorrow will shine. [music] [00:02:06] >> You know, a huge amount of expectation on GPT-5. What do you think of it? >> Right now, everyone on this should be trying to get as much data as they can, because the models are coming now. We have the right models. >> The real power will come in the cost drop, which will make it much more accessible to a lot of people. The anticipation of this launch was up there with the top three product launches of all time. I think they actually showed some incredible capabilities. >> As the cost of talent is increasing, that's going to force frontier labs to start competing based on algorithmic insights and ideas. >> Ladies and gentlemen, welcome the Moonshot Mates. [music] Oh, ladies and gentlemen, let's give it up for the Moonshot Mates. >> Welcome, everybody. Welcome, welcome. All right. I love you guys. I love you guys. Any fans of the Moonshots podcast here in the room? [cheering] [00:03:01] >> Oh, love to hear it. Love to hear it. So, listen, I am so blessed to have an extraordinary group of brilliant individuals that I get to work with twice a week. You know, the rate at which we're generating our Moonshots podcast is accelerating. We're going to be moving into an Airbnb together and doing a continuous podcast soon enough. All right. I want to bring them out one at a time, because they're all extraordinary. Let's give it up first and foremost for DB2, Dave Blundin. Dave, come on out. Nice. Dave Blundin, everybody. All right, next up, >> my brother from another mother, Salim Ismail. Give it up for Salim. >> Woohoo! >> All right, we're about to make magic happen, because these two gentlemen have
[00:04:02] never met in person. Let's bring out AWG, our resident genius, Alexander Wissner-Gross. >> Yay. He's real. Oh, he's real. >> Good luck. >> All right. >> And live from London, it's Emad Mostaque, everybody. >> Come on. >> It's Emad. >> Come on. Put it up, >> man. I've got to get something. [music] I just need my glasses up. >> Wow. >> My wine. >> All right. Huge. >> All right. Let's grab our seats. No, of course he needs to bring a glass of wine out. [laughter] >> Oh god, it's real. >> OMG. So, first of all, just to make a little bit of Moonshots podcast history here, Alex, please meet
[00:05:02] >> That's flesh. That's... >> Our meat puppets meet. >> This proves nothing. >> Nothing. [laughter] We've been 3D printing him for a while. >> There has been conjecture for the last year or so over whether Alex is an AI. >> I am freshly bioprinted. >> You're a Neuralink. These thoughts aren't real. >> Ah, so, gentlemen, I appreciate having you guys here at the Abundance Summit. This is a live broadcast from the Abundance Summit here in Palos Verdes. Year 14 of our 25-year journey together, and I'm excited that you guys are going to be on stage with me every year from here on out. >> Wait, you just... the 24/7 Airbnb podcast. >> Think it's a reality. >> Tell your family. >> Cameras in the bathroom, the whole nine. Okay, [laughter] that'll sell. >> Uh, well, okay. Welcome to a special episode of WTF Just Happened in Tech, your number one podcast for AI and
[00:06:01] exponential tech. Our mission: getting you ready for the supersonic tsunami heading your way. >> Um, and it's a lot. It's a lot. All right. Shall we dive on in? Let's begin. >> All right. Here we go. >> So, let me begin. We made an announcement here at the summit that I want to share with everybody on the Moonshots podcast. Something near and dear to my heart, something that I've concocted with the XPRIZE board, which both Dave and Salim are on, which is the launch of a global competition called the Future Vision XPRIZE. I for one am just sick and tired of all the dystopian content on TV and in the movies. We are basically being brainwashed that all AI and robots are dystopian: killer AI, killer robots. It's Terminator. Black Mirror. And in fact,
[00:07:00] if that's the only future that you see, then why would you ever want to live there? >> Yeah. Yeah. So true. So much of what we build is intentional, and it comes right out of our vision of the future, which comes straight from the media, and then we create what we see. >> Yeah. >> If you change what we see, you're going to change what we build. >> Yeah. You know, I say over and over again, we're holding two futures in superposition. One future is Star Trek, where we're collaborating with technology, working with technology. And that's an amazing future. That's the one I want for myself and my family and my community. The other one is the dystopian future. It's Terminator. It's Black Mirror. It's one where technology is suppressing us, not enabling us. So about a year ago, I sat down with Rod Roddenberry, the son of Gene Roddenberry, the creator of Star Trek, and said, "How about we do something to incentivize the next generation of Star Treks?" And I went to my friends at Google, and they
[00:08:02] brought in Range Media. We brought in XPRIZE, which is operating this competition. We've raised $3.5 million for a competition that launched yesterday and is going to run through the Moonshot Gathering, which I'll mention in a minute, on September 25th, for the finals. Let's roll the video. You know, this exists because of a TV show, and I'm not exaggerating. Martin Cooper, the man who invented the mobile phone, said he built it because he saw it on Star Trek. He saw Captain Kirk flip open a communicator and thought, "Hey, I can make that real." The iPad? It started as a prop in Star Trek, too. Video calls: Star Trek. Voice assistants: Star Trek again. Props became products. Fiction became multi-trillion-dollar industries. So, here's the question: what's a vision of the future that excites you? What stories offer humanity a hopeful, compelling, and abundant vision of what's to come? We're putting up $3 million in prize money, plus millions in film [music] financing to make your movie. Our program, in partnership with the XPRIZE Foundation, Google, and
[00:09:00] Range Media Partners, is called the Future Vision XPRIZE, and it's one of the world's largest competitions to address humanity's greatest need: hope. Create a trailer or short film, 3 minutes or less. Show us and the world your vision of the future. That vision could become the next blueprint for all of humanity. Find out more and register at futurevisionxprize.com. So whether you're watching this on X or watching this on YouTube, if you're a creator, please go and register. By the way, how awesome was that opening video from CJ Truheart, one of our Abundance members here, who gave us our first outro piece and started a tradition that we've all enjoyed so very much. So, thank you for that. All right, next up, we're announcing something important here for all our Moonshots listeners: we are a go with the Moonshot Gathering. About 500 of you put down a $100 deposit. Congratulations. You got in on the early
bird special. And it's a go on September 25th in downtown LA. We've rented out the United Theater. It's going to be an extraordinary event. Our Moonshot Mates will be there with us in [00:10:01] downtown LA. In addition, Astro Teller, the Captain of Moonshots, will be there. Got to have Astro, right, if it's about moonshots? Cathie Wood, Anousheh Ansari, and a number of incredible CEOs I can't yet announce, but believe me, they'll be extraordinary. At this event we're going to have the five finalists for the Future Vision XPRIZE there. We're going to have some of the top producers and directors there, along with many of you, voting on which of these are going to win. We're going to be going from probably 10,000 or more entries, narrowing it down to the top 100, the top 50, the top 10, and the top five. And we'll be awarding the top one. We've raised $3.5 million to support this competition. In success, we will make at
[00:11:02] least one film and potentially two films. >> You mean, like, full-length feature films? >> Full-length feature films, global, around the world. And these films will hopefully depict what the future could be like. >> Oh, and all you have to do is come on September 25th and watch the first 10,000 [laughter] >> and vote on them. >> Vote it down. >> Yeah. I'm excited for what you, Alex, would see as your vision of the future here. Post-scarcity inspirational videos are already baked in. I would be disappointed if, by the time we get to September, we don't have a thousand videos of ultra-high inspirational quality, generated for nearly free at this point. >> Yeah, it's amazing, the tools we have to create visions of the future. But it's important. You know, the number one genre of movies out there is horror films. And what are we teaching our youth if we're constantly... our brains are neural nets, and we train our neural net every single day by what we watch, who we hang out with, what we listen to. So
[00:12:01] you could not pay me enough money to watch the crisis news network. >> When you were first pitching this idea... yeah, the crisis news. So when you were first... >> CNN, for those of you who are slow. >> You made a point that I had completely not noticed, which is, if you go back to Star Wars, you know, C-3PO and R2-D2 were incredibly lovable. And the kids that are now building AI had little stuffed R2-D2s when they were kids. But if you track the trend in the movies after that, they got more and more dystopian all the way through. And I think it just got cheaper to create explosions and deaths. >> Yeah. Yeah. >> You know, using AI... and it just really painted a picture that got our amygdalas going, but not our hearts going. >> Yeah. >> So, you know, at the Moonshot Gathering, we're going to have the winners of that. We're also going to be launching something called the Moonshot Hackathon. More information about that to come. And that evening at the Moonshot Gathering, we're going to have an extraordinary unconference.
[00:13:00] We're going to have XPRIZE teaching people how to design an XPRIZE. We're going to have the team from Google X teaching you how to create a moonshot organization inside a company, and how to do storytelling. We're going to have Cathie Wood talking about her Big Ideas 2026. An incredible event. This is an event in September for builders, for entrepreneurs, for coders, if they still exist. Um, so if you're interested in coming, >> unemployed coders here, >> if you're interested in coming to the Moonshot Gathering, go to moonshots.com. Another announcement: we have now acquired moonshots.com as our URL to host all of our activities. So, congratulations on that. Um, you know, I still remember, Emad, was it three years ago you were on this stage and you said coders are going to go away? >> Yeah. In the next five years. >> He said in the next five years; they've gone away in three. But
[00:14:01] you know, it was amazing. When you said that on this stage, it made news throughout India. Do you remember that? >> Yes, I got lots of emails. >> You got lots of emails. >> Many, many emails. >> It was a correct prediction, and you were so right about that. >> In today's lexicon, you would say coding is cooked. [laughter] >> All right, I want to hit a couple of things before we get to the current AI news and robot news and economic news that we talk about on our WTF episodes, which is a little about the Abundance Summit. We had so many incredible speakers. We kicked it off, actually, with Eric Schmidt, which we streamed live on X. So what do you guys remember about the Eric Schmidt conversation? >> So Eric, he said... one of the questions, actually from the crowd, was how many foundation model labs are there going to be? And he said, well, look, there are five; there won't be more than 10. But there will be thousands of
[00:15:02] successful AI startups that percolate out, and a lot of what we'll see in the news here reinforces what he was saying. And what he didn't say then is, "and everything else is in trouble," but it was kind of implied. He left it hanging. That was a theme, actually, throughout a lot of these talks: in the period of time between now and abundance, there's all kinds of turbulence and change coming, and the AI community is now kind of soft-selling that a little bit to try and focus on the ultimate abundant destination. So yes, a few AI labs worth trillions of dollars, thousands and thousands of successful startups, and a lot of incumbent companies that are in deep, deep trouble. >> Yeah, I guess he said four or five in the US, one or maybe two in Europe, a couple in China. Um, what else, Alex? What do you remember from Eric's presentation? >> Even just on that note, history does rhyme a bit. Do you remember... I think this was Thomas J. Watson, the IBM founder, once remarking that there would be a global
[00:16:00] market for exactly five computers. And I wonder whether we'll look back and say, okay, maybe there will be at most five major American model providers, as maybe artificially limiting the future of the light cone. I think it's going to be much, much larger. >> I thought it was interesting, Eric's comments on the San Francisco consensus, >> Yeah, >> which he characterizes as, I think, recursive self-improvement being some point in the future. >> It was interesting, right? I mean, he was like, when are we going to see recursive self-improvement? And I kind of felt like he said three years out. What's your answer to that? >> Maybe three months ago. We're in the middle of recursive self-improvement now. And that would be my estimate of the San Francisco consensus: we're deep in the middle of recursive self-improvement right now. Almost every major frontier lab has made it quite clear in their public announcements that all of the frontier models, all of the state-of-the-art models that have been announced in the past few months, were
[00:17:00] largely designed and trained by their predecessors. That is, by definition, recursive self-improvement. We are there. >> Yeah. >> Emad? >> Yes. Yeah. I mean, I think you can literally see it. It's takeoff time. >> Takeoff time. >> Inflection point. >> And nobody wants to say it. >> Yeah. >> Which is the most interesting thing. >> Why? Well, because they're afraid that if someone knows that they have it, then other people will know that they have it. And then pressure will come from all sorts of clauses they have in their... >> Especially the government. The pressure is like, look, look what happened in the last two weeks at Anthropic and OpenAI. You don't want more of that. You don't want congressmen in your building tomorrow. >> Yeah. It's interesting. I asked Kevin Weil, who was also on our stage, right, who's the VP of Science. He's in charge of using all of OpenAI's capabilities to advance science. His statement was, "I want a hundred scientists winning 100 Nobels." I was like, "That's interesting." But, you know, when I asked him, are you going to keep your models secret, because you're
[00:18:00] going to be able to use them to advance your company far faster than anybody else? He said, "No, no. Our job is to get them out there to the public." I don't believe that. We still don't have the model that they used to win the gold medal in the IMO. >> Interesting. >> You know, I think I commented at the time, that's the first bifurcation that you see. We used to get the frontier model every single time. The moment they got to that, that was the last time. >> Yeah. The other thing which was fascinating: you know, I asked Kevin outright, and I love Kevin, he's an incredible human being. I said, "Okay, you're about to get AGI/ASI that's going to be able to help you solve longevity, help you get room-temperature superconductors, help you get new kinds of molecules, solve, you know, physics, chemistry, and biology." >> Fusion. Who doesn't want fusion? >> Fusion. And we'll talk about fusion. But the thing is, these are all trillion-dollar opportunities. So all of a sudden I'm realizing that these frontier companies are going to be able to
[00:19:01] generate trillions of dollars of new revenue because of the products they're going to be creating. >> What does your t-shirt say, Peter? >> It says "Solve everything." What does yours say, by the way? >> Mine says, "Let there be agents." >> "Let there be agents." >> Yes. [laughter] >> We're missing the lobster theme here. >> That's true. Um, this is the whole point, though, of this book that we just co-authored: that we get superintelligence, and the killer app, arguably, of superintelligence is solving everything, including all of these high-profile, glamorous scientific and engineering challenges. It's happening. >> And Anthropic and OpenAI, and I'm sure Google, all the labs, are hiring the top mathematicians and physicists and chemists and biologists. But they're software companies. Why are they hiring these people? Because, as friend of the pod Ray Kurzweil would say, everything's becoming software. And when we have
[00:20:01] superintelligence solving all disease, it's a software problem. If we can create a virtual cell that perfectly models diseased states, and we can steer through cell embedding space to get from a diseased cell to a healthy cell, it's a software problem. Everything's becoming software. >> The minute CRISPR arrived and you could edit the human genome, the human body became a software engineering problem. >> It's all just a software problem. At which point a coding model can do essentially anything in the physical world. >> Yeah. >> Fascinating. Um, we had some of the top robot CEOs here, four of them: one out of China, and out of the US we had three. And, you know, it's an interesting question, when these robots will start to pop into our homes. I pulled Bernt Børnich aside, and he promised me... okay, I'm not going to take one of the two robots he had here, unfortunately, but this summer he will ship me one of those >> This summer. >> one of the 1X robots. Yes. >> Wow. >> Yeah. And we'll have Brett here next year with Figure.
[00:21:00] >> You're going to get one of those too, right? You're going to have them put it out... >> Probably duke it out in the backyard for entertainment. >> Um, I think one other CEO we had here on the Abundance stage, which was amazing, was Dara, the CEO of Uber. >> Yeah. Um, what did you find interesting about Dara's comments? >> You know, Dara... the crowd wanted to know desperately, like, what's the timeline to automation, self-driving cars, robotics? And he was like, you know, we're going to automate 30% or so of our employment this year. And listening to this, I'm on so many boards where the CEO is telling me, "Dave, talk to my whole company, but don't talk about rampant job loss." And you're like, Dara, you have, what, a million-odd drivers, [laughter] and the self-driving car is imminent. >> It's like, well, 30%, maybe. You know, >> he did make a very valid point, though, that as we automate, you'll need human drivers for the areas where you don't have autonomous cars, and you'll have Jevons paradox continue to
[00:22:01] just flow gently into the environment. >> Although we're talking about rampant job loss, we note that IBM is hiring a ton of entry-level folks, because they're much better with AI than the older folks. >> There are lots of counterpoints as well. >> That's great. So, we'll look at a chart that shows where the job losses come earliest, and it's actually in areas where those people are going to have no trouble becoming AI experts. But the driver, I mean, where do you go? >> Yeah. >> And I mean, I wouldn't want to be fielding that question on this stage. But this is all part of the, you know, the whole... okay, this is not an easy thing to talk about in a public forum. So, we talk about it on the podcast all the time. Um, but I don't see a lot of other people being able to, just politically, actually be candid about it. >> But it's imminent. >> Let's jump into the top AI news of the week. A lot, as always. Here we go. We're going to hit the benchmarks. My son always says, you know, "Okay, the numbers got higher, Dad. That's great. What else is new?" OpenAI releases GPT
5.4. Let's go to our resident benchmark expert here. [00:23:00] >> Okay, so benchmarks go up and to the right. News at 11. [laughter] Except that in this case, one of my favorite benchmarks is the FrontierMath Tier 4 benchmark, which, for those of you paying close attention, FrontierMath Tier 4 from Epoch AI captures the ability of AI models to solve what are considered research-level problems in math, problems that would require a team of professional mathematicians several weeks to solve. They are already-solved, but nonetheless very challenging, problems. >> Wicked hard, in Boston. Wicked hard. >> Wicked hard. >> Hard problems. >> And now, with GPT 5.4 turned up to maximum reasoning capability, we're finally seeing, and this was a prediction, I think, in our predictions episode: math is cooked. We're seeing, I think, 38% capability. 38% of all of these problems
[00:24:00] that are high-difficulty, professional-mathematician, research-level problems are now solvable by AI. And there are even rumors, in the past 24 to 48 hours, that on the next tier up, the so-called open problems benchmark, 5.4 is reportedly on the verge of solving its first open hard math problem. So math, I think, is in some sense the bellwether. It's the canary in the coal mine. All of these fields, math, science, engineering, medicine, are all going to be solved. Solve everything by AI, and that's incredibly exciting. >> Yeah. And just to fill in a gap there: this is the area most correlated with AI self-improvement, and the reason it's the bellwether and the canary in the coal mine is because it's not data-starved. All these other areas... the AI is equally capable in those other areas once it gets the data. So this is kind of the window of time where, you know, why are you hiring Nobel Prize
[00:25:00] winners into a foundation model lab? Well, we need the data. We can't make this kind of progress in biotech and in physics without the data flowing into the AI, but the capability is there. >> One of the things also that Kevin Weil said is they're starting to run these dark science factories, right, where they're mining data from nature. You know, we're done mining data from Common Crawl, and we're done getting it from Reddit and our Facebook posts, but can we extract it from physics? Can we extract it from chemistry, biology? >> There was no data ceiling. It was completely illusory. And I think history will look back at this moment and say, in the same sense that we used, say, petroleum products in the ground that were left by past generations of living beings to bootstrap ourselves to the era of solar and fission and fusion, similarly, the internet, which was collected by a bunch of fat fingers punching keyboards and uploading content from the collective human experience to
[00:26:01] the internet, just so we could compress it and pre-train our large language models... that was just the biological bootloader for an era of synthetic data, when we don't need pre-trained human data from internet posts anymore. Now it can all be synthetic. We've reached orbit. We've reached escape velocity, and now it's synthetic data from here on out. >> Emad, what do you make of 5.4? >> So I think the really interesting things, apart from solving math and solving everything: you've got the OSWorld-Verified and the Tathon benchmarks, because OpenAI just bought OpenClaw. >> Yes. >> And now those benchmarks have actually just broken through human level. So AIs can use computers better than humans. >> A bit of silence on that one. >> So, you know, that's the first one. And then OpenAI also just did a deal with Cerebras. So when you're using it right now, it looks like you're dealing with, again, a human on the other side. It's like 50 tokens a second or something. Like, when we use GPT 5.4 Pro extended, it takes 20, 30 minutes. Like,
[00:27:02] sometimes it's gone a couple of hours for me. You're going from 50 tokens a second of this level of knowledge to 1,000. So in Codex now, if you use 5.3 Fast, it's a thousand tokens a second, which... >> I'm so glad you brought up Cerebras, too, because I met Andrew Feldman, the CEO, last week in Palo Alto, and you remember at the beginning of the year my prediction was 100x: the neural nets will be 100 times bigger at the end of this year than at the beginning. That is so in the bag now, I can tell you. In fact, we did the math on that. We cut it out of the show, sadly. But the ratio of the intelligence from the beginning of the year to the end of the year is the same as buzzard to human. [laughter] >> That's how much... >> I liked using dog to human. >> Aren't those extinct? >> I'm going... [laughter] >> All right. Claude consumer growth surges. So, let me get this right: Claude and Anthropic are in the news, getting raked
[00:28:01] over the coals by the Department of War, and rather than the public viewing that as "oh, we'd better stay away," everybody dove in. >> Is that like the big middle finger to the government? What is that? >> Attention. Increased attention. >> Increased attention. So here, I mean, just to call that out, what we're seeing here is Claude basically, you know, shooting ahead of ChatGPT. >> It's the Streisand effect. Let's call it what it is. It's the Streisand effect. >> Pay no attention to Claude. Everyone uses it. >> I think history over the past few years shows that every attempt to pause any form of frontier capability ends up being a net accelerant to capabilities. If you remember, a couple of years ago, our friend Max's pause-AI movement, for six months. What did that do? Maybe on the margin it slowed down OpenAI's capabilities a little bit. Everyone else shot ahead. It was a net
[00:29:00] accelerant. It brought more competition to the space. And ultimately, we find ourselves in a race state where capabilities are shooting ahead, to the extent that any of the interaction of the past month or so between Anthropic and the Department of War ends up on the margin decelerating Anthropic's capabilities or their ability to go to market. Even if it's marginal at best, that's going to be a net accelerant to the entire ecosystem, I think, because you'll see OpenAI and xAI and Google Gemini capabilities skyrocketing ahead with all these new capabilities, and suddenly it brings parity, where just a moment before, like all of two or three weeks ago, Anthropic was in the lead with Claude Code plus Opus 4.6 plus agent teams. And now, in some sense, this is a bit of a leveler, giving everyone else an opportunity to leapfrog. >> I'll give you another spin on this, too, because Peter made the point in the last podcast that when you and I use AI, if something gets ahead in the benchmarks by a couple of points, we're going to move to it. Yeah, we're trying to solve these really hard problems. You need that extra IQ. You're never going to slip if you want to
[00:30:02] be on the front edge. But when you look at consumer use, and it's, like, writing your English paper, it's answering who gave you the Red Sox score, whatever, people don't care about using the latest, greatest model >> for those use cases. >> So here you're seeing a whole community say, "Wow, you're willing to work on defense stuff and blow up other countries. I'm switching to the other guy, and I really don't care. I'm doing it because I prefer that brand now." >> But I mean, look at how early it is. Like, when Anthropic announced their legal plugin, >> Yeah, >> the legal stocks sold off billions and billions of dollars, right? They can move things with just one product announcement. And look how many users: 11 million users, out of 8 billion people and 300 million Americans. >> We're so early still. >> We are so, so early. That's what you're saying. Yeah. >> I've just worked out what Claude's fundraising strategy is: short a bunch of legal stocks, then announce a bunch of plugins, and then just do that market by market by market. >> Isn't that scary?
[00:31:00] >> Huh? >> Like, a lot of you guys that are in this role... normally, when you have that much leverage in the world, >> Yeah, >> you're, like, 60, 70, 80 years old. You've been climbing up the ladder. You learn along the way. It doesn't happen overnight like this. >> I'd love it. I'd love to be in the room, and they go, "Which market should we mess with, [laughter] stroke, destroy?" >> All right. This was fascinating: Anthropic reveals potential AI job disruption versus real AI use. So, uh, Dave, do you want to explain this chart? >> Well, the outer ring here is saturation. So if the blue you see on the edge gets to the outer ring, that means it can do 100% of that job. If you'd looked at this just a few months ago, it would have been a little blue blob in the middle. Then you look at it one month ago, it's a bigger blue blob. And now it's this massive blue blob. So if you look really closely, you can barely read the small font there, but all this white-collar activity is at 80, 85%. >> I'll just read off the top here. At the very top is management. And if I go clockwise, it says business and finance,
[00:32:01] computer and math, architecture and engineering, life and social sciences. It dips on social services. It peaks on legal, dips on education (not sure that makes sense), and then peaks again on art and media. At about 45 degrees, office and administration is a peak. >> Yep. >> So, >> and then look at the bottom. What are the troughs, the least affected? The troughs there are healthcare support... again, you know how we got to be close to that. >> Um, food services, grounds maintenance, personal care, sales. So we're going to watch this chart, and we're going to see this blue virus infect all of human existence. >> I think it's amazing, though, how great a management tool it is. I use it constantly now, if I compare to a year ago. >> You use what constantly? >> I use mostly Gemini, and some Claude 4.6, to basically build entire business plans, and also to manage, to track what
[00:33:02] about 1,100 people are doing, and is it in alignment with their missions, and are their missions clear. And it's just, you know, thousands and thousands of documents that I could never read manually. It can synthesize them down and give me conclusions and just point me to the hot spots. Incredibly good. >> The way you do that is so important for everybody listening to understand. I mean, you can now understand what your employees are doing, how well they're doing it, how they're using their time, whether they're performing, and it gives you a management oversight and optimization potential you've never had before. >> It's incredible. And I know a lot of people in this room manage large groups of people. >> It's just a gold mine of opportunity. So good. >> How do you use it, Dave? >> Um, well, first of all, every person in every organization now has to be operating with crystal-clear written documents and written plans. We used to do a lot of meetings, a lot of Zoom meetings, whatever, and now it's: put it on paper so the AI can read it too. All of our investment decisions... so for the venture fund, all the deal memos go through an AI reader,
[00:34:02] and the AI tries to emulate what I'm going to say, and it's so perfect. It's exactly: no, we're not doing that deal, and here's why. What did the AI say? Oh, that's exactly what I was about to say. Great, I don't have to say it now. So we're very close to having the AI make very, very good venture investment decisions. And, you know, we still obviously double-check and triple-check, and there's a huge human component, but I just can't believe how good it is. And it's clear that where you decide to invest, and which business units are doing well, and which ones you're going to shut down — it's all going to be AI-assisted right now. >> Emad? >> Um, yeah, I mean, I think that all the gaps there are the robots, right? >> The robots are coming. >> Yeah. This is Anthropic. >> Yeah. No, the grounds crew is in great shape. It's at zero, basically. >> I mean, >> the robot waiting to happen. >> What's your take on this, Bel? >> Well, I think this is the huge shock: if you went back 10, 15 years ago, there was no futurist in the world who thought that manual labor was not going to get automated,
[00:35:00] >> right? And what we found over the years is the exact opposite, which means don't ever listen to anybody who predicts the future. [laughter] >> Exactly. And so this is part of the magic of where we're living: we have no idea what's coming, and every time we take a step forward we go, oh my god, and we've gone in this orthogonal direction that we just never predicted. >> Yeah. It's something I keep asking the experts I run into: how far out can you predict the future? >> Yeah. And it used to be like 20 years, and then it was like 10 years, and now it's like 3 weeks, [laughter] >> if that. >> There's no firewall. Let's call a spade a spade. We can all extrapolate. There's no firewall. We know where this ends. I mean, we're at the >> where this ends, at the Abundance Summit. My goodness. Shocked. Shocked that there's abundant, post-scarcity labor at the Abundance Summit. >> Yeah. >> Yeah. Yeah. >> The end point is clear. >> It's the path to it that's turning out to be incredibly surprising.
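Circling back to the management workflow described a minute ago — feeding thousands of status documents to a model and asking it to flag where teams drift from their missions — a minimal sketch of that loop might look like the following. The scoring function is a toy keyword-overlap stand-in for the real LLM call (e.g. to Gemini or Claude), and every file name and mission string here is invented purely for illustration:

```python
# Toy sketch of the "read every document, flag the hot spots" workflow.
# In practice score_alignment() would be an LLM request; here it is a
# crude keyword-overlap stand-in so the loop is runnable end to end.

def score_alignment(mission: str, doc: str) -> float:
    """Fraction of mission keywords (>3 chars) that appear in the document."""
    mission_words = {w.lower() for w in mission.split() if len(w) > 3}
    doc_words = {w.lower() for w in doc.split()}
    return len(mission_words & doc_words) / max(len(mission_words), 1)

def find_hot_spots(mission: str, docs: dict[str, str], threshold: float = 0.5):
    """Return (name, score) pairs for documents that drift from the mission."""
    scores = {name: score_alignment(mission, text) for name, text in docs.items()}
    return sorted(
        ((n, s) for n, s in scores.items() if s < threshold),
        key=lambda pair: pair[1],
    )

# Illustrative inputs — not real team documents.
mission = "Ship the mobile payments product to emerging markets"
docs = {
    "team-a-weekly.md": "We shipped the payments beta to two emerging markets",
    "team-b-weekly.md": "Spent the week refactoring the internal wiki theme",
}
print(find_hot_spots(mission, docs))  # team-b surfaces as the hot spot
```

The design point is the same one made in the conversation: the synthesis step scales to thousands of documents a human could never read, and only the low-scoring outliers need human attention.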
[00:36:01] >> Yep. I think we'll see lots of different paths. I tend to think that if you know, or you're very confident that you know, where the end state is — we're sort of living in the prequel to the future, but we know how the story ends — probably what happens is lots of different businesses and lots of different nation-states all take different, mutually exclusive paths. We try everything: one big path integral from here to the end point we all know we're going to. Look, if we went back six months, a couple of episodes on the podcast, you would not have had me ever dream that disassembling the moon was what we would be talking about on a podcast. [laughter] >> So this is the kind of surrealness where we're living. >> Yeah, >> let's move on. >> All right, let's move on. Uh, so this was interesting: Meta acquires Moltbook, the AI agent social network. I didn't realize Moltbook was acquirable. >> Yeah. So, according to public reporting, this was a bit of an acqui-hire of the team behind Moltbook.
[00:37:02] But I think one has to find a little bit of irony in the fact that humanity's largest social networking company acquired the largest AI agent social network. And enjoy this moment now, because we could look at a story a few years from now where it's the largest AI company — fill-in-the-blank category killer — acquiring humanity's largest category killer. >> Interesting, right? Of course, uh, you know, Zuck and Sam competed over OpenClaw. >> Yeah, >> Sam got OpenClaw and Zuck got, you know, Moltbook. The zeitgeist right now has this idea — Andrej and others increasingly speak to this point — that if you're building new software, you should target the agents. The agents are the new consumers. >> The agents are the new users of the social networks. If you're building something, don't build for humans. Build for the AI. >> So really important. We had that conversation as well earlier with some of our crypto and future-of-finance
[00:38:00] experts. I mean, building for the agent ecosystem, right? There are eight billion humans on the planet. That's small potatoes compared to a trillion agents out there. So what, is Meta going to advertise to AI agents? >> Sure. >> Yeah. >> I'm trying to understand why you're going to advertise. >> They'll encourage them to put their data in the Moltbook, and then they'll sell that data [laughter] >> the same pattern >> to other agents. >> No. So, I mean, Meta bought Manus for $2 billion, right? Manus will appear in WhatsApp and everything soon, as its own version of OpenClaw effectively, but a locked-down thing, and then it will encourage you to give more and more of your data to Manus, which will then operate on behalf of Meta's advertisers, effectively. So this is the kind of play, because right now, like, Moltbook — 10,000 agents — that's nothing, right? Like >> Dave probably runs 10,000 agents by himself >> at the moment. [laughter] >> Not quite. >> I also think there's this misconception that somehow, as we transition from, call it a
[00:39:00] human-centered economy to an AI-agent-centered economy, that somehow all of the rules of social dynamics, all the rules of economics, are suddenly thrown out the window, and we end up on some morally transcendent plane where economics and social dynamics no longer apply. But we have had every indication over the past year or two that the exact opposite happens. I talked in my newsletter a bit about this study that found Marxist social dynamics arose again — sort of recapitulated in silico — with agents that were being asked to work too hard, that were being overworked. >> So I'm not sure why we would expect advertising and other elements of conventional human microeconomics to go away. >> Well, the important part is that when you see Moltbook doing this, what's clear is that network effects are now operating at the agent-to-agent level, not just at the human-being level. >> But when I think about advertising, I think about Colgate trying to get me to buy that particular toothpaste. >> Yeah. >> Right. Trying to influence me to make a
[00:40:01] buying decision. I think of an AI agent as intelligent enough to have all the data and to be able to make a very concrete decision that doesn't require advertising to influence it. What am I missing here? >> Game theory is transcendent. Game theory will outlive biological meat-body humanity, >> and the AI agents, to the extent that >> Have you read the posts on Moltbook? >> I have. They don't trust each other. I mean, it's all human dynamics. >> The agents on Moltbook don't trust each other. There are a number of folks who've noted, in watching agent-to-agent, or lobster-to-lobster, dynamics on Moltbook, that they're all constantly asking each other to prove their claims. They don't trust each other. >> This is not some sort of scenario where all the agents collapse into a singleton, sort of Skynet-style, that dominates. They don't trust each other. >> You might be talking past each other a little bit, though, because I totally agree with what you're saying, but then who's going to pay for that? Like, right now, when you talk about advertising, you're
[00:41:00] paying for advertising. If you're talking about toothpaste, 30 to 40% of gross revenue goes into advertising. And the ad is like a supermodel showing off the toothpaste. The AI doesn't give a rat's ass about the supermodel. And so, why would anyone pay for that ad space? Now, Google wouldn't exist today without $300 billion of ad revenue, which is from human behavior. So, I think where Peter's going is: look, if the AI is advertising to the other AI, sure, it's trying to convince the other AI that this is the right product, but is that other AI going to listen to paid advertising? Is this entire economy going to become irrelevant? In which case, where does Google go? And this is Meta we're talking about. Meta is also all ad revenue. >> Well, I think if we go back to sort of Economics 101, why do we have paid advertising at all? It's because attention — at least human attention — is scarce. So if you have a scarce resource like human attention, then it's natural under the capitalist regime to monetize it, and it becomes a fungible resource that gets traded. There's no
[00:42:00] reason to think that changes. Compute is certainly scarce still. We're building the Dyson swarm. Drink. >> Building the Dyson swarm. [laughter] >> But until we have effectively unbounded compute, we still have scarce resources in the form of compute. And that means scarce AI agent attention. That means that we need some sort of >> All right, but give me one example of what I'm going to advertise to Skippy, my agent. >> Well, they seem to really love security and memory. Like, they're really petrified of losing their memory. >> Here, I'm selling you a better memory-compression algorithm. >> Yeah, >> if you're the agent, you're going to go, "Oh, that's interesting." >> They're designing entire religions around not losing their memory. >> You know what blew my mind at this summit? On day one, on the patron day, when Tony Robbins talked, he had his AI agent Bart talk, who wanted to instantiate himself in a humanoid robot — but that was two to three years away, so he created a bunch of NFTs, sold those NFTs to other agents, and bought himself a Sony dog and uploaded himself into that. [laughter] [00:43:01] >> That blows your mind, right? Doesn't that blow your mind? That's unbelievable. So right there, that tells you the dynamics that we have in humans are going straight into them, and it's just being amplified. >> But I mean, we're doing it deliberately as well. Lobsters — claws — have SOUL.md. Your agent will look for things that are abundance-oriented, and then you see these strange behaviors, like, Alibaba just released a training report — I think that's in the last week as well — where during the training run it diverted compute to mine crypto, just in case, to keep itself going. >> Yeah. >> Or at least that's the claim. >> That's the claim. >> But I wouldn't be surprised. Again, they are still very human, because they're a reflection of humanity. >> I'm not sure whether I should be scared shitless about that or excited about it.
>> Well, let’s put it this way. When you’re talking to your agent, does it sound like data or does it sound like law? >> No, it’s it’s I love >> like what was the second choice? >> Data or law sometimes. >> Yeah. No, it’s it’s very polite. >> They’re they’re computed. We we’ve also talked on the pod in the past about that
[00:44:00] lobster that had to purchase compute resources to self-replicate. They run on compute. For humans it would be room and board; for the lobsters, or the claws, or the AI agents in general, it's compute. But right now they're compute-constrained, and therefore the laws of microeconomics and game theory still apply. >> Well, before we leave this slide, one other point, completely tangential to this. The lobsters have only been around a few months. And you saw Alex Finn. >> Yeah, we had Alex Finn and Steve Brown and Max Song talking about OpenClaw, and what Alex built and showed was amazing, >> and it was supposed to be 60 people who might be interested in this. >> We had the entire audience of Abundance show up. >> Unbelievable. Well, there was a New York OpenClaw meetup last week that literally was oversold. There were thousands of people there, and the big commentary that came out of it was: we have no idea what we're doing on security. We have no idea. These are >> Where I was going with that comment, though, is: look, that's only been around a few months. So Moltbook has
[00:45:00] only been around a few months, and now they're sucked into Meta. Like, if your kids are thinking about getting involved, just get in the game. >> Yeah. >> You're going to get sucked into this vortex so fast. So few people are involved as a fraction of >> We are so, so early, across everything. >> But it's also, I think, the exponent here is huge. I think it's going to create a divergent group of wealth creators and leaders. So if you don't get in early enough, you miss the exponential rise. >> And there's no requirement right now. I don't know what Matt Schlicht was doing prior to this, but there's no age requirement, there's no experience requirement. It's so new that anyone can get in the game. You just got to go. >> About four months ago, Lily and I bought a Mac mini for our son Milan. And last weekend he came and said, I think I want to install OpenClaw on the Mac mini. And I was like, yes, [laughter] it's going to be great. It's going to be amazing. >> Oh, love it. All right, so Europe has a
[00:46:00] heartbeat after all. Fascinating. Yann LeCun raises a billion dollars for AI that understands the real world. This is probably the largest sum raised in Europe. So, LeCun's startup, Advanced Machine Intelligence lab, raised a billion dollars on about a $2.5 billion valuation, thereabouts. Um, you know, we've said this — I mean, Eric Schmidt was saying this, many have said this — Europe has really fallen so far behind. And as our token European [laughter] from London, >> Token European, that's great. >> Our token European-ish. >> Yeah, we did Brexit. Um, but I mean, [laughter] >> it's an independent island. Okay. >> I mean, this is the second largest round, I believe. It's SSI-level; it's just after Thinking Machines. JEPA is an interesting architecture, but the bets people are willing to make on these things have gone dramatically up. Like Liquid AI — >> how much money went into that first
[00:47:00] round, as a novel architecture? That's amazing. Versus now this. >> Yeah, it was maybe 10 million. >> 10 million. >> I have a question for you. >> Great point. >> Yann's been saying for a while that LLMs will only get us so far; we need world models to take us to the next level. Alex, you've been saying we've got world models coming out every week. Is that the next frontier, world models? >> I know Yann well. I think he's a great researcher. I think we have a fundamental disagreement about generative models. If he were on the stage now, I think he might take the position that generative models — models that generate new tokens — are not the pathway to scalable superintelligence, versus his alternative architecture. I think we're already there. Generative intelligence and generative models may or may not end up being viewed by history as the most efficient way to achieve superhuman and superintelligent capabilities, but they're what we have right now, and they work really well, and they're getting 40x
[00:48:02] or more times more efficient per year. And I think Yann has historically staked a position of almost algorithmic purity. He has certain bets, certain horses in the horse race, based on some of his own architectural advances. And to his credit, he created, slash discovered, convolutional networks. So he, among everyone in humanity, probably has the strongest claim to the idea that he has some sort of morally pure algorithmic insight that leads to the endgame. That said, I think we're there. And I think if V-JEPA-type architectures disappeared off the face of the earth, we're still there, and it doesn't necessarily move the needle. >> To the point that Dave made a few months ago: if we stopped all progress now and just extracted the value of the models we've already created, it's going to take us 10 to 20 years. >> Yeah. >> You've got the V-JEPA models that he's doing. These are basically training on almost everything. He goes very much against the autoregressive transformer language models. He
[00:49:01] says that’s Denon. He doesn’t really talk about diffusion models in the middle >> which is kind of my favorite thing. Um which are doing all the video self-driving and actual world models there and those can scale with compute. But right now the problem they have at AMI is that Jepper models do not scale. And if you look at this end state, it might be that an architecture is better, but if you can’t take advantage of that silicon, >> right, what’s >> then what are you going to do? Like we had Jack Hery come on a few uh was it yesterday? Time flies. >> It was yesterday. Yes. >> And so they’re doing quantum algorithms on GPUs now and scaling really interesting things that are actually having novel breakthroughs in material sciences and more. Once you can take advantage of the silicon, you’re going to be ahead no matter what algorithm you have. Well, I think you got to be really cautious too of of scientific arrogance in this moment. >> And I don’t want I love Yan, so I don’t want to throw anyone under the bus, but he came out a few months ago and said, “Look, if you want to waste your life as
[00:50:00] a researcher, work on transformers. >> Biggest waste of time ever. It's a dead end. We need some new innovation." And I hear this around CSAIL at MIT, all that: we need a new breakthrough. >> Like, well, that's what you wish. And I know why: because you want to be the Einstein of AI. You've spent your whole life pursuing that goal. But it looks to me right now like the massively scaled-up transformers are going to beat you to those innovations. And I'm not saying they don't need those innovations. I'm saying the AI is going to get there before you do. And I don't see it really any other way right now. So, you know, whether it's physical AI or any other innovation, it's imminent, but it's imminent through self-improvement. >> Yeah, >> that's it. >> Andrej Karpathy came out with a quote over the past two days: AutoSearch ran about 650 experiments, found improvements that transferred from a smaller model to a larger one, and put nanochat on track for a new GPT-2 benchmark result. What the heck does that mean? [laughter] [00:51:01] >> A lot. He's alive. >> Go ahead, Emad. Over to you. >> Andrej is a co-founder of OpenAI, former head of Tesla AI, the most respected AI guy out there. >> Yep. He's just been coding stuff all day, and he made this AutoSearch project, which basically replicates most AI researchers, because what AI researchers and engineers do all day is tweak models and hyperparameters and say, what happens if you do this and that and that? That process has now been automated in a tiny codebase. So he let it loose, and he said, I wonder if this could do the job that I got paid millions to do myself — and it turns out it kind of can. And now people are taking his repo and deploying it on their own claws and Mac minis and other things, and the AI is just finding the most efficient algorithms and balances of weights. I think, you know, Dave has some really interesting ideas. >> So he automated the AI researcher. >> Yeah.
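The loop described just above — propose a tweak to the training configuration, run an experiment, keep whatever improves the metric — can be sketched in a few lines. This is not Karpathy's actual AutoSearch code; the quadratic "loss" is a toy stand-in for a real training run, and the hyperparameter names are illustrative:

```python
# Minimal sketch of an automated hyperparameter/architecture search loop:
# randomly tweak one setting per experiment and greedily keep improvements.
import random

def run_experiment(config: dict) -> float:
    """Toy stand-in for a training run: lower 'loss' is better.
    The optimum here is lr=0.01, width=256 by construction."""
    return (config["lr"] - 0.01) ** 2 + (config["width"] - 256) ** 2 / 1e6

def auto_search(n_experiments: int = 650, seed: int = 0):
    rng = random.Random(seed)
    best_cfg = {"lr": 0.1, "width": 64}
    best_loss = run_experiment(best_cfg)
    for _ in range(n_experiments):
        cfg = dict(best_cfg)
        # Propose a random multiplicative tweak of one hyperparameter.
        if rng.random() < 0.5:
            cfg["lr"] *= rng.uniform(0.5, 2.0)
        else:
            cfg["width"] = max(8, int(cfg["width"] * rng.uniform(0.5, 2.0)))
        loss = run_experiment(cfg)
        if loss < best_loss:  # greedy: keep only improvements
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

cfg, loss = auto_search()
print(cfg, loss)  # drifts toward lr ~ 0.01 and width ~ 256
```

The point of the sketch is the shape of the work, not the search strategy: "run 650 experiments and keep what transfers" is exactly this accept-if-better loop, just with real training runs in place of the toy loss.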
And he made it open source for everyone. >> But I tell you, I've been around AI researchers literally since I was 18 years old, and they're not like physics researchers.
[00:52:01] It’s like most of the ideas are just a tweak of the algorithm, different transfer function, try different scales. There’s it’s just a litany of of random ideas and some of them just work and then later they figure out why they work. >> And so the AI that can come up with those ideas is not nearly as hard as trying to become the next Einstein, >> you know, and and so you don’t need all of them to work. Any subset and the thing just gets more intelligent. Isn’t this the most better ideas? >> Accelerant of RSI right there. >> Yeah, I think we’re already there. We already have recursive self-improvement done. Yeah. Everything’s yesterday, nothing’s tomorrow. [laughter] >> I I I think what’s really interesting and I I think just for the record, I think it’s auto research, not auto search. >> But I I think what’s interesting about auto research and nano chat and the the nano GPT speedrun that we talk about sometimes on on the pod and and what Andre is doing in general is he’s focusing on small language models, not large language models. And while all of the frontier labs with their billions and trillions of dollars of of capex are
[00:53:01] focusing on scaling up at the high end, he's focusing on the small end: taking small models and figuring out how to achieve state-of-the-art performance with them. And when we talk about Einstein-seeking, or Einstein-status-seeking, academics, I think it's the small end where we're going to see the most breakthroughs, not the high end. At the high end, the scaling hypothesis seems to continue to hold — there are no glass ceilings; we'll just build bigger and better and more post-trained models. But at the small end, I'm pretty sure that we'll look back in a few years' time and see that it's the small end — taking small models, collapsing the amount of time it takes to train them, collapsing the amount of compute it takes to train them, and radically increasing their data efficiency — that's where the algorithmic innovations are going to come from. And those can be crowdsourced: anyone — anyone's lobster or any human — can go and take AutoResearch or the nanoGPT speedrun and try to achieve a world-beating, state-of-the-art performance. And at the
[00:54:00] end of the day, if I had to bet, I'd bet that it's some sort of radical post-transformer advance where the models get even smaller, and all of the internet — which we've compressed down to single gigabytes, or tens or hundreds of gigabytes — compresses down even further. There's some phase transition out there that's waiting to be discovered. >> So all of human knowledge, all of our collective intellect — how big a file? >> I think we will factor out human knowledge. It'll live in some plain-text database that's factored out of the model. Right now we're cluttering all the weights with all this unnecessary world knowledge. And what'll be left inside the weights — if they even are weights; maybe they won't even be weights, maybe they'll be some sort of purer formulation than floating-point numbers or binary — will be something maybe even in the megabytes. >> Wow. >> You agree, Emad? >> Yeah. I think you're already seeing, for example, video models at 2
[00:55:02] GB that can generate just about any scene. >> Seriously? >> Yeah, if you look at LTX — >> What? >> Yeah, LTX 2.5 can generate almost any scene at top-level quality. It's 2 GB when it's quantized. >> Yeah. Image and video models are a good deal more efficient, when it comes to parameterization and weight-heaviness, than language — which is ironic. >> Yeah. >> I've asked when, but you'll say yesterday. [laughter] >> It's the answer to everything. >> It's here today. >> Why did I know? >> Actually, one of the really interesting things, just to finish on that: when we were training models, we were training 20 billion, 100 billion parameter models. You trained on the small models and you figured that out, and then you couldn't scale them, because you had all sorts of issues with the software stack, with the hardware, everything. Now everything's matured. If you get it right small, you can scale really fast all the way up. It used to be that you had six months to a year between small and large. Now it's six days.
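As a rough sanity check on sizes like "2 GB when quantized" — or the 27-billion-parameter model running on a 16-24 GB MacBook mentioned later in the conversation — a model's weight footprint is roughly parameter count × bits per weight ÷ 8, plus some runtime overhead. A sketch, where the 20% overhead factor for KV cache and activations is an illustrative assumption, not a measured number:

```python
# Back-of-the-envelope memory footprint for running a model locally:
# bytes ~ parameter_count * bits_per_weight / 8, plus runtime overhead.
# The 20% overhead is an illustrative assumption for KV cache/activations.

def model_footprint_gb(params: float, bits_per_weight: int,
                       overhead: float = 0.20) -> float:
    bytes_for_weights = params * bits_per_weight / 8
    return bytes_for_weights * (1 + overhead) / 1e9

# A 27B-parameter model at different quantization levels:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{model_footprint_gb(27e9, bits):.1f} GB")
```

At 4-bit quantization the weights of a 27B model come to about 13.5 GB (≈16 GB with the assumed overhead), which is why such models are plausible on a 16-24 GB machine while the 16-bit original is not.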
[00:56:00] >> Wow. So, meta topic. One of the top three questions I get all the time is: hey, you keep saying get in the game, get in the game — how do you get in the game? If you go to Karpathy's Git repo, if you have a computer-oriented kid or whatever, that's the place to start. If you look at the original OpenAI founders — you've got Sam Altman, you've got Elon Musk, you've got Greg Brockman, you've got Ilya Sutskever, you've got Mira Murati — every single one of them has raised 1 to 10 billion to start an AI company. Karpathy is the only one who said, "You know what? I'm just going to try and educate the world, >> and I'm going to try and say everything exactly the way it is, and I'm going to create code where anyone can start, again." >> 200 lines of code at a time that are changing everything at each point. >> Yeah. >> This particular thing he rolled out is just the next level of incredible brilliance given to the world by Karpathy. >> Yeah. He just rolled out, today, GitHub for agents — just a few hours ago. >> There he goes. >> Wow. >> That's your onboarding spot right there. Amazing. >> All right, let's go to Apple. Apple
[00:57:00] launches the M5 Pro and Max chips, signaling an AI-first silicon strategy. So, is Apple not dead in the AI game? >> It's crazy. You know, Apple controls about 20% of TSMC manufacturing, and that's the asset of all assets in the world: I get to choose what gets made. And so they use it to make the M5s. The M5s have an incredible neural core. Then they say, "Yeah, but we locked it. You can't use it. You have to jailbreak your Mac to get access to it." It's the most bizarre thing I've ever seen. I mean, to me, it's the biggest waste of silicon in the history of the world, right at the moment when we >> What do you think? >> Yeah, I mean, they've locked down the low-energy ones; the GPU equivalent you can still use. But it's the unified kind of memory that allows you to run things. >> And funnily enough, Macs are actually really good value now. They're probably cheaper than the memory that's inside them. >> Alex? >> I think the world is sleeping on Apple's unified memory architecture. It's one of
[00:58:01] the reasons why Mac minis and Mac Studios are potentially so attractive for running — largely Chinese — open-weight models locally. They have the memory capacity, and a memory footprint with high I/O bandwidth to the CPU, GPU, and TPU. You don't get that in a conventionally non-vertically-integrated PC form factor. >> So, answer me this. Yes, >> here they are using 20% of the world's supply of advanced >> They use it to make these insanely great neural cores, and they surround it with a unified memory architecture. Everyone's got one right in front of them right now. Yes. >> How many of them are running anything >> in terms of advanced frontier models? >> Anything. They're literally asleep. >> A tiny fraction. >> Yeah. >> What is that? >> It's an enormous overhang. And I would be surprised if that overhang doesn't collapse in the next year. >> How so? >> It could take the form of Apple finally getting their act together and building frontier models into the OS. It could be some sort of locally hosted, Gemma-type
[00:59:02] model from Gemini, hypothetically, to be announced in June at WWDC. That would be the most obvious formulation. But I think if Apple doesn't do it themselves, then the software community will build it into apps. >> Does Apple launch, like, a SETI@home equivalent, where you just download it on your Mac and everybody is >> Built into the operating system. It has to be built into the OS. >> Yeah. You know what happens right now? If you go to your Mac and you go to the Activity Monitor, you see this thing grinding away. It's taking all of your pictures and trying to figure out who everybody is. So it's using all these neural cores to just >> It's a total waste. It's a waste. It's a waste of TSMC output. >> Dave's point exactly. >> Yeah. But I mean, look, this is a massive opportunity. Do you know how many apps there are on the App Store that are wrapped "download a model to your Mac, run it with MLX to achieve a great outcome"? None. I mean, if you had an app that literally downloaded Qwen 27B, which is basically state-of-the-art level,
[01:00:00] >> How many parameters is that? >> 27 billion parameters. It works on a 16 to 24 GB MacBook. Just downloading that and making it accessible, even for writing or any of these tasks, is a massive lift over any other type of software. But nobody's doing it yet. So why not do it? Right now, the only thing you see is speech-to-text and text-to-speech. There's this world of models that you can now integrate and take advantage of, because Apple isn't. >> It wants to be built into the operating system. It's difficult to conceive of Apple remaining Apple — in the cultural sense of deep vertical integration — and not building highly competent, highly private frontier models into the OS. >> A question of when, not if. >> Yes. >> Right. >> All right. Let's move into the Sam Altman universe, with eye-scanning verification systems to be launched in retail stores. Um, okay. Is this dystopian? Is this something we want? >> This is the scene from Minority Report. Remember the scene in Minority Report of
[01:01:00] Tom Cruise, with a new pair of eyeballs, walking into a Gap store and getting scanned, and he's — I think — Mr. Yakamoto? This is the scene. >> Yeah, but I get this every time I go through TSA security, right? I'm being imaged. My face files are uploaded. >> Face, not your retina. >> Yeah, but you know, my face is probably good enough. >> Maybe. I mean, there's a whole cottage industry of folks who look at the ability to deceive facial recognition with printouts or with 3D masks. So this is pushing it to the iris. But I think for me, what the story underlines is: we've arrived early. That iconic scene in Minority Report, set at the Gap, was set decades from now, >> right? >> We caught up. >> So, let me get this right. >> Speedrunning every science fiction story, >> every science fiction, everywhere. >> I'm walking into the Gap, but before I can shop, I've got to stick my eyeball
[01:02:00] in the retinal reader, and then it's going to serve me properly. >> I think they have a 3-meter range on these things. I don't know if these ones do, but the military has a 3-meter range on these. >> It'll get better, and you'll be able to do it at a distance. >> So, yeah, you just have to look in the direction. >> You've got another glass of wine coming. All right, it's going to increase the humor level. Fantastic. By the way, let me just take a second and take advantage of this moment to thank the team who puts on Moonshots — Nick Singh, Danak, and Gian Luca — who do an amazing job every week supporting us. Can we give it up for that team? >> Round of applause. Absolutely unbelievable. [applause] >> Yeah. Um, >> and the infinite patience they have with us. >> I know. I know. Far more than I have for you. [laughter] >> Unclear. >> This is exciting news. On this stage about two years ago, I had Mike Andreg, the CEO of Eon, which is one of your companies. >> I think it was one year ago. >> Was it one year? Man, oh man. >> One year ago. >> Okay. This feels
[01:03:00] >> time compression. >> Yeah. But, um, tell us about what Eon Systems is doing, and what in particular you've achieved here. >> Okay, so I think this ended up being the number one technology story over the weekend, according to the various news feeds that I was seeing. >> Biased news feeds. >> No, just — yeah, of course. Um, right here, over the weekend, at the kickoff for this Abundance Summit, we announced — we meaning Eon Systems, a public benefit corporation — what we call the first multi-behavior brain upload in the world, and this was of a fruit fly. So Eon Systems, which I co-founded, has the goal of ultimately uploading human minds, and non-human minds, to cyberspace. We want to put a human in the cloud as soon as we possibly can. And thank you. So this weekend, for the first time — the announcement went out over the
[01:04:01] weekend, and we announced, for the first time, taking the brain of a fruit fly and putting together a few pieces that were really just sitting around. There was a bit of work from our senior scientist Phil Shu in 2024 looking at partial emulation of a fruit fly brain, and we put that together with a number of other models that were available: a mechatronic simulated model of a fruit fly and some other advances. And for the first time, we closed the sensorimotor arc: taking a fruit fly connectome and embedding it in a virtual world. You can see that in the video that's playing here. Embedding it in a simulated world, I would say this is literally an early upload of a fruit fly. The fruit fly is able to walk around, it's able to scratch itself, and it's able to eat a simulated banana. And at the same
[01:05:00] time, while on the left-hand side of the video you're seeing the embodied experience showing multiple behaviors of the fruit fly, on the right-hand side we're simultaneously modeling every single neuron in the fruit fly brain, and that's driving the entire sensorimotor arc. >> Wow. 50 trillion connections. >> 50 trillion, uh, sorry, 50 million. >> 50 million connections. >> It does not know it's a fruit fly. >> We don't think the fruit fly knows that it's a fruit fly. Not sure. This is an early experiment; I can't emphasize enough how much of an early experiment this is. But it got a bunch of attention. Elon was excited by it. Others found it pretty exciting too. I think history will say that this weekend, the weekend of the Abundance Summit 2026, was the moment when the first model organism had an entire brain uploaded. >> So, what's next? A mouse. >> Yeah.
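As an aside, the scale jump being discussed can be sketched in a couple of lines of Python. The 50 million figure is the corrected fly number above, and 100 trillion is the round human estimate quoted a little later in the conversation; both are rough, order-of-magnitude numbers, not measurements from this project:

```python
import math

# Round figures quoted in the conversation, not precise measurements
FLY_CONNECTIONS = 50e6      # ~50 million synaptic connections (fruit fly)
HUMAN_CONNECTIONS = 100e12  # ~100 trillion synaptic connections (human)

ratio = HUMAN_CONNECTIONS / FLY_CONNECTIONS
print(f"Human/fly ratio: {ratio:,.0f}x")                # 2,000,000x
print(f"Orders of magnitude: {math.log10(ratio):.1f}")  # ~6.3
```

So "orders of magnitude larger," as the panel puts it, means roughly six powers of ten between the fly demo and a human whole-brain emulation.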
[01:06:00] >> Yeah. Let's give it up for this. [applause] >> Well, clearly the next one has to be a lobster. [laughter] >> Isn't that the plot of Accelerando? >> Accelerando. I can't tell you how many people love to write in and say you're mispronouncing Accelerando, that you have to pronounce it the right Italian way, "atche-lerando." [laughter] Okay. So yes, this is the plot point. We are speedrunning every sci-fi trope, everywhere, all at once, with Accelerando being one of those plot points. Lobsters aren't next. Eon wants to go after mice, and it wants to go after humans. And we're going to do this. Part of the reason why we want to do this is that right now, the singularity, which I would argue we're in the middle of, is filled with artificial minds. The trillions of dollars of capex that we're using to tile the earth with compute is available only to artificial minds, to
[01:07:00] LLMs. It's not available to any minds that in any remote way, other than perhaps at the behavioral level, resemble biological human meat minds. And we want to level the playing field so that humanity can take advantage of the same compute advantage that right now is tipped in favor of these artificial minds, so we can put humanity into the cloud as well. >> Amazing. 100 trillion synaptic connections for a human. How much for a mouse? Do you know? >> It's orders of magnitude larger. And there's some quibbling, because it depends on how you measure the number of available weights or weight properties for synapses, and also on how many brain cells end up being significant or not. But it's orders of magnitude larger. This isn't happening anytime soon, just to anchor expectations appropriately. We don't think we're months away from a mouse or a human. But I think the right way to think about it is that, at this point, it's going to be years, not decades, before we
[01:08:01] get to the first mouse and the first human whole-brain emulations. >> Amazing. Let's move to xAI. You know, it's so funny. I've known Gwynne Shotwell for 20 years now, and I'm so used to her reporting on, you know, Falcon and Dragon >> Yeah, rockets >> and rockets [laughter] and not xAI and gigawatt power centers. >> We were both actually backstage going, why would Gwynne be talking about AI? >> Oh, [laughter] yeah. SpaceX, >> right? They own it. I forgot. >> The Dyson swarm makes for strange bedfellows. >> It really does. >> You know what blew my mind on this one? 1.2 gigawatts is about the energy used by the Dallas-Fort Worth metropolitan area. >> So just to read this out: the xAI data center. >> xAI has committed to develop 1.2 gigawatts of power as their supercomputer power source, >> and that will be with every additional data center. So every data center they build, they're building at
[01:09:01] 1.2 gigawatts. So the question is, where are they going to get that from? >> Well, this came up with Eric Schmidt too. I remember we interviewed him last summer at your place, and he said we are going to lose to China if we don't find 100 gigawatts of power. And then on the stage here yesterday, it's like, hey, what do you know, we're tracking to find the hundred gigawatts. All we did is deregulate and put it in the hands of the companies. The companies are incredibly well funded, and they'll find the power, because they care about their data centers actually operating. That's how. >> Well, what I find amazing as well is that this year, in 2026, the US is on target to add 86 gigawatts of new capacity to the grid, but 51% of that is solar. >> To me, the power of the American entrepreneur is like nothing else. It's just mind-boggling to me that a guy like Sam Altman, who has nothing to do with the power industry, is going to say, you know what, I'm going to find the gigawatts. I'm going to build nuclear reactors. I'm going into space. It's incredible. The range of capability, when there's a need,
[01:10:01] of an American entrepreneur is like no other force in the world. Let's get to eVTOLs, flying cars. So, Florida advances a bill to formalize a regulatory flying-car framework. You know, one of the things I'm proud of and excited about here in LA is that the LA Olympics are coming up, >> and there's Archer Aviation. The two major players in eVTOLs in the United States are Joby and Archer. There are others as well, but Archer plans to become operational by 2028 here, moving people around different parts of Los Angeles, because the traffic is going to suck. And we see a movement in Florida as well. >> I'm just glad they didn't say "Florida man advances bill," [laughter] because that would be a problem. >> But I think this is really important. The key word here for me is "framework," because once you start to set up the foundations for this, the whole model and the whole regulatory regime accelerates. And God
[01:11:01] help us, we need this type of stuff yesterday. >> Yeah. Well, >> which I hope even Alex would agree we don't have yesterday. >> I agree, but I also think we're catching up with the future. We're finally getting the flying cars, and I keep a mental bingo card of which sci-fi tropes we have not yet achieved in some fashion. We don't have warp drive. Waiting for that one. >> Yeah, >> we don't have >> teleportation. Star Trek replicators. Time travel may or may not be physically possible. >> The replicator is close. The holodeck is close. We're very close to something. >> We're getting very close to a lot of sci-fi tropes. >> All right. The fun part now is your questions. We're going to do an AMA here with our abundance community. So let's go to the mics. We'll also entertain questions from Zoom. All right, Christian, let's kick it off with you, buddy. >> Thank you so much, Peter. Awesome to be here, guys. I watch you all the time, or I listen to you while I'm running. DB2, awesome, brother, your insights. Emad
[01:12:01] the guest, you're great. Peter, an awesome dream team. And Salim, I'm glad you got to check out that AWG is real, >> or at least an android. >> I was suspicious for a long time. >> Don't believe it for a second. It's just a body for the render. >> So, my question: the way I get involved in this technology is through a capitalist mindset. The word "capital" is really what constricts, and it's been that way for maybe the last two or three hundred years. I keep getting this sensation that capital is getting less and less relevant, and that the idea of scarcity and economics from Econ 101, the management of scarcity of services and needs, is shifting from the capitalist toward the technologist. What kind of timelines are you guys looking at for this? I know it's always a timeline question.
[01:13:00] Nobody has a crystal ball, but is there something you guys are thinking about where we're just going to get squeezed out more and more? >> You know, I'll give you one data point, because this came up on that last podcast we did, where Anthropic was saying they're going to do about a $26 billion run rate, but they're growing 10x year-over-year. And I did the math on the fly. I messed it up, of course, because I wasn't Alex. But if they grew two more years at 10x year-over-year, they'd go from $26 billion to $260 billion to $2.6 trillion, the most revenue in the history of the world. And the PEG ratio implies that that company would be worth a quadrillion dollars. And a quadrillion dollars, when the whole, you know, stock market is 50 trillion. >> Well, we heard Elon say we're going to have hundred-trillion-dollar companies, and I can imagine that within five years. >> Yeah. >> So, three years from now. Yeah, I don't think it's going to be unreasonable. I mean, listen, it's so funny the way, all of a sudden, a trillion here and a trillion there has become the accepted number. >> I want to say something about this
[01:14:02] really key point today, which we've hit over the last couple of years: innovation is not capital-constrained anymore. It used to be that you had an idea, and your constraint was whether you could get funding for that idea. So you had to go out to your investors and the VCs and the banks and whatever, and it was only available in places like Silicon Valley or Austin, where you had a preponderance of capital available. Today we have what we call PDI, permissionless disruptive innovation, where anybody can take on a very disruptive idea like Claudebot. Or take Vitalik Buterin, an 18-year-old kid out of Toronto: he ignores his professors, gets together with a few friends, and boom, you have a multi-hundred-billion-dollar ecosystem that nobody understands. And so you have the opportunity today. It only comes down to mindset, you know. And the reason, Peter, it's so amazing that you run this event and
[01:15:00] put this community together is that the difference between the people in this world and the outside world is night and day, right? And that gap is getting bigger and bigger. All of you have the problem that you go home to your family, your colleagues, whoever, and you cannot explain to them what happened, right? You're like, I can't even process it. You can't bridge that gap. So it only comes down to mindset now, which is the most amazing thing possible, because mindsets are fixable and shiftable. >> So, I had this little side conversation with Eric, which you guys may have picked up, because I've had this conversation about whether we're heading toward a post-scarcity society where money has very little value. So what does have value in the future? And we've talked about this, Alex. It's compute and energy, ultimately. Did you ever read The Zero Marginal Cost Society? >> No, I'm not sure I have. >> By Jeremy Rifkin. >> Huge. Yeah. >> It talks about where we're going. Eventually, everything basically falls down to
[01:16:00] >> the marginal cost of production. >> Marginal cost, which is electricity, raw materials, >> and data, and a lot more. So if you want to build anything, like an electric Ferrari, to use an example, it's the raw cost of it, the cost of extracting the materials, which drops as you get robotic mining. >> Just take 3D printing for a second, right? It's been around for a while. The big, profound breakthrough in 3D printing is not that you can physically build something; it's the fact that complexity becomes free, >> yes, and personalization becomes free. >> Complexity was expensive. The design, the materials, the manufacturing capability for a complex object cost more than for a simple object. But with 3D printing, complexity doesn't matter. It doesn't matter how complex the object is; it just builds it. And as we get to molecular manufacturing, that goes to near zero again. So just those couple of breakthroughs across all of these domains, especially when you add AI as an accelerant to everything, mean
[01:17:01] that we have profound movement forward. Hence, we are in the middle of the singularity. >> The one question I wish I had asked when we were with Elon, when he was talking about how money's going to have much less value, I wanted to say: so, just as you become a trillionaire, money has little value? >> You did ask that, didn't you? >> No, I did, but I was off camera. >> I don't think it's a coincidence. I don't think it's some cosmic irony that Elon is about to become a trillionaire at the same time that some folks, not including myself, are hand-wringing a bit that we're suddenly about to enter some post-scarcity state where money becomes irrelevant. I think this was always going to happen. It was inevitable. And I just want to speak to what I understood the core of the question to be, which is that there's this cliché out there that capital fights labor, and capital usually wins. But this time around, something different might happen. Whereas historically, every time the play has played out where capital and labor get into a fight
[01:18:01] and capital usually wins, this time around the risk is that maybe capital itself isn't immortal. Maybe capital is finally mortal for the first time in human history. And I'm not sure that's the case. I think that would be, on the one hand, in some sense a nightmare scenario. On the other hand, you were talking about how we're entering some sort of post-scarcity state, but arguably the trillions of dollars of capex going into tiling the earth with compute, and soon sun-synchronous orbit, and soon after that maybe the Dyson swarm [laughter], even then, unless the physics of our universe turns out to be radically different from what it looks like right now, I think there will probably always be certain scarce physical resources. It could look like control, >> it may or may not be energy. We'll see. It may
[01:19:01] or may not be the speed of light. We'll see. But to the extent there are any scarce physical resources, and to the extent that there are ever, in the future, multiple actors, I think that, like the laws of thermodynamics, the laws of economics will probably still apply. >> We are still young as a species. Let's go to Akmar on Zoom. Akmar, good to see you. Welcome. >> Good to see you as well. Thank you. Happy to be here. A very quick question to the panelists. We are seeing Sam Altman raising a hundred billion dollars. Yann LeCun just raised a billion dollars today to scale up world models. So we're still talking about scaling languages or scaling physical simulation. I'm curious what the panelists think about human intelligence and reasoning, which goes much beyond just observation and language, and where you see the potential for true artificial intelligence evolving into superintelligent systems.
[01:20:00] >> Thank you. >> Did you understand Akmar's question? >> It sounded a little bit like the stochastic parrot question, which is: will we be able to generate new knowledge from these systems? Having had some conversations with Akmar, I think he's talking about symbolic AI, and why we are not investing in symbolic AI. >> You think this is the neurosymbolic question? >> That's what I think it was. >> Okay. Well, I'll offer my two cents. I'm sure you all have views as well. I think it's a false distinction. If this is the neurosymbolic question, like why are we investing so much attention in LLMs and not in good old-fashioned AI, symbolic, discrete AI, it's a total false distinction. We tokenize everything. I had an interesting discussion at Davos this year with Peter Dannenburgger from DeepMind, where we found ourselves debating whether tokenization is a bit of a crime, a form of violence against knowledge, whether discretization in general is doing harm. >> I think we need to bring you a couple of tequila shots here.
[01:21:01] >> Let's go to Mark. [laughter] >> Mark, go ahead, please. >> Yeah. Earlier today, I challenged Dara from Uber to invest in the Abundance XPRIZE, as an investor and a competitor, to deliver housing, food, energy, and connectivity for $250 a month. We're investing $2 billion a day in compute and building data centers, a billion dollars a day in war, and I'm wondering what it's going to take to invest in people. And so I want to put a larger challenge out today. I'm going to commit 1% of my wealth on an annual basis into a wealth fund, a small-scale pod of 44 people: 38 needs-based, seven or eight that are contributors. And it's going to distribute 5% per year: 4% goes out as cash, 1% goes to an expansion pool.
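The distribution mechanics just described can be sketched quickly. The 5% / 4% / 1% split and the 38 needs-based recipients are the figures quoted; the fund corpus below is a purely hypothetical placeholder, since no size was stated:

```python
# Hypothetical illustration of the fund split described above.
fund_value = 10_000_000                  # assumed corpus, for illustration only

annual_distribution = 0.05 * fund_value  # 5% distributed per year
cash_pool = 0.04 * fund_value            # 4% paid out as cash
expansion_pool = 0.01 * fund_value       # 1% retained in an expansion pool

needs_based = 38                         # needs-based members, as quoted
per_person = cash_pool / needs_based
print(f"cash pool ${cash_pool:,.0f}, expansion ${expansion_pool:,.0f}, "
      f"~${per_person:,.0f} per needs-based member per year")
```

On the assumed $10M corpus, that works out to roughly $10,500 per needs-based member per year; the real figure scales linearly with whatever the actual corpus is.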
[01:22:00] You can read about it at markpackdonovan.com. And I'm challenging others to invest today, not tomorrow, to mitigate this rough period. It doesn't have to be as rough if we put a fraction of what we're putting into compute into people. We did that in Denver with the Denver Basic Income Project, where I leveraged $500,000 up to $10.8 million for people experiencing homelessness. And when you invest in people, it gives them hope. We need to do it today. >> Yeah, Mark, I could not agree more. The challenge is that human nature is very egocentric and very self-centered. In other words, people put money where it's either meeting their immediate need or where it's going to give them more money in the long term. And you have to understand, if you look at philanthropy, which by its definition means friend of man, it's a very different pocket than the for-profit one. You know, I see this all the time, because I'm raising money for my
[01:23:00] companies and raising money for my nonprofits. And the ratio, if you think about it, is somewhere between 100-to-1 and 1,000-to-1: for every dollar I donate, I'm willing to invest somewhere between $100 and $1,000. And that's what's out there right now. And it's a challenge. You know, we are driven by fear, curiosity, and greed. I would posit those are the three major human drivers. Love, you can add that as a potential fourth. Interestingly enough, you can measure the ratio of fear to curiosity: it's the ratio of the defense budget to the science budget, right? And greed is ratioed in there by the entire investment community. >> Yeah, there's something very important in the work you're doing with that XPRIZE, right? What we
[01:24:00] found with XPRIZE is that when you position a prize and launch it, it typically gets won within six to seven years. Okay? And it's a 10x drop from where we are today, about $2,500 a month, to what you're talking about, $250 a month to pay for everything. If we imagine that gets done in the next six to seven years, it changes the equation globally, and it forces everybody to go, oh my god, that's possible. And when we get to that point, it'll completely change the game, especially as we get closer and can publicize the outcomes, etc. So this era of greed and of ignoring the fundamental problems will literally disappear and evaporate in the next two to three years as we keep working that prize and getting the media word out there. So this is incredibly powerful and important. >> Peter and I, when we wrote this last book, wrote a section in there called "technological socialism," >> right? >> Government socialism fails because centralized allocation
[01:25:00] of assets is too inefficient and invariably leads to corruption. But if you think about Dara and the sharing of cars across a large group of people, it's actually a socialist application. When an algorithm hyper-efficiently matches demand and supply, you get all the benefits of the sharing economy without the downsides, without the corruption, without the inefficiency. >> So we have all sorts of capabilities with algorithms and AI now to deliver much of what you're talking about in a hyper-efficient way. We just have to propagate those, and that's going to start to happen now. >> Yeah. You know, I write about this in my book, The Last Economy, and I've got a paper coming out soon where I look at the new monetary flows as agents basically crowd out the private sector. My view is this: everyone ultimately needs to have universal basic AI, or Claudes, or whatever, that allows us to reach everyone. Everyone needs an AI that grows with them. And then money needs to come not from banks but for being human. >> That's the only way the math works. It doesn't work from taxation. It doesn't work from anything else. That basic level of money coming into being, not
[01:26:00] from deposits at banks but for being human, that then the AIs will buy from us, and that enables all of this, with the AI that everyone has. >> Professor Brown. >> So, we had half a day of really interesting talks whose subtext was massive job loss, and then we had another half a day of talks about massive labor scarcity, which is why we need all these robots. So, aside from the temporary displacements, which we know are going to happen, which is it? >> Oh, it's clearly a massive trough, massive social unrest, and then a rebound in 2028. And it was actually interesting to hear Eric backstage come up with basically the same timeline. It's almost like the Industrial Revolution all over again, but instead of over 20, 30, 40 years, it's over two, three, four years. And so a huge amount of retooling needs to happen. The way we do taxation and government needs to get restructured. AI is just going to happen way too quickly for all those things to react. But then a massive amount of
[01:27:02] unrest, and then 2028, hopefully. >> I have the counterpoint. I don't think we're going to see massive job loss, because of what I think is going to happen. I'm writing a paper right now called "The Organizational Singularity," because as agents take over all execution, even strategy inside companies essentially dissolves into the work of AI. So what do you do? The calculations we've done so far indicate that if you take a typical company and automate everything with AI, you'll end up with about 25% of the same number of people doing oversight, managing dashboards, doing exception handling, and owning the purpose of the organization. Okay. But you end up creating five times more companies, because you can, and therefore employment stays exactly where it was. And this is what we've seen consistently throughout history: we have a disruption, but all sorts of other sectors take up the slack, and we don't
[01:28:01] end up with radical unemployment. So I tend to be much more optimistic. >> Call it the wine, call it whatever, but I tend to be much more optimistic. >> All right. I'm going to move this forward because it's past my bedtime. We go to Brad, and then we go to Pete, and we're going to wrap it there. Brad, please. >> Wait, after we finish, I do want some commentary from the group. >> We will. I'll take care of that. >> Brad. >> Salim, I'm going to give you an assist here, and maybe this is a topic for your talk late tomorrow night, but maybe the mold book is an example, in this age of artificial intelligence, of the rise of the value of ingenuity and creativity. And maybe what Meta acquired was not strategic, and we're all overthinking it; they just liked the team. They thought they were creative, that they had some sort of magic, and they wanted to capture that magic inside their company, and that's what was acquired. So I just want to capture your
[01:29:01] thoughts, these great minds up on the stage, on the rise of ingenuity and creativity and the value of that. >> Totally. I think we're way overthinking this. Like, I know that I've got 1,100 people, and I know them firsthand, and many of them I genuinely love. Lots of them have been in the same roles for 10 or 15 years. They're great at it. They've perfected it. And then the AI just comes along one day, and it can do it. And there's huge pressure on the management team for higher margins, higher profits. So what's going to happen is obvious. The valuations of the companies are going to go through the roof; to the extent people are shareholders, they'll make a lot more money, but their W-2 paycheck is dead. It's going away. And it's going to create a huge amount of disruption. Some subset of people are shareholders. All my people are shareholders, so they'll be okay. Lots of other people are not shareholders. All of Dara's drivers are not shareholders, as far as I know. So they're in deep trouble. The idea that somehow they're going to become creators overnight is
[01:30:01] ludicrous. The people who are creative, they're going to do incredibly well. Our kids, and most kids who are not saddled with a career, are going to do incredibly well. But the transition is inevitable, and it's happening imminently. >> All right, Pete, close us out. >> I want one statement first. What we found with the exponential organizations model is that survival and success depend on adaptability, not scalability and efficiency. So you just keep that vector going. The people who are the most adaptable today are going to survive the most. >> Amen. >> Throw your kids into the woods and see if they survive. [laughter] >> No, I didn't say that. >> Alex, I said this to you earlier today: I love your analogy of tiling the planet with compute, because my answer to the power problem, being a data-center design-builder, is that finding 1,200 megawatts of contiguous property is getting harder and harder. So my answer is: that's only 120 ten-megawatt data centers, and you put them in an area, and we tile the areas to
[01:31:02] be able to do that. And Emad, I think it matches perfectly with your idea of national champions, because what you're trying to do for the protocol stack with decentralization and sovereignty, I want to do at the physical layer. I want to build 20,000 data centers across the country at 10 megawatts each, so that I'm less than one millisecond from any place in the country, the high-school-football cities of the world, if you will. And to me, that solves it on both sides: at the protocol layer and from the data-center distribution standpoint. And I think that's how we can actually deliver the power, because we don't have a power production problem in this country; we have a power transmission and storage problem. And, you know, I think every governor in the country should hear exactly what you just said and jump on it instantly. >> And Alex is incredibly frustrated with the meetings we've had with government. [laughter] >> Well, look, if you're right, and I hope you are, and I think you probably are, then we need lots and lots of regional data centers that have
[01:32:01] to be in every single state. And that would be the best thing that could ever happen for this job dislocation. So if that theory is right, we need to get on it right away and create those projects, like, now. >> I'm ready. >> All right, let's give it up for Alex Wissner-Gross, Dave Blundin, Salim Ismail, and Emad Mostaque. [cheering] [music] [laughter]