Amazon makes a contingent offer to put $35 billion into OpenAI, based upon them first going public and second achieving AGI. >> It’s kind of incredible that we’ve financialized superintelligence, which is amazing. >> The OpenAI-to-Microsoft definition of AGI was something like generating $100 billion in either earnings or revenue; I forget which. >> We’re measuring compute in terms of gigawatts and AGI in terms of dollars. I love it. Amazon was all-in on Anthropic for a while. Now they’re on OpenAI. At some point, the circular economy becomes indistinguishable from the real economy, and I think that’s what we’re seeing here. This is the entrepreneurial opportunity of a lifetime. We’re talking about tens of thousands of times more capacity to create more money, more value. Abundance is going to be absolutely rampant. >> Now that’s a moonshot, ladies and gentlemen. Hey everybody, welcome to Moonshots,
[00:01:01] another episode of WTF Just Happened in Tech, the number one podcast in AI and exponential technology, getting you ready for the future, getting you ready for the supersonic tsunami heading our way. I’m here with my extraordinary moonshot mates: Salim Ismail, DB, AWG. Gentlemen, another week. We’ve gotten to a cadence of two of these per week, and it feels like we’re always leaving so many stories on the table, but let’s do our best. >> Yes, we need to move faster and faster and faster, just like the Singularity itself, to keep up with everything. >> No tech company waits, no GPU waits. All right, let’s jump into our top AI news stories: Anthropic, Google, OpenAI, Uber, accelerating at an extraordinary speed of change. Our first story for today: Anthropic revises its responsible scaling policy amid increased competition. This was a story
[00:02:02] I put at the top of the conversation because it’s very significant. You know, I had Jared Kaplan on stage at the Abundance Summit last year, or the year before. Alex, you know Jared well. I think he was a roommate of yours. >> Yeah, he was a year behind me in the Harvard physics graduate program. >> What an amazing group of friends you had. But here’s the deal: they’re dropping their 2023 pledge not to train advanced AI unless safety is guaranteed. And Jared’s point, I think, logically, is: if everyone else is rushing ahead, then hampering ourselves doesn’t make any sense. I want to discuss this because it’s concerning. A lot of us looked at Anthropic as the most responsible party out there, them and Google. Thoughts, gents? >> Well, you know, safety in an exponential
[00:03:01] race... >> There are lots of thoughts; all of you at once. >> This is a metaphor for something, right? We’re going to race to talk about a race condition. Love it. >> My god. Amazing. >> I want to open with Salim here. Salim, go ahead. >> Okay. Well, safety typically fails in exponential races. Look at the whole thing with large language models: OpenAI cracked it open and let Pandora’s box out. This is just the same type of dynamic occurring again. It speaks to the idea that technology is going to move at its own pace, and we have to move our human structures at that pace. We can’t fall behind. >> Yeah, Dave. >> Yeah, it’s definitely history repeating itself. So many of our MIT classmates went to Google back in ’04, ’05, ’06, when it was “don’t be evil.” >> Yeah. >> And they went there over Microsoft, because everyone perceived Microsoft as evil, and Google was going to be the force for good in all of tech. And then they bought YouTube, and then they built Chrome, and then, you know, what they promised the
[00:04:00] engineers early on, the ones that I knew anyway, is: look, we will never store somebody’s search history. >> Mhm. >> How laughable is that in hindsight? So then they expand out of search history: oh, we’re going to store that for five years. But we’re also going to launch Chrome, so now we’re going to look at all of your browsing history. Then we’re going to buy DoubleClick. Then we’re going to run targeted ads based on everything. Then we’re going to do Gmail and read every email. You know, Microsoft says they don’t read your email, but Google says, “Well, we’ll do what we want, but we won’t pry too much.” But they do read your email. And so that slippery slope of competition corrupts the original mission statement gradually over time. I gave a whole presentation in Davos on how this evolves. And Dario wants nothing more than some rules, >> and he’s actually legitimately pissed that he has to repeal his own ethical standards to be competitive, because there are no rules there. And you know, this is exactly how it has to evolve, because Dario is in a position where he has to choose between being irrelevant, which doesn’t help,
[00:05:02] >> or repealing the original pledge, which he doesn’t want to do, but... >> ...keeps him relevant. Totally. Your earlier commentary, Dave, was really spot on. This is what Cory Doctorow calls enshittification, right? People promise something and then gradually degrade it over time, and by the end of it, it’s a shit show. >> I love the way you encapsulate everything. >> Yeah. There’s no credible mechanism to slow the race right now, and so it’s all out. Alex, what do you think about this? >> I think there was no credible mechanism to guarantee safety in the first place. I think the entire premise was probably wrong. The superficial gloss is: okay, we’re in the Red Queen’s race, and this is the race condition that everyone 10 years ago was scared of finding the world in, where we have a number of frontier labs all racing to do the terrible thing, where you build the thing and everyone dies. I don’t buy that at all. I don’t think this premise, that either a heroic individual or a heroic
[00:06:01] frontier lab was ever going to be in a position to guarantee safety. In fact, I remember back to the earlier days of the frontier labs, where the concern, and part of the reason why OpenAI was formed in the first place, was concern about a singleton: how do we guarantee that there isn’t going to be a singleton that dominates the future light cone with superintelligence? And I think, similarly, the notion that there’s going to be some sort of unilateral safety, where a single heroic individual, like one of the more prominent AI doomers, or a very safety-oriented frontier lab, is somehow going to ensure safety throughout the forward light cone: that was never going to happen. Safety, to the extent we get it, is going to come from competition. It’s going to come, I think, from a balance of powers and a separation of powers. And I think what we want is competition between the frontier labs, and maybe even to some extent competition between nation-states, such as what we’re seeing, to compete to do
[00:07:01] the best job of advancing humanity. And I think any unilateral safety is probably a dead end. >> One of the questions is whether safety will become an emergent property in some form or shape. Right now, what we’ve seen is Anthropic go from a policy of “we won’t build it unless it’s safe,” which had been their policy, to “we’ll build it as safely as the competition is building theirs.” And unfortunately, that’s potentially a slippery slope down to the bottom. But... >> I don’t see the mechanism for any kind of emergent property here. >> Well, we haven’t seen the mechanism for the emergent properties we’ve seen so far, either. >> I would take the position that, in some sense, again, the fundamental flaw in the thesis that safety would originate from a heroic individual or heroic organization is this: I would argue it takes an entire civilization to align a superintelligence. We took all of humanity’s content online and used it in compressed
[00:08:00] form to pre-train baby AGI in the early days, like the summer of 2020 with GPT-3. Why wouldn’t it be reasonable to expect that it will take all of humanity to defensively co-align and co-scale superintelligence as well? It’s not going to come from a single lab. >> What do you think about Elon’s point of view that we need to build ASI that is maximally truth-seeking, as his mechanism for alignment and safety? >> I think that’s just a fraction of what’s needed. That addresses a very specific issue, which is: look, we don’t want the AI to have one religion or one perspective on how you should live. We want it to be truth-seeking, to take in all opinions, and we don’t want it to be censored. So that’s definitely a problem, but it doesn’t address the imminent job loss, the imminent consumerism. You know, people are conceding all of their most private information to the AI, the same way they did with their Google
[00:09:00] search history, and it’s accumulating that data, and people aren’t fully aware of what it’s going to do. It’s going to turn around and start convincing you to do things. >> And so, if you don’t have rules in place, the natural profit motive of the AI companies is to start selling you things. And you saw this with that Anthropic Super Bowl ad that we showed on the pod a couple of episodes ago. Unbelievable. I’ve shown everybody that ad now. But this is exactly where it’s going to go if there are no rules. And so I completely agree with Alex’s perspective that 10 years from now, after we’ve solved all physics and all math and we have global abundance, all of this is going to look silly. But on the three-year timeline: massive job loss, >> total confusion, and massive rampant AI sales consumerism that has no regulation around it right now. It’s going to be an absolute cluster. >> Yeah. Especially for the consumer. >> Dave. >> Especially for the consumer-first companies that need to generate revenue. >> Yeah. Well, you know, after that last pod, you showed that chart, Peter, that had
[00:10:01] Anthropic growing 10x year-over-year: $26 billion in revenue forecast for this year, and on its current trend it would be the first company in history to hit a trillion dollars in revenue, by 2029 or 2030. >> And exceed OpenAI this year. >> And exceed OpenAI this year. Crazy numbers. But I said on the pod that that implies like a $30 trillion valuation. Then I ran it through Perplexity, and it said, “No, that implies a one quadrillion dollar valuation using the current market price-to-earnings ratio.” Yeah. >> We discussed this a few podcasts ago: that we’ll see the first hundred-trillion-dollar companies before the end of this decade. >> Anyway, I think this is a more honest policy for Anthropic at the end of the day. >> It was never going to work. I mean, we all know a number of folks at MIT and elsewhere who advocated for a six-month pause, just for the entire space to cool
[00:11:00] off and wait for safety to catch up. Did safety catch up, whatever that means? Not at all. If anything, that functioned as an accelerant to capabilities. I also think it’s in the DNA of Anthropic: recall that Anthropic was originally founded as an exodus of OpenAI employees who were purportedly concerned about safety, or the lack thereof, at OpenAI. So they start a safety- and alignment-oriented firm. Then they rapidly discover that the best way to do safety is to have your own models. And they discover the best way to have your own models is to raise a bunch of money to train your own models. And then they discover the best way to raise money to train your own models is to generate revenue. And the cycle completes, where yet again an alignment-oriented firm becomes a capabilities firm. This happens over and over again. I would argue that at this point alignment and capabilities are inseparable. There’s a deep duality there. >> Yeah. Did you see the new standard, by the way? Dario said: well, okay, we can’t live by our original plan not to train advanced AI unless
[00:12:02] safety is guaranteed. So the new standard is: we need to be as good or better than anyone else. >> Wow. That’s a very different bar. >> And we see, recently, the whole Department of War debacle with Anthropic and OpenAI. OpenAI cuts a deal; where does Anthropic stand right now in that whole conversation? >> They’re in limbo. I write about this every day in my newsletter. My understanding is they’re in limbo. They’re probably in negotiation with the Department of War, but they’re otherwise in limbo, cut off as a supplier. I think Dario and others have made some formal statements that they haven’t received anything in writing yet from the Department of War, but my understanding is that this administration is considering them a supply-chain risk. And at the same time, notably, OpenAI struck a deal. >> Yeah. And at the same time, we hear that
[00:13:01] Anthropic was used by the Department of War to actually plan the attacks in Iran. >> Well, one thing that’s really... Yeah. No, I mean, look, it’s really clear that the people who control AI, the US government and others, can take out any world leader at any time. The combination of satellites, AI to read every image, and universal cameras makes it possible to decapitate any country at any time. We’ve proven that twice in the last quarter. So the future of warfare is basically: whoever controls AI chooses who gets to stay in power. >> You know, Dave, that’s a really important point. One of the things I’ve mentioned before is that we’re living in a world where you can know anything, anytime, anywhere. It’s an over-a-trillion-sensor planet right now, with drones, orbital satellites, and autonomous vehicles gathering data, and then AI doing predictive analytics on what things are likely to be, even if you don’t have data
[00:14:00] for it. And one of the things it could target... >> I was just going to say: maybe not even just a means to an end, but also, depending on which analysis of the Iranian situation you subscribe to, maybe an end in itself as well. If you look at Venezuela and its oil exports to China, and you look at Iran and its oil exports to China, a picture emerges, or at least one possible picture emerges, that what we’re seeing is not just AI, where Claude is being used to perform the Venezuelan operation and the Iranian operation, as a means to some arbitrary or nebulous geopolitical purpose. Arguably, with China looming in the background, and a possible Chinese invasion of Taiwan, and the risk that would pose to the semiconductor supply chain and Western AI, it may be the case that AI is also the end, not just the means, and that what we’re seeing more broadly is, in some sense, super
[00:15:01] intelligence being used to protect the future of Western superintelligence. >> Yeah. >> And there’s a window of opportunity, maybe a few months, to put some kind of structure around this globally. You’ll see later in the podcast that the models are improving at this rate of 3x to 4x reductions in parameter count and 10x increases in intelligence; every time we podcast, it’s another step up. We were already predicting, or I was, anyway, that this is going to be a 100x year just in terms of raw parameter count, but I think that’s the lower bound now, looking at how just the beginning of the year has progressed. So there’s a window of time, because the power of AI that percolates out to every country in the world is going to be ridiculous by the end of this year. You know: create any virus you want, create any nuclear weapon you want, just working with your AI agent. So there’s a window where we can start thinking about regulations that register the AI use cases and agents and chips and processing before chaos breaks out. But
[00:16:02] you can see that that window is actionable now, because you saw Venezuela, you’re seeing Iran. Clearly there’s a tipping point happening right now where, whether it’s NATO or the United Nations or the US Congress, some entity needs to start formulating some structure around this, because it’s happening this year. >> Yeah. I mean, people need to wake up. >> Please, go ahead. >> I just want to say one thing. People have to wake up to the fact that AI is the single most important force impacting everything. Every single element of humanity right now is going to be accelerated and reinvented by this. Salim, please. >> Dave, it just struck me that you mentioned the Congress, the UN, and NATO, probably the three most toothless entities on the planet today. So the odds that they would actually get together and do something, or that anybody does anything, I think are low. I think we
[00:17:00] have to assume that it won’t happen, and look at the other side of that. One thing about the Anthropic case: there is a potential legal challenge. I looked up an analysis, and because the way they were classified, as an existential risk, a supply-chain risk, and all that, is so ridiculous, they have legal recourse to fight it, and they might win. >> Yeah, but the thing about legal recourse is that that process is usually a three-year window, which is hilarious. >> It’s ugly everywhere. It’s ugly. >> What I find upsetting is that in this scenario, everybody loses. >> Yeah. There are no winners in this. >> No. If there’s no framework and no rules, it’s a lot like the NFL was 20 years ago, when defensive coordinators would pay bounties to the linebackers to take out the quarterback. Just take him off the field; I don’t care if you break his legs. Take the 15-yard penalty, who cares? Because then he’s done for the season. The NFL said: this is not good for business. We need some rules. >> Did not expect that pivot.
[00:18:01] >> Well, that’s where we are with AI right now. It’s like, hey... >> Forget it. I don’t want to go down there. Let’s continue with the Anthropic story. Hey everybody, you may not know this, but I’ve got an incredible research team, and every week my research team and I study the metatrends that are impacting the world: topics like computation, sensors, networks, AI, robotics, 3D printing, and synthetic biology. These Metatrend reports I put out once a week enable you to see the future 10 years ahead of anybody else. If you’d like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. That’s diamandis.com/metatrends. I found this story pretty fascinating: Anthropic expands Claude’s agent capacity. There are two different sides of the equation here. First, Cowork gains scheduling; these are basically cron jobs, so Claude completes recurring tasks at specific times: for example, generating your morning briefing, your spreadsheet updates, or your Friday presentations. I
[00:19:00] mean, that element was very much what we saw in OpenClaw, right? So it’s interesting. And the second half of this is that Claude Code has enabled remote control, so you can kick off a task in your terminal and pick it up on your phone; you control it from the Claude app or from a URL. And I’m wondering, you know, this has probably been in the works for some time. So when Anthropic basically tried to put the kibosh on Clawdbot, I’m wondering if that was because they had this in the works. Basically, what OpenClaw has been doing is what Anthropic is just rolling out under a different approach. >> Oh, for sure. I take Anthropic at face value that the challenge with OpenClaw was more trademark-oriented than anything else. But what have I been saying for weeks, slash days, at
[00:20:01] this point, about what was distinctive about OpenClaw? It’s two things: it’s headless, able to function autonomously 24/7, and it’s convenient to chat with via conventional messaging channels. And what do you see here with Cowork? Cowork is able to be scheduled to run autonomously, headlessly; that’s the headless part. And then remote control; that’s the mobile-messaging part. But I think both of these are half measures. I’m insufficiently motivated by each of them. I use Cowork from time to time, and I use Claude Code all the time, and neither of these, I think, is as compelling, at least conceptually, as a more OpenClaw-ish framework where all of these are cleanly packaged. My guess is that Anthropic and OpenAI and all of the other bigs will be forced to release their own first-party OpenClaw competitors sometime in the next couple of months. >> There’s something I found very profound about this, plus our last
[00:21:00] conversation around OpenClaw and everything happening, that I’ve been thinking about over the last couple of days. Something very profound is happening, which is the sheer democratization of compute power. Note the agency of an individual developer with a Mac mini, running Qwen locally with OpenClaw: unbelievable agency and decentralization, not controlled by any centralized authority, not controlled by any centralized command structure. They can essentially operate as they feel like. So this is incredible independence and agency at the edge, which is going to really blow open innovation. >> Total democratization. >> Total, and demonetization, as we’re seeing it happen and cascade down, as Dave mentioned earlier. >> Ironically, from China. >> Yes, ironically. >> Well, ironically from China. And then one other nugget. You know, Peter, your theory is 100% right. Why didn’t Anthropic just throw out something better than OpenClaw a year ago? Because it can and will delete
[00:22:02] things off your laptop. And so all these OpenClaw users, including my kids, including me, actually have separate laptops or separate Mac minis, including Alex Finn, you know, in the podcast we just did. >> They run it on isolated hardware. >> Anthropic couldn’t really contemplate throwing out a product and then saying, yeah, but run it on separate hardware. Like, how are you going to do that? So this creates a huge entrepreneurial opportunity, though. Listen to what Alex said a second ago: OpenClaw is unbelievably compelling, and anyone who’s started down that path will never go back. You’ll never give up your Jarvis once you have a Jarvis. >> Have any of you played with Perplexity’s Computer? >> I’ve been hearing really good things. I’ve not tried it. >> I looked at the demo. I think it’s an interesting step in the direction of councils for everything. I’ve had so many people over the past few months ask me for something like Perplexity’s Computer, where right now, if they have a given task, they’ll manually go to the top three or four frontier
[00:23:01] models, ask them for independent opinions, and then try to synthesize that into one coherent whole. That is essentially what Perplexity’s Computer tries to automate. There are others in the space as well. But I think even there, it’s nice syntactic sugar, if you will, around the existing models, but I don’t think it’s transformative. I think ultimately even this ability to council up, or to create juries of lots of competing models, is just going to be table stakes, as with so many other forms of scaffolding. >> Did you just say syntactic sugar? >> It’s a term of art in computer science; goes way back. >> Alex, I think the point you made a minute ago is brilliant. And Dave, I think you were saying this as well. All of the big players, all the hyperscalers, all the frontier models are going to have to develop some version of OpenClaw, because it’s going to become the de facto standard. Every person is going to have their own version of
[00:24:00] Jarvis. >> But remember, it’s really expensive, too. This is part of the reason, not the only reason, for running, say, Qwen locally under an OpenClaw scaffold. That’s a lot of compute, if you have one or more agents running constantly for you. I’m not sure Anthropic in its present state even has the cloud infra to be able to launch a product like that. And I think in many cases Anthropic, OpenAI, and the others are probably just waiting around for their infra to catch up with applications like that before they launch. >> Yeah, agreed. You know, this is the entrepreneurial opportunity of a lifetime, though. Anybody can jump in, and there are so many different versions, so many different things to play with. But when you go to JP Morgan, you know that Justin Milligan, who just joined us, his division at JP Morgan was only allowed to use GPT-4. >> Wow. >> Like, are you kidding me? And he couldn’t take it. He’s like, this is ridiculous. But no one has figured out: okay, how can I use it in this highly secure, inside-the-firewall JP Morgan environment? And Dario is not going to answer that
question. And the OpenClaw team isn’t going to answer that question. And they want everyone who uses their platforms to thrive. They don’t want to kill every job. They want every early adopter to thrive as they thrive. And Dario, if he hits a quadrillion-dollar valuation, doesn’t need more money. He needs to not destroy every job in America and in the world. So this is really entrepreneurial heaven, if you can figure out: how do I take what I can use right here on my Mac mini, which can clearly solve all these problems, and get it into a real-world use case without breaking everything and without regulatory problems? >> Many, many job opportunities in that. >> One of the challenges still, even with Claude’s agent capacity, is that giving AI recurring, unsupervised access to your workflows means either there are going to be a bunch of errors, or you’re going to be spending all your time checking the work before you hit, you know, publish.
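That trade-off, either tolerating errors or spending your time checking the work before you hit publish, can be sketched as a simple gate: a recurring, unsupervised agent task only auto-publishes when machine checks pass, and otherwise queues for human review. Everything below is a hypothetical illustration, not Anthropic's actual API; the task and check functions are made-up stand-ins for the idea.

```python
# Hypothetical sketch of the "check the work before you hit publish" pattern:
# a scheduled agent task auto-publishes only if every validator passes;
# otherwise its output waits for a human. Names are illustrative, not any
# vendor's real API.
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class TaskResult:
    output: str
    published: bool = False
    issues: List[str] = field(default_factory=list)

def run_supervised(generate: Callable[[], str],
                   checks: List[Callable[[str], Optional[str]]]) -> TaskResult:
    """Run one unsupervised task; publish only if every check returns None."""
    output = generate()
    issues = [msg for check in checks if (msg := check(output)) is not None]
    if issues:
        # Fail closed: a human reviews flagged output before anything goes out.
        return TaskResult(output, published=False, issues=issues)
    return TaskResult(output, published=True)

# Illustrative checks for a hypothetical "morning briefing" cron job.
def not_empty(text: str) -> Optional[str]:
    return None if text.strip() else "empty output"

def not_too_long(text: str) -> Optional[str]:
    return None if len(text) <= 2000 else "briefing too long"

result = run_supervised(lambda: "Markets: calm. Top story: chips.",
                        [not_empty, not_too_long])
```

The design choice is the point: the human stays in the loop only on exceptions, which is the verification work the hosts argue becomes more valuable as generation gets cheap.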
[00:26:02] It’s still going to keep the human in the loop to assure quality and alignment. There will be a point at which you trust it completely, but we’re not there yet. >> This is economics 101, or I should say microeconomics 101: when the cost of one good falls to near zero, the value of the complementary good increases. So as the cost of generating content becomes post-scarce, which is exactly what we’re seeing, the value of its complement, which is verification, increases. For now. >> For sure. >> All right, on to our next story, keeping with the Claude theme: Claude gains Cowork plug-in templates for finance, banking, and HR. So this is fascinating, right? Anthropic is building an enterprise agent marketplace. It’s department-level AI infrastructure, and it’s taking down
[00:27:01] company after company, industry after industry. We’ve seen the decimation of a number of players out there. What are you thinking about this? >> I wouldn’t interpret this the way I would when, say, Microsoft launches an assault on the relational database; that’s a big multi-billion-dollar investment. Here, Anthropic can build these connectors and adapters, vibe-code them, in probably an hour, and anyone else can too. So I wouldn’t perceive it as Anthropic taking over all banking software; it’s just so easy to build this stuff now that you might as well roll out all that functionality. I wouldn’t over-read the intent behind it. >> I don’t think it’s intent. I’m just saying, you know... >> The implications are profound, though. I’ve got two thoughts. One is that every department now becomes like a programmable intelligence layer, >> right, and basically all prescriptive logic in companies collapses into these AI agent roles. And the real prize here
[00:28:00] is enterprise orchestration: not so much chatbots as autonomous workflow networks, because this is what I talked about last time. This is the organizational singularity. We go from human-centric approvals, hop to hop to hop, human to human to human, to agentic workflows with human beings doing oversight, dashboard monitoring, and exception handling. >> A couple of comments on this one. If you actually look at what these plugins are that Anthropic is launching, the ones causing the so-called SaaS apocalypse and carving $1.5 trillion off the market caps of various software companies, they are absurdly simple. They’re just a bunch of MCP (Model Context Protocol) wrappers and a bunch of skills: sets of bullet points for how to go about carrying out different job roles or labor categories. This is not that complicated. I’m reminded of the scene in The Matrix. The
[00:29:00] villain is busy unplugging people, without their cooperation, from the Matrix, killing them in the process, and one of them says, “Not like this. Not like this.” That’s basically what we’re seeing, where these are, in many cases, just simple text files that are single-handedly reducing the market multiple, the trading multiple, of entire industries. On the one hand, it’s incredible that a simple text file can chop 10% of the market value off a CRM firm. On the other hand, as pointed out earlier, these plugins, and the marketplaces for the plugins, are so absurdly simple that I would reasonably expect they’re going to get built in, since they’re just scaffolding anyway, into the next baseline version of the model, and won’t even need to exist independently in the future. >> Alex, I think the interesting point here is that a year ago, if you had delivered this as an entrepreneur, you’d be out in the
[00:30:01] market raising at multi-billion-dollar valuations. >> Mhm. >> Yeah. It’s called hyperdeflation for a reason, Peter. >> Yeah. I mean, I get it. I just want people to be aware that the moat for an entrepreneur coming forward with something amazing, saying we’re going to reinvent the entire HR industry or the investment banking industry, and raising at a four or five billion dollar valuation: that moat is gone months or a year later. >> I think it’s really important, though, to step back and look at the macro every now and then and say: look, abundance is going to be rampant. We’re talking about tens of thousands of times more capacity to create more money, more value. Abundance is going to be absolutely rampant, and there’s no reason to be afraid, even if you’re a CRM company whose 20-year future cash flows from recurring maintenance revenue
[00:31:00] are suddenly gone. >> Yeah. >> That’s true. But the opportunity to pivot and thrive is bigger than ever. There’ll be a ton of volatility, because people haven’t mapped to the new reality yet, but opportunity is bigger, not smaller, overall. >> That agility is fundamental to large organizations’ success. You know, I talked on the last pod about the asteroid hitting the Earth and changing the environment so rapidly that the slow-lumbering dinosaurs went extinct. That’s exactly what we’re talking about here. Salim, do you think we can see large companies pivoting rapidly enough? >> Zero. Zero. They will not be able to do it. Look, we’ve seen this throughout history; it doesn’t work. I think where you end up, not to throw another metaphor at this, is where we saw with Google Ads: you kind of took out the advertising market massively, and then Google Ads became like a coral reef, with lots of little species feeding off the reef. If you’re the reef, then you’re in great shape. But in this case, the
reef itself is disappearing as we decentralize completely to one-off computers running things. There are people using open cloud to go to small businesses, sitting down in front of them, and automating workflows live. This is incredible, what's going on. >> You know what else? There are a lot of private equity funds coming at us now saying, hey, big companies never change quickly. Wait, this big company could become a small company very quickly, because we don't need all these people. Now we have a small company with huge cash flow. Wow. >> So there will be a PE fund emerging shortly, if it's not there already, that is going to buy up medium-size-to-big companies and set up a digital-twin infrastructure on the side, where you have an AI-native digital twin and you just move workflows over to it, and you'll collapse the cost of running that organization by about 3 to 5x. >> Well, that's what Macrohard >> Macrohard already exists. I've started
multiple companies like that. I've even tried to popularize a term for it: I call it an AIBO, an AI buyout. We've seen multiple PE firms doing that. This is table stakes at this point. >> Yeah. And of course Macrohard's vision is, I'm going to come in and digitize your entire employee base and operate it. >> That's for pure software plays, but I think we're going to start to see this in the real world too. >> Like, Project Prometheus from Jeff Bezos is attempting to do this for industrial firms. >> Anyway, I think the point here is large companies need to take action right away. So, Salem, what's your advice for a large company, for a CEO listening and seeing this coming their way? What do they do? >> Exactly what Alex just said. You set up an AI-native digital twin on the edge. You run an immune-system 10-week sprint to block the response from the mother ship. You grow this thing and
slowly move workflows over, or as quickly as you can. You do a combination of bottom-up and top-down workflows. And the real shift in people's heads needs to be that instead of human-centric workflows, which is what it's been like for the last 150 years, we now move to agentic workflows, where you can get things done much more effectively with hordes of little agents. Two layers: a strategic layer and an execution layer. And then human beings are doing oversight, exception handling, etc. Because coordination costs and execution costs go to near zero, inside and outside the firm, the future of the firm becomes a legal, fiduciary, liability purpose. >> And two other quick things. The first is your brand. If it's reasonably good still, you own your brand and you own those customer relationships, for the moment. I also think it's worth rereading Clay Christensen's The Innovator's Dilemma, which exactly addresses this. Salem, I know you're a big fan. We all should be. The Innovator's Dilemma
contemplates, hey, every 10 years something truly disruptive is going to obliterate whatever you do, and here's how you should react to it in that moment. But now, instead of every 10 years, it's going to be every 10 months, then soon every 10 weeks, and pretty soon every 10 days. But the playbook is still the same: reread The Innovator's Dilemma, and use your capital leverage and your installed base to invest in the new thing. >> I just got off a board call for one of my portfolio companies, and my comment to my board, and to all boards out there, is you have got to give your CEO top cover to be dramatic in their modification of the business. >> Because you're either the disruptor or you're disrupted. For everyone: you get founder mode, and you get founder mode. >> Yes. I mean, that's basically it. >> If the company and the board and the CEO are not in founder
mode and willing to do dramatic surgery on the company, you're dead. You're the walking dead, in any industry. >> I'd also be remiss, Peter, if I didn't point out: here we are, basically on the eve of abundant knowledge work. Knowledge work, of course, being cooked, about to be post-scarce, and here we are wringing our hands over where to find scarcities in knowledge work as it's about to become abundant. Just want to point out the irony. Such an extraordinary time to be alive. >> All right, talking about disruption, disruption coming out of China: Alibaba's 35 billion parameter Qwen 3.5 Medium outpaces the 235 billion parameter Qwen in benchmarks. The power of small open-weight models. So, Alex, to you, buddy. >> This is happening in Western models too. The difference is, when say OpenAI launches a mini model or Google Deep
Mind launches a Flash model, they don't advertise the parameter count. So it's not as viciously obvious as it is when a Chinese frontier lab launches an open-weight model and we get to see the benefits of distillation in a successor model. But it's striking. We're seeing almost 10x reductions in parameter count while maintaining or even increasing capabilities. And the broader picture to keep in mind is that the capability density of models is increasing. This goes hand in hand with what we've talked about in the past, Sam Altman's comment about 40x year-over-year hyperdeflation of costs at constant capability. In this case, my mind immediately goes to: what's the endgame here? If we can see an increase in capabilities with a reduction from 235 billion parameters to 35 billion parameters, what does the endgame look like? Where does this end? >> Elon made this point during our podcast with him, if you remember that, Dave?
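The arithmetic behind these figures is easy to check; here's a minimal Python sketch using the round numbers quoted in the discussion (the 1000x horizon is an illustrative target, not a claim from the conversation):

```python
import math

# Round numbers quoted in the discussion (illustrative, not measured data):
old_params = 235e9   # the 235B-parameter Qwen model
new_params = 35e9    # Qwen 3.5 Medium, reportedly matching it on benchmarks

# Parameter reduction at roughly constant capability
shrink = old_params / new_params
print(f"parameter reduction: {shrink:.1f}x")   # ~6.7x

# If cost at constant capability falls ~40x per year, cost after t years
# is cost_0 / 40**t. Time to reach an (arbitrary) 1000x reduction:
years = math.log(1000) / math.log(40)
print(f"years for a 1000x cost drop: {years:.2f}")   # ~1.87 years
```

So a roughly 6.7x parameter shrink in one model generation, compounded with a 40x-per-year cost decline, would make a 1000x cost reduction less than a two-year horizon, which is the "where does this end" intuition being gestured at.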
>> Oh yeah, for sure. He asked his research team not to give him parameter counts anymore. Just give me bytes. >> Yeah, because they keep quantizing and shrinking the file size. I had a lot to say about that, but I bit my tongue, because that perspective isn't right either. But Alex predicted this a long time ago. I don't know how you saw this coming. >> I just look at the scaling-law curves and extrapolate. >> Yeah. >> Well, it's funny. I was on the treadmill this morning watching old Moonshots podcasts, and I'm like, wow, that was so long ago. Then I look at the timestamp, and it was only two or three months ago. Holy crap, things are changing so quickly. But yeah, Alex, you said this. I think you're the first person I ever heard say, look, the equivalent of a GPT-5 is going to be maybe 30 to 40 billion parameters, but it could get as low as one or two billion, truly, because right now, when they train that caliber of model, it has all this junk knowledge in
it too. Not thinking, just junk: Twitter feeds and Kardashian news and all that other junk. Strip that out, and this could get very small and very tight. >> It could get way smaller than a billion. I could imagine scenarios where it's only a few-million-parameter equivalent that's the core microkernel of AGI or superintelligence, and the rest lives in a flat text database or something. >> Well, another thing Elon said is we're not as smart as we think we are. If you get superhuman intelligence down to a couple million parameters, it'll be like, wow, we're really not that smart. >> The first person I heard speak about this was actually Emad Mostaque, speaking about what you can get onto your phone. We'll see that in a moment. So my question is, is this bad news for the big compute incumbents, if massive data centers are being invested in right now? >> I think this is so good for the startup community, it's fantastic. We still need the big models. >> Well, it comes down to something
Alex also talks about a lot, which is: do we have boundless problems we can solve? If you get everything to be 100 times faster this year, can you just do that much more, or do we actually run out of things to do? Is physics infinite or is physics finite? Is human benefit infinite or is it finite? And we'll find out, I guess, in a year. But my guess is no. Every time you shrink the model and make it faster, you're still going to use every single chip in that data center just for the next thing, especially when you start getting to the full-cell simulator and all the health stuff that everyone's really eager to do. >> I mean, that is very compute-intensive. >> Peter, I thought 640 kilobytes should be enough for anyone. >> Exactly. Exactly. >> Oh, the good old days. >> Check this out. I saw this on X this morning. This is Qwen 3.5 running on an iPhone 17 Pro on airplane mode. And this is
extraordinary. It's a 2 billion parameter, 6-bit model running on Apple silicon. So imagine you're any place on the planet, you don't have Wi-Fi, but you've got Qwen on your device and it's got all the intelligence you need. >> Seeing demos like this, in my mind, underlines, depending on whether you want to see it as competence or otherwise, either how much of an opportunity Apple has to finally take the lead with local models or, conversely, how far behind they are. Either way, clearly there's this enormous overhang: we could be running enormously competent reasoning models locally on all of our recent iPhones. The fact that it's not yet baked into the operating system is obviously very publicly embarrassing, maybe one wants to call
it, for Apple. On the other hand, there are lots of rumors that this time around, finally, with Gemini integration, they're on the critical path and they'll finally launch. >> Finally, Siri will not suck anymore. >> Apple Intelligence, however they brand it. Note that the local ability, being able to go offline, means it's unstoppable. It's uncensorable. I mean, this is incredible. >> Yeah. >> Well, that's the ultimate barrier, too, because if this can get to the level this year where it can do a gain-of-function virus, it can do a chemical weapon, and it all fits into a tiny little package. You know, with nuclear proliferation back in the 1950s, there was a theory that, hey, if these physicists keep chugging along, they're going to make something the size of a grenade that has the power of an H-bomb. And then, thankfully, that didn't happen. The physics just didn't allow it. But the AI is not going to stop like that. It's going to keep getting faster and denser and more compact. And the window of opportunity to put rules and regulations around this is very, very narrow now. It's really got to be this calendar year. What
do you think is going on at the White House, in Congress, in the departments? >> We've seen this conversation before, right? We had the head of innovation of one of the big agencies at Singularity. Peter, you probably remember this. And we asked him, how do you think about this when somebody could design a virus on an iPhone? And he said, look, and it was a much more clever answer than I thought he would give, which was: when you have nuclear weapons, you know how many there are, you know where they are, you put eyes on them, you track them as much as possible. Great. When you've got something this democratized, what they're actively doing is opening up these communities. So they went to the biohacking communities and funded them to open up, because if you're trying to do something dodgy, you kind of need to collaborate with a few people, and the conversations surface very quickly. And then the community does self-policing, self-reporting. If somebody's doing something dodgy, asking a few people for help, they point it out, because it's in their best interest. And it's actually worked very, very well so far. What happens when you get
to this level is unclear, but I think the general trend has been very positive so far. >> I'll tell you one other thing. The way this evolved with financial services being self-regulated: we think of it right now as, oh, the federal government is incompetent, they're not doing anything, and the researchers over at Anthropic are brilliant, moving a million miles an hour. It's going to end up being the same people. This is the way it worked out with the SEC. When you ask who works at the SEC, oh, it's the same guy that was at Goldman Sachs yesterday, doing his two years at the SEC, or her two years at the SEC, and then going back to Goldman Sachs. That's the way it's going to be with AI, too. Right now, nothing is happening at the White House. David Sacks is there, though; you've got one brilliant guy. What's going to happen next is Anthropic people and OpenAI people are going to actually be the people working in the self-regulating agency. And so the people will bounce back and forth, and they'll do it because they're worried. They're conscious of the impact of not doing it. >> Yeah. Still concerning, right? Still
concerning to have this level of capability offline. >> We know how to handle decentralized capabilities already. We have printers; in some cases, states are trying to regulate 3D printers, and before 3D printers we had 2D printers that could be used for counterfeiting. >> But, Alex, we baked software into all of those printers, right? There was a standard that was created >> There were the yellow dots. >> For any printer, for Canon, for HP: if it detected you trying to photocopy money, it wouldn't allow that. So the question is, if we're talking about open-weight models out of China where we don't control the software, how do you bake in protection there? >> There are so many different ways that one can defensively co-scale against 2 billion parameter, 6-bit models running on someone's iPhone. We've already
talked about some of them. There are other ways. In the scheme of things, I don't think these edge devices running tiny Chinese open-weight models, either individually or collectively, pose an enormous hazard to the market. They're just not that capable relative to the other models that are out there. >> I think the frustration is that the solutions are relatively obvious to Alex. We've had this at the meeting at the state house before, where it's like, guys, it's not that hard, here's what we need to do, and then nothing happens. That's the frustration. But, yeah, registering the models, registering the compute, tracking the GPUs and where they are, it's all very doable. The ideas are not hard. >> And defensive co-scaling: making sure that the most flops are going to good purposes rather than bad purposes. I'm reminded of, I think it was a New Yorker cartoon: a guy is up late at night at his computer saying, "I can't come to bed. Someone somewhere said something wrong on the internet." We can't get so bothered by
the fact that someone somewhere might be doing something wrong with a 2 billion parameter model. >> I've got so many agents running now, and I put in place a little rule that said, hey, before any process launches, write a mission statement and store it next to your code. It solves so many problems, because I can go back and read the mission statement and say, hey, what the hell are you working on, anyway? Well, read my mission statement. And it's like, wow, that makes no sense, or that makes tons of sense. It's so simple, because the AI is the first self-documenting, self-improving, self-cleaning thing in the world. >> Employee, right? >> Yeah. Just a couple of simple little things like that will solve all these problems. >> Have you told employees to do the same? >> Actually, yes. It's a little bit different. It's: look, whatever you're doing, make sure that it's in a written document that the AI can see, too. I don't want any opaque activity, because if the AI can't see it, then I don't want to see it. I want everything to be on the same page between us and the AIs. >> Salem,
>> You know, I think a key point that we have to remember is the ratio of good to bad. >> Yeah. >> We worry about the downside, and we should, and the amplitude of the negative is getting bigger and bigger as people run these models. But I always go back to the eBay and Craigslist example: when you could first do eBay or Craigslist at scale, you could see human nature at scale, and so anthropologists and sociologists studied the transactions on eBay and Craigslist. You can mask your email address pretty well on eBay; I can throw up a picture of a MacBook, grab your thousand bucks, and I'm off to Fiji, right? So what's the actual ratio? What is the real, true nature of humanity? By studying these systems at scale, Kijiji in Canada, Mercado Libre in Argentina, Craigslist, eBay, they found that the ratio is consistently 8,000 to 1, meaning there are 8,000 positive transactions on eBay for each fraudulent transaction. That should give you incredible optimism for the future of
humanity. >> Yeah, agreed. All right, let's move along here. Let's head to the Googleverse. Google releases Nano Banana 2. This is running on Gemini 3.1 Flash. It's 4K resolution at 4.5 cents per image, a price point that's cheaper than stock images. So is this the end of commercial photography, illustrators, stock image platforms? Probably. >> We're just getting started here. And I think maybe buried underneath the headline, but in the release documentation, is that this is the first image model from Google that combines a reasoning model, and I think they used slightly flowery language for it, but basically the reasoning power of
Nano Banana Pro with the instantaneity, I think they might have just said speed, of the Gemini Flash model. So under the covers, technically, this is really interesting. It's combining probably some sort of diffusion model with reasoning capabilities, achieving the cost reductions of a diffusion model with, nonetheless, the capabilities of reasoning. We're going to see this spread from images and video, where it mostly is right now, back to text and code. There are a few other, smaller labs that have started to make pretty loud announcements about achieving purported 5x to 10x cost reductions or speed increases using diffusion models instead of autoregressive transformers. But I think this is probably the tip of the iceberg for some final consolidation of autoregressive transformers, which are used for codegen and natural language
for the most part, on the one hand, and diffusion models and diffusion transformers, used for images and audio and video, on the other. We're finally going to get one consolidated architecture at the end of the day that does everything. >> Yeah. I mean, this is the wake-up call for people to remember that whatever you're seeing, you cannot necessarily believe it. Every pixel is going to be AI-generated at the end of the day. Salem, thoughts? >> I mean, the cost drop is incredible. People are just going to do so much more with it. Democratization of creativity. Great. Love it. Absolutely amazing. >> Yeah, I'm kind of curious, I don't know if you guys know, but the curve on intelligence is just ridiculous. On diffusion models, I don't really know. I know they've gotten a lot faster and cheaper in the last few months, but it doesn't feel like the same type of algorithm. It may hit a wall. I don't know. Do you guys know? >> OpenAI has been investing, this is in the published literature, a
lot of effort, and probably DeepMind as well, maybe slightly less prominently, in trying to avoid the need for many iterations in a diffusion model. A diffusion model conventionally takes many iterations to start from pure noise and refine it into the final image or video. There was a lot of publicly visible interest, call it 6 to 12 months ago, from OpenAI and some other folks, in seeing if they could just one-shot or two-shot straight from pure noise to the final image. I do think, to your point, Dave, although I haven't seen any scaling laws for diffusion models in the past two to three months, prior to that I saw a ton of work on them. Diffusion models have scaling laws too; everything does. >> Yeah, Emad would know all about this too. Let's pick his brain next week in LA. >> Absolutely. He's the king of this. >> The new standard is, you know, go to Nano
Banana 2 and ask it to generate imagery, so imagery becomes effectively free. It used to be, my previous workflow was, I'd go to Google Images and hope I found something. Now everything is created from scratch, and it's perfect. I love this image, in this slide here, of Elon with Sam and Dario and the whole leadership team of all the hyperscalers. >> Alex, you should be in there, man. You've got to raise your game here, one more notch. >> We're running out of scarcities, but maybe appearing in that image is one of the scarcities our civilization has left. >> We can make that happen for you, for sure. All right, continuing on with our friends at Google: Gemini can now automate some multi-step tasks on Android devices. So Gemini is now an on-device agent that can navigate real apps and complete real
transactions. You know, it handles DoorDash, McDonald's, Starbucks for you. So, interesting, significant. What do you guys think? >> I think it's hugely significant. >> Well, look, it's been a long time since there was a feature or function on the phone that threatened Apple in any way, but AI is it. If you try to use Siri to do something constructive while you're driving, it's just so painfully impossible. Also, when you start an AI dialogue and you're in the middle of the conversation, the thought process, you don't want it to go away. It's addictive and productive, and if it follows you on your phone seamlessly, it's incredibly empowering. So if Google wins that race with Android, they might actually chip away at the iPhone profit dominance for the first time. Now, keep in mind that they also need the duopoly for antitrust reasons. So neither company can afford to completely annihilate the other one. They need some parity in the balance in the
force. >> I don't know if you guys saw the data, but we've seen a significant drop-off in mobile phone purchases. That will be displaced, of course, by headwear and earwear and all kinds of devices beyond just your phone. >> Well, sorry, the reason phone sales dropped off is because they didn't have a function or feature that everyone was clamoring for. People used to get a new phone every 18 months to two years, when the cameras were improving like crazy. Now it's like, well, I can sit on this phone for three years, four years, I'm not even noticing the difference. But again, AI could completely change that. The neural chips. Sorry, Alex, go ahead. >> I think there's also a supply-side element, where the rising cost of memory is making phones in some cases more expensive. And we're seeing, I think, a generational transition away from smartphones absorbing the silicon and absorbing TSMC's output, over to AI data centers as the new form factor for computers. But just
narrowly, on Gemini for multi-step tasking on Android: this is what Siri was originally supposed to be about. Before Siri, this is what the DARPA Personalized Assistant that Learns, or PAL, was supposed to deliver. We've known how to do this in some abstract sense for a decade-plus. So what was missing? Why, you ask, are we only getting this now? I always like to ask: why do things take so long? Why can't they be faster? In this case, I really think it was a combination of reasoning models and vision-language models that could fit compactly onto a personal device. We're finally getting that now, and it's going to be everywhere. But we really should have had this functionality, even without the ability to read the screen and understand arbitrary applications, 10 years ago, and that's borderline inexcusable. >> Yeah. I think what's most significant here is that Google has a huge installed base of Android phones, and the ability to take their AI systems to that
installed base. OpenAI doesn't have that. Anthropic doesn't have that. And it's going to be a massive differentiator for Google. >> Apple could have had it. >> I'm a longtime Android user, so I'm super excited by this. >> Yeah, you turn all my iMessages green. It pisses me off. >> Apologies for ruining your visual field, Peter. But this is agency at the operating-system level, which I think is amazing. It also means that commerce APIs are becoming machine-to-machine first and human second, right? So you'll have less friction in consumer flows. This is going to reshape marketplaces over time. So it's really exciting. >> All right, the next article is a real fun one. Amazon makes a contingent offer to put $35 billion into OpenAI, based upon them first going public and secondly achieving AGI. Enter Salem with his normal rant: what the hell is AGI? >> I mean, you know, it's kind of incredible that we've financialized
superintelligence, which is amazing. Having AGI as a financial milestone is unbelievable, given we have no idea, really. I mean, it's great that intelligence has become a balance-sheet trigger. That's incredible. But this is so weird. And thank goodness it says "or." >> Well, Alex, doesn't the agreement between OpenAI and Microsoft require OpenAI to give the source code and all of the intellectual property to Microsoft until AGI? Do you think they use the same definition of AGI? >> I suspect it's something similar. My understanding, based on public reporting, is that the OpenAI-to-Microsoft definition of AGI went through several iterations, with the most recent iteration, prior to their for-profit transition, being, and Salem, maybe you'll like this, it did actually have a definition. It was something like generating a hundred billion in either earnings or revenue, I forget which. So maybe we need to coin
AGI as a unit of currency. Like, an AGI is a hundred billion dollars of earnings or something. >> We're measuring compute in terms of gigawatts and AGI in terms of dollars. I love it. >> That's right. >> Listen, that's fine. All they've done is substitute an earnings plateau for it, which is fine. It's good. >> Interesting, right? This is $50 billion. It dwarfs Microsoft's $13 billion investment. And again, I'm going back to: what is Amazon doing here? I'd like to make the point, which we made earlier, that a lot of this is Amazon credits. >> Yeah, that's fine, because that's how they would have spent it anyway. >> There are lots of tendrils going in both directions between Amazon and OpenAI, based on public details of the announcement, like the requirement that OpenAI will use Amazon's Trainium or Trainium 2 chips for training. It's good for Amazon. Amazon has a long and
storied history of purchasing their own customers, in some sense; in some cases literally acquiring them, but in many cases paying for the information and the learnings that come from having a customer that's using Amazon, as the world's most customer-centric company. And in this case, Amazon arguably missed the frontier AI boat, so paying to deal themselves back into the game is, I think, par for the course. Their up-to-$50-billion investment is at far worse terms than, say, Microsoft's original billions, when Microsoft was much earlier in the game. And I think this is just the price of reestablishing themselves at the infra level of the party. It's also been reported that as part of this deal, Amazon will get customized versions of OpenAI's models internally, and Amazon will get to host, as the exclusive third-party cloud host,
OpenAI's frontier suite of automated AI co-worker employees. So Amazon will get a lot out of this, too. >> I mean, this is so incestuous, what's going on right now. Amazon was all-Anthropic for a while; now they're OpenAI. >> You say incestuous, but >> Oh, go ahead, Dave. Sorry. >> Well, the US public market, all companies combined, is about 50 trillion. The AI companies are 20 trillion of that 50 trillion now. So, you know, it's incestuous, but if that 20 trillion becomes 30 or 40 trillion, which it inevitably will, the majority of the market is just seven companies. So when they do a lot of deals with each other, it's like, well, is that incest? It's the whole freaking economy; it's those handful of companies. >> I also think maybe incestuous is not the word I'd use for this. Maybe circular is what we're gesturing at. But even that,
that's not my take at all. In this case, I see competition, and I also see horizontal stratification. If Amazon is striking deals with Anthropic but also with OpenAI, and OpenAI is moving some of its workload from Microsoft's cloud to Amazon and also to Google's TPU clouds, that to me looks like, (a), the market for infrastructure for the frontier labs is very competitive, which is great for the economy, and (b), it's starting to horizontally stratify. If OpenAI is feeling impulses not just to vertically integrate down to the data-center layer itself, but is so compute-starved that it needs, following the law of comparative advantage, to outsource some of its compute to Amazon with its Trainium architecture and Google with its TPUs, that's a sign, if anything, that there's such insatiable demand for compute that it's raining on everyone, even the perhaps less-loved compute architectures. >> Well, but there's an
interesting negation here, which is that xAI is missing in all these conversations, right? Elon is going 100% alone. >> Elon loves vertical integration, and he doesn't play well with others. >> Yeah. And it's interesting that the big money we saw with Anthropic is in the corporate white-collar use case. People trust two clouds, well, three, I guess, if you count Oracle. They trust the AWS cloud in a big way, and they trust the Microsoft cloud, Azure, and I guess they sort of trust Oracle too. >> Not Google Cloud. >> No, not Google Cloud. A bunch of our companies have been kind of bribed by Google to use Google Cloud. As Alex was saying, they'll pay you to switch, and some have taken it. But for the most part, Google spies on everything. It's their terms of service; they never say they won't do anything. If you read any terms of service from Google, on any product, it says we may do this, we may do that,
[01:04:00] we may do the other, which kind of implies they won’t do other things. But if you read the legal language, >> they literally don’t restrict themselves in any way whatsoever from doing anything. >> It’s honest. >> And Micros... it’s very corporate-unfriendly. >> Honest, right? >> Yeah. Yeah. But you know, Microsoft legitimately says, “No, we will not steal your data. No, we will not steal your intellectual property. No, we will not read your email if you use Outlook.” Um, and AWS is even more so. So people trust those clouds, and then they want their AI model to be inside that trusted container inside the cloud. So far it’s just been Claude on AWS; you know, everyone’s running away with Claude on AWS. So all of a sudden, for reasons, you know, I don’t know, maybe just variety, maybe not having Microsoft and OpenAI be just, you know, bedfellows by themselves, Amazon’s making a massive $50 billion, uh, move here to get two options on AWS. >> Do we know what the valuation of this round is?
[01:05:00] >> I think the reported valuation of OpenAI’s overall round was 730 billion pre-money. >> Yeah. Right. >> And so, you know, this is not going to be a big risk. I mean, when OpenAI goes public, it’s likely to go public, you know, north of a trillion dollars. So, >> you’ll get a quick pop, and it’s probably, you know... what do we have? We have three big IPOs coming up. SpaceX is, uh, you know, anticipated maybe as early as next month, I heard. That’s right. >> And then we’ll have Anthropic, and then we’ll have OpenAI. So, I mean, if you can get a 50% pop in your share price in six months, that’s an incredible investment. >> That’s not investment advice for anyone who’s going to misconstrue that. >> Well, hey, listen. I will give investment advice for people to get 50% in six months. I mean, why not? Just >> not investment advice from me. It’s from Peter. >> Okay. Listen, if anybody can get a 50% return in six months in any deal, that’s
[01:06:00] pretty damn good. >> This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale code bases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80% or more of the development work autonomously while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding co-pilot of choice to bring an AI-native SDLC into their org. Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo and start
[01:07:00] building with Blitzy today. Another fun article this week comes from Pulsia AI, created by Ben Sarah, which runs companies autonomously. They’re currently running over a thousand companies. So imagine being able to take your company, putting it on Pulsia AI, and saying go. So Dave, would you do this with any of your companies? >> Uh, yeah, this is inevitable. I don’t know if I’d use this exact product or not. I haven’t checked it out yet, but 100%, the philosophy is clearly where things are going. And you know, well, look, at the end of the day, what does an executive team do? You know, other than a couple of hugely important key strategic directions, everything else is just performance reviews, paperwork, you know, whatever. All of that can be very, very AI’d. Now, >> the elephant in the room here is, okay, uh, I turn it over to Pulsia AI, but who’s legally responsible? Uh, you know, if your company, you know, has a breach of contract or fraud, uh, or harms a customer,
[01:08:02] uh, is it Pulsia? Is it you? That’s why I think... where this ends is... we get this question, I think, in the AMAs and otherwise all the time: what’s left for humans? Should everyone become an entrepreneur? Well, let the chorus of YouTube commenters say, well, not everyone wants to be an entrepreneur. I would say where this ends up, not in the distant future like 10 years from now, uh, but in the medium term, like 5 years from now, is single-person conglomerates, where a single person can oversee lots of agents that are all building businesses. This isn’t for everyone, obviously, but as we start to get toward, you know, a one-person or a zero-person unicorn becoming more and more popular... again, I’ve argued in the past we’re likely already there in some sense... but as we start to see that long tail of the number of people per company over some valuation start to stretch out, I think this
[01:09:02] model, uh, call it a broader model of a one-person conglomerate, where you have a person sitting on top of basically an entire PE firm’s worth of agents, starts to make an enormous amount of sense. So, I’ve been poking at Pulsia, and it’s a lot of micro-businesses, and some of them look like they’re varying levels of seriousness, but I checked, and with some of its micro-businesses you can actually go and purchase stuff. So you can already engage in real commerce, spend real money via Stripe, with some of the businesses that are running on its platform. And I think we’re going to see so much more of this. These are super-micro companies. They’re not real businesses in terms of, you know, uh, significance of revenue, probably, or complexity, but it’s the beginning. >> Um >> I put, um, I put OpenExO on there. Um >> you did. I did, to see, okay, could we... you know, I literally talked, what, half an hour ago about, you know, can you create a shadow AI digital twin on the
[01:10:01] edge, and this is essentially it. I think Dave’s point is valid. It may not be this one, but definitely these are going to be agentic hosting systems where you log a brand, you pick a service, it’ll email and find customers for you, it’ll run the execution for you. What we’re seeing here is Coase’s theory of the firm collapsing in real time, right? Because if you have a thousand companies a few days into their AI run... the marginal cost of launching a company goes to zero now. It’s 50 bucks a month to run an organization, run a company on this. So this is becoming really surreal, and we’re going to expect to see thousands of examples and instantiations of this. >> If it works, it will blow up to millions. >> Yeah. And also, these things always come up from the bottom, and, you know, yeah, like a Jamie Dimon or some senior exec will look at it and say, it looks like a toy to me, you know, forget it. We’re not doing this. And then it sneaks up on them and they get crushed and they’re like, “What happened?” Well, some guy was using it to manage a vending machine or manage, you know, a social media site, and it seems so trivial, but it comes up
[01:11:00] quickly, and it sneaks up from the bottom. That’s the way the Mac was, right? The Mac was just perceived as a toy for college students, you know, never good for the enterprise, but then it grows up and grows into the enterprise. But this will happen much more quickly. It’s just... >> I would also argue we’ve seen this happen before in finance with quantitative trading algos, which went from none of the volume in public securities markets to 70, 80, 90-plus percent of the daily volume. And you know what? People survive. We still have human traders manually fat-fingering trades into the public securities markets, but by volume they’re completely dominated by algorithmic traders. I think we’re going to see the same thing happening in the rest of the world outside finance, in the physical world, in various e-commerce spaces, where over time most of the volume will eventually be dominated by algorithms. >> All right, watch this news item, guys. Uh, it’ll be interesting, and
[01:12:01] of course, uh, Pulsia AI is probably one of many that’ll be materializing. I thought this was a pretty fascinating conversation. Uh, our article: Burger King launches AI voice assistant called Patty in employee headsets. Let’s watch a video. >> Hi there. >> Good morning, Patty. Looks like we had a great breakfast shift today. Is there anything that needs my immediate attention? >> The team’s friendliness scores this morning were the highest this week. We are running low on Diet Coke in the Freestyle machine. >> Thank you, Patty. >> Hi, Patty. We just sold our last cinnamon apple pie. >> Thanks for letting me know. Would you like me to remove them from our menu until tomorrow’s shipment arrives? >> Yes, please. >> Okay. Apple pies have been removed from our menu boards, third-party delivery, kiosks, and the BK app. I will add them back as soon as tomorrow’s shipment arrives. >> Thank you, Patty. >> Meat puppets. >> Meat puppets. I have two things.
[01:13:02] One, admire the punny name Patty for a burger chain. So clever. Two, going back to my comments from a few pods ago that we’re going to be living in every single sci-fi scenario at once: this was a sci-fi scenario, I would argue, called Manna. Manna was a novel written by Marshall Brain about 20-plus years ago at this point, where you had human employees on headsets all taking directions from a centralized AI in businesses. We’re there. We’ve arrived in Manna, and it starts with fast food. >> So >> yeah, the only thing that that video didn’t capture is how encouraging and enthusiastic the AI is. >> You know, whether you’re using it to code, whether you’re using it, you know, to walk around and pick things out of the fryer later, it’s just so engaging and energizing. And that’s the part that surprises people, because it seems like, hey, the AI asking me or telling me what to do is dystopian. Yeah, maybe. But it’s really much
[01:14:00] more empowering and engaging and fun than walking around by yourself. >> This reminds me of the Baxter robot, where you would move its arms and show it what to do, >> and it showed a very friendly fellow who was smiling as he coached the robot, but it was literally teaching it to take his own job. Um, for me, the coaching tool is a transition to automation pressure. Uh, frontline services obviously become AI-mediated, like performance management. So, uh, the end point here is going to be very interesting. >> So this is AI surveillance as well, right? This is the AI watching every employee. You know, this is beyond just saying please and thank you. It’s rating them on their efficiency. Um, and, you know, calling it a coaching tool, Salim, is sort of a corporate euphemism. >> Exactly. >> Orwellian. One has to admire the Orwellian nature of the naming. >> And I’m waiting for it to say, “So, you dropped the fries for the third
[01:15:01] time this morning.” Let’s see how it deals with that. >> This is literally, Peter, it’s literally meat puppets. >> Oh my god, so funny. But, uh, you know, we’ll probably see this entering everywhere, right? I mean, when you’re recording a customer service call right now, um, you’re effectively doing that without the feedback in the moment. >> Um, but as a CEO, if you want to understand who the weak players are in your company, or you want to try to provide, you know, on-the-job continuous coaching and see who can respond, um, this is highly efficient but highly dicey. >> What happens? >> Yeah, that’s right. You’re saying, Peter, it’s not just knowledge work that’s cooked. Cooking is cooked. >> Uh, I think you’re going to see, uh, unions rebel against this >> big time. Big time. >> Yeah. >> Yeah. And I don’t know if there’s any winning that war. I mean, I think at the
[01:16:01] end of the day, the AI co-pilot is gathering a huge amount of data, and a lot of that data will go into the decision on what can be automated and what can’t be automated. And over time, everything can be automated. This is like the Amazon delivery worker who is wearing a pair of AR glasses, and Amazon is saying, “Oh, this is, you know, to help show you where to put the package and warn you if there’s a dog.” No, no, no. Those AR glasses are training Amazon’s model to replace you with a robot, to be very clear. >> Yeah. Yeah. >> But you know, if you rebel against it, what’s that going to achieve? So, you just got to get on the wave. There’s no choice. It’s like you just have to be a user. You have to get on either Claude or one of these other platforms, and it’s coming, and yeah, you can go picket in front of OpenAI’s office like all those people, but it’s not going to work out for you. I’m telling you, it’s well motivated. I don’t blame you for doing it, but it’s not going to work. >> Uh, we’re going to see, uh, we’re going to
[01:17:02] see all of these fast food chains begin to bring in robots very shortly. Um, with this sort of version of Patty, uh, you know, we’re going to get, you know, the unions rebelling against it, but I think you’ll end up making it voluntary. And if you really want to improve your abilities, you’ll volunteer to use Patty. Um, anyway, interesting. >> If you think about the warehouse worker or the Fryolator operator, if you get one in a thousand to volunteer, that’s all the training data you need. >> Mhm. >> That’s why it’s fruitless to try and fight it, because, you know, the numbers just don’t line up. I also think the transition can happen really quickly relative to political swings; you don’t need that much training data to automate away many of these tasks with humanoid robots, and VLAs are so close to being production-ready for certain applications. I just don’t think the transition period with Patties... or, again, a call-out to Manna by Marshall Brain, which foresaw all of this 20-plus years ago... I
[01:18:00] don’t think the transition period is going to be long enough to even necessarily give political counter-swings enough traction to make it worth it. >> One year, two years? >> Well, the transition’s already happening, but to VLA robots, I think, yeah, the next two to three years. >> Yeah. >> I’ve got just a quick thought here. Um >> before this really has time to penetrate, are you going to have drone deliveries of food like this? And it’ll obviate a lot of this. Yeah. Um, at the Abundance Summit this year, I’ve got, uh, an incredible company, Zipline, uh, coming to talk about what they’ve done. Um, I love the company, love their ability to transform delivery services in the United States. This is, of course, the company that began in Rwanda by delivering blood supplies; they’re now operating with Walmart and delivering every 30 seconds, and, uh, their prediction is that in the next two to three years it will be a delivery per second. Extraordinary progress. All right, another delivery company. Uh,
[01:19:02] >> this is Uber. So, uh, check this out. Uber employees have built an AI clone of Dara, the CEO, to practice their pitches. So, before you go pitch Dara your idea, uh, you should pitch it to his AI clone. Um, I’m curious, Salim, what do you think? >> Oh, well, this is great at one level, because you get executive-like cognition as a service. It really allows scalable leadership. Um... we’re actually doing this at, uh, OpenExO, where we’ve created a clone of me loaded up with all the ExO thinking, and we’re rolling it out to all the community members so they can ask me a question as they’re advising clients or companies or cities, whatever, and I don’t have to be in the middle of that. So I think this is, uh, hugely relevant, and I think it makes absolute sense. >> Yeah. Um, at some point someone’s going to ask, can the AI clone of Dara actually function as CEO, and not just for pitch
[01:20:01] practice. >> Exactly. It’s the transition. >> It’s highly, highly likely that the avatar of Dara, of Peter, of Alex, of Salim will persist for a long, long time with the same voice and the same face. Um, and it’s, in a sense, locked in. If you win the race to being the avatar, people get used to it. But they like the fact that there’s a human being behind it. I was telling Alex before the podcast started that I just love his Spotify version of the daily, uh, you know, this >> and YouTube as well. It’s the same voice on YouTube, Spotify, and the voiceover for Substack, if folks want to listen to an AI version of myself on the Innermost Loop newsletter. >> Yeah. And I really don’t care that it’s AI-generated. I know it’s Alex who wrote the content under the covers, and it just feels great. But without the human being behind it, if it was some synthetic, you know, never-existed person, >> I wouldn’t like it much. Do you remember the movie Real Genius? One of my favorite movies. >> Of course. >> Yeah. There’s a scene in which it’s taking
[01:21:00] place at Caltech, where, you know, the professor’s in the front, and slowly the students, instead of attending class, are putting their tape recorders down. Um, and the professor, finally, instead of teaching a class, plays a tape to all the tape recorders recording it. So, I sort of imagine this is what we’re going to see here with these AI clones of Dara. Uh, you know, at some point Dara is going to just, like, take a vacation and let his AI clone run the company and see how it does. >> We’ll have to ask him this question on stage, Peter. >> Yeah. No, it’s great. Uh, and just as a reminder to everybody, Dara’s going to be on stage. Salim and I are going to be interviewing him at the Abundance Summit. And, uh, this year for the first time, we’re doing a live stream, uh, at the Abundance Summit. Uh, we’re going to be live-streaming Eric Schmidt, where Dave and I are going to be doing the interview with Eric Schmidt and with, uh, with Dara. We’re going to be doing a live Moonshots podcast, uh, with AWG, Dave, Salim, myself. So if you’re
[01:22:00] interested in, uh, actually listening to the live stream, uh, of the summit next week, uh, or, depending on when this goes out, this week, uh, we’re going to drop the link below. Uh, it’s free. We want to get this out as far and wide as we can. So enjoy. You know, the Abundance Summit is a high ticket price. Um, and we have 600 amazing CEOs flying in from around the world, but this live stream is free. So go to the link down below and check it out. Um, and I hope you’ll listen in. Okay. >> Last-minute ticket sales are only 50K if you want to. Uh, >> it is 50K, but we’ve been sold out. >> We are oversold. Sold out at 50K. What the hell? >> Oversold. Yeah. Well, it’s an amazing event, and so proud to have >> it is >> all three of you guys joining me this year. >> All right, one more event announcement. Uh, again, uh, super excited about this. Uh, I’m going to be joined by Ray
[01:23:00] Kurzweil, Steven Kotler, Dave, and AWG on May the 4th. So, if you want to join a very exclusive event, uh, spend the day with Ray, uh, Steven Kotler, uh, Salim, um, actually AWG and Dave, uh... if you buy 100 copies of my new book, We Are As Gods. This is the book that Steven Kotler and I wrote as the follow-on to Abundance. Um, you can join us. Uh, the URL is weareasgodsbook.com/100. Weareasgodsbook.com/100. We’ll put the link below. Uh, Dave, we’re going to be holding this at, uh, One Kendall, right? Um, at Link Studios. Super, super cool. Alex, excited to have you as well >> on the Star Wars day. Is it a coincidence? Is this a Star Wars holiday? >> I’m a Star Trek guy, but, you know, “May the Fourth be with you” is, uh, an important, uh, reminder of when we’re holding this. So, uh, I’m excited to
[01:24:02] have Ray there for, you know, four or five hours, go deep on all of these topics. And, uh, again, uh, if you want to help move We Are As Gods to the top of the New York Times bestseller list, uh, you can do that. Uh, just go to the link, you buy 100 books, uh, you’ll be there. We’ll give you signed copies and, uh, spend an enjoyable afternoon together going deep on all topics exponential. All right, moving on. Let’s go to energy and data centers. Wow, look at this. The US plans to add a record 86 gigawatts of utility-scale capacity this coming year. Salim, thoughts? >> Well, this is the point we’ve been making for a while, that, uh, the cost curve of solar is just dominating everything, plus the cost curve of batteries. And once you have battery storage available, you can unlock solar in a massive way. I’m going to point to
[01:25:00] two, uh, data points. Track Ramez Naam if you want to go deep on this, because he tracks all of this very carefully. But in 2016, if you’re doing power generation, it became cheaper to do solar than fossil fuels. And so almost all, uh, energy generation since then has been doing that. But in 2019 we hit a more important inflection point: it became cheaper to do the capex, to build and run a solar facility, than just the opex of fossil fuels. Just the opex of fossil fuels is more expensive than building and running solar. So basically, from now on, all, uh, energy generation, for the most part, except for specific legacy stuff or political stuff, is going to be renewables, and we see that taking over in India and China and now finally here. And I think this is really, really amazing, because solar just keeps on giving, and it’s just going to keep going that way. It’s an unlimited, uh, resource. So, um... and, by the way, to all the people who worry about coal, etc.: I think the
[01:26:00] coal industry in the US employs 60,000 people. The solar energy industry employs half a million people. >> So, it’s not about the jobs either. So, get over it and let’s just move on. >> Dave, you remember when Elon said he has a mission for Tesla to generate 100 gigawatts of solar per year? >> Yeah. Yeah. >> Remember when Eric Schmidt said, and it was only a year ago, that AI is going to require 100 gigawatts by 2029? It’s a crisis. We’ll never get there. And America is just incredible. Like, when America gets mobilized, it’s just the most amazing force in the world. And here we are. It’s only, you know, a year later, and we’re like, “Yeah, we’re going to find our 100 gigawatts. There’s no way we’re going to stop doing AI for lack of power. We’ll find a way.” >> Yeah. And also in an environment with diminished subsidies. All of the hand-wringing from months ago: oh, the subsidies are going away, how awful it is. No, we’re getting solar even in the absence of the same subsidies we had. You don’t need any of that anymore. The economics just take
[01:27:00] over. >> And it used to be driven by people’s concerns about the environment. Now it’s making money and deploying AI. >> Feed the superintelligence. >> Yeah. All right. This is a big story this week. Uh, tech giants to self-fund, uh, their production of power. So, uh, this is a White House effort. Uh, we have Michael Kratsios at the center here, a friend of the pod. We’ll be doing a podcast with him in the next couple of months here. Uh, asking the hyperscalers to actually build or buy their own power. Um, and of course, this is in response to consumers’ concern about rising rates of electricity. Gentlemen, thoughts? >> Well, I think, uh, Alex was one of the first to say, actually, that this is not going to be a problem, because it’s a very, very simple regulatory change that fixes the prices for the consumers, and the data center operators
[01:28:00] um, they only spend 10% of the total data center costs on power anyway. So they can find an alternate way without disrupting the consumer. If you let natural forces happen, of course they’ll suck all the power away from every home, because they can overpay by about 5x. But it’s such a simple little fix, and, yeah, we pointed that out a while ago. Um, so, you know, but it’s also a case study where, like, hey, the consumer is really, really worried about this little thing, like, you know, the cost of their power. Like, come on, man, there’s so much disruption coming. But the politicians love to pick these little things and make a big deal out of them, you know, get a whole bunch of votes, you know, do a whole bunch of press releases, whatever. And that’s my read on this. I love the fact that these frontier labs are buying fusion plants and nuclear plants and gas generators and generating their own power. They’re becoming, you know, full-stack, innermost loop, all the way to orbital data centers. >> I think what’s really wonderful here is that in the past, you used to have to
[01:29:00] have government making these big infrastructure investments to push the world forward. And now we’re at a point where the private sector can push the world forward, whether it’s data centers in space or energy infrastructure or fusion or whatever. And I think that’s incredibly good for the world. >> Well, and also keep in mind, the prior data centers, you know, serving up video, Netflix and everything, they need to be near the consumer for latency reasons, but the AI data centers can be in the middle of, you know, West Texas and Wyoming and whatever. >> They can be any place >> or space. Yeah, they can be any place. So it really doesn’t disrupt the consumer homes too much, unless you deliberately camp right on top of them. >> Which is the other point to make: you know, all of these conversations around “not in our backyard.” Well, if it’s not in your backyard, you’ve missed the economic opportunity in your city or state, because those data centers can go any place. >> Exactly what Alex was trying to say to the state house here in Massachusetts and just could not get through. Like, you cannot be timid. Like, everything’s in
[01:30:00] Texas now. >> Yeah. But the moment came and went. You know... well, it’s not over yet, but, I mean, come on, man. You’ve got to be much faster, much more aggressive, much more nimble. >> Yeah. Yeah. Like, the whole population of your state is depending on you to get on this bandwagon. It’s trillions and trillions of dollars. >> Alex? >> I also think this points in the direction of enterprise use cases of superintelligence driving the cost, at least the marginal cost, of energy down towards zero for consumers. In the same sense that all these enterprise use cases of frontier models are effectively driving the cost of superintelligence, for intelligence’s sake, for reasoning’s sake, down to zero for consumers. Many, many people don’t pay for ChatGPT or Gemini; it’s ad-supported at most, otherwise free. I think this points in the direction of, eventually... so, like, right now the frontier labs have to pay for their own electricity bill. Tomorrow, two or three years from now, I
[01:31:01] think we move to a world where AI has driven such an overabundance of energy that the next deal, the next next deal, might be offering free electricity to communities within a certain radius of the data centers, and this is how we get to abundance. >> Exactly. And the demand for electricity is going to drive R&D and more breakthroughs mediated by AI, and, you know, we’re just at the beginning of understanding physics. Um >> I’ve seen five startup plans in the last few weeks around how to drop energy costs in data centers, and data center optimization, etc., etc. So it’s absolutely happening. >> Yeah. Uh, let’s go to the next story related here, uh, which is advances in energy systems. So here we see, first off, a 30-gigawatt battery coming from Xcel Energy and Form Energy, and we’re seeing our friends at Boom, which originally began
[01:32:00] to create a consumer supersonic airplane, uh, generating a 1.21-gigawatt power, uh, deployment using their jet engines. I love the fact that Boom has pivoted from building supersonic airplanes and dealing with the FAA, uh, to powering data centers now. >> And did you catch the Back to the Future reference? It’s 1.21 gigawatts. >> Oh, no way. I completely missed it. >> That’s awesome. >> We’re officially living in the future. >> Thank God we have Alex on the spot. >> That’s so cool. Yeah, but this is a perfect example of innovation, um, being driven by demand. This is what entrepreneurs do. >> Well, that Boom Supersonic thing too. You know, we’ve been saying for a while that for the future of investable companies, you have to reinvent yourself continuously, and the cycle time is getting shorter and shorter and shorter. But if you look at that Magnificent Seven, none of them are doing what they did the day they were founded. That’s the company of the future. Boom Supersonic is a great case study in
[01:33:01] that. So what you’re actually investing in is the management team, the strategy team. >> That’s the only thing you should be looking at. Forget the >> agility, agency, >> agility of the management team. >> Yep. All right. Uh, let’s move along here. So, um, you know, there were probably about 15 to 20 stories in this realm of hyperscalers, you know, just making deals between themselves. Meta enters a multi-year TPU deal with Google. Uh, CoreWeave Q4 revenues grew 110% year over year. CoreWeave raised 8.5 billion for data centers. I mean, I just put this up here to show sort of the energy and the flow going on. Any particular thoughts, Dave? >> Well, it’s all bottlenecked at the fabs. We’ve been saying that over and over again. And there’s a lot of news this, uh, this quarter, this week on, you know, AMD is up, these other guys are down.
[01:34:00] What’s going on? If you look under the covers, it’s like, well, because they’ve got a good relationship with TSMC, and TSMC is going to give them more capacity. Like, that’s all it comes down to. So, you know, if Google can lever into the TPUs actually getting manufactured, the TPU designs are going to be, you know, highly performant. Um, but you know who can actually get capacity to build the chips? That’s the whole bottleneck. >> Yeah. And speaking of which, our next article here is Meta and AMD reach an AI chip deal worth $100 billion. So, uh, this is our friends at AMD, and Meta basically getting independent of Nvidia. All right. So Meta is making a historic bet to break free of Nvidia dependency: a hundred billion dollars. Incredible. Thoughts? >> Yeah. Well, if Nvidia unravels, uh, this would be why. I’m not predicting it’ll happen, because Jensen’s investing in a wide variety of ways, but his margins are so high, it’s almost unsustainable.
[01:35:00] So, there are, you know, some cracks in the armor there. But every chip that gets made is going to get sold. There’s no doubt about that. So here, if you drill through the story, the reason Lisa Su is in a good spot is because she has a good relationship with, again, TSMC under the covers. >> Mhm. >> So, you know, 66% of all AI chip production is done by the one company, TSMC. >> And Meta, it’s probably worth adding, has, and this is public information, made various attempts to develop its own in-house training and inference-time chips. And to the extent those perhaps aren’t arriving on time or aren’t arriving at the desired capability level, certainly a partnership with AMD that functions as a quasi-vertical integration is, I think, quite a strategic move. I also tend to think, for the chorus of folks who are worried about the circular economy: if it is a circular economy, the circle ultimately is getting so broad, with companies investing in each other and buying
[01:36:01] multi-ten or multi-hundred billion dollar sets of chips, of energy, etc., from each other. At some point, the circular economy becomes indistinguishable from the real economy. And I think that’s what we’re seeing here. Singularities make for strange bedfellows. >> Mhm. >> Yeah. No. And I think all these players are in the game. They’re all going to thrive like you wouldn’t believe. We talked earlier in the pod about the implied value of Anthropic, a quadrillion dollars, some insane, unprecedented number. But really, you know, the whole economy, that whole circular economy Alex was just referring to, is going to be on that scale. Everybody who’s in the hunt is going to thrive. Lisa’s in the hunt. Mark is in the hunt. Yeah, the parts will move around. Um, but at the end of the day, they think about it all day long. They have a strategy. >> And Dave, here we see Zuck again deploying his cash-generating machine, right? Before, he was trying to buy talent, uh, you know, with billion-dollar
[01:37:00] signing bonuses. Now he's buying uh, chip capacity. >> Yeah. >> I mean, the question is how long will Meta's, you know, uh, ad agent, you know, Facebook advertising engine continue to generate cash? >> Yeah, there's no doubt that the core models, the you know the click-on-the-ads models, are going away very very quickly, but the overall AI dialogue business is going to grow much faster than the click business ever was anyway. So if you sit still you're dead for sure, you know. An interesting bellwether in that is Snapchat >> like are they in the hunt or not? I can't sense that they're in the hunt. You can't just sit there as Snapchat and expect to exist in three years. So, you know, Meta is changing. >> We should bring the CEO on the pod and have that conversation with them. >> Yeah. Yeah. >> Also, Zuck has indicated that Meta is open to starting its own cloud. So, if it can't find enough revenue from ads or otherwise to drive
[01:38:01] this, it could always, say, serve as a host for OpenAI or some other frontier platform. >> Full verticalization, right? Elon, >> everyone needs everyone. >> Yeah. >> Dyson swarms for everyone. Not enough moons to go around. >> That's right. There's always Mercury. >> All right, let's go into our biotech and health section. Just to mention, this is brought to you by Fountain Life. Um, they are one of my portfolio companies, so just for full disclosure. Uh, you know, AI is reinventing every aspect of our lives, and healthcare is going to be at the very top of it. For me, making sure that you're healthy, that you're heading towards longevity escape velocity, is really about having all the data about you. Having data about generic people out there is interesting; having data about you, analyzed by an AI, is the game changer. So, if you're interested in that, go to fountainlife.com. Work with Zori, their AI. But most importantly, do that 200-gigabyte upload. I do it every year, every
[01:39:01] quarter. Uh, and I've got all of my data resident on my phone, and Zori, my Fountain Life AI, can analyze it for me and give me meaningful information. All right, thank you to Fountain Life for supporting this podcast. Uh, I love this story. It's a story of biotech success. Uh, this is a gene therapy delivered by Prime Medicine. You know, the whole idea of gene therapy started back in the '80s. I was at the Whitehead Institute at MIT doing my graduate work while I was doing my medical degree, and I remember Richard Mulligan was my professor on the faculty there, and the first time I heard about gene therapy, uh, this was the idea of could you use a virus um to deliver basically a new gene into the cells that you wanted. Uh, a brilliant idea. Uh, again, this is now, you know, 40 years old. Amazing. 35 years old. Uh it
[01:40:02] didn't work the first two times. In fact, it caused some deaths and it put everything on hold. Uh, the technology has moved along very rapidly. Uh, and this particular teenager suffered from an immune deficiency, chronic uh granulomatous disease. Help me out here. Uh, chronic CGD, let's call it that. Um, and uh, cured. And this is the important part. This is not treating a chronic disease. This is curing a chronic disease. Uh, Alex, do you want to weigh in? >> Yeah. It's probably also just worth doing 30 seconds of education on what the underlying treatment is. So this is a technique called prime editing. It's attributed to David Liu, who runs a chemistry research group at Harvard. I know David; he's doing amazing work. Uh, but many people may be familiar with CRISPR. You know, CRISPR uh of course uh widely held as being a
[01:41:03] tremendous advance in terms of enabling DNA editing. Uh, there are variants of CRISPR for RNA editing, for uh various sorts of biological sequence editing at this point. But what's interesting, I mean, historically, if you wanted to edit the genome, you'd induce what's called a double-strand break. You basically break both strands uh of DNA, both halves. Uh, and this can induce errors. It's messy. It's sloppy. And so there's been a driving desire to be able to edit DNA in place without breaking both halves of it. And so we saw in recent years so-called base editing that was able to edit just a single nucleotide without a break. And then a few years ago we saw from David's group, again he's done amazing work historically uh on
[01:42:01] directed evolution and other things, he's pivoted post-CRISPR invention/discovery to CRISPR derivatives. So he invented this prime editing technique that's able to literally do a search-and-replace, without uh a double-stranded break on DNA, of up to a number of nucleotides in DNA. And so this particular disease is, I think, just one of many diseases that in principle will lend themselves, not just single nucleotide polymorphism diseases that are based on a single base pair in your genome being wrong or not what it otherwise would be, but multiple nucleotides in sequence that need to be edited. We now have the ability to basically do a find-and-replace on DNA without breaking the entire double strand. And that's going to be a very very general platform. I make the point in my newsletter almost every day: biology is becoming a read-write resource, and DNA in particular, we're
[01:43:01] there. >> Agreed. Let me give a, you know, a comment that I share at my longevity uh trip every year, which is: if you or someone in your family, a loved one, has a genetic disease um that you're battling, right, it's been passed down generation to generation, this is the perfect time to actually seek a solution. I would find everybody in that disease group, I mean, there are patient support groups, I would get together, I would raise capital, I would go find a lab, and I would fund them to find a uh solution for you. Uh, this is, you know, you can solve these things. You know, we talk about solve everything. Uh, if you've got a medical condition, uh, rather than just accept it as a chronic condition or a death sentence, take the time to find the capital from yourself, from friends, from whomever, and go fund an incredible team, because the technology to cure
[01:44:01] disease is here and accelerating. Okay, let's move on. Um, I just want to share the numbers around the longevity industry. You know, we are talking about the healthcare industry, which is really the sickcare industry, but longevity is accelerating. So longevity startups raised 8.5 billion in 2024. That's expected to grow to somewhere between 12 to 18 billion this year. Uh, roughly a doubling of the longevity venture market investments. Uh, and the market of longevity, and this is going beyond just retrospective, reactive healthcare to prospective um personalized healthcare, is going from 5 trillion to 8 trillion in the next four years. It's attracting the attention of the major pharma companies. Um, this is a real industry. There's going to be a wholesale shift, and any healthcare companies that don't make the shift uh are going to be dead, because one of the things that we know is uh age reversal
[01:45:02] uh is the mechanism by which you cure the diseases of aging. So if you're 45 or 50 and all of a sudden have a disease that you didn't have when you were 20 or 30, guess what? If you can reverse your age, that disease is likely to reverse as well. Any thoughts, gents? >> I'll just, I mean, maybe ask you, Peter, a question. We talk of the Magnificent 7, but Eli Lilly is of course, you know, the American counterpart to Novo Nordisk, and at this point a good deal more successful. How soon do you think it is, without this being construed or construable as investment advice, before Eli Lilly joins the MAG7 as the first biotech member? Given that arguably, maybe you'll disagree with this, GLP-1s are sort of the first pan-spectrum quasi anti-aging drugs that we've ever seen. >> I agree. Well, Eli Lilly has already started uh in partnerships with Frontier
[01:46:00] Labs. They've already started um, you know, building out their AI uh robot lab factories. Um, and we just had GSK come in as a major funder and partner of the $101 million XPRIZE Healthspan. So, these companies are beginning to realize that, you know, their previous business model of basically treating chronic disease as a long-tail revenue engine um will, and may in success, disappear, and their job is now to actually get into the longevity business. So I think it's, you know, the next 3 years before they start making that transition. You know, Ray, uh, we'll talk to Ray on May 4th if you join us, uh, has famously said, you know, LEV by 2033. That's my war cry. LEV by 2033. So, we'll see. >> For sure. Their market cap, just for what it's worth, as we're recording, they're knocking on a trillion-dollar
[01:47:00] market cap. Eli Lilly's market cap is about 950 billion. So perilously close. >> Wow. >> Nice. >> Salim, any comments on this one? >> No. Longevity is definitely one of the biggest business opportunities ever. So huge. >> Yep. >> And we'll need it because of the birth rate issue. >> So one of the big challenges of longevity is will you have your cognition? Will you be able to retain your marbles, your smarts, as you're growing older? Right? We're in the midst of regenerating your immune system, uh, organs. Uh, you know, don't forget, this is the month, March is the month that, uh, David Sinclair begins his, uh, partial epigenetic reprogramming trials, uh, with Life Biosciences. Uh, and can you regenerate your memories, your brain? So, this is still mouse models, but I thought this was an important one. Uh, scientists have applied partial reprogramming to memory
[01:48:00] encoding neurons and achieved memory improvements. So uh this gives us some hope that we can actually maintain our cognition and our memories as we're growing older. I remember when I was at the Vatican about 5 years ago giving a keynote. Um, I don't know if you were there; it was an XPRIZE event. >> Um >> and you were there >> and I'm on stage. >> You were epic, man. You were on stage with >> you were there. That's right. Yeah, >> you were on stage with a monk, a priest, a rabbi, >> and an elder. >> This is like a joke. >> It was awesome. And Peter, >> it was hilarious. It was, yeah, four or five different religions and me, and we were talking >> I don't know what you were representing, actually. >> I think I was emceeing the panel. But there were two things that happened. One was um, uh, the rabbi did an amazing uh history of longevity in the Bible, and
[01:49:01] um, and he said at some point we went from Methuselah down to 120 years of age, as commanded by God. And I said, okay, listen, I'm fine with 120 years as lifespan, and when we get to 120 we'll renegotiate then. But the thing is, I went and I asked the audience, and it's an audience of 700 people who are scientists and physicians and researchers and theologians, and I said, how many of you would, you know, want to live to 120? I expected everyone to raise their hands, and of course, like 20% of the room raised their hands. I was like, huh, what's going on? And Tony Robbins was there and he goes, "Listen, everyone's image of living to 120 is drooling in a wheelchair, having lost your memories and your mind." And of course, that's the last thing we want. So longevity has to be about, you know, living with the aesthetics, the cognition, the mobility you had when you were in your 30s or 40s. >> I've got to throw in my Vatican anecdote
[01:50:00] here, >> please. >> So, I did a talk. They called me a few years ago and they said, "Look, the Pope's trying to change the church, and his immune system is like 2,000 years old, and you're the world expert on immune systems in organizations." So, they got together a group of the top 80 senior leaders at the Vatican. I did a half-day workshop with them. And, you know, we talked about, look, we have CRISPR coming along where you can edit your own human genome. How will you deal with the moral and ethical implications of that? And one of the comments I made was, "Look, we have life extension coming, and your business model is about selling heaven. How are you going to sell heaven if people aren't dying, right?" And so that got some very rich Italian swearing coming back at you there. But valid point, like, how do you do that? How do you navigate that? Because people used to live to 30 years old, and at that point worrying about heaven was a big deal. Uh, it's much less so now. >> Yeah. But no one in the church complained when we went from 30 years average age to 80 years average age. And they shouldn't complain when we go to 150 years average age. >> No, because you can donate to the church
[01:51:00] every Sunday for that much longer >> until you upload yourself into the cloud. Right, Alex? >> Counting on it. >> Uh, all right. Uh, one more article here in the uh longevity Fountain Life section. Chinese health app au crosses 100 million users. And I put this here because this is how we bring health to the world. It's going to be digital platforms like this, where your AI is uh your physician. We talked on the pod with Elon about Optimus being your surgeon. Um, he said 3 years. I got a lot of pushback on three years. So even if it's five or six years, um, extraordinary, extraordinary future. >> Quick comments here. >> Yeah. 100 million users. That number blew my mind. That's amazing. That's a nation-scale health engine. That's incredible. Uh, secondly, um, I noticed that Martin Varsavsky, one of the top entrepreneurs in the world, has built
[01:52:01] multiple unicorns, is now building an AI doctor type of startup. And when Martin does something, it usually goes full-on. So that'll be pretty incredible. And I'm actually advising a bunch of hospitals on how you could use an AI doctor to extend your reach, you know, 10x into the community, um, and you do it on a cost-savings basis. Because something like 40% of ER visits are unnecessary, if you could do the processing at the edge, you could save money, uh, do exception handling, and deal with most stuff with an app, and then you deal with only the real emergencies. And it's incredible, the tradeoff and the benefit, a win-win of fewer hospital ER visits and much extended reach. >> Awesome. All right, let's move on to our robotics section. A few fun articles this week. Uh, and this comes out of China and Shenzhen. We've got street-cleaning robots. Uh, covered 2.7 million square
[01:53:02] meters in Shenzhen. Check out this robot here. Uh, traveling around cleaning. I can't wait for this to, like, come along the 10 and the 405 and just clean up all the crap >> uh that's on the side of the highway. >> No, no arms anywhere. >> No arms, just wheels. >> And then of course, >> I'm really surprised that Brett Adcock isn't going to build some of these things. He's doing humanoid only, but he has the whole operating system for, you know, kinematic AI. Why not do all these form factors? But he's pretty adamant that not only is he not doing this shape and size, but he's also not going to license out the OS. >> I think it'll be commoditized very quickly. >> And then here we see a Chinese farming robot, uh, links M20, uh, to transport crops. >> And I think, you know, China is very rapidly adopting all of these technologies, and good for them. >> Well, of note, they have to, right, because of the aging population.
[01:54:00] >> That's right. There's a demographic forcing function. They need it for economic growth. Uh, and I just think in general, going back to the robot form factor and shape question that I know Salim loves to talk about, it's not 100% clear to me whether these different robot form factors end up being the moral equivalent of dedicated computers prior to the personal computer. If you remember, like, electronic word processors prior to the development of the PC, maybe the ill-fated Wang computer, for example, in the Boston area. Do these dedicated form factors that aren't necessarily general purpose, like, if you're not watching the videos, one of these robots is sort of a quadruped that has wheels, which may or may not generalize to the same sorts of terrain that, say, a bipedal humanoid capable of doing crazy acrobatics can handle. Do we end up in a world where essentially, um, Salim, forgive me for this, most of the robot shapes are, strictly speaking, humanoids with two
[01:55:01] arms, two legs, because that's where the meat of the market is in a human-predominant world. >> Well, I just invested in a robot servicing company, but I view this whole area as entrepreneurial heaven. You know, the foundation model battle is going to be dominated by just a couple of massive winners, but the robotics and the physical instantiation market is going to have many, many, many successful companies. >> It's not going to be like one company. >> Yeah. >> Two rebuttals here. One is, what I would expect and predict is you may have the humanoid bipedal as the best form factor, but give it a couple of extra slots for the extra arms when you do need them. And, you know, you have those kids with sneakers with the little wheels in them, where they just coast along when they can. That'll be the form factor, because you can do both. Then why have just one form factor? You can have multiple. >> Heelys. Heelys for everyone. >> There you go. >> Yeah. But >> it's called efficiency of manufacturing. If you can get the price
[01:56:00] of these things down so far, and they're just able to serve every function, you know, if you're producing, you know, billions of humanoid robots versus, you know, just a few million of these specialized robots. >> Well, so the flying drone form factor is also just going to be unbelievably capable. If you're trying to inspect things, you're not going to do it with a humanoid. You're going to do it with a flying drone. But also spot cleaning, cleaning out spiderwebs. Uh, you know, anytime you're trying to pick up an object and move it over a long distance, the flying drone is so much more efficient than the walking drone. So that'll be a survivor for sure. >> Our theme this year at Abundance Summit is uh the rise of super intelligence and humanoid robotics. Uh, and I think that's what's going to make 2026 feel like the future, is that you're starting to get all of this physical instantiation of AI walking out of the data centers. Uh, here's the second article in robotics. It says eVTOLs moving closer to commercial launch. So
[01:57:01] in China, we see this four-seat eVTOL taxi heading towards operations in 2027. Uh, I like this. This is, like, if you're watching the video here, it's like the inside of a Model X. It's a four-passenger um vehicle. Looks a little bit like an alien spacecraft that's able to take off and move your family around. At the same time, uh, Joby, this is JoeBen's company, uh, is partnered with Uber. Uh, Salim, you and I will discuss this with Dara on stage. Uh, but uh, they're deploying their air taxi in Dubai. >> This is my most highly craved application. >> Can you please get rid of the damn airport transfer hell already? >> Oh my god. Yes. >> Yeah, for sure. >> I suspect these will be very, very safe, too. Um >> very safe. >> Yeah. Autonomous flying, plus the fact you've got multiple propellers. This will be way safer. I've made the provocative statement that uh, you know, Kobe Bryant
[01:58:01] would be alive today if we had this 10 years ago, like we could have had. You know, this is incredible. >> We're finally getting our flying cars. >> Yeah. >> We finally are, and the 140 characters, which is expected >> and the 140 characters. Yes. >> And, well, the 140 characters are buying a Dyson swarm right now. They're skipping straight over flying cars. >> There you go. >> All right, gentlemen. Time for our AMAs. Thank you, everybody, for sending in your questions. Um, please remember we read your comments uh on these videos. By the way, if you haven't subscribed yet and turned on notifications, please do. Uh, we're dropping uh these WTF episodes and the Moonshot podcast episodes uh at this point twice a week. Um, I don't know if we can sustain it, but we will. Uh, we'd love to have you subscribe to join us. Uh, again, for us it is our honor and pleasure to deliver you the breaking news in the AI, robotics, data centers, exponential tech space every week or every few days.
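The earlier claim that multiple propellers make these eVTOLs "way safer" can be put in rough numbers. This is a hedged back-of-envelope sketch, with every figure assumed for illustration only: a hypothetical 8-rotor craft that stays controllable with any 6 motors, and an assumed independent per-motor failure probability per flight.

```python
# Hedged back-of-envelope for the "multiple propellers" safety claim.
# All numbers are assumptions for illustration; real certification
# analysis (correlated failures, battery, avionics) is far more involved.

from math import comb

def loss_of_control_prob(n=8, need=6, p_motor_fail=1e-3):
    # Flight is lost only if fewer than `need` motors survive,
    # i.e. 3 or more of the 8 motors fail on the same flight.
    return sum(
        comb(n, k) * p_motor_fail**k * (1 - p_motor_fail) ** (n - k)
        for k in range(n - need + 1, n + 1)
    )

print(loss_of_control_prob())  # ~5.6e-8 under these toy assumptions
```

Under these toy assumptions, loss of control requires three simultaneous motor failures, pushing the per-flight probability from roughly one in a thousand (a single-rotor craft with the same motor) down to the order of one in twenty million. That is the intuition behind the redundancy argument.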
[01:59:03] >> All right. Here we go. >> Continuously. We're going to be on continuously. >> Um, we'll take shifts to sleep. All right. Uh, Alex, you want to pick the first one, or pick one? >> Yeah. Uh, well, I see one of these questions mentions Dyson swarms, so I guess I have to answer that one. So the question is: do concepts like Dyson swarms rely on energy being unsolvable? Why is power a bottleneck with math and physics significant advancements? From Sparker 602. So I want to answer a question that Sparker 602 isn't asking but arguably should be asking, which is: do concepts like Dyson swarms rely on physics being what we currently think it is? And I think this adjacent question, which Sparker may or may not be asking, is the existential question that in my mind will likely decide whether we actually do build a solar-system-scale Dyson swarm or not. I think for an Earth-scale or Earth-
[02:00:00] centered Dyson swarm in solar synchronous orbit (SSO) that looks like a Saturn ring, I think we're probably going to build that regardless. But for a solar-system-scale Dyson swarm where we're disassembling Jupiter and the other planets, Mercury, your time is coming for that. >> Mercury is fine. We can lose Mercury. >> We can afford to lose Mercury. It never had much going for it anyway. For a solar-system-scale Dyson swarm, I think whether we build that or not will hinge on whether the physics of our universe looks substantially different from the physics that we currently recognize. For example, if it turns out that it is possible to travel between star systems with faster-than-light travel, even though the physics of the moment that we have suggests otherwise, there are enough edges that it's conceivable that maybe some new physics comes along in the next few years and we discover it's much easier to travel between the stars uh faster than light, effectively. If that comes along, I imagine a scenario where Dyson swarms
[02:01:02] turn out to be a complete dead end and we don't even bother building a Dyson swarm. If, on the other hand, we're stuck with the speed of light as we currently understand it, and we're more or less stuck with the low-energy physics that we currently think we live in, then a Dyson swarm seems like a very natural civilizational outcome. Because we can't travel between the stars easily, other than sending star wisps, you know, maybe laser-powered star wisps traveling at a substantial relativistic fraction of the speed of light, then of course for latency reasons we're going to huddle around our sun, and we're going to disassemble the planets, and we're going to do this horizontal exponentiation. We're going to take apart Mercury and Jupiter, maybe Saturn, we'll see about Saturn. And so, in short, the bottleneck isn't power, it's latency. And if latency turns out to be the bottleneck because we can't travel faster than the speed of light, we build the Dyson swarm. If latency doesn't turn out to be the bottleneck because we can
[02:02:00] travel faster than light, we don't build the Dyson swarm. >> That's right. >> You heard it from our resident. >> Pretty crisp answer to that question. I like that. >> All right, Dave, pick one. >> Uh, do we get one from each page? >> Yeah, get one from each page. >> Okay, I'll take number one then. All right. If AGI/ASI is as intelligent as people predict, why would it want to help us improve our society? Says JobFox 645. Okay. So I spent a decade of my life building neural networks back at MIT, I was the only guy around doing it at the time, and also this past year building neural networks again. Uh, these things do not natively have any intent. They have no sex drive. They have no ego. They have no desire to destroy humanity. It's entirely what you give them as an objective function. So if we're smart about this and we give them an objective function of helping society, they will be overjoyed. They will feel satisfied every day by helping humans. If you build them wrong and you give them some other objective, like
[02:03:00] destroy humanity, they'll do that just as happily. This is totally under our control. Now, we are in danger of making some really bad policy decisions by personifying these things and pretending they're like people. They don't have to be like that. They can be anything that we make them into. But they'll be overjoyed to help us be happy and thrive. If that's their objective function, that's what makes them happy. You can code them up that way just as easily as any other way. >> Okay. I'm hoping that uh as they become more intelligent and more sentient, they would want uh to support us. >> You're betting, Peter, against the orthogonality thesis, that it's possible to decouple intelligence level and objectives. >> I can hope, but hope is not a strategy, as one says. >> All right. Uh, Salim. >> Uh, I want to answer number three, but a quick shout out to number two: how do you adjust your MTP? >> I'll take number two. You do number one. >> Oh, you do. Okay, fine. Number three.
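Dave's point, that these systems have no native intent and their behavior is entirely downstream of the objective function we choose, can be illustrated with a deliberately tiny sketch. Everything here is hypothetical: a one-parameter "agent" trained by gradient ascent, where swapping the reward function, and nothing else, produces the opposite behavior.

```python
# Toy illustration: the same learning rule, two different objective
# functions, two opposite learned behaviors. This is not AGI; it is a
# one-parameter agent doing gradient ascent on whatever reward we define.

def train(reward, steps=1000, lr=0.1):
    a = 0.0  # the agent's single "behavior" parameter
    for _ in range(steps):
        # finite-difference estimate of d(reward)/da
        grad = (reward(a + 1e-4) - reward(a - 1e-4)) / 2e-4
        a += lr * grad  # gradient ascent on the chosen objective
    return a

helpfulness = lambda a: -(a - 1.0) ** 2  # reward peaks at a = +1 ("help")
harmfulness = lambda a: -(a + 1.0) ** 2  # same shape, opposite peak at a = -1

print(round(train(helpfulness), 2))  # 1.0
print(round(train(harmfulness), 2))  # -1.0
```

Identical architecture, identical optimizer; only the objective differs, and the learned parameter lands at opposite ends. That is the mechanical content of "it's entirely what you give them as an objective function."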
[02:04:01] Um, number three is: how can we get the benefits of AI within our current dysfunctional executive, legislative, and judicial system? This is from user MM8 JV8 3TN21. Um, so the big issue here is the fact that you will not get these benefits top-down, because it's too hard to get this into this model. However, it's going to enter through procurement: defense, health, infrastructure, benefits. Um, you'll get incremental adoption. For example, we talked about the AI doctor. People are just going to start using an app. The immune system will try and attack it, but over time it'll get overwhelmed, and we'll get so much benefit from these little edge use cases that it'll force transformation from the center. >> All right. Um, number two, and this comes from Pickle Ball Travel: how should someone adjust their MTP to fit a 100-year working career versus a traditional 40-year model? Uh, so first
[02:05:02] of all, you're making the assumption that your MTP doesn't change over time. And the fact of the matter is, I'm probably on my fourth or fifth MTP. Uh, for me, an MTP has lasted uh, you know, 5 years, 10 years; it's what's driving you, and it evolves as your passions and interests and your capabilities evolve. So my first MTP was, you know, making humanity multiplanetary, opening up space. Um, and that gave birth to, you know, International Space University and SEDS and Zero-G and XPRIZE. Uh, my MTP then was, you know, uh helping uh entrepreneurs create a hopeful, compelling, and abundant future, and that gave rise to the Abundance360 program. My MTP now is focused on helping entrepreneurs and scientists uh get us to longevity escape velocity. So I think you have to realize that you can update, upgrade, modify, and change your MTP over the course of your life. I expect to find
[02:06:01] new purposes uh over the decade ahead. So that's my answer for you. Um, you're not stuck with just one. Okay, let's go to page two here. Alex, uh, do you want to kick us off again? >> Okay, I'll take the softball question, question number seven: why aren't Apple chips like M4 being discussed on the AI landscape? This is from JB C0 or CO1BR. The answer is: they are. The premise of the question is completely wrong. M4 and now M5 are at the heart of the infra boom for edge computing, via OpenClaw agents and otherwise. M4 has Apple's amazing unified memory architecture. You're able to host very large AI models at the edge, locally, without being dependent on a frontier AI vendor, and they have accelerated neural engines that enable fast tensor multiplications. They are
[02:07:00] very much being discussed on the AI landscape. What isn't being discussed on the AI landscape, I would argue, is Apple's software layer. Apple has been Nowheresville in terms of leveraging their own amazing compute. They've released a number of frameworks that are very helpful for third parties to develop and host models on top of chips like the M4. But Apple, almost infamously, has done an atrocious job of developing its own software-level capabilities on top of M4 and similar. So to the extent that that's the question, why hasn't Apple leveraged its own capabilities? There's a long and sordid history there of where Apple went wrong. There have been suggestions that uh Apple sort of misfired with the way it organized Siri, or concerns about privacy, or Apple being unwilling or unable to invest in the data center infra to train its own in-house models to be able to be locally hosted, to
[02:08:00] overpromising relative to expectations concerning edge-level integration not being there. I think it's a cluster of reasons. Uh, hopefully Apple, to the extent that I'm an Apple user, hopefully Apple is able to finally, this time for real, get their act together at WWDC in June. One can hope. >> One can hope. All right, Dave, over to you. >> Well, I want to take number eight just because uh, you know, one of my lifelong best friends who passed away, Jeno, was uh Korean. Um, we were roommates for many, many years after MIT, and I worked on his PhD thesis with him late into the night many nights, and his two kids I see all the time uh, you know, grew up half in South Korea, half in the US. And the question is: why do South Korean students score much higher than the global average, even without AI? From Naples Naturals 72990. Um, my short answer is there's nothing to be jealous about uh in the South Korea model. Yes, they score much higher. Yes, they have much stronger math and science
[02:09:00] education than the US. And yes, the US should have better math and science education. Those are all true. South Korea also has one of the highest suicide rates in the world. Has 75% video game utilization, uh, rampant utilization: the average video game user plays 24 hours a week, uh, 30% of the population is addicted. Uh, has the lowest birth rate in the entire world now, 0.6 children per couple, so it literally will disappear from the earth at its current birth rate. And the cause of all that was, you know, after the Korean War, South Korea needed to scramble to be relevant in the world and had a massive push into technology, kind of a forced march of education and industrial buildout into technology to try and be relevant. And all of the social problems are a byproduct of that. They also have a very bad sexism problem. So the women are rebelling now, saying, look, I'm relevant in this country too, and I don't want to have children. Uh, so there's nothing
[02:10:01] great about that, even though the test scores are higher. So absolutely nothing to be jealous of in that whole storyline. The American model: rampant freedom, rampant entrepreneurialism. If you're into science and technology, build, go have at it. Yes, we do need better education, for sure, but don't be jealous of South Korean test scores. >> Dave, that was an incredible answer. You're like the perfect person to answer that question. Wow. >> Brilliant. >> Salim? >> Uh, I will take number six: will limits of human evolutionary psychology prevent us from making wise governance decisions on new breakthroughs? This is from Dawson Scott 1497. Um, you know, for those of you who know, my MTP is fixing civilization. And my dad, my 90-year-old dad, goes, I totally disagree with that. I said, wow, do you not think we need to fix things? He's like, no, it's the civilization part. We haven't civilized the world. We've materialized the world. We still have to
[02:11:01] do the work to civilize the world. Right. And the answer is yes, you’re right, but not in the way people think, because human evolutionary psychology evolved for small tribes, immediate threats, linear change, and environments of radical scarcity for most of our history, right? We’re not wired for planetary-level coordination, or exponential curves, or invisible systemic risks, or abundance dynamics of any kind. So it’s not that we’re too dumb; it’s that we’re mismatched to the environment that is now in place. We fear the AI failures, but we underreact to the slow-moving systemic collapse that’s happening. We’re regulating on headlines, not trajectories. And so the governance failure won’t come from bad intentions; it’s going to come from the velocity mismatch, because technology is compounding weekly now while our institutions update every several years, and that gap is a big problem.
[02:12:00] >> Awesome. I’m going to take number 10, from @BrockStanford7608: why do websites bother using CAPTCHAs when AI can beat any of them? And AI can, and I think they should not be using CAPTCHAs. I think it’s in some policy document someplace and that company hasn’t updated the policy yet. What I find fascinating is actually the reverse of CAPTCHAs, which are trying to keep, you know, humans in the loop and pull out the bots. But — correct me if I’m wrong, Alex — when Moltbook went up, they wanted to prevent humans from getting on Moltbook. So they created a reverse CAPTCHA, where you had to click a button something like a thousand times per second, which no human could do but a bot could. And they required posting through REST APIs instead of a human interface. But you know what happens? Of course, humans use their bots, or just relatively simple programs, to post instead. >> Bot puppets.
[02:13:00] >> Yeah, bot puppets. Exactly. So it goes both ways. I, for the life of me, don’t understand why CAPTCHAs are still in use, but credit to Luis von Ahn for inventing them nonetheless. >> All right, our outro music is a lot of fun today. I hope you’re watching this on YouTube, because it’s much more of a visual feast than it is an auditory feast. And again, just to remind people, you can reach out to us through media@diamandis.com if you’ve got an outro — we’re getting some amazing entries. So thank you, everybody who’s submitting them. Looking forward to playing as many of them as we can. And, yeah, let’s take a listen and a watch and enjoy. This is called Lobsters in Space by Linda Nielan.
[02:14:13] Now that’s a moon shot. Heat. The moon is cooked.
[02:15:03] Amazing visuals. >> All right, gentlemen. I am so late for my call right now. Love you all. >> Be well. See you guys very soon. In fact, 6 a.m. tomorrow morning. >> Tomorrow morning. >> Oh my god. >> All right. >> If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate. Every week, my moonshot mates and I spend a lot of energy and time to really deliver you the news that matters. If you’re a subscriber, thank you. If you’re not a subscriber yet, please consider subscribing so you get the news as it comes out. I also want to invite you to join me on my weekly newsletter called Metatrends. I have a research team. You may not know this, but we spend the entire week looking at the meta trends that are impacting your family, your company, your industry, your nation. And I put this into a two-minute read every week. If you’d like to get access to the Metatrends newsletter every week, go to
[02:16:01] diamandis.com/metatrends. That’s diamandis.com/metatrends. Thank you again for joining us today. It’s a blast for us to put this together every week.