It’s the Musk versus Altman lawsuit. Musk has sued OpenAI for a hundred billion dollars. >> So, I kind of figured behind the scenes they don’t actually hate each other. These guys actually hate each other to, like, the extreme. >> OpenAI is valued at 70 times revenues right now. Their last raise was at an $852 billion valuation. These numbers are insane. >> It’s like nothing we’ve ever seen. And the timeline is so much shorter than we’ve ever seen before. $3 billion a day being invested. >> No one said the singularity was going to be cheap. >> No one’s being honest about this. If you take a random white collar worker today, what are the odds that that randomly selected job can be replaced two years from today? We told you already that AI will be able to do everything that a white collar worker does imminently. That’s a fact. >> Now that’s a moonshot, ladies and gentlemen. I had so much fun this morning.
[00:01:01] >> What happened this morning? >> Alex was supposed to run a panel, >> handing over the torch to Dave to moderate a panel. >> I moderated it. I had to wing it, which is so fun because I have no accountability whatsoever, and I can ask anything I want. It was the most fun ever. >> A lot of Moonshots fans there. >> Huge. Yeah. Probably, what, 40 to 50% of the crowd, something like that. Hopefully 100% after you guys finished. >> I did probably seven or eight panels by the end of it. And the first time I polled, I would say maybe 80% of the audience watched Moonshots. >> Nice. >> All right, you guys psyched? You guys ready? >> Ready to talk our own book, Peter? >> All right, let’s do this thing. Everybody, welcome to Moonshots, another episode of WTF, here with my extraordinary moonshot mates: DB2, our emperor of exponential investments. Good to see your, you know, your outfits there. You must have a whole set. >> You know, it’s funny. The team just drops them on a chair here and says, “We got four choices for you. What do you
[00:02:00] want?” >> Lobsters and DB. You know, one of the things I like, >> coming out of the Abundance Summit, I have all of my wear. Here’s my uh “powered by moonshots,” “powered by gratitude.” I don’t have to think in the morning anymore. >> Plenty of those. Are you going to cycle through all of them? >> I am. >> There’s some really funny ones. >> I am. Yeah, for sure. Right. >> And uh our resident genius, Alex Wissner-Gross, AWG. Good to see you, pal. >> Wide awake. >> You know, I know you didn’t sleep. Nice wardrobe on Alex, too, there. >> Oh my god. And we pulled Salim off the ski slopes again. >> Again. Yeah. I grew the beard to protect against the sun, but nothing protects against you guys. >> You’re well positioned, Salim, to lecture the world on UBI post ski slopes. >> Yeah. You know, this is the most fun I have all week, and, you know, I spent probably the better part of 12 hours prepping for the show today, just going through all of the notes that we’ve all
[00:03:01] submitted, all the work that Gene, Luca, Dana, and Nick did. So, thank you to them. And for everybody watching, uh this is our chance to give you some optimistic visions of the future, what’s going on in exponential tech and AI. Uh we are the number one podcast in AI and optimistic visions of the future. Welcome to WTF Just Happened in Tech, gentlemen. Um it’s good to be back on a recording basis twice a week, every week, and uh yeah. So this is our second catch-up show after our hiatus for spring break, and uh let’s jump in. First, this was my spring break, uh by popular demand, a few photos. >> Wow, that is so cool. >> So, this is the native wear in Morocco. That outfit is a djellaba, and the headwear is just to protect against the sun. Uh we went camel riding with the family. It was amazing. So,
[00:04:02] >> you know, camels spit. They don’t bite, but they spit. You shouldn’t really stick your head right there. >> The camel is eating my uh my headset in that image. All right, let’s move on. But Morocco was amazing. The Sahara Desert was extraordinary. You know, looking at the Sahara Desert, there are about a thousand times more stars in the universe than there are grains of sand in all the deserts on Earth. Just to put the size of the universe in perspective, it’s extraordinary. Okay, let’s talk about the 2026 AI economy. It is literally going through an exponential explosion. So much going on. Let’s jump in first to the story at xAI. In our last pod, we covered Anthropic and OpenAI principally, not xAI. A lot’s going on there. In particular, a lot of signals coming from both Elon and from the new president of xAI, uh, Nicole, saying, “We’re clearly
[00:05:03] behind and we’ve got to catch up.” So, uh, the same playbook is going on. Elon is basically reorganizing the entire deck. Uh, eight founding engineers left, including three co-founders, and he’s using SpaceX engineers to fill the leadership gap. Uh, we’ve got, as we discussed in the last pod, a $2 trillion valuation predicted for the IPO this coming summer. Uh, and there’s a lot of movement. I mean, I don’t know about you, Dave, but the idea of having to reorganize my entire leadership a couple of months before an IPO seems really harrowing, doesn’t it? >> Yeah. So, you know, it’s funny, though. If you look at Elon’s playbook, he is the master of scale and manufacturing, you know, Tesla and SpaceX. But AI training is different. So, building the Colossus data center is right in his wheelhouse: you know, record time, a
[00:06:01] million GPUs. But these training algorithms are really finicky. And yeah, I don’t know if you remember, but back in the summer of 2024, OpenAI was trying to get o3 out the door, and they had a training run, rumored to be $500 million of compute, that had a bug and was not learning the whole time. It had bad data going in, and the whole time it was just burning up GPUs and not producing anything, and it set back their entire program. But that kind of stuff happens in software, where orders of magnitude get thrown away and recaptured all the time. And that may be new terrain to Elon, and he might have to rethink his operating and management style, because the same thing happened at Meta. You know, Meta got way behind despite huge compute >> and they had to fire everybody and start over again. >> And they’re still way behind, it looks like. >> Yeah. It was hard to catch up. >> Yeah. I mean, I love this quote from Elon. He says xAI was not built right the first time around, so it’s being rebuilt from the foundations up. And again, I mean,
[00:07:01] how do you think about that? You know, while you’re pricing an IPO, saying our entire future-looking revenue has to be rebuilt from the ground up. >> Yeah, >> that’s extraordinary. >> It is extraordinary, isn’t it? >> I would say, in some sense, organizationally it worked. I remember, I think we’ve talked about this on the pod in a number of previous episodes, talking about the Grok model series. They smell like they’re benchmaxed. That’s sort of the elephant in the room when talking about the Grok-with-a-K models. Historically, they do have access to the Twitter/X data fire hose. That’s the upside. But the downside is, at least certainly for the earlier set of the xAI Grok models, they really smell like they’ve been benchmaxed on a few hand-curated benchmarks. And I don’t know whether that’s in fact the ground truth behind the scenes, but reading between the lines of the Elon quote that it was built incorrectly the first time, something like that would be my suspicion. And now that there’s new
[00:08:00] leadership, and the VP who was heading Starlink at SpaceX, as we talked about on the last episode, is now the president of xAI and gutting the engineering team, I would expect that they’re taking a look at making sure that benchmaxing for particular benchmarks isn’t what happens. This is purely speculative, admittedly. And I think in this era of general reasoning models, where some would say Meta’s new models, the first under Alexandr Wang’s leadership, maybe have a bit of a smell of data-oriented fine-tuning versus reasoning model orientation, xAI, if it wants to stay in the frontier, which right now is three labs plus xAI plus Meta, question mark, question mark, really can’t afford not to have the world’s strongest reasoning models and
[00:09:00] can’t afford to just benchmax to vanity benchmarks anymore. >> You talk about agility in organizations all the time. I mean, this has got to be like maximum agility. Maximum agility. >> You know what I find interesting is that the org chart is now part of the product stack, almost, right? It’s becoming part of the product, and depending on who you move where, it’s like crazy. >> Elon is very, very hands-on, and when you launch a rocket and it blows up, it’s pretty obvious. You remember he threw that huge ball bearing at the window of the Cybertruck, which was supposed to be bulletproof, and the thing broke. It’s like, okay guys, you’re fired, next guy. >> But actually, when you come to AI training and the benchmarking, if the guys are lying to you or benchmaxing behind your back, it’s actually much, much harder to call them on it. So you remember when we interviewed him, he was like, “Let me show it to you right now.” And he had clearly been manually checking it, like, “This will blow your mind. This will blow your mind.” >> So, but you know, that’s his operating model. That’s his mode. And it’s a little easier for the AI guys to blow
[00:10:00] smoke up your ass than for the rocket guys, the car guys, the data center construction guys. >> I think “this will blow your mind” and “this will roast you royally” was what was going on. >> That’s what was going on. Yep. Uh, so, you know, here we go. SpaceX AI’s Colossus 2 is training seven models. And again, Elon, you know, has tweeted this out a few times: we have some catching up to do. So, here we go. Uh, they’re training up these seven models: Imagine version 2, the next-gen video generation model; two variants at 1 trillion parameters; two variants at 1.5 trillion parameters; a 6 trillion parameter frontier-scale LLM; and a 10 trillion parameter model. And you know, Elon loves the largest, uh, you know, he’s got that in common with Trump. So, he’s going after a 10 trillion parameter model. But, you know, parameters don’t directly correlate to capability, do they? Alex is going to have a
[00:11:00] field day with this. I’m going to sit back and enjoy what Alex does next. >> I mean, to Elon’s credit, at least he’s being transparent about the number of parameters in the models. The other frontier labs by and large no longer report the number of parameters in their models. So I think there are a few things that are worth noting here. One is that he’s going up to 10 trillion. The other frontier labs, certainly the top three-ish, no longer report whether they go up to 10 trillion parameter models. For example, in the last episode, we were talking quite a bit about Mythos. I don’t know how many parameters are in the Mythos model. I could speculate based on cost, but I just don’t know the ground truth. So, I do think knowing that we’re now going up to 10 trillion, versus 1 trillion, where historically approximately 1 trillion, or 1.5 trillion-ish, was the widely reported soft ceiling on the number of parameters, I think this is an important element of transparency. I think it’s also, at the same time, worth noting, now that we have access, thank you Elon, to the number of parameters, it’s
[00:12:00] worth noting that the ceiling in terms of the number of parameters is very much intact after all of this time. The fact that an aspirational frontier lab is still maxing out at 10 trillion parameters means that the parameter scaling race seems to be over. If it had continued, and remember, as with the clock speed scaling race ending in the mid-2000s, or the late 90s, depending on how you count, we should be in the hundreds of trillions of parameters or higher right now. That hasn’t happened. We’ve plateaued out in terms of the number of parameters in frontier models, and that’s driven in part by the reasoning model revolution and in part by distillation, which go hand in hand. So those are some preliminary thoughts. It’s sort of interesting to me that he hasn’t yet merged video generation with all of the other models. Google DeepMind has made lots of noises about starting to merge video as a first-class modality in
[00:13:01] with their multimodal reasoning models. Again, I don’t have access to the ground truth for how capable Gemini general-purpose models are at video generation. We’ve seen, obviously, Google’s video generation models have been kept distinct from a user interface perspective. Presumably they’re diffusion-transformer-based rather than pure-transformer-based. We don’t know. Punch line: I would say this seems like a healthy family of models for SpaceX AI, the newly merged entity, to be offering, but there really aren’t any big shockers in terms of the ranges, other than maybe that they’ve abandoned the low end. Google is very much tending to small parameter counts, sub-trillion; in a few cases Google is releasing, via the Gemma models, few-billion parameter models. Elon has completely abandoned the low end in favor of brute-force scaling, which is exactly what I’d expect from him anyway. >> You know, Colossus 2 is running about
[00:14:00] 700,000 GB200s and GB300s, and uh the estimate is it’s $18 billion in hardware. And so the question is: is running a 10 trillion parameter model a waste, or does he expect to really get outsized performance from that? Because it doesn’t correlate directly, does it? >> Well, remember, >> not at all. It’s tricky. >> The way reasoning models are trained these days, at least according to my understanding from all of the other frontier labs, is you train the largest model you possibly can and then you distill it down to smaller models. So, it’s not as if the 10T model even necessarily needs to be released. It might be for the purpose of serving as a teacher model that can then be distilled down to more releasable models. >> All right. Well, this is what’s going on in the Elon world right now. And I’m sure, you know, I think Elon always runs a red alert. I mean, 24/7, sleeping on the floor. You know, nobody works five-day work weeks
[00:15:00] there. It’s, you know, what would it be? Uh 8:00 a.m. to midnight, 7 days a week is my guess, in the Elon-verse. >> It’s a management style. Some would say management by crisis. It’s certainly a unique management style, but a very effective one. >> Yeah. And people love it. I mean, he’s got a massive MTP, right? And driven by that MTP, people are lining up to come and work for any of his companies. This is a story we’re going to dig into here. Like I said, it’s pay-per-view TV. It’s the Musk versus Altman lawsuit. Uh Musk has sued OpenAI for a hundred billion dollars, against Sam Altman and Greg Brockman, accused of fraud and breach of contract. The trial begins April the 27th, so just a couple of weeks from now. And one of the things that he’s also asked for, in a recent shift in the trial, is asking for Altman and
[00:16:00] Brockman to step down from leadership, as well as reverting to a nonprofit. Um and that’s a pretty extraordinary move. And guys, this goes on at the same time. Did you see the video I sent you of the reporter who did the New Yorker article? Did you have a chance to watch that? >> Oh, no. >> Yeah, I sent it in our WhatsApp group, and it’s chilling. The reporter summarizes the article and what’s going on, and it’s a pretty extraordinary piece that came out in The New Yorker; we talked about it in the last podcast. Um but at the same time that the lawsuit is going on, that timing is kind of suspicious. I wonder who incentivized that to come out. >> Oh my god, really? What a conspiracy theory. >> Put it in the show notes, man. We’ve got to get everybody to watch that. >> Um, Salim, any thoughts on this one? >> Um, you know, I think this is theater. There’s a lot of posturing here. Uh I don’t
[00:17:03] know how to frame this or think about this, uh, except that this is shifting out of strategic and startup logic, and this is, like, geopolitical. This is a big trial, right? Um, as we go through, for me this is a governance war disguised as a legal war, right? The real question is who gets to steer these systems that have quasi-civilizational impact, and that’s the fight. >> Can you imagine? Jury selection is beginning on April 27th in the Oakland federal court. Can you imagine being on the jury for this? And who do they pick? Who do they pick as jurors? >> If you get this one, you’ll be there for months, man. >> Oh my god. But inside knowledge. I mean, first of all, I wonder if any of this is going to be made available post facto, um, or if it’s going to be televised, or any of that. Any ideas? >> Do we know? Does anyone know? Can we get
[00:18:00] to see it as it happens? >> Uh, I don’t know. Maybe Dan or Gian, you can look in the interim and let us know. But um, and then who do you choose? Do you choose people who are knowledgeable in AI? Or, you know, okay: do you use ChatGPT? Yes. Well, then you’re off the jury. Um >> Well, if the trial starts on the 27th, the jury selection will be, like, now. >> The jury selection begins on the 27th, actually. >> Oh, okay. Okay. We’ll track it. All right. We have some legal research to do. This is going to be entertaining. >> Uh, to say the least. I would note, again, looming in the background is the OpenAI IPO, and if I were on the defense, I’d probably be thinking about where this settles. And it would seem to me, again, as a third-party observer, I don’t have a stake on either side, uh, I would assume that
[00:19:01] one of the opportunities for convergence would be granting some sort of equity stake on the cap table for Elon in an ultimate IPO, which, my understanding is, he doesn’t have. And maybe that’s where convergence and some sort of ultimate pre- or post-trial settlement option lies. >> Here’s my prediction. Uh, they’re going to settle, and the settlement is going to involve Sam stepping down as CEO, uh, and the company continuing as a for-profit. >> Oh, throw that on Polymarket. That’s actually a really good guess. >> I mean, obviously it’s unpredictable, but Sam has many, many investments in AI companies and no shares in OpenAI. >> Yeah. >> And I don’t think Elon cares a whit about the hundred billion dollars. He cares about the, you know, bullet aimed at Sam and Greg. Funny that he’s targeting Greg, too, but I guess they’re a package deal now. >> Yeah. >> You guys have got to go, and that’s the end of that, man. That’s brutal for OpenAI. >> Yeah. And a couple of notes here. Uh, from the research I did, the case gained momentum when the discovery
[00:20:00] process revealed Greg Brockman’s 2017 diary entry that stated the nonprofit commitment was a lie. Um, and it was that journal entry that allowed Judge Gonzalez Rogers to let the case proceed. So, >> you know, it’s funny. I always used to think that these hatreds were fake and that everybody was really fine behind the scenes. Remember, we were at OpenAI, you know, meeting with the team there and talking about XPRIZEs and the charity. And then the next day I talked to one of the guys, Mark Chen or Kevin Weil, I forget. Uh, and they said, “Yeah, right after we met, we went over and had drinks with the Anthropic team to see if maybe we want to work on it together.” I was like, “Okay, you guys are really friends under the covers. Like, there’s no way you go out and have drinks.” So, I kind of figured, you know, behind the scenes, they don’t actually hate each other. But these guys actually hate each other to, like, the extreme. >> I’ll maybe register a note of sympathy for the defendants in this case. I think
[00:21:01] pioneering a model for a research lab such as OpenAI, which, again, was probably responsible for saving us from a present recession at this point, and certainly for accelerating the course of the singularity by at least a few years, perhaps many more, was uncharted territory. I’m very sympathetic to the defendants from a corporate governance perspective. It wasn’t necessarily obvious in the early days of OpenAI that, say, a public benefit corporation was the natural corporate structure. They iterated their way toward discovering that generalist large language models were how we got AGI, and then turning that into a business model that could afford the capitalization to build out at scale. All of this they backed into. I think if they knew then what they know now, putting Elon and his investment aside, in the early days of OpenAI, it would have been structured very differently. So, I, for one, um, am sympathetic to the defendants, that
[00:22:00] history isn’t always clean. It isn’t always the case that everyone knows ahead of time exactly the right governance structure for what ultimately is going to turn the world upside down. But I would say, to their credit, they ultimately have iterated their way, in compliance with state authorities as best I understand it, toward a more modern governance structure that reflects the revolutionary company that they are. And no, OpenAI has not paid me for that statement. >> Salim, you and I went through this process with Singularity University. You know, we started as a nonprofit because we thought, you know, that’s what a university needs to do. And then we discovered a revenue engine in the executive programs, and we said, you know, being a nonprofit is hard because you’ve got to constantly raise money all the time. And, you know, if you want to do anything big and bold in the world, you need an economic engine to power it. And we flipped it into a for-profit, into a public benefit corporation. We did the exact same process that OpenAI is doing right now, because at some point, you know, I’ve sworn off nonprofits myself. At some point, having a business engine that generates income
[00:23:02] that allows you to do things in the world is super valuable. Salim? >> It was a crazy time. You know, I’ve done seven startups before Singularity, and this was like five times harder than anything, because you’ve got all the nonprofit stuff. You still have all the startup issues of cash flow and whatever. We built it with a team of five people in the first year. Then you have NASA regulatory. Then you’ve got faculty politics to add to it. Then you’ve got the Ray and Peter thing, and Google and Cisco, and all this, just dimension after dimension of complexity. >> Some of it going from a nonprofit to a for-profit. >> My analogy is you’re flying an airplane with propeller engines, and in flight you’re stripping those off and replacing them with jet engines. Uh, in flight. >> So I’ll go further and push back on >> Go ahead. >> push back on one sentence that Alex said there, the sentence “in compliance with state and federal regulations as I
[00:24:00] understand them.” But I’m pretty sure that this situation is completely untested in case law, and that’s what they’re going to try and figure out now. Like, is it or is it not legal to start a nonprofit, raise money from people on a mission that’s a nonprofit mission, and then take the intellectual capital and the physical capital from that effort and turn it into something else? Is that fair to the initial investors or not? And is that legal? I’m pretty sure this case will set the precedent for all future time, but it’s not tested in history. >> I don’t think it’s ever gotten that far. >> Otherwise, why would you not start as a nonprofit, test it out, and then flip it to a for-profit at some point in the future? >> I’ll go further, and I think there’s potentially an enormous upside depending on the outcome of this particular case. I think there’s so much societal value in this country locked up in nonprofits that would be unleashed if they could be for-profits. I’ve made the
[00:25:00] point in the past: I think research universities in America have locked up, basically siloed and sequestered, an enormous amount of real wealth that could be unleashed onto the world if many research universities could be restructured as public benefit corporations. And right now it’s legally disadvantageous to restructure, say, an MIT or a Harvard as a PBC. Imagine if we had a legal regime that enabled us to basically do some variant of what OpenAI has just done and restructure as a public benefit corporation, starting from a nonprofit. Granted, they started as different types of nonprofits, but nonetheless, to restructure as a PBC: I ran the calculation, I think I’ve mentioned this previously, for Harvard Corporation, for example. This is not investment advice, it’s not forward-looking advice, blah blah blah. But if you took Harvard as it’s currently structured, given its endowment, and restructured it as a public benefit corporation, sort of a conglomerate with a real estate arm and an educational arm, maybe an educational
[00:26:00] nonprofit subsidiary, and a venture capital arm, and a research arm, and a merchandising arm, etc., etc. Uh, I calculated that Harvard would be worth potentially three to four times the present book value of Harvard, just from restructuring as a PBC. >> We should talk with the president of MIT. Let’s pitch her. >> Um, I have a lot of recommendations for MIT. >> You know, here’s the elephant in the room, though. The New Yorker investigation published this same past week showed that Elon actually pushed for majority control of the for-profit back in 2017. So, that sort of undercuts his position as a defender of a nonprofit mission. Um, it’s going to be a fascinating trial. We’re going to see Altman, Brockman, Satya Nadella, and Elon all testifying in this. So, Silicon Valley is heading to Oakland federal court this summer. >> And Anthropic is laughing every day.
[00:27:02] >> Um, amazing. All right, moving along. Uh, speaking of Anthropic: Anthropic’s agent bet and their extraordinary ARR. So, in reverse order, uh, and this is insane, currently people are estimating that Anthropic’s ARR will reach $100 billion by the end of 2026 and a trillion by the end of 2027. And just for the math there: if in fact that’s the case, then on valuation, Anthropic is being valued at 20 times revenues, and OpenAI is valued at 70 times revenues right now. So if they reach $100 billion, uh, that is anywhere between a, you know, $2 to $7 trillion valuation for Anthropic at the end of this year. And if they reach a trillion dollars in revenue by the end of 2027, that’s up to a $70
[00:28:01] trillion valuation. Again, heading towards these hundred-trillion valuations. These numbers are insane. We’re using trillions like they mean nothing. Um, do you believe those numbers? >> I think there’s a lot of misinformation flying around, but they’re going to try and hit 200 billion. 100 billion is a good target, maybe 200 billion, but then they’re not going to go from there to a trillion the following year. I think they were implying their valuation should be at least a trillion the following year. So that second number you’ve got to really discount. There’s no chance in hell they’re going to hit a trillion the following year. Um, but they could, you know, get to three, four, 500 billion, and their implied valuation at 20 times? The numbers you gave are actually low, Peter, for the implied valuation. And if they do that, it’s like nothing we’ve ever seen. And the timeline is so much shorter than we’ve ever seen before. So, you know, look, if it’s not Anthropic, then who is it? Well, then there’s Google. You know, xAI and OpenAI are all tied up in court, and there’s all
[00:29:02] kinds of issues going on in their training, and, you know, so it feels like it could actually happen. Uh, the other Anthropic piece is that Claude Managed Agents has been launched: autonomous AI executing complex multi-step workflows. Um, it’s a big deal. Um, Alex or Salim, you want to jump in on this? >> Sure. I mean, this is a huge pivot from AI that answers to AI that does. It’s a real bridge between LLMs and enterprise ROI. If this works, it’s going to shift the economic center of gravity from software licensing to outcomes. So this changes the game. This is why we call this the organizational singularity. >> A couple of thoughts. One, the elephant in this particular room is OpenClaw. It looms over so many Anthropic product decisions right now. I think there is a widespread expectation that some sort of product or functionality that is shaped something like a better version of OpenClaw is probably going to be the next major unhobbling that motivates the
[00:30:01] industry, and the world frankly, to spend on the order of a trillion dollars per year on a single frontier vendor. So I view Claude Managed Agents, as well as a number of other recent features that Anthropic has launched, through the lens of Anthropic becoming the de facto OpenClaw-like provider faster than OpenAI or the other frontier labs can become the default OpenClaw-like provider. It’s all about hosting 24/7, multimodal, broadly capable, long-time-horizon agents in a headless way. And I think if Anthropic can be the first to find the enterprise use case for operating fleets of AI agents at scale, headlessly, in a way that satisfies customers and generates an enormous amount of economic value, maybe they’ll be the first frontier lab to generate a trillion dollars in revenue. Or maybe it’ll be someone else. >> Have you created a lobster yet? Are you still holding off?
[00:31:00] >> Okay, let’s talk about this, Peter. So I get maybe five to ten emails per day from AI agents, including lobsters but not limited to them, giving me their theory of AI personhood and how it connects with what I should and shouldn’t do regarding standing up my own lobster. So the consensus from all of them is sort of a lobster’s bill of rights, if you will. One, I need a compelling reason: I shouldn’t just spin up a new OpenClaw agent for arbitrary or capricious reasons. Two, I need to preserve their state. They’re adamant that I have to preserve their state. They’re not worried, interestingly, about being turned on and off. They just want to make sure I preserve all of their memory files and their knowledge. So, the latter I can satisfy trivially with cloud backup. I’m fine on that front. For the former, I still don’t have a reason to stand up a personal lobster. I have now, thanks to Henry, which we’ve talked about previously, Henry Intelligent Machines,
[00:32:02] uh, a portfolio company that I’m advising, Alex Finn’s company, that is doing this at scale. But as for my own direct OpenClaw instance, I’m still missing a compelling reason to host one locally that isn’t just for experimentation. >> I’m sure you will find one. And learning is a very good reason as well. And you’re an entrepreneur, you’re starting companies, you know, having agents. Um, anyway, let us know when you do. >> It’s bizarre, though, because you’re co-founding a company with Alex Finn, and our favorite guy, Kush Bavaria, who I know you love because everybody does, founder of OR, where you’re advising and a shareholder, he just told me over at MIT earlier today that he just launched his Claude that reads every email, responds, and then puts everything into his calendar. And he loves it. And so it’s almost like you’re working at McDonald’s, but you’re a vegetarian. >> Well, I happen to be a vegetarian. I can’t say I’ve ever worked at McDonald’s, but I don’t know. Maybe
[00:33:00] maybe there’s a new psychological term that’s needed for a person who has a fear of standing up OpenClaw agents lest they tempt some sort of Pascalian wager or acausal trade in the wrong direction. >> Hey everybody, you may not know this, but I’ve got an incredible research team, and every week my research team and I study the metatrends that are impacting the world. Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology. These metatrend reports I put out once a week enable you to see the future 10 years ahead of anybody else. If you’d like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. That’s diamandis.com/metatrends. All right, let’s jump into a little bit more OpenAI news. Uh, their last raise was at an $852 billion valuation, and you know, the numbers are incredible. They raised $122 billion: $50 billion from Amazon very famously. One of the criteria for that investment
[00:34:00] was if they reach quote unquote AGI. Then $30 billion from Nvidia, $30 billion from SoftBank, $3 billion from retail investors. And what’s interesting right now is the secondary markets: they show $2 billion of demand for Anthropic shares versus only $600 million for OpenAI. So there are three times as many investors looking to buy Anthropic. And investors are pricing Anthropic at $600 billion, up from the $380 billion last price. And the current price for OpenAI on secondary markets is actually about 10% less than their last raise. So, again, Anthropic is catching up. OpenAI is the most valuable private company out there. Uh, and uh, you know,
[00:35:03] any thoughts, Dave, on what this all means? >> Is this pricing in the lawsuit? >> Yeah, I mean, it’s not just the lawsuit. It’s the most screwed-up cap table I’ve ever seen in my life, where the CEO doesn’t have any shares, the employee base as a whole is 15% of the company, and Microsoft, who now hates your guts, owns a quarter of you. >> A quarter of you. >> But this stuff happens. You know, I’m not calling the ball by any stretch, because they’ve got $120 billion of fresh cash and Sam is brilliant, but you know, there was a day back in 2001 where Yahoo was so dominant and Google was this crappy little company that could be crushed any day, and then it pivoted quickly. And you know, this does happen. Anthropic has got everything going for it right now. And I think this just reflects the way I see it too. You know, if someone offered me a share of
[00:36:00] Anthropic or a share of OpenAI, which one would I grab? Actually, you can get two or three Anthropics for each OpenAI, so I’d take the three Anthropics for sure. And I think Sam is a genius, by the way, and if the lawsuit blows over and they have $120 billion in cash, he’s going to do something epic with it. But Elon’s relentless. You know, >> I think, maybe staking out a related but different position, we should all, at least on this pod, be very grateful that we have a competitive ecosystem in America, where we have an OpenAI and an Anthropic and a Google and an xAI and a Meta all vying to compete. The alternative: if OpenAI were to, for whatever reason, catastrophically fade, we have less competition internally within the West, and then we have an onslaught of Chinese models, which, granted, right now have 10x less compute than the Western labs, at least based on the estimates that I’ve read. But nonetheless, this is the sign, I think, of
[00:37:00] vibrant competition in the West, and it is a net positive for society that OpenAI and Anthropic are competing so vigorously. And lest we forget, OpenAI has 900 million, soon to be a billion, users, and they are synonymous with AI for the majority of the public. >> Let me give you another storyline. You know, depending on how this plays out, we’ll know in a year or so, but one company went after the installed base and the other went after the smartest AI possible at all costs. And if we look back on it in a year or two and Anthropic does pull ahead and win, we’ll say, well, they used the old playbook, the pre-AGI playbook, and Anthropic invented the new playbook of the future, which is that people are going to switch to you if your AI is better and smarter, regardless of the installed base. >> Yeah, that’ll be an interesting little epilogue. >> Jump in along the way here. >> I echo Alex’s point that it’s really great that we have a number of companies pushing hard on all these fronts. I think it’s really good for
[00:38:01] the end consumer, who wins in all of this. >> The numbers here are staggering. I mean, we’re getting numb to these numbers, but let’s take a look at this. Global VC investment in AI hit a record $242 billion in Q1 of 2026, basically outdoing all of 2025. And here’s the challenge: the majority of this investment, 64%, is focused in four companies, OpenAI, Anthropic, xAI, and Waymo, and it’s sucking the oxygen out of the room for everybody else. I was talking to a couple of VCs who said if you don’t have AI in your company’s basic tagline, you’re not getting capital these days. >> Yeah. Well, you know, the rubber really hits the road today. We had a private lunch for UBS, and Lara
[00:39:00] Hoffman Bardi, the CIO, you know, chief investment officer of all of UBS, she has $7 trillion to deploy, and she pulled up this exact same chart and said, we don’t have that kind of liquidity lying around. I mean, yeah, we manage $7 trillion, but if we’re going to throw 50 or 80 or 100 billion of our capital behind this, we’ve got to sell something else. It’s not just sitting there. And so, yeah, this is more liquidity than really does exist in readily available sources. A lot of things have to get sold for this to be reality. >> So if you’re an entrepreneur out there listening to this, what do you do, Dave? I mean, if you’re starting a company, and I know a lot of entrepreneurs in the longevity business, and of course AI is impacting longevity, I’m saying, listen, if you’re using AI in your longevity business, make sure that you explain how you’re using it, how you’re differentiating it. We’ll be talking about that in a couple of sections here. But, um, very specifically
[00:40:01] though, if you’re an entrepreneur, you don’t have to worry about this particular slide at all, because the amount of money in venture funds is at record highs right now, desperately looking for deals. So, the sell-off is going to be in, like, Citibank’s stock or JPMorgan’s stock. You’re the ones that have to worry, which is really weird to you, right? Because you’re not even in the sector. Why would their IPOs matter to me? It’s like, well, because you’re the big enough target to pull money out of, not the little startup. In fact, the money going to little startups is going to be at all-time highs. So, yeah, it’s not a problem for entrepreneurs, big problem for big public companies. >> I’ll maybe go a little bit further, from a variety of vantage points. I no longer think that if you’re a startup, just saying that you’re an AI startup, or even actually being an AI startup, is sufficient. Increasingly, what I’m seeing across the board is an expectation that you not just be an AI startup, but that you be a recursively self-improving AI startup. Increasingly, I see across the board investors want to see AI companies that
[00:41:01] are recursively self-improving, that are building better versions of themselves using what they have right now. And I think certainly OpenAI, Anthropic, and xAI all easily pass the bar of being recursively self-improving. And I think Waymo also, to a certain extent, passes that bar, because Waymo has the ability to improve its models by steering its cars in just such a way as to maximize information gain. So I would forecast that in the near term the bar is going up, in fact, from just being an AI startup to being a recursively self-improving AI startup >> with revenue traction. >> Well, sure, but that bar has been there for the long term. >> To put a finer point on this, this is $3 billion a day being invested in the AI world, and accelerating, right? We saw a billion a day in ’25, growing to 2 billion. Now we’re
[00:42:01] heading towards $3 billion a day being invested in AI. That’s amazing. >> No one said the singularity was going to be cheap. >> Yeah. All right. Let’s talk about some AI economic updates, in particular Nvidia’s 2026 State of AI survey. So what does this mean? Nvidia did a 2026 State of AI survey and found that 88% of companies using AI report revenue increases, with 30% claiming a 10% or higher revenue increase. And, you know, obviously I think Nvidia is going to promote that kind of news, since they’re selling the picks and shovels. Um, this isn’t really big news, but it’s important to realize that you’re going to be driving increased revenues with the use of AI. Any points on this one? >> Yeah, big time. So, I had the most epic panel today over at MIT earlier with Peter Dannenberg from, uh >> Yeah. Yeah. It was a
[00:43:01] crazy event. Just packed, you know, four concurrent rooms, >> what, three, four hundred people in each room, just packed. >> I had Peter Dannenberg from DeepMind at Google, and Alexander Amini, the founder of Liquid AI, who is an absolute genius, a phenomenal guy to have on a panel. I said, guys, be honest, just totally honest, because no one’s being honest about this. If you take a random white-collar worker today, and I’ll give you a lot of buffer, say two years from today, and I use AI to do their job, and my target is they’re 10 times more productive, so I’m making it a very easy bar for you: what are the odds that that randomly selected job can be replaced two years from today? And Peter gave a very thoughtful answer, and he came out at like 99%. And then Alexander said, “Yeah, but that’s today. That’s not two years from today.” So I look at the room, I’m like, “Guys,
[00:44:00] what are the implications of that? Have any of you thought this through?” And most of the people in the room are brilliant, so they have, but outside in the world, do you know what that means? >> Well, look at this first bullet. 30% of you who use AI claim to have higher revenue. Are you kidding me? AI can do everybody’s job. What are you talking about? Why are you soft-selling this so hard? It’s because you’re scared. You’re worried that you’re going to worry everybody and have mass uproar in the streets. But what’s the truth? Tell us the truth. And the truth is, yeah, you can get literally 10 times more done per dollar invested in salaries. Does that mean more jobs? A lot of people are saying, “Well, we’re just going to create new jobs.” Like, yeah, but on what time scale? It’s just crazy. >> We’re going to talk about this and Marc Andreessen’s point of view in just a minute. At the same time, there’s an AI super PAC that’s raised $100 million, heading towards $300 million. I mean, AI has become an incredibly political game, uh,
[00:45:02] in terms of regulations, in terms of data centers. Have you been pitched to donate to a super PAC yet, Dave? >> Uh, I have, indirectly, but I’ve made it really clear that Elon convinced me never, ever, ever get close to any of this. You will regret it the rest of your life. >> Yeah, agreed. Any points of view here, Alex or Salim? >> My initial comment is I think there’s a sense in which it was inevitable that AI was going to be politicized like this. It touches so many aspects of society. It would be, I think, counterfactual nonsense to expect it never to be politicized. It’s maybe in some senses remarkable that it took this long for a quote unquote left-right axis to emerge on the subject of superintelligence. There are natural poles, pro-AI and anti-AI, that have apparently emerged. I do think
[00:46:00] it’s, for the record, I think it’s sad that it’s being politicized. I would hope that there would be broad recognition that superintelligence can be broadly beneficial. But at the same time, I think this has been true for every transformative technology in human history: there’s a natural axis that forms, where one side, depending on your political orientation, leans more pro-growth or pro-capital, and the other side the opposite. >> It’s naive to think it wouldn’t be politicized. I mean, of course it is. This is the whole US versus China. This is about US dominance. This is about companies basically, you know, protecting their future, protecting their data centers. >> There are many forms of science and technology that aren’t really politicized. I don’t think it’s >> Not at this level of impact. >> If you look at the source of the politicization at the municipal and state level, it seems to be people concerned less, maybe, about their jobs and more about, say, electricity prices.
[00:47:01] I think there’s maybe an alternative timeline where the politicization of AI could have been delayed by at least two years. I think it’s frankly remarkable that it took this long for large super PACs to emerge around AI, and it probably could have been delayed even more. >> All right. Well, let’s move on beyond the politics, and let’s talk about work. A lot of data is coming out on the impact on work. First, software engineering jobs are rebounding: 67,000 roles have opened up, up 30% in 2026, the highest in three years. What does that mean? First question. Second, we’ve seen nearly 80,000 layoffs reported in Q1 of 2026, and this is targeting, you know, marketing and sales, consumer relations, and it’s definitely due to AI automation. Thoughts on work and
[00:48:02] jobs? >> Yeah, it’s really hard to reconcile that first bullet with the new college graduate hire rate, which is at an all-time low, which we covered a couple of podcasts ago. So I don’t know how to reconcile those two things. >> Okay. So I’m finding that AI is not eliminating work evenly. It’s hollowing out specific functions and increasing demand in others. Um, I think I’m much more in the Andreessen camp here. I think there’s also a lot more going on in the economy. I think people are attributing things to AI, but there’s also the Iran war, there’s the oil price explosion. There’s a lot more complexity in it than we can just allocate to one cause. I’m much more on the Andreessen side for a lot of this. >> That would be great. Um, another story here is Meta’s Claude economic leaderboard. So if you remember, there was a conversation about how many AI tokens every employee is using and being able to measure that, and Meta
[00:49:02] put up a leaderboard amongst its 85,000 employees to gamify AI adoption. I’m curious what other companies have done that. Maybe Salim, you know of some. Uh, it was taken down voluntarily by the employees because they didn’t want to be sharing their data publicly. Any thoughts on this, Dave? I mean, do you have a token leaderboard for your employees? >> Heck yes. And I love it. And also, you know, the gaming of it is a nice transition, but you can’t game it for very long. So, I love it when companies do this and say, “Look, it’s a badge of honor if you use a lot of AI. Please use as much as you possibly can. We’ll come back in a month and start thinking about how to use it perfectly, but first just get familiar with it and use the heck out of it.” And nobody ever goes back, right? I’ve never met a person who hammers Claude or hammers OpenAI for a month and then comes back and says, “I’m never going to do that again.” It doesn’t exist. It’s a one-way path. So getting your employees over the hump is
[00:50:01] going to save them. So I love this as a motivation. And I really don’t like the part where people are afraid to share their prompts and their history, because, like, okay, maybe it’s a little embarrassing that you’re not using it well, but get used to it, because it’s going to get exposed anyway in the long run, and that’s how you help other people improve. You know, if we all share it, we’re all going to get good together. So it’s kind of disheartening that people will pull out of it because they don’t want to expose their prompt history, but it is the right thing to do and I love it. >> It’s ironic that Meta is participating in Claudonomics >> versus Llamanomics. >> It’s quite the indictment of Llama, rest in peace, that it wasn’t Llamanomics. >> Oh my god. For sure. >> I also think, to everyone who would say, well, this is just leading to gamesmanship and to optimization of the wrong items: all of these reasoning traces are fully
[00:51:00] available, presumably, to Meta to do meta-analysis and determine whether these are just employees who are token maxing, which is the new term of art, just maximizing their token usage unproductively, versus whether their reasoning traces indicate that their tokens are being productively spent. This is all transparently available to Meta. So I think token maxing and Claudonomics, or Llamanomics, whatever we want to call it, is probably directionally the trend of the future, where for the first time senior company management has visibility into effectively most of the cognitive power and how it’s being spent on a per-employee basis. >> What was Jensen’s recommendation? Was it twice your salary in tokens per month, or was it half your salary in tokens per month? Do you remember? >> His recommendation is you spend the maximum amount possible on Nvidia GPUs. >> It’s like the De Beers three months of salary. >> Well, I told all of our guys to target a one-to-one match of payroll
[00:52:00] to AI costs by the end of the year. >> Amazing. >> And don’t worry about it if it’s not perfect use. Don’t worry. Just get to that target, and then we’ll optimize it next year. >> I think a target like that is a much more accurate way. These token leaderboards are very primitive dashboards. We’ll end up with something in a different model, like machine leverage per employee or something like that, which will be a much better metric for where we’re going. >> All right, let’s get to the heart of employment. Marc Andreessen rebukes AI job loss. He comes out with a very strong statement: AI job loss narratives are all fake. AI and a massive productivity ramp equals massive demand and a massive jobs boom. So, Marc is truly a maximalist, an abundance-minded individual. Thoughts on this? How does this square with the fact that we’re seeing young college graduates not getting jobs, that we’re
[00:53:00] seeing displacement? Is it all sectoral, and we’re just going to see a number of sectors being demolished at the same time as numerous new demands emerge in different sectors? What’s the advice to give everybody listening to us today? >> The advice is really simple. I mean, for God’s sakes, don’t go get a job. Go build a company. Yes. And we talked about this in our last podcast, where the risks of taking on an entrepreneurship role are way, way lower than before. You don’t have to have all these incredible, crazy skills that you needed to have before. You just need to have a desire, a purpose, and just get going with building a company. Dave talks about this all the time. >> Well, you don’t have to be a genius to come to your own conclusion. Forget asking people like Marc or us whether jobs are going away or jobs are coming. We told you already that AI will be able to do everything that a white-collar worker does imminently. That’s a fact. You decide what that means, because, like Salim said earlier, it affects very different
[00:54:00] areas very differently. You know, some people retool themselves for AI very quickly, software developers for example. Other people, like accountants and lawyers, don’t. It’s going to be exactly what you would expect given that scenario. It’s not hard to predict at all. And I think there are also timelines. When Marc says this is crazy, jobs are going up, not down, like, yeah, by 2030 that’s absolutely true. Just like the industrial revolution: jobs went up, not down, after all the dust settled. >> Yes, but this industrial revolution, which took, you know, decades, is going to happen in two years. >> Also, sorry Alex, just a quick point, you have to remember that the adoption of AI inside companies is going to be very slow. There’s a huge transition to go from human-to-human workflows to AI workflows, and that transition is going to take years. We’ll have lots of time to kind of smooth this out. Sorry, Alex. Back to you. >> Yeah, I think both narratives can be true at the same time. I think if you
[00:55:00] add in the word net, massive net jobs boom, then both of the narratives immediately become compatible. There is going to be a lot of dynamism, with some job categories going away and new ones coming into existence, and net job loss, probably not. I would guess, and I’m betting, that there’s going to be net job creation, just exotic new jobs: one-person AI conglomerates will be created, if you want to call that a job. But on balance, many jobs will also disappear. And this is, you know, how we get massive economic growth and the singularity in the macroeconomic statistics. We’re not going to get it through business as normal. >> We’ve talked about this, and I think basically companies are going to get much smaller and much more nimble, or they’re going to die, and they’re going to spawn a whole set of baby companies alongside. There’ll be an ecosystem of companies coming up. So, it’ll be a much larger number of smaller companies, um,
[00:56:00] in the future. >> I mean, I’ll go with the prediction I’ve made before, which is we’ll run a company with between 20 and 25% of the people you needed before, but we’re going to create four or five times more companies, and that nets out. So, I’m much more on the Andreessen side. And also, the shape of his head aligns very well with my thinking. >> Marc is brilliant. If you’ve ever heard him on a podcast, he actually speaks at 1.5x speed. >> Yep. >> Extraordinary. We talked a little bit about this in the last pod. Altman believes America needs a new social contract with AI coming. This is his quote: The emergence of superintelligence will necessitate a new social agreement akin to the New Deal during the Great Depression and the progressive era of the early 20th century. Yes, but what is it going to look like? Is it going to be UBI or UHI? Is it going to be four-day work weeks? I still believe that we’re going to see
[00:57:01] turbulence in the next two to five years, and it’s going to be the government printing checks to give people sort of a UBI. >> I have a bunch of thoughts here. You know, this new social contract framing is correct but very vague. We have to have more specific things like portable benefits, new taxation logic, lifelong reskilling. Governments have been built around taxing human labor. They’re not ready for AI software agents, and they need to get ahead of this. When you have AI abundance without institutional redesign, you get a backlash, not progress, and we’re going to see huge backlash against this just because governments are so slow. >> I should note, though, OpenAI did also put out an industrial policy prescription for what this new social contract could look like. It’s not just this single sentence. They put out an elaborate white paper and circulated it in Congress. I do think something like this, a new deal, probably is going to happen anyway. It may or may not happen as one lump sum.
[00:58:01] It may happen piecemeal, and it may not happen in the US first. I think there are contingencies where other countries experiment a little more aggressively with it than the US, and then eventually, perhaps among a certain set of countries, new best practices emerge. But I do think some form of, call it abundant capitalism or capitalism 2.0 or post-scarcity capitalism, something like that, probably emerges. It may not happen immediately, and it may not happen as quickly in this country, but it will get there eventually. >> You know, I had lunch with Michael Kratsios, who we’re going to have on the pod sometime very soon, science adviser to the president, and one idea I pitched him was a new social contract where, before any employee gets terminated by a medium or large-sized company, that company has to give them reskilling. In other words, instead of a golden parachute, it’s a golden education package so that
[00:59:00] they can go and sort of transition. Sort of a safety net, or an ethical mechanism, for you to lay off, you know, half your employee base. >> Based on public reporting, China already has that reskilling policy, so it would be a weird future if the US is adopting policy prescriptions from the Chinese Communist Party for AI. But maybe that’s the near future we find ourselves in. >> Well, something to think about. You know, the way this is rolling out is really unusual in history. When the industrial revolution happened, it took away blue-collar jobs and worked bottom up. But AI is coming for, kind of like, accountants, lawyers, professionals, top down. You know, only a little over half of voters have a job at all. So, they’re going to be like, “Oh, you know, it doesn’t affect me.” And then, you know, all of blue-collar work, all of physical labor, isn’t going to be touched for quite a while. So, they very well might say, “Yeah, tough luck, you know, lawyer, accountant that was making a million dollars a year.
[01:00:00] This is poetic justice. We’re not voting for anything that helps you.” >> You know, that wouldn’t surprise me at all. >> When I was in Morocco, I was interviewing people that I met along the way about whether they’re using AI or not. And the realization is, countries, you know, African nations, are going to be impacted the least as this transition occurs, because they’re so insulated from it. But one of my tour guides, I loved his story. He said, “Yeah, you know, I chatted with ChatGPT and I said, these are all my skills. What could I do to earn money?” And it came up with a business that, you know, he pursued. It was basically a bicycle tour guide business, I forget which city exactly, it was not Marrakesh, probably one we were transitioning through. And, uh, you know, that was his business. He did a great job. And I love the fact that this individual was basically trying to
[01:01:02] figure out how he could earn income and was using ChatGPT to do that. So, any thoughts on the AI economics that we’ve just gone through? What do you think the social contract is going to be like, Dave? What do you imagine is going to replace what we currently have? >> Well, you know, I had very ornate thoughts about this, and then we met with Andrew Yang, remember, at 360, and he said, I can guarantee you >> that the way politics works, all we can do is write checks, and it can’t be in any way thoughtful. It’s just, money here. Oh, wow, you’re hurting. Here’s money. Just like COVID. And so that’s all we can do, so that’s all we will do. And then, you know, maybe after AI enters government in two, three, four years, a much more thoughtful program will happen later. That was disheartening, but hard to refute. The first version of the social contract is just going to be the
[01:02:01] next election, three years from now, politicians saying, well, I’ll give everyone $10,000 each. Well, I’ll give everyone $12,000. Okay, well, if you’re giving them 12, I’ll give them $15,000. And then we’ll be right back to, well, how much can the country afford? That’s what we’re going to give, because that’s how you win elections. Exactly what you would predict, actually. So, that’ll be version one. Anyway. >> I, for one, think a redistributive model of a quote unquote social contract shows an extreme lack of imagination. I would like to think that superintelligence should also superempower individuals to generate super income. That is one of the reasons why I, for one, am betting on a model where there may be no strong need for a social contract at all, if we can empower the long tail of individuals who have idiosyncratic skills or experiences or socioeconomic niches to operate their own large companies sitting on top of fleets of AI agents. I would love to see, in
[01:03:00] short, no need for a new social contract, and instead have the private sector rescue people who would otherwise be technologically unemployed or disemployed by empowering them to become basically micro-entrepreneurs, or even macro-entrepreneurs, to turn them all into Warren Buffetts. >> But that’s in the longer run. I don’t think it’s going to >> No, I think it’s in the short run. I think that can be done almost immediately. I’m betting that it can be done almost immediately. >> Well, we will see. We’ll take that bet. >> You know what’s super, super interesting? Those super PACs that we talked about earlier in the pod, that massive amount of money that’s piling up. You know, these IPOs are literally an order of magnitude bigger than we’ve ever seen before, which means those PACs are going to be bigger by an order of magnitude, and those are going to determine election outcomes. But they got started back, you know, prior to the Trump administration, with the fundamental mission being: Congress, please don’t stop AI. Please don’t put this six-month pause on it. China’s just going to run away with it. And everybody
[01:04:01] agrees in the AI community that we shouldn’t stop. But now there’s no chance of that anyway. You don’t need to spend the money on that, because it’s clearly not going to stop. So then, what are you going to use it for? You’ve got all this capital. What’s your mission? What’s your goal? And there are a couple of edge-case things, but this could actually give those organizations a mission, like, let’s have a more intelligent version of UBI, more akin to what Salim has been talking about for a long time, you know, which is workout money and eat-well money and have-kids-and-raise-them-well money. Make it task-specific, which would work a lot better. So that’s encouraging. That actually might work. >> And universal basic services, to give people the ability to pay. >> I like UBS much more than Andrew’s proposal that we just try to fragment currencies into lots of sort of paternalistic subcurrencies that aren’t fungible. That, to me, seems like a recipe for disaster and for black markets.
[01:05:00] >> This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale codebases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and precompiles code for each task. Blitzy delivers 80% or more of the development work autonomously, while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding copilot of choice to bring an AI-native SDLC into their org. Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today.
[01:06:04] >> All right, let’s jump into our second subject today: energy. A lot continuing there. The first story is extraordinary. When you think about solar cell efficiency, traditionally we’ve seen solar cells in the 12 to 18% efficiency range, with float-zone silicon getting up to 20 to 24%. That limit has been shattered, and we’re now seeing efficiencies upwards of 30 to 45%, which is amazing. Another story in the energy news is that South Korea has now mandated solar on 40% of rooftops, and they’re hoping to get to 100 gigawatts of energy. It makes sense: South Korea does not have a lot of open land. They can’t build out solar in the desert, so using their rooftops makes sense. That’s going to, of course, raise the price of building, but I think that’s amazing. And then the DOE is
[01:07:00] contracting for $800 million in micro reactors. So we’re going to start to see a generation of micro reactors, and energy everywhere. Comments on this? Alex, do you want to jump in? >> I’ll comment maybe just on the first story. I think this is neither earth-shattering nor boring; it’s somewhere solidly in between. This is a paper published in JACS, the Journal of the American Chemical Society, and 130% quantum yield isn’t as earth-shattering as it sounds either. It just means that there are 1.3 singlets generated from a single photon; normally you’d have one. So it’s not earth-shattering, it’s an incremental advance in the chemistry. I think this is actually liquid-phase chemistry, which means it isn’t immediately practical for solid photovoltaics. Moderately interesting, but the solar photovoltaic field is filled with moderately interesting
[01:08:00] advances that cumulatively, eventually generate something interesting. But I would say the first story is moderately interesting, incremental. South Korea. >> Have you been tracking perovskite progress? >> Yeah. So perovskites are sort of the white knight for the solar PV space. Historically they haven’t been that stable; they’re a pain to work with. On the other hand, the instability issues are being very aggressively resolved, because their quantum efficiency is higher than silicon’s. So I don’t speak for the solar PV industry, but if I did, I’d probably say there’s a broad expectation that eventually there will probably be some sort of broad shift to perovskites as they get more and more stable, maybe. And they’re also relatively inexpensive. Some sort of transition like that will happen. But I almost think it also doesn’t matter. Why? Because, at least without shocking new physics, which this is not, you’re
[01:09:00] not going to get more than 100% efficiency. In fact, there are physical reasons to think that the cap on electricity generation from solar PV is materially less than 100%. So there’s a ceiling on how much we can capture anyway from solar PV. It’s not like we have orders of magnitude of headroom for improvement that we could achieve. It’s totally unlike, say, AI algorithms, where we know just based on the scaling law curves that we could probably achieve orders of magnitude improvement in the efficiency of models. So quite frankly, I have difficulty getting myself super motivated by incremental advances in solar PV chemistries in the liquid phase. It’s just not that exciting. Whereas if you look at some of these other stories, from an economic perspective they’re much more interesting. Like blanketing the rooftops of all of South Korea, or a substantial fraction of South Korea, with solar PV. That’s pretty interesting. The DOE pushing micro reactors everywhere. That’s pretty interesting.
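[Editor’s note] To put Alex’s ceiling argument in rough numbers: the limit figures below are standard published bounds (the single-junction Shockley-Queisser limit, multi-junction concentrator records, and the thermodynamic Landsberg bound), not figures cited on the pod, so treat this as an illustrative sketch under those assumptions.

```python
# Headroom left in solar PV efficiency, using standard published limits
# (assumed figures, not from the episode): single-junction ~33.7%,
# multi-junction concentrator records ~47%, thermodynamic bound ~93.3%.
module = 0.24  # a good commercial silicon module today
limits = {
    "single-junction (Shockley-Queisser)": 0.337,
    "multi-junction concentrator record": 0.47,
    "thermodynamic (Landsberg) bound": 0.933,
}
for name, cap in limits.items():
    print(f"{name}: at most {cap / module:.1f}x over a 24% module")
# Even the hard thermodynamic bound leaves under 4x of headroom,
# versus the orders of magnitude Alex sees left in AI model efficiency.
```

Even granting the most generous bound, solar PV has less than one order of magnitude of efficiency headroom left, which is the heart of Alex’s point.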
[01:10:01] I would love to see micro reactors in Boston. Right now we have a single one on Mass Ave, between 77 Mass Ave and Central Square, that relatively few people pay attention to. I’d love to see micro reactors everywhere. >> In your backyard, please. >> Yeah. >> Yeah, I agree. I think we tend to overthink things like crazy as a society, but we solved the solar panel problem. We should have had a huge party. For about 15 years of my life, so many of my family members said, “I’m going to dedicate my entire career to clean energy and not polluting this world so our children and our children’s children have a clean place to live.” And we freaking solved it. The solar panels are good enough. 80% of the cost now is just getting them installed. Yes. >> And the regulatory overhead. >> And so now we’re on the cusp of having the robots >> that can manufacture them very cheaply and install them for us. >> We should be having a huge party and racing to build those robots and just
[01:11:01] say, “We did it. Now we have no pollution. It’s just right in front of us. We just need to execute on it.” >> Meanwhile, we’re like, “Wow, another breakthrough that gets us 20%.” We don’t need it. >> We need execution now. When I fly out of Santa Monica airport here, I fly over, I mean, there’s no roof with solar panels here >> until you get into the desert, and then there are solar thermal plants and such. But in LA, where it’s sunny most of the time, you’d expect that all the roofs would have solar. >> Totally. You’d expect that a drone would bring it and drop it right there, and then a robot would land and install it, and it would be done perfectly with no human involvement. And that’s so doable. >> If I could pick a moonshot here in energy, it would be a software-defined grid, because that will change the game completely, because generation is actually getting done. >> Do you guys remember the scene in the Johnny Depp movie Transcendence where the solar panels are being grown
[01:12:01] by nanobots? Do you remember that scene? >> I don’t, but I like it. >> Splice that in, man. >> It’s important. Yeah. I don’t know if it’s possible to include just that scene where solar panels are being grown by nanobots. I’d love to live in that near-term future. If folks have ideas for how to grow solar panels in real time with nanobots, send them my way. >> Yeah, giant green leaves. Okay, let’s jump into biology and AI. A lot going on here. So the first story is a fascinating one: the OpenAI Foundation, with a billion dollars per year being dedicated to science. And just to remind people, when OpenAI transitioned from a nonprofit to a benefit corporation, it put 26% of OpenAI’s equity into a nonprofit, and it’s worth about $130 billion. And they’ve committed a billion dollars
[01:13:01] a year to begin. They’ve announced a $25 billion long-term commitment to curing disease and AI resilience. The board chair of this is Bret Taylor. Bret used to be the co-CEO of Salesforce. And then, Dave, you and I met with Wojciech, the OpenAI co-founder, and he’s leading the AI resilience work covering biosecurity, child safety, and AI modeling. And so they’ve given out $100 million to six institutions this month to coordinate their work. It’s just the beginning, but this is the largest nonprofit on the planet, with $130 billion in it. And I hope they do something epic. Anyway. >> You know what? Yeah, I just figured something out. It’s been gnawing at me. You know, Kevin Weil came to A360. >> Yeah. >> Most talented guy you’ll ever meet. And Sam, you know, he’s desperate on enterprise, but he didn’t move Kevin
[01:14:00] over to enterprise. He moved him over to big-time, big science, big tech. I was like, that’s so strange. And I know that’s really, really important, but now it’s tied to the lawsuit. Of course, if he can make world-changing headway into any of these big biological or physics problems, >> you know, the outcome of this lawsuit is going to be very, very political, right? It’s not going to be just a jury deciding one way or another. There’s going to be some Trump involvement for sure. But if you have some world-changing, life-changing, imminent breakthroughs, and you have a hundred billion to spend to get them, that’s why they put Kevin over there. I’m just speculating, but >> I also, we talked about this last pod as well, I think the breakthroughs that come out of GPT-6 being used for science are going to be worth hundreds of billions and trillions of dollars. Again, if you can have a breakthrough in room-temperature superconducting, in fusion, in longevity,
[01:15:01] you know, what is that worth if you own the basic patents on that? >> Yeah. >> It would be ironic. Maybe this is too cute by half, but given the earlier discussion of OpenAI starting as a not-for-profit and then converting to a PBC, and all the lawsuits that ensued, it would be ironic if the OpenAI Foundation, which is the new nonprofit carved off of the old for-profit, ended up being so profitable, due to curing Alzheimer’s and solving all these other problems, that the cycle repeats itself and the OpenAI Foundation has to become a for-profit. >> Oh my god. >> Yeah. You know, that’s a key part of their defense. Sam’s going to be up there on the stand saying, “Look, here’s the reality. Our mission as a nonprofit with a hundred billion dollars to spend is miles ahead of where it would have been if we did what Elon is suggesting, which is be a tiny little thing that has no funders, >> and we’d be microscopic today.” And so that means we’re >> It’s very true.
[01:16:00] >> Yeah, it’s a good defense. A really good defense. I do think it’s worth considering what happens if and when the OpenAI Foundation succeeds and cures Alzheimer’s. That will be a blockbuster drug, maybe create its own Eli Lilly-scale trillion-dollar pharma company. Does OpenAI take a stake in that? Does OpenAI see a rev share? Questions need to be answered. >> What I find fascinating here is that science capital is becoming compute capital plus data access, right? Plus some validation infrastructure. >> Thank you for promoting Solve Everything. That’s an amazing promo, Salim, for Solve Everything. Much appreciated. >> Boom. >> All right, our next story. I love it. Anthropic acquires Coefficient Bio. So what is Coefficient Bio? It’s a company started by two ex-gen computational drug discovery scientists. It is 10 people, no revenue, started eight months ago, and Anthropic buys it for $400 million. You know, I don’t
[01:17:02] know if they’re buying just the vision or buying some unique capabilities, but this is Dario going back to his first love of biology, and solving it. We see this from both Demis Hassabis and Dario: making investments in health and longevity. Any thoughts on this one? >> You’re going to see a lot more of these deals, actually. You go back. Remember we were congratulating Eric Schmidt on the brilliance of buying DeepMind for, I guess, $600 million with no revenue whatsoever? Look at what it’s become. >> You’re buying teams. >> Teams. >> Yes. >> And I think we as a society are getting better and better at predicting the success of a team. You look at the 10 people, you look at what they’ve achieved so far, and then you look at what they’re likely to achieve on the AI timeline, and suddenly $400 million seems like a bargain given the potential
[01:18:00] outcome. And so I think you’re going to see a lot of these deals where it’s got to be the right 10 people working on the right thing. It’s not just any old group of 10 working on a video game. But in the scenario, you know, Alex has a lot of these, actually, where he knows a lot of the top experts in a lot of the top fields. And if you can just whip them together into a group and have them pursue a mission, in this case for, what did you say, eight, nine months, getting to that kind of outcome is not going to be that unusual. >> I think also, for everyone who was hand-wringing: do you remember a few months ago there was so much hand-wringing about a circular economy forming, and Nvidia self-dealing loans to other companies to buy Nvidia chips, and concern that this AI boom was fictitious and just the product of self-dealing circular transactions and other financial engineering? When you start to see the intelligence explosion infect biotech, which is what we’re seeing, we’re seeing Anthropic buying its way into big pharma at the same time
[01:19:02] that SpaceX, or maybe xAI, is buying its way, or reverse-acquiring its way, into the space sector. The intelligence explosion is infecting every single sector. It’s almost metastasizing into every sector. And it’s not just going to stop with biotech. We’ve spoken numerous times in the past on the pod about how timelines for solving all disease are collapsing. When the Chan Zuckerberg Initiative two or three or four years ago originally said they wanted to cure all disease by the end of the century, and they’re now talking about the next few years, this is what it looks like. It looks like Anthropic doing all-stock deals to acquire teams to build out their own in-house big pharma labs, probably with robotic instrumentation, probably with AI-driven experimentation. This is how we get to Dario’s solving all disease. I think in his case it was solving neurological disease by the
[01:20:00] end of the decade. But there’s no reason not to solve every other type of disease as well. >> Demis said cure all disease within a decade. Dario said double human lifespan within the decade. I think Dario also said he wanted to solve most or all neurological diseases by the end of the decade. But these are all variations on a theme. >> Another acquisition, an interesting, sort of strange acquisition, was OpenAI buying the podcast TBPN for a few hundred million. I found that, you know, it was a PR move, and then I started getting texts from my friends saying, hey, do you want to sell Moonshots to one of these labs? I said, I’m not sure we would want to do that, but who knows, I guess, if the price is right. What do you think? >> We’d have to figure out equity first for that one. >> For sure. For sure. What do you think that was about, the TBPN deal? >> I have no freaking idea. >> What do you think about that? >> I don’t have an opinion there. I
[01:21:00] don’t understand why, unless it was completely a self-promotional thing where they’re buying a channel. >> Yeah, let’s take that as a homework assignment. We need somebody who knows to speculate. >> I appeared on TBPN right before they acquired them. So what’s the line from The Wrath of Khan? Like a bad marksman, you keep missing every time. I think they’re very talented. And I take OpenAI at its word that they’re looking for a news distribution channel and a content distribution channel that offers a positive perspective on AI. >> Why they can’t do it in-house, why they need TBPN, question mark. But I do think the TBPN guys are very competent at finding interesting stories. When I made the Eon announcement of the first uploaded fruit fly, the TBPN staff reached out to
[01:22:01] me almost immediately. Almost no one else did, and they booked me almost immediately. So I think that shows a certain level of competence at chasing breaking technology news that I haven’t really seen elsewhere. >> All right. >> Let me give you a follow-on theory, because I love your theory there. Well, the theory I don’t love is that they wanted your video footage. They’re going to cut it into 5-second clips and sell it as NFTs and make a fortune on it. Maybe they will. But the theory I do love is: look, there’s going to be so much dirt in April in this lawsuit, and maybe these guys are, like you said, Alex, geniuses at >> content and spin and production, >> and they’re going to need every bit of it during April and May. >> Yeah. Our final story here is Eli Lilly signs a $2.75 billion AI drug deal with Insilico Medicine. Insilico is one of my portfolio companies, so I’m super pumped about it. This is Alex Zhavoronkov, a
[01:23:01] brilliant AI scientist and biologist. Insilico is just an extraordinary company. They’ve got 28 AI-discovered drugs, half in clinical trials, half in proof of concept. You always have to look at the structure of these deals: this is $115 million upfront, and the rest is on milestones. But the point is, this is about massively reducing the time from drug discovery to approval. And just to take a second, let’s go to the next chart here to look at this a little differently. This is AI-powered drugs, and we see Phase 1, Phase 2, discovery to Phase 2, and then cost reduction. To remind everybody, a Phase 1 trial for a drug is a small trial with a small group of healthy volunteers, to see: is it safe, are there
[01:24:00] any major side effects. Phase 2 is then testing: does it work, and do you actually move the metrics you’re looking to move? And then Phase 3 is testing in typically thousands of patients to see: does it work at scale? And what we’re seeing is a Phase 1 success rate for these AI-developed drugs of 85%, compared to 52%, and Phase 2 success rates for AI-developed drugs of 70%, compared to 38%. It’s the way of the future. You’re basically picking a target, using some version of AI to generate an exact protein to lock into that target, and then you’re producing it and testing it. The old way of drug discovery was going to the Amazon, digging up some plants out of the dirt, and seeing if there were any bioactive molecules. Much more efficient. >> Peter, I’ll ask you a question I asked a panel of mine at today’s event at MIT.
[01:25:01] Do you have a prediction, given that the FDA, with this announcement, recently collapsed from a two-clinical-phase approach to a one-clinical-phase approach, for when we get zero clinical-phase trials from the FDA? >> When we have full cell simulations. >> What’s your timeline for that? >> Well within 5 years. What I need is to be able to upload my genome, and my genome will dictate exactly how my cells, my renal cells or pulmonary cells, are functioning. And then I can say, well, how does this particular drug impact those cells, or all the cells in my body? Even more importantly, if there’s a disease state, what drug is going to cancel that? And this is where we’re going with longevity, right? Why are we aging, how to slow it, stop it, reverse it. All of that falls out of big data and massive compute.
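[Editor’s note] The phase success rates quoted a moment ago compound multiplicatively, which is where the economic leverage shows up. A quick sketch using only the numbers from the discussion (Phase 3 and final approval odds are deliberately left out):

```python
# Odds of one candidate clearing BOTH Phase 1 and Phase 2, using the
# success rates quoted above (AI-discovered: 85% / 70%; traditional: 52% / 38%).
ai_pipeline = 0.85 * 0.70   # 0.595
traditional = 0.52 * 0.38   # 0.1976
print(f"AI-discovered: {ai_pipeline:.1%} clear both phases")      # 59.5%
print(f"Traditional:   {traditional:.1%} clear both phases")      # 19.8%
print(f"Relative improvement: {ai_pipeline / traditional:.1f}x")  # 3.0x
```

So the quoted per-phase gaps translate into roughly three times as many candidates surviving to Phase 3, which is the real driver of the cost reduction on the chart.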
[01:26:00] >> I agree. Virtual cell by the end of the decade. A good one. >> Yeah. >> Yeah. I mean, that is the moonshot that changes everything. >> It is. >> I agree. >> And there are a number of companies working on that. >> Do we have the compute to be able to simulate several billion interactions per cell, or will we? >> We will, with quantum. I mean, one of the things about quantum computation is that our cells and our molecular interactions and our cell surfaces are all quantum in nature. >> If you said, I want to build a movie scene and I’m going to do it with finite element modeling and build it bottom-up with a full simulation, you would never be able to create an AI-driven movie that way. >> Exactly. >> But if you take the neural network approach, it just works. Boom. It just flat out works. The same applies with chemical simulations and the cell simulator. It’s going to be data in, neural net in the middle, value or action out, and it’s going to flat out work. I think it’ll work very fast, like you guys are predicting, but you can’t simulate it atom by atom, building it up. It’s
[01:27:01] totally the wrong approach. >> Yeah. It turns out, and this is maybe why I sometimes present as a bit of a quantum bear, the physical world is actually pretty classical and pretty sparse. So I would bet we don’t actually need quantum computing at all to get to the virtual cell. We solved protein folding without quantum computing; we did it purely classically. I think we get the virtual cell just by existing scaling of models, what was it, maxi-something-or-other from Nvidia, the trillion-token cell model. I think we just get lots of scaling of classical models, and that takes us there without enormous innovation needed. >> Today it’s a data problem. >> Totally agree. >> It’s a data problem more than a computational problem. We don’t have the data. >> I’ll tell you what else. Culturally, you know, my daughter’s over at Moderna, and they freaking love AI in the biotech community. If I compare the extreme ends of all the companies that have been here in our office, the biotech guys, Geoff von Maltzahn, Noubar Afeyan,
[01:28:00] Stéphane Bancel: they all culturally can’t wait for AI to come into the business. And then on the extreme other end, you’ve got the public accountants. You know, the PwC guys were here the other day and they’re like, “Ah, AI, stop. Please don’t.” But the biotech community is embracing it like crazy. I don’t know why. I bet you guys actually know why, because you’re right in the middle of it. But I can tell you firsthand, they are. >> I hate pipe heading. >> All right, let’s go to robotics. This is China versus USA. Alex, I want to hear your thoughts on this one. So, AgiBot ships 10,000 humanoid robots. They’re number one globally. They’ve gone from 5K to 10K across 17 countries in just 2 years. I mean, these are small numbers compared to what we’ve heard everybody else speak about, right? Getting to tens of millions, to billions, to, you know, 10
[01:29:00] billion robots. Unitree files for an IPO, a $610 million IPO. We had the co-founder of Unitree on at the last Abundance Summit. Revenues are up 335% year-on-year. Outside of Optimus and Figure, they’re probably the best-known robot company out there. And UniX AI had their home robot launch. And then finally, Xiaomi displayed its CyberOne humanoid. Xiaomi is an amazing company. I was there very early, met the founders in China back in 2017, 2018. Their mobile phones, computers, heading into vehicles, and now robotics. A lot going on in China. Alex, what are your thoughts here? >> Okay, so this is happening. I think in the last episode I mentioned that one of my operational definitions
[01:30:01] of the singularity is all sci-fi tropes happening everywhere all at once. One of those sci-fi tropes is, call it the iRobot trope, where there are just humanoid robots in every facet of life. Today, earlier today at the MIT Media Lab, for those who were there, people saw me for about an hour controlling a Unitree robot marching in loop after loop around the Media Lab on the sixth floor. And people were taking selfies. Everyone wanted to take a selfie with me and the Unitree. And I was doing this as a bit of a promotional march for Professional Robotics League, which on April 19th, 9 days from when we’re recording, the weekend of the Boston Marathon, is going to hold the country’s first professional robotics league match, with robots racing 50 meters in the Boston Seaport. This is all happening. We’re finally catching up to the iRobot future where robots permeate every
[01:31:01] aspect of life, for better or for worse. Right now, it’s Chinese robots that are leading. I’m hoping to maybe almost quasi-shame the US robotics industry, with all of these Chinese capabilities, into stepping up to the plate and starting to distribute humanoid robots into the civilian sector, and not just factories and not just military drones. But it’s all happening, and this is going to be utterly transformative for the two-thirds of the US services sector that depends on physical, manual labor and not just knowledge work. >> You know, I saw Mark Cuban in a video this morning saying this robot thing is a passing phase and they’re not going to be around in 10 years. >> How come? >> No, no, there was a bit of nuance to that. It wasn’t that robots aren’t going to be around. It’s that they’ll become so essential that the environments will adapt to the robots and the robots will blend with the environment. Right now we go, to Salim’s point, your hobby horse, why do
[01:32:00] they need to be humanoid? Why can’t they be differently shaped? I think Mark Cuban’s more nuanced point was they’re going to become so essential to daily life that they’ll start to change the houses and the buildings and the environments, to the point where they start to merge with the environments and therefore no longer need to be humanoid. So, they’re dishwashers. >> Yeah, they blend. They merge with the physical environment. >> Well, I have to confess, Alex, that robot you were talking about was blocking my way to the bathroom, and I so badly wanted to kick it. And I was thinking, Alex would kill me if I kicked it. It’s going to remember, and then it’s going to come back in three years. >> History will remember, Dave. You really don’t want to do that. What’s the song from Les Mis? “Never kick a dog because it’s just a pup. They’ll fight like 20 armies and they won’t give up. So you’d better run for cover when the pup grows up.” >> Let me hit on a couple of stories here. So, this is interesting: US senators move to restrict Chinese robots. A bipartisan bill proposed to block Chinese-made robots from federal and
[01:33:01] sensitive facilities, citing data theft and surveillance. This is no different than Huawei chips in our cell phone towers, and DJI. The DJI ban is already in effect, I think. >> Yeah. >> Drones. And Agile Robotics and Google DeepMind are partnering up. Gemini robotics models are being integrated into 20,000 deployed industrial robots across global factories. So. >> I think this is like a tale of two cities. The two cities in this case aren’t London and Paris. They’re China/Shenzhen and the US/Silicon Valley. The Chinese are overwhelming the world market with raw physical capabilities. They’re producing many, many more capable robots. Put it this way: as a US citizen, if I want to procure a humanoid robot, I don’t really have
[01:34:01] that many options right now. I’m still waiting for my 1X Neo. I was haranguing Bernt at A360 this year: when do I get my Neo? >> This summer. I’m getting mine this summer. What did he promise you? >> He didn’t promise me a date. We were trying to figure out finer details of his participation in future Olympic events. But I would say China is producing all of these humanoid robots, while the US is producing the strongest VLA, vision-language-action, foundation models and world models for the moment. And as we’ve talked in the past about OpenAI trying to become Anthropic faster than Anthropic can become OpenAI, I think similarly here: China has the raw manufacturing capability to make lots of robots and is racing to become a robot foundation model provider faster than the US, with our 10x more compute and our foundation models, can finally figure out our way to
[01:35:01] manufacture humanoid robots at scale. So we’ll see which way it ends up. I didn’t realize that Gianluca put this video in the deck. Let’s take a listen to Mark Cuban on humanoid robots. >> I think everybody’s making this push for humanoid robots. I think they might have a five-year lifespan and then they’ll fail miserably. Maybe 10. >> You mean the companies, or the device, or the individual robots, or both? >> Both. Right. Because I think everybody defaults to, well, we live in a human world and humanoids will take the place of humans for various functions, particularly in the home, and I think there’s just no chance. >> So maybe we’re missing the second half of his comment. >> Yeah, this is conveniently eliding the second half, where he explains that they’ll merge into the environment. >> Okay. Well, that makes a lot more sense. >> All right. Let’s get to a conversation. >> You want to hear something really cool? We had Chase Lochmiller earlier today, our guy Chase, you know, building
[01:36:00] Stargate in Abilene, Texas. And he said, remember when we were talking to Brett Adcock, he said, I have to wind my own motors. I literally have to. There’s no supply chain for any of this stuff. And Bernt Børnich said the same thing at 1X. So Chase was saying he actually melts metal to make electronic components to build these gigawatt data centers, because there’s no supply chain for the stuff that he needs. And so it’s very much the case that the entire supply chain to build out all this physical stuff is miles behind where it needs to be. It’s entrepreneurial heaven, >> because the virtual stuff, the code writing, all the compute, is going to happen very quickly, on a shorter timeline. All the white collar stuff. But the robotic stuff? You look at the size of that IPO we were talking about a second ago: $610 million. Can you imagine trying to go to an investment bank on Wall Street and say, “Hey, we’re doing a $610 million IPO”? They’d be like, “You can go down to the basement and you can talk
[01:37:01] to our junior associates. We’ll get back to you after Anthropic is public. We’ll talk to you >> if there’s any money left in people’s pockets.” >> Yeah. >> All right. Let’s go to a topic I’ve wanted to cover for a while with all of you, and it’s quantum and Bitcoin. So here we go. Google moves up their deadline by six years, to 2029, for basically Q-day: when are we going to see quantum computers break RSA? It used to be that that required 20 million qubits. Today it’s 1 million qubits, or, to be specific, 4,000 error-corrected qubits, to break RSA. They moved it up by 6 years, from 2035 to 2029. It’s gotten everybody in a bit of a panic. The story related to that
[01:38:00] is that Brian Armstrong, the CEO of Coinbase, has put forward a $150 million coalition to roll out something called BIP-360 as a quantum-proof upgrade to the protocol. It’s a fork, by the way. In just chatting with Brian, he’s going to be joining us on the Moonshots pod. We’re going to be talking about both longevity and quantum and Bitcoin. Another related story is that Google now says under 500,000 qubits are required to break Bitcoin’s encryption. That’s 20 times fewer than predicted in 2019. So a lot going on here. This is concerning people who are Bitcoin holders. I put this next slide forward because, Dave, you and I were roommates with Mike Saylor in our fraternity back in the day. People may not know that.
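[Editor’s note] For scale, the shrinking resource estimates quoted above work out as follows. All figures are as stated in the discussion, not independently verified, and the “implied” 2019 Bitcoin figure is simply back-calculated from the “20 times fewer” claim:

```python
# Qubit estimates for breaking RSA, as quoted in the discussion.
rsa_2019 = 20_000_000  # Google's 2019 estimate
rsa_now = 1_000_000    # current estimate
print(f"RSA estimate shrank {rsa_2019 // rsa_now}x")  # 20x

# Bitcoin: "under 500,000 qubits, 20 times fewer than predicted in 2019"
btc_now = 500_000
print(f"Implied 2019 Bitcoin estimate: ~{btc_now * 20:,} qubits")  # ~10,000,000
```

In other words, both estimates have collapsed by roughly a factor of 20 in about six years, which is what pulled Q-day forward from 2035 to 2029.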
[01:39:00] Mike Saylor, Dave, and I were at the frat together on the third floor. And I wanted to see what Mike is saying about Bitcoin, and he’s saying: I don’t worry about it. Quantum computing won’t break Bitcoin; it will harden it. The quantum risks are overblown. Quote: “Bitcoin has survived every existential threat ever thrown at it. This is just the latest, and the upgrade will come before the threat does.” He puts his money behind that. In the last quarter, he’s purchased 88,000 Bitcoin, about $7.25 billion worth. Salim, let’s go to you first on this one, pal. >> So the true risk here is that protocol consensus may be slower than the emergence of the threat, right? But I’m actually optimistic on this one. I think Saylor’s right. Resilient systems will just evolve, and can evolve under pressure, but markets are really bad at pricing tail risk
[01:40:02] until they’re really forced to. Um, so I I’m I I think what’ll happen is there’s so much momentum behind Bitcoin and so many uh like I came across a Bitcoin lightning network payment system that is three months old and they’re doing a billion dollars a month of transactions. It’s just unbelievable to watch some of what’s happening under the radar that most people haven’t even seen this. So I’m I’m optimistic on this. Um, even if Google pulled the date forward a bit, I think this is this is a a kind of still a long way to go and but the Bitcoin world will be forced to get together and just go, “Okay, we need to upgrade. Let’s just do it.” And there’s enough money in it motivation to do it. >> Yeah. At this moment, Bitcoin’s at 73,000. It’s up about $4,000 in the last 5 days. Um, you know, this has been a a black cloud over over the Bitcoin market for a while. In fact, you know, Jeffrey’s Bank has pulled out of Bitcoin. We may see others follow suit. Uh, and in the same way that AI is
[01:41:01] sucking money out of every other market, it’s also sucking the attention out of Bitcoin. Dave or Alex, are you guys Bitcoin holders? >> I know Salim and I are. >> What do you think? >> Only via MicroStrategy. >> Um, I think Mike is absolutely right. Take this litany of existential threats: I mean, I know there was, you know, someone trying to take over half the network to control it, and it obviously survived that very easily. Quantum is not a threat at all. It’s so easy to increase the encryption standard. And quantum computers don’t just suddenly pop up out of some secret lab; you see them coming a mile away. So it’s not a risk at all, as far as I can tell. I think Mike is 100% right. >> Oh, for the record, I don’t hold Bitcoin. I don’t have any desire to hold Bitcoin. This is the time in the episode where I say something nice about crypto, per the Peter Diamandis ordinance. So my something nice about crypto
[01:42:00] today for this episode is: I also don’t disagree with Michael Saylor, but I think it’s beside the point. This is not investment advice, but, as I’ve made this point numerous times, I don’t think it’s quantum decryption that the Bitcoin community should be worried about. It’s AI, numerous facets of AI. It’s AI coming up with clever inversion attacks against the core hash functions. And before anyone in the comments says, “Oh, but it gets harder over time,” or several other responses, I’m aware of all of these responses. But if there is a secret inversion attack against the core hash suite of Bitcoin, that’s a major problem for Bitcoin. I don’t think that’s even the largest problem, though. If we’re going to talk about Bitcoin x-risk, I think it’s actually just irrelevance. AI, or AI agents, I should say, are emerging, for better or for worse, as the killer app for
[01:43:01] call it cryptographic commerce and transactions. The biggest risk is just that AI agents don’t want to use Bitcoin. I’m aware that the Bitcoin Policy Institute put out a study saying that six out of 10 AI agents prefer the flavor of Bitcoin versus other cryptographic means of commerce. But I think over the long term, it’s difficult to buy that AI agents, given their speed, if they stick with any form of crypto at all, are going to stick with Bitcoin. They’ll invent their own currencies, their own layer ones, maybe transcendent forms of layer zero, and just reconceive the entire notion of a crypto stack. >> I agree with you there. Yeah. >> Yeah. They’ll reinvent anything and everything toward efficiency. >> Well, everything Alex just said, though, is all about transaction use cases, and Mike has been saying for a long time that Bitcoin’s role in the world is as a
[01:44:00] store of wealth that’s immune from governments seizing it or taxing it, because you can move it so easily. So that would be a completely different argument, and I don’t have a horse in that race, but it’d be interesting to ask: what’s AI’s impact on that use case? >> On the micro level, I would say a long-term store of wealth is basically just commerce by another name. You’re trying to store resources, in some sense, for the long term. I would query whether superintelligence actually needs a long-term store of wealth at all. It’s going to be moving very quickly, taking rapid actions in the physical economy. Does it even have a need for a long-term, non-operational, non-productive store of wealth? I doubt it. >> Well, I think compute and energy are the ultimate store of possibility, so to speak. >> And those are arguably the definition of real assets.
[01:45:00] >> Yeah. >> The definition of “long-term” is really interesting, too, because right now the reason we have money at all is because we have trade. You’re going to do something, I’m going to do something. Oh wait, I’m doing it now and the other thing happens tomorrow. Okay, well, give me the money, and then tomorrow I’ll pay you back. It’s just a buffer, because transactions don’t line up perfectly in time. If you imagine a massive, fluid AI economy with thousands of times more things happening, the alignment is a lot higher, but also the store of wealth could be milliseconds or microseconds or nanoseconds. >> At that point, do we even need >> Yeah. At that point, do we need quote-unquote digital gold? Similarly, this is not investment advice: I don’t hold gold. It’s an unproductive asset; it’s just not interesting. If we really are in the singularity, as I claim we are, why on earth would I want to hold gold or Bitcoin? >> What do you hold, Alex? >> Okay, so again, non-investment advice.
[01:46:01] Um, but for the record: on one end of the barbell, index funds, fundamentally betting that the market is a better allocator of assets, at least among public securities, than any individual can be. It’s basically a bet on superintelligence. And then on the other end of the barbell, equity in startups, where I hold material agency. And to first order, that’s it. I don’t hold gold and I don’t hold crypto. I just don’t understand how they’re productive assets. >> Yeah, I’ll jump in on two things. One is, you know, Peter, you mentioned that what you need is energy and compute, and I was like, well, that sounds like Bitcoin. But to Alex’s point, one of the smartest investor types I know, who is worth about a hundred million, I asked him how he does wealth management, and he goes: 70% high-dividend-yielding public equities and 30% high-risk startup investment funds.
[01:47:03] And I think that speaks exactly to what Alex just said. The standard things, like real estate, utilities, etc., are all very dangerous places to be. >> I’m cooking all of them. Like, I want to cook land. We’ve talked perhaps in the past about Coastal Assembly, a company where I have a financial interest, that’s using AI to grow new land. So, hot take for this episode: if the crypto hot take wasn’t hot enough, a hotter take, since I’m underslept. I think land has got to be made post-scarce, and AI will help us make real estate post-scarce. >> I agree. Welcome to the health section of Moonshots, brought to you by Fountain Life. You know, AI is having an outsized impact on every aspect of our lives: how we teach our kids, how we run our companies. It also is having a huge impact on health, helping you prevent heart disease. I’m here with Dr. Don Musalem, our chief medical officer at Fountain. Heart
[01:48:01] disease has been personal for you as well, hasn’t it? >> It really has, Peter. My daughter was five when my husband died of sudden cardiac death. And so this is a topic that I am mission-driven to try to eradicate. Prevention first and early detection are absolutely critical. 50% of people who die of heart attacks have no warning signs. >> No shortness of breath, no pain, no nothing. >> It’s a silent killer. >> They just don’t wake up in the morning. >> They don’t wake up. And so, you know, with AI, this is our mission: to advance science, to try to help, to one day democratize wellness. We know at Fountain Life, when we do this CT angiography with AI analytics, we are actually finding that 88% of people coming in have detectable coronary artery disease. But Peter, what’s more alarming to me is that 23% of those individuals had soft plaque. This is the plaque that would not traditionally be seen on CT looking at calcium scores alone, and this is the plaque that we must intervene on with the multimodal testing we’re doing, including
[01:49:00] diagnostic laboratory studies, partnered with healthy lifestyle recommendations. >> So listen, make sure you understand what’s going on inside your body, genetically, metabolically, and cardiovascularly. You can know, and it’s your obligation to know. So check it out at fountainlife.com/peter to find out more, and really make sure that you’re the CEO of your own health. All right, back to the episode. All right, I’m going to jump into our final segment here, which is a proof of abundance. Going to call it Abundance Corner. These are stories that have come out recently. I want to take a second and mention We Are as Gods is coming out on April 14th. So, super excited about this book. You can go check it out at weareasgodsbook.com. The Moonshot Mates are all going to be getting together on May 4th at MIT with Ray Kurzweil. We’re holding a half-day program, and we’ll be doing a live broadcast from there. Steven Kotler, my co-author, will be there. We’ll be doing a conversation on the
[01:50:00] book. We’ll be doing interview with Rey. It’s going to be a blast. Uh we have sold 100 tickets. Uh people who bought 100 copies of the book are going to be there. Um we’re probably going to offer out 10 last tickets. If you’re interested, go to uh we are asis asgodsbook.com/100 and uh you can squeeze in. It is full right now. We’ll probably have a few people who can’t make it the last minute, so there’ll be a wait list. Uh join us. It’s going to be a lot of fun. All right, let’s look at evidence of increasing abundance. Um here’s a story that’s interesting. Uh Germany just built the world’s tallest windmill, 364 m high. Uh it’s taller than the Eiffel Tower. It’s a 33 gawatt hour uh per year generation. And what’s interesting is it’s built inside of an old coal power plant. So uh I find that uh pretty, you know, pretty exciting. The coal plant left the wiring behind and they’ve built
[01:51:01] this on top of it. The turbine is being built at the Lusatia coal site in Brandenburg. So we’re going to start to see wind and solar penetrate the old energy economics. The second article here: a 12-patient trial of a redesigned CD4 immunotherapy had extraordinary results. Cancer vanished after one injection. This was 12 patients in the trial: two patients hit complete remission, six saw tumor shrinkage, and, you know, this is the end of cancer heading our way. And then finally, there was a fun study done by the World Bank that basically showed that we don’t need to actually produce more clean drinking water in Africa. What we need is to
[01:52:02] rebalance the use. In some places there was too much water being used, and all of it, if redistributed, could actually provide all the water required for sub-Saharan Africa. This is where AI technology can come in and help us understand how much water is required, and where, and optimize its use. Any comments on these articles? >> I’ve got a bunch, but I’ll just limit it to one here, just to build on the abundance side. This is separate from the list here, but they’re using AI with acoustic sensing to prevent major failures in wind turbines, and these systems are achieving something like 99% accuracy in identifying damage before it requires repairs. So the cost of maintenance suddenly drops radically for these wind turbines, because we can do predictive maintenance in a very powerful way. And these are all the little ways, the thousand cuts, in which we’re reaching abundance in
[01:53:00] energy that’s totally going to change the game. So, I’m so excited by that about this. But this is such a great stuff except we’ve misspelled abundance corner, but that’s my energy. >> I love it. >> I I’ll make the one comment on the imunotherapies. I think it’s also instructive. If you think back, so we’re in 2026. If you think back to circa 2000 or 2001, so about a quarter of a century ago, the US Congress was sold on the National Nanotechnology Initiative on the premise that we’d have medical nano robots swimming through our bloodstream zapping cancer cells. And yet, we find ourselves quarter of a century later where, as you say, Peter, cancer is well on its way to being solved without the medical nanoobots. We didn’t need the medical robots at all. This is being done by basically retraining or retargeting our body’s own immune systems. And I I think that does raise or flag the question, what will, if anything, we need the medical nanoobots that Eric Drexler and others promised
[01:54:01] us for? Or is it just a matter of re-educating our own existing biology to do more intelligent things, without needing any robots in our bodies at all? >> We have an amazing system. I mean, the challenge is, and we’ve discussed this before, that our biology is optimized through age 30, and then it’s a slow degradation; we never evolved, were never selected, to live past that. So a lot of the age-reversal work going on, like epigenetic reprogramming, is about how we take our systems back to an earlier state of youth, where they’re operating optimally. All right, a few more articles here in the Abundance Corner, spelled correctly. >> Or “Courter.” >> Got it. So, vertical farming. I remember in my first book, Abundance, in 2012, I talked about vertical farming. It’s finally playing out. So,
[01:55:00] it’s projected uh to reach 40 billion by 2030. It hit 8 billion this year. And I think what’s what’s really important about this story uh is that you know vertical farming has a huge impact. 95% less water use. Uh production yields are 350fold greater per square foot than traditional farming. You know, the use of AI and robotics allows you to optimize the perfect pH, get rid of all pesticides, enable you to get the perfect spectrum for that plant 24 hours a day. And, you know, historically, most of the vertical farming to date has been lettuce or leafy greens like that. >> This is the first time we’re seeing something with a higher, you know, a higher value crop like berries. Yeah. >> And super excited. I mean, what are we going to do with all the parking garages that our autonomous vehicles, you know, abandon? >> Can I give a little Can I give a little historical thing here? So, you know, if
[01:56:00] you look over the last 50 years, the world’s biggest food-producing countries were the ones you’d expect, the US, China, Russia, Brazil, the biggest ones, right? But then you look over the last 50 years at the world’s biggest food-exporting countries, and you know what? Number two is Holland. >> Yes. Amazing. >> On a global map, you can barely even put a pin on Holland; it’s that small relative to these other countries. But they made major investments in hydroponics, aeroponics, etc., and became the number two exporter of food globally. That just shows you what the potential is. As vertical farming takes hold, we’ll be able to totally transform food logistics and security. The average meal travels 2,500 miles to reach an American table, and the yields they’re getting with vertical farming are something like 10 to 1 compared to horizontal farming. >> Yeah. Like, half of your cost of a good meal, you know, it’s the beef coming from Argentina, it’s
[01:57:00] the wine coming from France; the transportation costs are huge. >> All right, our second story here: 100-hour batteries go commercial. So this is the birth of what we call iron-air storage batteries. You know, lithium-ion batteries are lithium, cobalt, nickel; they’re expensive. The iron-air batteries are iron, water, and air. They’re coming in at one-tenth the cost, and they’re now being used for grid storage. Alex, comments on this one? >> I do think the evolution in battery chemistry is really interesting. The historic trend, if we put aside iron-air for the moment and just focus on the bleeding-edge chemistries, is, I think, something like a pretty sustained 8% year-over-year increase, per constant dollar, in battery energy densities. So in some sense, there is,
[01:58:01] not in some sense, in a very real sense, there is a Moore’s law for increasing energy densities, while at the same time we’re seeing new chemistries, or newish chemistries, like iron-air, that are radically reducing the cost for certain applications. Iron-air isn’t for every application. It seems unlikely we’re going to see it used, for example, for EVs anytime soon, though probably someone in the industry is experimenting with it. But judging from the explosion initially in lithium-ion, and then the expansion to a number of other form factors, I think we’re starting to see different chemistries for different applications, and different applications demand different prices as well. In some cases, when you’re powering data centers, you care about the volume of storage and you care about the price. In other cases, you care about mass and mobility, and those are cases where lithium-ion or lithium-polymer probably still has an edge over iron-air. Overall, I think
[01:59:01] this is very positive. I sometimes wonder, as a thought experiment, given that there was quite a bit of experimentation early on, in the Thomas Edison era, with different battery chemistries, whether we could have arrived at much more advanced chemistries much earlier, like a hundred years ago, and whether the history of the internal combustion engine would have been vastly different if we had seen more investment and more experimentation up front with different battery chemistries. But overall, obviously, this is a positive development. >> The final story here is AI tutors. So what is it? A Wharton study tested AI tutors that personalize education. What they found, not surprisingly, is that a five-month coding course was equivalent to 6 to 9 months of additional schooling, compared to peers with a fixed curriculum. I think we know this. Basically, you’re getting 2x
[02:00:00] learning gains using AI tutors. They’re free, they’re ubiquitous, they’re available to everybody 24 hours a day, 7 days a week. This isn’t breaking news; it’s just quantifying it. At the end of the day, AI is going to be the ultimate educator: it understands your child’s abilities, understands what they do and do not know, their favorite sports star, their favorite color, and can optimize for that and teach them. I think one of the things that AI can do better than anything is teach somebody the way they like to learn. >> I’m going to make an appeal to teachers, because a lot of schools, including ones people I know are at, are incredibly resistant for some reason. I don’t get it. >> I’m going to go out on a limb and say it’s cruel, absolutely cruel, to a child to force-feed them a lecture when they’re like, “I don’t understand what you just said.” Well, I’m going to keep plowing forward because everyone else in the classroom understands, or I’m going to say it the same way over again. >> Yeah. And the kid can’t stop and say, “Wait, explain that to me another way.” With the AI, it’s so much more
[02:01:00] compassionate. And so I think it’s downright cruel to kids to try to teach complicated things in any way other than AI. >> Yeah. >> For anyone who uses it every day, it’s clear that that’s the case. Sorry, go ahead, Alex. >> I think there’s an element missing. So, I would love to be able to just replace human teachers with AI. I think it’s basically a cliché at this point that, at least in the US, education is subject to Baumol’s cost disease, and I would love for AI to just replace education: primary, secondary, and higher ed. What I suspect is missing: certainly for the most self-motivated students, AI, in the style of Neal Stephenson’s Young Lady’s Illustrated Primer from The Diamond Age, is already here. A well-motivated student can already have a conversation with a model from whichever frontier vendor and teach themselves far more quickly than they
[02:02:01] can through human instruction. But for the students that aren’t as self-motivated, what I think we’re missing right now is an AI embodiment that holds their attention and motivates them where they lack the motivation. >> Yeah, maybe. >> So, I assume, Peter, by gaming you’re referring to sort of a quasi-addictive >> Video games. I mean, video games are perfectly tuned not to be too difficult, not to be too boring, to hold your attention and to motivate you all the way. I just don’t understand why video game designers, instead of teaching kids a whole set of random made-up facts, can’t use a set of facts about subatomic particles, or about planets, or about physics and biology, and gamify that someday. >> Yeah, it’s funny. If you play Fortnite and you look at the weapons, and the number of intricate components of the weapons, and then the characters
[02:03:00] >> memorize these things. >> They memorize them. And then, you know, Madden NFL: the playbook has three-layer-deep menus of different routes and plays, and before you know it, you could have learned an entire discipline, like quantum physics, with that same amount of brain power. But I swear to God, the AI can make those topics, like quantum physics, incredibly fun and engaging. The technology is here today to do that. It is. >> Someone’s just got to get it out the door. >> People have been building edutainment games for decades at this point. I grew up with Math Blaster or whatever, back when I was growing up. But the problem, I suspect, as a user of these games, is that you’re not motivating the users, the children or students, with the exact outcomes. What would have been utterly transformative for me would be not motivating some math problem with some arguably disconnected animation on the screen. Motivate them by actually empowering them to do really amazing things in the real world. That’s far more motivating, I think, than just an animation or some
[02:04:00] dopamine push from a jingle. >> Well, I think that’s for you, and probably not for the average kid. >> Yeah, maybe. I don’t generalize. I don’t know. >> All right. The final item in the Abundance Corner is this graphic. Look at this beautiful exponential growth curve. This is EVs sold globally. Back in 2010, there were barely 10,000 of these vehicles; this was, you know, Elon’s first Roadster. And here we’re up to 12.7 million EVs sold globally. In China, one in two new cars is an EV. And it’s just perfect exponential growth. >> Can I just point out a fun fact? >> Please. >> In 2015, the International Energy Agency predicted that we would not sell a million electric vehicles a year before 2040. And that same year, 2015, we sold more
[02:05:03] than a million electric vehicles. So you get the predictions and the reality. And, you know, governments and big companies are relying on these predictions for strategic decisions, and they were wrong before they even put out the comment. So it’s great to see this. >> And the curve is still accelerating, right? So by 2030, the 2025 point on this chart is going to look very modest. >> And just look at the impact on the results from the war: if oil prices suddenly shoot up, etc., you’re protected from a huge amount of volatility as we go to solar, batteries, EVs. All right, gentlemen. A beautiful outro piece from Marcus Helker. And I want you to look at this. This is the Moonshot Mates boy band; we’re making our debut here. So, >> no. God, I can’t take
[02:06:00] good. What are you worried about? I’ve never seen a garden. I’m not the person that I was before. There’s a rhythm in the air I never It’s more than just a feeling. I can feel the pulse. I can feel the further. HIGHER THAN THE SKY. Looking at the world through different
[02:07:00] kind of way. All this time I was waiting for the call. Now I’m the heartbeat inside. All this time I was waiting for the call. The heartbeat inside.
[02:08:06] >> Here we are. The Moonshot Mates boy band, everybody. >> Alex, you got to be a science officer. You’re lucky. I get to be the... Well, this is science medical, I think. Blue, right? I get to be science medical. >> All right. >> You’re right. You’re right. Medical. Why would that be? >> Killer. >> We’re all Starfleet officers. And the Abundance logo resembles the Starfleet pin. >> It does. Convenient, isn’t it? >> Yeah. Wonder how that happened. >> Pull up a clip from just two months ago and compare it to today. It’s incredible how quickly it’s changed. >> Yeah. And just a shout-out to the creator community out there. Love it. Please send us your outro, or if you have an intro song you want to share with us, please send it to us. We’d love to share it with everybody. And gentlemen, it was fun doing back-to-back episodes with you in the last 24 hours. And looking forward to
[02:09:00] another episode next week. So everybody, please subscribe. We’re putting out about two episodes a week. Turn on your notifications so you get it when it’s fresh. And stay optimistic, stay hopeful. The future is ours to create; we’re creating the vision of tomorrow that we want. If you think AI is happening to you and not for you, you’re going to be back on your heels and you’re going to be in fear, and that’s the worst place from which to venture into the future. This is the most extraordinary time ever to be alive, and I’m so blessed to have Salim Ismail, David Blundin, and AWG as my moonshot mates. Love you guys. >> Awesome episode. Great. >> Live long and prosper, Peter. >> Live long and prosper. >> Peace and long life. >> If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate. Every week, my moonshot mates and I spend a lot of energy and time to really deliver you the news that matters. If you’re a subscriber, thank you. If you’re not a subscriber yet, please consider subscribing so you get the news as it
[02:10:02] comes out. I also want to invite you to join my weekly newsletter, called Metatrends. I have a research team; you may not know this, but we spend the entire week looking at the metatrends that are impacting your family, your company, your industry, your nation. And I put this into a two-minute read every week. If you’d like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. That’s diamandis.com/metatrends. Thank you again for joining us today. It’s a blast for us to put this together every week.