Google commits to a $40 billion investment in Anthropic. >> Dario needs compute. He signs up with Amazon. Google’s already a shareholder in Anthropic. >> They’re trying to maximize the economic value per token. >> It’s all bottlenecked at TSMC. That’s the actual bottleneck to all of AI, and only Elon will talk about it. >> Google Cloud is dominating. They unveiled their eighth generation of TPUs, in particular TPU 8T for training and TPU 8i for inference. I still believe Google’s the winner in the long run here. OpenAI unveiled GPT 5.5. It very much feels like a release that’s intended to strengthen OpenAI’s Codex. Math is cooked. A bunch of other things are cooked as well. Things are moving so quickly now that on a month-by-month basis, we’re able to see the hardest of these benchmarks creep up 1% per month. So, not long now. >> Now, that’s a moonshot, ladies and gentlemen.
[00:01:00] Hey everybody, welcome to another episode of Moonshots, your favorite AI and exponential tech pod out there in the universe. Here with my incredible Moonshot mates: AWG, back with his orchid-filled room, and Dave, back in his headquarters of all exponential investments. And of course, Salim is on the road. I mean, you remember the book Where’s Waldo? I think we’re going to replace that with Where’s Salim? So Salim, where are you today? >> I’m in a car in Guadalajara in Mexico, transiting to the airport, and this was the only way I could do this, to do it in the car. So hopefully the friend’s hotspot we’re piggybacking off lasts. We’ll see how it goes. >> I can’t believe you brought up Where’s Waldo. You know, Peter, do you know we’re still the exclusive licensee of Where’s Waldo for data mining? >> Okay, we used to go to trade shows and we’d have an actor dressed up in that Where’s Waldo suit, and we’d be like, “Hey, our neural nets can find anything in your data. It’s like a Where’s Waldo.” And we gave out all the books and everything. It’s amazing you still remember that.
[00:02:00] >> So, you’re in Mexico. The Blitzy team is in Mexico, and they’re raving about the podcast, by the way, so I guess we have a big fan base down there. >> We do, it turns out. Yeah, big time. >> Um, about half. I was at a conference of about 1,100 people, and quite a few of them are avid watchers. >> That’s awesome. >> What about the rest? Did you convert them? >> Yeah. We’ve got to think international whenever we’re commenting on these topics, because, you know, it’s a big world and everybody out there is watching. >> My Spanish is not quite up to snuff to say everybody should watch Moonshots en español. >> You know, there are translators now. >> I know. >> I did my Meaning of Life session last night in Spanish with a translator, and you should have seen the translator at the end of the night. She was so fried. >> And of course, when you’re touring through India, what do you speak? Hindi, or do you what? >> No, I speak English. It’s my native tongue, cuz I come from a diplomatic family. I have pretty bad Hindi.
[00:03:00] >> I can get by. But, um, you know, it’s one of these where my grammar is bad, my vocab is bad. >> Uh, I can get through about 50% of our conversation. >> Well, we’re at almost 500,000 subscribers. So, next time you’re in front of our large audience, tell them to push us >> I do >> over to 500,000. >> Okay, I’ll tell them. >> Uh, let’s jump in. Another incredible, crazy week. >> Um, let’s kick it off with a conversation around the AI race and the agentic boom. So, check out this slide, right? I mean, 15 major releases in only eight weeks. We’re getting a pace of, you know, two major models per week. I think you’ve got to be retired and focusing only on this to keep up. There’s no way otherwise. Uh, so in this segment, what I’d love to do, guys, is really hit on the last three: Kimi K2.6, GPT 5.5, and DeepSeek V4. Um,
[00:04:00] they’re extraordinary releases, each of them, you know, hitting new capabilities. You know, one thing, Dave, we saw the acquisition, or the rumored acquisition, of Cursor by xAI. And I think what’s interesting is that the winners in this crazy model race are going to be those that are providing the best abstraction layer, so it doesn’t matter what model’s underneath. Um, do you agree with that? >> Yeah, totally. Actually, I just had a meeting with a data center company here in Cambridge, and the amount of effort going into the TPUs and the Nvidia B100s and B300s is incredible. But at the abstraction layer, there are factors of five and ten just being thrown away by mismanagement of the context window. And I mean, there’s just so much opportunity in this stack, which makes sense because it’s all brand new. >> But there’s also a lot of vertical integration going on. The warfare is really, really
[00:05:01] stepping up. But I can’t believe how Kimi K2.6 is keeping up. >> I mean, it is just shocking that the open-source world is actually on the radar and keeping up. >> And we’ll get to that in a minute, you know. But what’s interesting is the speed of these releases. I’m guessing that these new models are sort of, you know, competitive marketing, where the models are probably already cooked and they’re just waiting for someone else to release and then releasing right on top of it. >> Anthropic, you know, is holding back on Mythos. So there’s at least one case where you’re exactly proven to be right, which means there may be others as well. But it’s funny, the dot releases are coming faster and faster and faster. I mean, what’s shocking about this list is it’s US versus China, right? There’s no European models. There’s no UK models, no Japanese or Indian models. It’s just all US and China. Everyone else is a spectator, it looks like, at this point. I don’t know if you agree
[00:06:00] with that, but >> well, the models are definitely self-improving now. >> Well, no, you’re 100% right, but the models are self-improving now. And so, you know, the rate is accelerating, exactly what singularity theory would have predicted. The rate is accelerating, but because the models are improving themselves, it’s hard to start from a cold start and catch up. But I’m surprised that other countries aren’t using the Kimi K2.6 model to bootstrap their own internal research. And maybe they are and it hasn’t popped up on the radar yet, but, um, you know, I’m not finding it too hard to design new neural nets using existing neural nets. It’s a very doable thing. And I’m curious, uh, Alex, that chart down below on this slide here that’s showing all the leapfrogging. I mean, it’s leapfrogging all the time, but, you know, is it that they’re all just cherry-picking? They’re all just sort of, you know, studying for the test on the particular benchmark and then releasing whatever the latest benchmark they’re best at, or is this truly
[00:07:01] >> Yeah. >> Yeah. Uh, I think we’re down in the West to a three-way race at the frontier between OpenAI, Anthropic, and Google. And I think those three labs have been pretty good about not benchmaxing, or over-focusing on just one benchmark. They’re pretty good generalist models. I think we’re seeing an honest-to-goodness arms race or horse race or rat race, depending on which metaphor you prefer. My friends at the frontier labs often call it a rat race. And as to the Chinese models, it’s interesting, you know, the aphorism: why do you rob banks? Because that’s where the money is. To the earlier point about why no European models, where’s Mistral in all of this, for example? It’s because the US and China are where all the compute is. And ultimately, I think OpenAI’s Noam Brown, who of course is quite famous for having led their reasoning approach, he’s recently started pondering, with
[00:08:03] a bit of ennui, whether the weights actually matter as much as they used to, or whether it’s really turning into a race for compute, in some sense, as inference-time reasoning becomes more and more important. His argument, not mine, but I think it’s a credible one: the weights themselves start to become less important, in the same sense that, say, individual units within a transformer-style architecture become less important as the transformer itself starts to scale. The overall weights for an entire model may become less important as more and more reasoning gets used, and you see, in effect, a space-time transformer that’s rolled out over time in reasoning-token space. So, if that argument holds, and I think it’s a pretty interesting one that I hadn’t heard elsewhere before, that would almost suggest that, while at the same time we’re seeing a race to the bottom on, say,
[00:09:00] per-token intelligence densities between American models and Chinese models, open-weights-wise the American models are still about six months ahead, and this has been pretty consistent for the past couple of years. Closed-weights-wise, it may not matter in the end. What may matter in the end, at least according to the scaling laws we have at the moment, is who has more compute at the end of the day to do more reasoning. So >> and we’re going to see that >> we’re going to see that in a minute. But, you know, I mean, 15 models over the course of two months is insane. You know, some of these are just improvements on existing models and some of these are completely new pre-trained models. I think that difference needs to be pointed out. Salim, any thoughts on this insanity? Um, you know, just the fact that we have that many releases in 8 weeks kind of blows my mind. >> We’re watching the cost of cognition, coordination, and execution all collapsing at the same time. I mean, I think it’s not so much the breakthroughs, it’s the compression density that’s crazy.
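Alex’s argument above, that compute for inference-time reasoning, rather than the weights themselves, becomes the scarce resource, is at bottom simple arithmetic: a fixed fleet produces a fixed token budget, so every additional reasoning token per query comes straight out of query capacity. Here is a toy sketch in Python; every number in it is a hypothetical placeholder, not a real fleet or throughput figure.

```python
# Toy model of the "compute, not weights, is the moat" argument.
# All numbers below are hypothetical placeholders.

SECONDS_PER_DAY = 86_400

def daily_reasoning_queries(num_accelerators: int,
                            tokens_per_accelerator_per_sec: float,
                            reasoning_tokens_per_query: int) -> float:
    """Queries per day a fleet can serve at a given reasoning depth."""
    tokens_per_day = (num_accelerators
                      * tokens_per_accelerator_per_sec
                      * SECONDS_PER_DAY)
    return tokens_per_day / reasoning_tokens_per_query

# Hypothetical fleet: 100,000 accelerators at 1,000 tokens/sec each.
shallow = daily_reasoning_queries(100_000, 1_000.0, 50_000)
deep = daily_reasoning_queries(100_000, 1_000.0, 500_000)
print(f"shallow reasoning: {shallow:,.0f} queries/day")
print(f"deep reasoning:    {deep:,.0f} queries/day")
```

Deepening the reasoning 10x cuts serving capacity 10x, so under this argument the lab that can afford deep reasoning at scale is the one with the most compute, regardless of whose weights are marginally better.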
[00:10:00] >> Well, and the capabilities, they’re mind-blowing. These are not just, you know, fake little dot releases that are benchmaxing. If you use them firsthand, and, what’s really helpful, if you look at our podcast or at any postings on the internet from three months ago, six months ago, nine months ago, twelve months ago, and look at the predictions of capabilities, we’re so far ahead of what even the upper bound of predictions would be in terms of the capabilities as the parameter count grows. And so, you know, if you extrapolate from there, we’re just on this knee of the curve of the acceleration in the singularity, >> and raw parameter count and more chain-of-thought reasoning is just going to push us to, you know, limits that are way beyond human. >> So, what does the average person care about? What does the average person care about this, right? Like, one of my boys says, “Okay, great. A new release with new numbers over and over and over again.” At the end of the day, stuff is getting better, it’s getting cheaper, it’s getting faster. Uh, you know, what
[00:11:02] does the average user, I mean, do you recommend someone sticking with a particular model? You know, I’m just going to be on OpenAI, I’m just going to be on Anthropic, or I’m just going to be on Google. Um, any thoughts there? >> See, I think the question itself is a red herring. Why? Because OpenAI bet the company on consumers using all these reasoning tokens, that a consumer-oriented strategy for all of these trillions of dollars of capex they’re building out would work, and they’ve had to pivot rather prominently in the past few months back to enterprise. So I think the question of what does the average user care about, which I construe as what does the average consumer care about, I almost think the market is telling us the average consumer in the short term isn’t even part of the equation anymore. Really, the question should be what does the average enterprise care about, because they’re the ones >> asking for I’m asking for our listeners, right? Uh, a lot of them are entrepreneurs or general consumers. I mean, at the end
[00:12:01] of the day, is it okay for someone to say, I’m just using ChatGPT, I’m just using, you know, Gemini 3.1 Pro, I’m just using, you know, the latest version of Anthropic’s models? You know, is it important for people to be moving to the latest model, or is it okay, because ultimately everybody’s basically leapfrogging everybody else? And, you know, if you’re just a, you know, a mom, a dad, a student, and, you know, maybe an entrepreneur just getting going, um, this insanity of 15 models in 8 weeks, bouncing back and forth. I mean, Dave, you’re using, you know, two, three, or four models all the time, right? >> Oh, many more now. And that’s the biggest change. The coordinator model can now manage dozens or hundreds of other models successfully, and six months ago or three months ago that wasn’t true. So, you know, for the average consumer, the ability for the stuff to install itself, like, you can go into the model now, you have to use the latest ones. Uh, but it doesn’t matter a
[00:13:00] lot whether you’re using Claude 4.7 or GPT 5.5, you know, just use one of the latest ones, but ask it to install itself. Ask it to build something on your laptop for you, and it just works. Now, you don’t have to understand, you know, the Linux command line. You don’t have to understand any of the underlying infrastructure. It’s smart enough now to explain itself to you as it goes. So I think for the average listener that’s a massive unlock. You know, someone who’s never built software before can just think of something and then create it in an hour. And that just wasn’t true, you know, six months ago. >> Hey everybody, you may not know this, but I’ve got an incredible research team. And every week my research team and I study the meta trends that are impacting the world. Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology. And these Metatrend reports I put out once a week enable you to see the future 10 years ahead of anybody else. If you’d like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. That’s diamandis.com/metatrends.
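The coordinator pattern Dave describes just before the break, a smarter orchestrator model farming subtasks out to dozens of cheaper worker models and sanity-checking what comes back, can be sketched roughly as follows. This is a hypothetical illustration only: the model names and the `call_model` stub are invented placeholders, and in practice `call_model` would wrap a real chat-completion client.

```python
# Sketch of the orchestrator pattern: a strong coordinator model fans
# subtasks out to a cheaper worker model, then reviews the results.
# Model names and call_model are hypothetical placeholders.

from concurrent.futures import ThreadPoolExecutor

def call_model(model: str, prompt: str) -> str:
    """Stand-in for a real chat-completion API call."""
    return f"[{model}] answer to: {prompt}"

def orchestrate(task: str, subtasks: list[str]) -> str:
    # Fan the subtasks out to the cheap worker model in parallel.
    with ThreadPoolExecutor() as pool:
        drafts = list(pool.map(
            lambda p: call_model("cheap-worker-model", p), subtasks))
    # Ask the stronger orchestrator model to review and merge the drafts,
    # flagging any that look like garbage.
    review = (f"Task: {task}\nReview and merge these drafts:\n"
              + "\n".join(drafts))
    return call_model("strong-orchestrator-model", review)

print(orchestrate("summarize the week in AI",
                  ["cover the model releases", "cover the compute news"]))
```

The design point is the one Dave makes: the expensive model is spent only on coordination and quality control, while the bulk of the token volume goes to the cheap workers.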
[00:14:02] Yeah, let’s jump into our first model story here, which is Moonshot AI launches Kimi K2.6. I just downloaded it onto my Mac Studios this weekend, on top of Skippy, who’s orchestrated by, you know, Opus 4.6. >> So Kimi K2.6: it’s a trillion-parameter, open-weight, open-source model that activates 32 billion of the parameters at a time. It runs 300 parallel agents. Uh, very importantly, natively it can process text, image, and video all at the same time, and it costs 30 times less than the most capable closed models. Uh, interestingly enough, you know, Moonshot AI didn’t get its name from us. Uh, the three founders are based in Beijing. Their favorite album is Dark Side of the Moon, and so that’s where it came from. And the company’s backed by about $4.7 billion in capital from Alibaba, Tencent, and IDG. And this model, right, if you look at the numbers
[00:15:01] at the bottom, uh, on the benchmarks compared to GPT 5.4, Opus 4.6, and Gemini 3.1 Pro, it does amazingly well against all those models. And this one was trained, they report, for a total of $4.6 million, uh, compared to hundreds of millions or billions on the other, you know, closed-source models. Dave, I mean, I find that amazing. Almost incredible. >> There’s so much to say about this, you know. Starting from the fact that, yeah, Alex said a minute ago that the Chinese models are running about six months behind the US models. But if you look at the benchmarks, you know, this is up there or beating Claude Opus 4.6, which was only three months ago. That came out in February. And so that’s not a six-month lead, that’s a three-month lead. And the price performance, you know, most people, when they first start, they don’t care too much. It’s cheap. You know, all AIs are pretty cheap. But then when you realize that you can run 10 or 100 of them
[00:16:00] concurrently, you’re like, well, this is going to start to add up. So, uh, if you run this on Fireworks AI, it’s about 1/8th the cost of running the Claude API or the OpenAI API. So, uh, you know, 1/8th is a pretty damn big price cut. If you download it and run it like you did with Skippy, then you’re running at about 1/30th the cost. Uh, so that’s a big, big deal. Uh, and then of course the caveat is, as Alex has pointed out many times, you’re not 100% sure it’s not, you know, spying or doing code injection. Uh, it’s probably not, but you can’t guarantee that. So, you know, somebody tells me this is 1/30th the price, try it, and you’re like, I’m a little sus. Like, why is it 1/30th the price? But I doubt it’s code injecting on you, but you can’t be sure. Whereas, if you use Anthropic or OpenAI, it’s definitely not code injecting on you. In fact, it’s safeguarded all over the place. So, there’s your landscape. Chaotic as always. It’s only going to get more chaotic. >> Alex, how big a deal is Kimi
[00:17:01] K2.6? >> I think it’s helpful for certain enterprise use cases where you want to be able to self-host the model and you don’t want to, say, use AWS Bedrock, which, by the way, now hosts GPT 5.5 in addition to the Opus models. I think it’s helpful in that respect. It’s helpful if you want to be able to self-host fine-tuned models for yourself. Ditto with DeepSeek V4. But I think, in general, again, it’s a few months behind. For other use cases, for consumers, uh, that want to be able to self-host, for whatever reason, privacy or otherwise, probably very helpful; for folks who want to self-host their own Claude, very helpful. So I think there are many use cases where these typically Chinese open-weight models, like Kimi K2.6 and DeepSeek V4, are very helpful. I do think, however, they’re not at the frontier. And to me, the big headline is that the disparity between the American frontier
[00:18:01] closed-weight and the Chinese frontier open-weight seems, at least for the moment, to be in place. >> Yeah, Peter, I think your setup is perfect. It’s exactly what I do, too. I use, you know, Opus 4.7 as my orchestrator, because you want that extra notch of intelligence, and then if you have simple tasks, or just subtasks, you can farm them out and save the money using, um, you know, Kimi K2.6. And then if the results coming back don’t make perfect sense, your orchestrator will tell you, hey, this is garbage. And so you can actually rely on Opus 4.7 to give you the straight truth on what the underlying models did for you. So it’s exactly the way you set it up, Peter. >> Yeah, Dave, a week ago you said you moved from 4.7 back to 4.6. Did you move back to 4.7? >> Uh, I have both running now. 4.7 is kind of wordy and sounds kind of PhD-ish, which annoys me sometimes. Uh, and 4.6 is friendlier, but then, you know, it’s
[00:19:00] clear that 4.7 is a little smarter. And so, sometimes you just need the right answer no matter what. And so, I actually have both running in parallel agent windows now. >> Salim, I’m curious: in Guadalajara, Mexico City, you know, in parts of South America, and I know in parts of Asia, what are you hearing about the use of US models versus open-weight, open-source Chinese models? >> So, I get a mixture of both things. Um, a bunch of people use the hosted models, the big ones, just because it’s easy. Uh, there’s a subset of people that use the open-source models and the Chinese models, and they don’t really care. Um, I think they should care; at some point that’s going to come up. One question I have for Alex and Dave is, how do you protect against the code or prompt injection in these open-source models? Is there a way of defending against that? Um, if there is, then there’s a huge case for this, because everybody here is looking for the low-cost approach, right? Um, but for
[00:20:02] the most part, I’ll be blunt, um, the conversation is not around which model, or open source versus closed. It’s like, what do we do with AI? Like, that’s literally, uh, the level of, um, lack of sophistication around this that you would expect. Um, but the opportunity is also there for startups, then, to leapfrog lots of people and build aggressively for the coming madness that’s upon us. >> Yeah, it’s almost an impossible question, Salim, because if you sit on the sideline and you don’t use this stuff aggressively, you fall way, way behind. >> Yeah. >> But if you start using it aggressively, you’re generating thousands or millions of lines of code before you even know it. >> And so then the odds go up, right? So I think what you’re trusting right now is that the guardrails that Anthropic and OpenAI put on their models, they’re very, very cautious when they’re pulling in code, open source or otherwise, I mean, almost annoyingly
[00:21:00] cautious. So you kind of assume they’ve done a very, very good job of filtering out, you know, nasty code injection, but the numbers work against you at scale, you know. So there’s no simple answer. I mean, when I got into it, I was like, hey, I’m just going to look at the code. I’m not going to just run it. I’m going to see what it does. That’s a joke, right? That’s just laughable. It’s generating so quickly now that there’s no chance you could even scroll through it. So, uh, it’s like a lot of things, actually: you have to use AI to protect against AI. There’s no other way to get the scale. Uh, so it’s tricky. I know that wasn’t much of an answer, but it’s tricky. >> You know, one thing I’d love to point out here, uh, we talk about this on occasion, but I don’t think we’ve ever really spoken about it in detail. Kimi K2.6 uses something called a mixture of experts, an MoE. And it’s interesting, uh, just to take a moment on this: if in fact you have a trillion-parameter model, uh, and you ask a question, it’s
[00:22:01] basically accessing all trillion parameters every time to analyze every token. And what they did here is they actually created, you know, a set of 30-plus experts, so that, you know, some percentage of all the parameters are dedicated to one expert system. So if you ask a coding question, you know, the orchestrator looks at this and says, okay, this is a coding question, we’re going to send it to, you know, experts number 3, 7, and 12, and it only uses a portion of the parameters, right? Instead of all the experts, it uses some subfraction thereof, and it saves money and saves time. And, you know, how many different models are using that right now, Alex? >> Sparsity, which is the term of art I think we’re talking about here, is endemic to all frontier models at this point. It’s also the basis for the human brain. If we look at the brain,
[00:23:00] most neurons don’t, at any given point in time, have action potentials that are going in and out. So sparsity is a great way to reduce the memory footprint of models. To my knowledge, all of the frontier models use sparsity one way or another. It’s also a good way to, uh, another term of art, regularize the models, so to make sure that particular weights or parameters in the models aren’t overfitting to the training data. One of the age-old techniques is just blasting away individual weights or parameters in the neurons, making them disappear entirely, as a so-called regularization technique. So sparsity is everywhere at this point, and it’s only going to, I think, become more important with time. As I’ve mentioned on the pod previously, one of my holy grails is I’d like to see a million-parameter or smaller diamond, or black hole, of a model at the end of the scaling race, and I think sparsity, and cranking the knob on
[00:24:01] increasing sparsification in these models, is one possible path to getting us there. >> Hey, and just to add something to what Peter said, uh, the mixture-of-experts innovation that came from DeepSeek is actually layer by layer. So most of these neural nets are about 140 layers deep now, and it’ll route the expert layer by layer. So it’ll say, look, within this layer I’m just doing basic, you know, image classification, and this layer I’m doing deeper thinking, and this layer I’m doing higher-level math. You know, as it moves through the neural net, it’ll actually route to, you know, now I think up to 128 different experts, layer by layer. So it’ll find the optimal pathway through the entire neural net. On top of that, you can also have dedicated experts, like here’s a surgeon, here’s an artist, here’s a coder, above and beyond that, but this is actually within the neural net, layer by layer. >> All right, next story is OpenAI unveiled GPT 5.5, literally just seven weeks after GPT
[00:25:00] 5.4. Greg Brockman calls it a new class of intelligence. It’s natively omnimodal; it’s able to process text and audio and video and images all in a single, unified, end-to-end architecture. Uh, it has a 37-point increase, 5.5 over 5.4, in long-context reasoning, which means that while 5.4 and 5.5 both have million-token windows, 5.5 can actually remember the beginning of the million tokens and provide, you know, complete context across the entire thing. Token efficiency: 40% fewer tokens with the same latency. And I love this: hallucination is down 60% versus 5.4. Let’s go to our resident genius, Alex. What do you make of 5.5? How important is it? >> I think it’s very important, both intrinsically and also relative to 5.4. So I want to highlight two key stats here. The first is the leap from GPT 5.4
[00:26:02] thinking to 5.5 thinking. That’s probably the biggest leap overall, uh, on Terminal Bench 2.0 specifically. So, one way to interpret this: Terminal Bench is a benchmark that’s focused on the ability to agentically operate from a command-line terminal. Where is that useful? For Codex and for Claude Code-type environments. So one way to construe this huge leap, which is larger than most or all of the other leaps that we see in terms of other benchmarks, is that 5.5 is being very seriously focused, benchmaxed if you like, although, having used it, I really don’t think it’s narrowly overfitting, on making Codex a better Claude Code competitor. It very much feels like a release that’s intended to strengthen OpenAI’s Codex. That’s thought one. Thought two: my favorite benchmark among all of these is Frontier Math Tier 4. Frontier Math Tier
[00:27:02] 4, which I think we even had a New Year’s bet about that we’re going to have to revisit sometime later this year, is one of the best proxies for the ability of AIs to solve professional-level research problems in math. And what do we see? We see from GPT 5.4 Pro to 5.5 Pro approximately a 2% leap in approximately the last two months. What does that tell me? That tells me we’re now seeing approximately 1% gains per month in research-level math coming from frontier AIs, and we’re getting closer to approximately half of all of the Frontier Math Tier 4 problems getting solved. So you can extrapolate this and realize that if the present rate just stays the same, which I guarantee it won’t, it’s going to accelerate, but even just at the present pace, we’re talking about essentially all Frontier Math Tier 4, all professional
[00:28:02] research-grade math problems being solved in the next four or five years. So math is cooked. I’ll say it, you know, a second time: math is cooked. A bunch of other things are cooked as well. But things are moving so quickly now that on a month-by-month basis, we’re able to see the hardest of these benchmarks creep up 1% per month. So, not long now. >> It’s worth pointing out that the API pricing on 5.5 is twice that of 5.4. So, it’s five bucks per million input tokens versus $2.50 on 5.4, and 30 bucks per million output tokens versus $15. Uh, I like the simplicity of that pricing. Dave, have you been playing with this at all? >> Yeah, absolutely. And I think what Alex said earlier in the pod is really, really important and insightful. Like, Noam Brown is saying, wow, maybe the weights don’t matter so much, as this chain-of-thought process is just way ahead of any expectations on how intelligent it can get. From a user’s
[00:29:01] point of view, that first benchmark, uh, Terminal Bench: if you ask it to do something complicated, like configure an entire system for you, download some software, integrate it, make it all work, you know, connect it to my Outlook, connect it to my whatever, it just works. And that exactly ties to that first benchmark. It just flat-out works. Uh, and so it feels like this incredibly capable, brilliant assistant no matter what you’re trying to do, because of that first benchmark. Then the last benchmark, or second to last, the Frontier Math one, uh, you know, Demis Hassabis came out and said, yeah, I think it’s kind of a coin flip. This was on Alex’s, uh, innermost loop. You know, it’s kind of a coin flip now on whether just the existing architecture, scaled up, solves everything. >> Yeah. >> I think coin flip, he’s moved a long way. >> He has. You know, we need new breakthroughs. >> We’re out of breakthroughs, apparently. >> I remember 10-plus years ago, when I was chatting with Demis, he used to say there were five breakthroughs remaining
[00:30:00] between where we were then and AGI as he construed it. Now we’re out of them. It’s half a breakthrough or zero breakthroughs at this point. >> You know, next week we’re going to be on with Ray Kurzweil again. Um, we’re doing that May 4th event for the launch of We Are As Gods. And I’m curious, we should ask him, you know, what does he think is required to get to true AGI or ASI? Are we just going to extrapolate what we’re doing, uh, or do we need breakthroughs? I think that requirement’s been falling, more or less. I mean, it’s starting to feel a lot like, actually, Alex, you know what would be great for that? To put together a chart of Demis’s number of breakthroughs, because at Davos it was down to two. Now it’s down to 50/50 that it’s zero, but you mentioned five. That was, what, maybe a year and a half ago? >> The five number from him was when I was chatting with him, uh, this is 10-plus years ago. >> Yeah. >> Okay. >> Well, there’s some sort of exponential
[00:31:01] decay of breakthroughs, clearly. >> Alex, you said it a little bit earlier: this is ultimately a compute race. So, uh, let’s talk about that. You know, a couple of stories here around Google Cloud. Um, and Google Cloud is dominating. So, what do we see? We see Google announcing at Google Cloud Next 2026, their major conference. They unveiled their eighth generation of TPUs, uh, in particular TPU 8T for training and TPU 8i for inference. Right now we have training and inference chips separately, just like Amazon has their Trainium chips for training and their Inferentia chips for inference. Uh, these new TPUs are three times faster in training performance, 80% better performance per dollar, and they’re designed to run millions of agents in real time. So Google is really all in on the agentic era. Sundar Pichai, the CEO, who I had a chance to spend some time
[00:32:00] with last weekend, made it crystal clear. He says over 16 billion tokens per minute are being processed, and 75% of Google’s code is now written by AI. So, um, fascinating. Dave, what do you make of this? >> Yeah, you know, what’s surprising to me is that the price performance of the TPUs is landing right on top of Nvidia, not much different at all, which is surprising because it’s a completely different architecture. It uses a systolic array design. I mean, it could not be more different from a GPU under the covers, but for whatever reason, it’s all kind of canceling out and landing identical, which is fine from Google’s point of view, because now they have their own total, you know, chip-fab-through-data-center-through-model >> they do everything >> solution. Yeah. >> I still believe Google’s the winner in the long run here, across the board. I don’t know if you agree with that. >> Terrafab is a big thing. >> That’s true. I’m sorry. Yes. Okay. I’m thinking in the OpenAI and Anthropic ecosystem. Yeah, Terrafab. >> On the other hand, Google owns a
material percentage of SpaceX. >> They do. They do. >> I don't know if you saw, there was a tweet out recently about Google's investments. Yeah. And they just had massive returns on their investments: on SpaceX, on Anthropic, across the board. Huge returns. >> Yeah. I was at a board meeting yesterday, a company that I'm the chairman of that has massive cash flow and a huge cash balance. And they were like, well, I don't know if a public company can really do seed-stage investments. I was like, have you looked at Google? >> Yeah. >> They have multiple hundred-billion-dollar gains on their investments. And they don't even do it for the money. They do it for the knowledge and >> well, and strategic relationships, right? Larry and Sergey were just bonding with Elon and they said, "Okay, Google is going to invest a billion dollars," and now it's worth, you know, God knows how many hundreds of times more. >> Yeah. >> It's also just investing in the future. I remember conversations with Larry and Sergey about the nature of the frontier, and I think to their credit, they're investing in the frontier, and
SpaceX is part of it. And also compute. Epoch AI put out this really, I think, eye-opening stat in the past week that Google now accounts for approximately a quarter of all of the AI compute on the planet, and I'm sure eighth-gen TPUs will be part of it. I think it's also worth keeping in mind that the TPUs at this point are being designed by TPUs. I have a number of friends at Google who are responsible for designing next-gen TPUs, and they're all just using Google AI to do it. The recursive self-improvement goes all the way down to the silicon at this point. >> All right. Our next story in the Google ecosystem, also announced at their large Cloud Next conference: Google commits to 960,000 Nvidia Vera Rubin GPUs for their A5X. So, pretty extraordinary. A5X is Google's new bare-metal virtual machine instance, delivering 10x lower inference costs and 10x higher token
throughput. Just an interesting FYI: Vera Rubin, for whom these chips are named, was an American astronomer who discovered the first conclusive evidence of dark matter. I love the fact that Jensen is naming chips and systems after famous individuals. Now, what I find fascinating, and this goes back to the conversation a minute ago, is that this cloud is two times bigger than Colossus 2 and 2.4 times bigger than Stargate Abilene. So Google is winning, at least based on what they're building and plan to build. Again, Dave, thoughts here? >> Well, you know, just to touch on one thing you said there, Peter: part of the acceleration we're seeing in society as a whole is that all the really, really smart people are working on real tech now and >> hard hardware >> hardware and space and medicine and, like, real tech. >> And you know, if you go back to the Meta
era, the Facebook era, the rewards were all in either, you know, cheesy consumer experiences or banking, and doing deep tech was kind of a way to die poor. So it's creating a whole new era for society. You know, the post-AI era, we all knew it was going to be very, very different. But now the rewards are in actual deep tech that benefits humanity in really big, fundamental ways. I think if you just counted the number of people that you know who have been pulled into this vortex, it would have been just a few percent working on world-changing, deep-tech, real stuff 15, 20, 30 years ago. Now it's almost everybody you know getting pulled in to, you know, do something big and world-changing, and it's actually working. So that's a big change for society, and that's helping accelerate things as well. >> All right, our next story: Anthropic is cutting deals for cash and compute. I mean, a huge amount of capital flying back and forth between the frontier labs and the hyperscalers here. So Google
commits to a $40 billion investment in Anthropic. So last week, Google committed a ton of money: $10 billion in cash right now at a $350 billion valuation. And note, you know, we talked about this last time, Anthropic on the secondary markets is now at a trillion-dollar valuation. So this $350 billion is coming in at roughly one-third the cost of what others are paying for it. And they committed to another $30 billion if Anthropic hits certain performance targets as well. They're going to be providing 5 gigawatts of TPU compute committed over 5 years. That's the equivalent of, you know, literally providing power to 3 to 4 million people. I'm finding this pretty extraordinary. We're going to see in a moment a conversation where Anthropic has cut deals with Amazon in a similar fashion. Actually, let me go ahead and hit that, and we'll talk about this money-for-guns conversation
that's going on. So Amazon and Anthropic are trading cash for compute. Here's a second deal: Amazon is investing a total of $33 billion. They've committed $25 billion on top of the $8 billion they've already invested. In return for Amazon's cash, Anthropic is committing to spend a hundred billion or more on AWS over the next decade. Anthropic will run Claude on Amazon's custom Trainium chips, and Amazon will provide 5 gigawatts of AI compute capacity for Anthropic. So, I mean, we're seeing Anthropic becoming beholden to both AWS and Google in a significant fashion. Gentlemen, thoughts on this one? >> Well, it's so funny to me. Obviously Anthropic needs much, much more compute and is growing. Oh, actually, a very good friend of ours, Peter, I won't mention him on the podcast, but he's an investor in Anthropic (he can figure out who he is from that comment), but he was telling me at
the board meeting yesterday that Anthropic, under the covers, is thinking they might hit between 40, 50, up to 70 billion in revenue by the end of the year. >> We talked about 100 billion by the end of the year a few pods ago, but still. I mean, a billion last month, doubling, tripling. >> It's extraordinary. And the only reason they wouldn't hit those numbers is because they can't get enough compute to keep up with the demand. >> And one of the things was that they didn't release Mythos because they don't have enough compute to deal with it. Right? So it's a limited release of the capabilities. >> Yeah. And OpenAI cut Sora. I think one of the reasons is probably compute. So yeah, there's an imminent... >> Which is, yeah, which is energy. And I think that, it's so funny to me to see all these deals. So, okay, Dario needs compute. He signs up with Amazon. You know, Google's already a shareholder in Anthropic. And now OpenAI is going to be running on GCP and also on Bedrock on Amazon.
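As a back-of-the-envelope check on the deal math above: the transcript says 5 gigawatts is roughly the power used by 3 to 4 million people. That's consistent with typical US per-capita electricity consumption, though note the per-person figure below is my assumption, not from the transcript:

```python
# Sanity check: how many people's electricity use does 5 GW correspond to?
# Assumption (not from the transcript): US electricity consumption is
# roughly 12,000 kWh per person per year, i.e. about 1.4 kW continuous draw.
deal_gw = 5.0
watts_per_person = 12_000 * 1_000 / (365 * 24)  # kWh/year -> average watts

people = deal_gw * 1e9 / watts_per_person
print(f"~{people / 1e6:.1f} million people")

# The cash side of the Google deal: $10B up front, $30B contingent on targets.
total_commitment = 10 + 30  # billions of USD
print(f"${total_commitment}B total commitment")
```

The result lands in the 3-4 million range the hosts quote, so the claim holds up as rough arithmetic.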
>> So you can get it through Bedrock. So everybody's partnering with everybody else, but >> it's all bottlenecked at TSMC. Like, this is all great. You can all partner with each other up the yin-yang, but whose chips are actually going to get made? You know? And no, you don't see TSMC in any of these podcasts, in any of these deals, in any of these meetings. And you saw Jensen actually recently say he doesn't have any long-term agreement with TSMC; they just kind of make it up as they go. So all of this is bottlenecked, and only Elon is talking about it: look, the fundamental constraint to all of this is the terafab, and I already locked up $16 billion, could be $45 billion, of Samsung's capacity. The only three companies in the world capable of making any of this are Samsung, Intel, and TSMC. And, like, that's the actual bottleneck to all of AI, and only Elon will talk about it. >> Alex, is it compute or is it energy, at the end of the day, right now? >> I think they're indistinguishable at this point. I think permitting for
on-site energy is a major limiting factor. It's probably, on balance, more of a limiting factor at this point, maybe not a year from now, than TSMC. But it is a limiting factor: having powered land, having data centers. Infamously, Microsoft even in the past few months spoke about having lots of GPUs that they'd love to rack-mount in a data center, but lacking the powered land and lacking the data centers to plug them into. So at this moment, energy, at least in the US. But I agree with Dave that in the medium to long term, semiconductor fabrication supply chains, doubly so if there's any geopolitical conflict, are likelier to be a stranglehold once we solve our energy story. >> So let's talk about the not-investment-advice segment here. You know, where do you invest your capital if compute and energy are the constraint? I mean, I'm seeing the energy stocks beginning to fly, right? A friend of mine just
had this IPO of X Energy, and it popped like 30% in the first day. We're seeing Bloom Energy and other energy stocks beginning to, you know, skyrocket, creeping up over time. So, you know, I don't know, do you invest in chips? We saw Intel pop up, and AMD, I mean, all of these guys, that entire ecosystem of chips and energy. Ultimately, if they're really the constraining part of the innermost loop here, I think the most demand is there. Any thoughts, Dave? >> Oh, so many thoughts. I could go for an hour on just this topic, but invest like crazy in anybody who has access to chips and can find a power supply. That's pretty straightforward. There are power supplies everywhere. All these legacy manufacturing operations, aluminum smelting and all that, use a huge amount of electricity, and swapping it over to a data center is a massive increase in the
value of that energy supply. But you have to have a line on the chips, because the chips are so constrained and the demand is through the roof. Then at the kernel level: anyone who's writing kernel-level software that empowers, you know, AMD chips or legacy GPUs to participate, or just makes the inference more efficient on Nvidia chips, those companies are worth a fortune. So anyone who's building kernel-level software is a brilliant investment. And then in the vertical use cases: Anthropic rolled out something called Skills, which you should absolutely play with. It's just a way to use the context window more efficiently by designing skills that the AI can then pull in. So rather than having to reinvent everything every time, just build a skill. Yeah. >> And then you can call on the skill very efficiently. So companies are now discovering they can refactor their entire business, or their entire whatever-they-do, around a hundred or a thousand
different defined skills, but those skills then become the defensible intellectual property >> within that vertical domain. >> So, you know, any vertical domain where you're racing to build out the entire skill database for that use case is also an unstoppable investment theme right now. I could go forever. The other thing that's interesting is that both Google and Amazon are getting their shares in Anthropic at one-third the going rate. I find that extraordinary. You know, the $350 billion valuation versus the trillion-dollar valuation. >> Well, it shows you how important the compute is. I mean, again, you're going to be sold out forever >> if you can get the compute. >> And these hyperscalers are kind of hedging their bets, right? They're not picking a winner. They're buying every horse in the race, you know, because this AGI/ASI race is just way too important to lose. So they're just investing left, right, and center. >> I would also just parse these as the market doing what the market does. Some
of the participants, some of the frontier labs like Anthropic, have an insatiable hunger for compute, and they have the revenue generation to sustain the demand. And so if you're Anthropic, you're going to go to every possible source of compute at scale that you can find, whether it's Amazon or whether it's Google or other sources. You're just going to go and seek, as a hungry customer for compute, whatever the market will provide. I don't think the story necessarily needs to be any more complicated than that. It turns out the world demands a lot of compute to solve some of these really interesting problems, in code generation and otherwise. And what we're going to see over time is all of this demand translating into supply. In the short term it's going to translate into what looks superficially like a bit of a circular economy between, call it, the top 10 or 12 companies, after we see the IPOs of SpaceX and
OpenAI and Anthropic. But that's going to diffuse throughout the economy over the next few years, would be my prediction. >> People are hungry for compute. Salem was hungry for bandwidth. Salem, welcome back. I see you're in a stationary spot at the airport now. So let's see. >> Dude, I'm just going to call you Waldo from now on. All right, let's move on. A couple of fun stories. I'm going to add this segment every time to the podcast: what did Claude just kill? So this is the stock chart for eBay. And this comes from Anthropic research. There's a new Anthropic research project, Project Deal: we created a marketplace for employees in our San Francisco office with one big twist. We tasked Claude with buying, selling, and negotiating on our colleagues' behalf. Basically doing what eBay does. And we see a drop in the stock price. You know, eBay hasn't really dropped anywhere beyond this, but I think this
is going to be more and more common. Any thoughts, Dave? >> Well, I think a lot of this is just an immediate knee-jerk fear reaction, but then things kind of settle out and you realize, wait, Anthropic is going to build all kinds of marketplaces because they can, but it's not going to hurt eBay. I think what you're going to see more and more is that AI is growing so quickly that it's going to largely grow around the legacy economy: around the banks, around the insurance guys. It's going to be its own world, and it's going to be feeding on itself and building just, you know, colossally large constructs that some people are not even aware of, >> and it'll all happen very, very quickly. So I think eBay will be fine. >> Any thoughts here? >> I have a slightly different take. You know, there are so many places: lots of problems in companies exist because coordination is hard, and AI makes coordination easy, and that's going to threaten big chunks of places. Marketplaces, customer support, listing optimization, dispute handling.
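Negotiation of the Project Deal sort, an agent buying and selling on a colleague's behalf, can be pictured as a simple agent loop. A toy sketch, purely illustrative; the opening positions, concession rate, and settlement rule here are invented, not taken from any Anthropic system:

```python
def negotiate(buyer_max: float, seller_min: float, rounds: int = 10):
    """Toy haggling loop: each side concedes 30% of its remaining room per round."""
    bid, ask = 0.5 * seller_min, 1.5 * buyer_max  # invented opening positions
    for _ in range(rounds):
        if bid >= ask:                      # offers crossed: settle at the midpoint
            return round((bid + ask) / 2, 2)
        bid += 0.3 * (buyer_max - bid)      # buyer agent moves toward its cap
        ask -= 0.3 * (ask - seller_min)     # seller agent moves toward its floor
    return None                             # no agreement within the round limit

# The buyer will pay at most 120; the seller will accept no less than 80.
price = negotiate(buyer_max=120.0, seller_min=80.0)
print(price)  # lands somewhere between the seller's floor and the buyer's cap
```

The point of the sketch is only that once both sides are software, the whole marketplace mechanic (offers, concessions, settlement) becomes a loop you can run at scale, which is why "what workflow category did it just automate" is the right question.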
There are huge categories of these that will become agentic workflows. And I think the bigger question about what the AI just killed is: what workflow category did it just encompass and automate? >> You want to hear something cool related to this? The data center CEO that I met with this morning, you know, we were talking about data centers going into space because power is basically free; solar is basically free in space. And he said, there's power all over the planet that's not tapped, that doesn't disrupt society at all. That's not why data centers are going to space. Data centers are going to space because there's no regulatory authority preventing it. >> I mean, that's not true. You still have to, if you're going to be flying all these data centers and communicating, you need licensing, you know, domestically and with the ITU for bandwidth. I mean, there are going to be regulatory hurdles that, you know, Elon and Google need to get through,
especially if you're launching 500,000 satellites. I mean, when you're putting up a debris field like that, there's going to be pushback. There is going to >> Yeah, it's interesting. I'll square the circle. >> Doing anything on land... Sorry, go ahead, Alex. >> I'll square the circle here and say: I think in the short term, for sun-synchronous orbit, yeah, that requires FCC and other approvals. In the long term, if we start to, say, launch AI data centers from the moon, and we're building them on the moon, that will probably require fewer approvals, at least under the current regulatory regime. To go back, >> that's 20 years away, to actually get manufacturing on the moon. >> I'm talking about >> 20 years away, Peter. Um, you know, listen, if you look at it deeply, I mean, I know 20 years away is infinity, I get that, but just to be clear, the stuff I'm concerned about is the next 5 years, right? If you're launching, we talked to Elon about this, you know, 500,000
V3 satellites in a constellation, there are going to be debris issues. You know, Elon pushed it off by saying, "Oh, you know, we'll have superintelligence to figure that out." I just, you know, we have this thing where everything looks amazing from far away, but the reality is, by the time it comes closer, there are real issues. And so it's not going to be just the, you know, the promised land of going to space. We're going to have challenges going there still. >> Yeah, it's interesting how the timelines line up, too, because between here and there, there are all kinds of constraints, but between here and there we'll have solved all math and we'll have discovered all kinds of new physics. And so >> listen, I'm the space cadet. I'm the super space enthusiast here, and I can hope for nothing more than that vision to happen. But it's always, you know, easier on the promised
land. Peter, I'm gobsmacked to hear that you think it's going to be 20 years before we have fabs on the moon. My goodness. >> Fabs on the moon, manufacturing and pumping into Earth orbit with mass drivers. >> You think that's 20 years away? >> Well, okay, maybe 15, but it's not the next 5 years. >> Do I hear 10? >> It's hard for me. I guess Optimus robots will improve that; demand will improve that. But, you know, the concern is, if you have a, not an explosion, but a collision of spacecraft in orbit generating debris, we still don't have any mechanism for removing debris from orbit, and so it's going to be a challenge. >> Briefly, your concern is Kessler syndrome is going to sabotage >> moon-based fabs? >> No, it's going to sabotage the next 5 years of 500,000
satellites in Earth orbit. I mean, right now we have 10,000 satellites from Starlink, right, which is the most ever pumped into orbit. And we're talking about 50 times that. And we're talking about not just the US. You know, Amazon's going to do their best, right? Jeff is not going to stand still while, you know, Elon's doing this. And then you've got Chinese constellations. So do you double or triple that number of satellites in orbit? I mean, listen, I can't wait, and it's going to have challenges. Salem, you were going to say? >> Yeah, I'll give a couple of thoughts here, with just a finger in the air. I think humanoid robots are five to seven years away minimum, at mass scale, in widespread adoption. Okay, minimum. And I think >> I don't agree with that, but that's okay. >> I agree. I understand. And I think a fab lab on the moon, consistently doing fabrication and all that stuff, is 15 years away minimum.
>> So I'll say that. Not that it's not coming; it's just a question of a when, not an if, which is >> Oh my goodness. This is lunacy. Utter lunacy here. >> Well, we are on the Moonshots podcast. >> Yes. Can we get back, maybe, to Project Deal and Anthropic? I think we're missing an important point. Everyone who hand-wrings over the latest Anthropic project purportedly sabotaging or killing some SaaS company: Anthropic doesn't want to be triggering SaaS apocalypses left and right. There's relatively little economic motivation there. I think if you look for the through line in all of these Anthropic research projects, other than the alignment ones, all of their projects, corporate strategies, and unhobblings can be explained by a very simple principle: they're trying to maximize the economic value per token. >> That's all they're trying to do. >> It turns out that Claude, through Claude Code's codegen, is actually quite
economically valuable per token. It turns out, per token, it's more valuable to generate useful working code than, say, to generate video or cat images or whatever other consumer plays OpenAI and some other frontier model providers were chasing. They've dropped that now. Everyone's focusing on codegen, because on a per-token basis it's so economically valuable. So I would look at projects like Project Deal, running a marketplace, running a business, as Anthropic looking for new ways to increase the per-token economic value of their output. It's as simple as that. >> Brilliant, Alex. That's absolutely brilliant. >> To move us along here, we're coming into the battle season: it's Elon versus Sam and OpenAI. This just got posted today. So today is the start of a very important day in the AI world: the trial between Elon and Sam and OpenAI begins in the Oakland federal court. The jury
selection is happening right now. So I just put this up to keep us posted. We'll be learning a lot. Of course, you know, discovery is unveiling a lot of texts, a lot of emails that I bet both Elon and Sam, and a lot of other people, would rather not have aired in public. Any thoughts here, gents? >> I think it's sort of sad that it's come to this. One remembers the Bill Gates versus Steve Jobs docudramas that were made from the critical Apple-versus-Microsoft era. This has, I think, a similar feel to it. It's sort of sad that this ended up in court versus settling earlier on, but I do think history will probably view this as sort of an iconic struggle that will get the full Aaron Sorkin, or similar, movie treatment. This will be the full Hollywood-type, totally titanic battle.
>> Yeah. Salem, any thoughts here? How's this playing in Guadalajara? >> No recognition or awareness at all, and that's probably a good thing. This is kind of a soap opera. I'm with Alex on this one. It's just heavy drama. We wish it hadn't come to this. It would have been great to get these guys to settle this thing, but their positions are hard and baked in, and so here we are. >> How do you unravel the, you know, the movement of OpenAI to a for-profit company? I mean, do you, like, back it up to a nonprofit? And what about all the capital invested in OpenAI? Does that disappear if they lose the case here? >> I'd like to see it done more. I mean, I've said, I think, on the pod in the past that if the model of changing large nonprofits to public benefit corporations can be scaled, I'd love to do this to a number of major American research universities. >> My question is, you know, what happens to all the capital invested, you know, literally hundreds of billions, $122 billion in the last couple of
months. You know, there's so much pressure for this court case not to be won by Elon. >> Well, I mean, if you're following the detailed tick-tock of the way this trial is being structured, it's being structured in two phases. The first phase is more about deciding whether the claims that Elon et al. have made are in fact the case, and the second is the equivalent of an award-type phase, deciding what awards, if any, to make, conditioned on the first phase. But I think there are a number of details in this court case that are notable. One is jury selection: there's been public reporting that already-selected members of the jury are aware of entanglements that Elon's had with the present administration and may view him negatively as a result. The fact that jury members are being
selected, reportedly, with some political influence seeping in, I think that's very interesting. I also think it's interesting that the district judge in this case has, again reportedly, decided that she's going to take the jury outcome as an advisory opinion, but that if there is an award, she's going to decide ultimately from the bench on the final award. So there are a lot of nuances here. >> Wow. Dave, any thoughts, opinions here? >> Yeah. Did we ever figure out if we get to see it live? >> It's not being broadcast, but I'm sure there are going to be court reporters giving us a lot of details here. >> You can wait for the full Hollywood treatment in a couple of years. >> By the way, there will be a Hollywood treatment of this, as with every other major, of course, it may be an AI-generated feature film, but nonetheless, >> it will be, for sure. >> Well, I'm surprised how many, like, personal texts have already come
out. >> Yeah. You know, the emails get discovered right away, and all your email gets thrown out there for the world to read, which is crazy, but it happens. But texts traditionally have not been thrown out, and yet we're seeing them all. So I don't know exactly how that's happening. But, you know, for Elon to win, he doesn't have to win the case. He just has to slow down OpenAI. I mean, in the middle of the singularity, if you lose three months, you know, you've basically lost. >> You're correct. >> Correct. Yeah. >> All right. Another fun topic, a few stories here: it's about AI surveillance and privacy. So let's check this out. OpenAI's Chronicle uses agents to build memories from screenshots. Sam Altman described this one as telepathy-like. So Chronicle runs on OpenAI's Codex, where background agents are taking periodic snapshots of everything on your screen. The screenshots are sent to OpenAI's servers for processing. Agents use optical
character recognition and visual analysis to extract the context of what you're doing every minute on your screen. Structured memory files are created and stored locally. And, you know, we talked about this before: AI monitoring everything. Ultimately, it's sort of the camel's nose under the tent of being able to replace any worker. You know, we have significant privacy concerns that come up on this, and no one's raising that. I don't know if you guys remember, when I was researching this: Microsoft had launched something recently called Recall. It was a product that they put out there and then retracted, because all the cybersecurity people said, this is a privacy nightmare, it's litigation bait, and they pulled it back. But when OpenAI announced this product, no one pushed back. >> Can I first of all point out what a beautiful double entendre from Microsoft's crack product marketing
department, naming a feature Recall >> and then recalling it. >> Nice. >> I think what we're seeing here is one big architectural kludge, and I think it's going to be kludgy both in Microsoft's perhaps ill-architected Recall as well as OpenAI's Chronicle. This wants to be built into the operating system and the hardware. It doesn't want to be an add-on. I'll just speak for myself: I don't want an agent taking constant screenshots of my desktop, sending them to a server, parsing them, and sending back results. This should all be built Apple-style. I would hope that Apple will get its act together in the next few months and build this into the window manager and the compositor and the operating system. The operating system is rendering the screen. Why can't the operating system understand what it's rendering? I mean, "ambient AI" is the term of art here, where AI is monitoring everything all the time >> and enabling you, right? This is in one
sense what I did this past weekend with my OpenClaw with Skippy, where I gave it access to everything, right? Every single Granola note gets put into memory, every WhatsApp message, every email, every calendar, everything. And it just makes it so much more useful. And I think something like Chronicle as well would just enable it to be, like Sam said, telepathy. >> Well, that's the quandary. I mean, a lot of people who get in trouble with AI, or who get stuck, it's something they're doing on screen that the AI doesn't have visibility into. >> Yeah. >> But if you unlock that, the AI can be incredibly helpful. But it's also seeing literally every mouse move. So, when we talk about how our moms are still not using AI, why not? This is a big unlock, a big part. The voice interface and this are the two big unlocks, because it can then say, "Oh, I see what you're doing wrong. In fact, let me just do it for you and save you the trouble." And, you know, all these configuration screens on any Apple device, the menus are ridiculous now, like, you know, the number of layers of
configuration you can do. I think there's some crazy stat, like 70 to 80% of all iPhone users never change any defaults; >> it's just too confusing to do anything. >> This is a huge unlock for all of that. But, as you said, it's hugely intrusive. >> Right now, you know, I take screenshots and I send them to Claude or whomever and say, "Hey, can you please help me figure this out?" But this is going to be sort of an expert over your shoulder, always there to support you if you need it. >> Well, most people, when they first start playing with AI, like Alex's standard first query to test a new model is, "Build me a first-person shooter." >> It's a better prompt than that. Sorry, sorry. >> But people want to do something visual and graphical to learn how it all works. And then when it doesn't work, they want to show the AI: hey, this doesn't look right to me, fix it. So they screenshot it, just like Alex, or just like Peter, you just said. Yeah.
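The "structured memory files" described earlier can be imagined as something like the sketch below. This is purely speculative; OpenAI hasn't published Chronicle's internals, so the schema, field names, and the stand-in text for the OCR step are all invented for illustration:

```python
import json
import time

def build_memory_entry(ocr_text: str, app_name: str) -> dict:
    """Turn one screen capture's extracted text into a structured memory record.
    (Schema is invented; the real Chronicle format is not public.)"""
    return {
        "timestamp": int(time.time()),   # when the snapshot was taken
        "app": app_name,                 # which application was on screen
        "text": ocr_text.strip(),        # what the OCR step extracted
        # crude keyword index: longer words, lowercased, deduplicated
        "keywords": sorted({w.lower() for w in ocr_text.split() if len(w) > 5}),
    }

# Stand-in for the OCR step; in a real pipeline this text would come from
# optical character recognition over a periodic screenshot.
entry = build_memory_entry("Quarterly budget review: travel spending over limit", "Sheets")

# Stored locally as an append-only log, as the transcript describes.
with open("memory.jsonl", "a") as f:
    f.write(json.dumps(entry) + "\n")
```

The privacy argument in the discussion maps directly onto the last two lines: whether that JSONL file stays on a Secure Enclave-backed device or transits a server is exactly the architectural choice being debated.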
>> They screenshot it. But here, this is just a much more convenient way to get video, not just a screenshot, back into the AI's brain and say, "Look, this doesn't look right, fix it for me." And so you have a much more fun dialogue with the AI. >> But you have to accept that privacy is, you know, being compromised there. >> I'll take a very different position here, Peter, on that, which is: I think any loss of privacy here is just due to this being an architectural atrocity. This wants to be built into an operating system like macOS. It wants to take advantage of the Secure Enclave. It wants to have secure hardware that's cryptographically guaranteeing that, as it captures pixels that come out of the compositor and the window manager and the renderer, all of those are securely handled and kept local. The reason this is one big privacy dumpster is that it's not being baked into the hardware and the local operating system. But that can be fixed. >> It will be fixed, and I want that. You know, I've often said, I'm going to give up everything, every piece
[01:05:01] of detail, because I want my AI systems to be that much more powerful. Salim, you're back with us. Talk to me, what do you think about this? >> I agree with Alex. Two other things, though. One is that this is going to cause massive privacy issues for workers worried about Big Brother watching over them. Already today, there's a crazy statistic that 44% of Gen Z workers are sabotaging AI's efforts to automate their own work. They're putting in the wrong data, um, throwing off the AI training. It's really crazy what's happening right now in workplaces. So I think this will just exacerbate it and bring this whole conversation to the front. >> Oh, talk about a losing battle. You're far, far better getting on the wagon than you are trying to do that. >> That's such poisonous behavior, just to protect your job.
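The secure-capture architecture described above — pixels hash-checked and authenticated as they leave the compositor, with the key held in secure hardware — can be illustrated in miniature. This is a toy sketch, not any vendor's actual API: all names are hypothetical, and an HMAC over a hash chain stands in for what a secure enclave would do with a hardware-bound signing key.

```python
import hashlib
import hmac
import os

# Hypothetical device key; in the architecture described above this would
# live inside the secure enclave and never leave the hardware.
DEVICE_KEY = os.urandom(32)

def chain_frames(frames):
    """Hash-chain a sequence of captured frames so tampering with any
    frame invalidates every later link in the chain."""
    digest = b"\x00" * 32  # genesis link
    for frame in frames:
        digest = hashlib.sha256(digest + frame).digest()
    return digest

def attest(frames):
    """MAC over the chain head, standing in for a hardware signature."""
    return hmac.new(DEVICE_KEY, chain_frames(frames), hashlib.sha256).hexdigest()

frames = [b"frame-0-pixels", b"frame-1-pixels", b"frame-2-pixels"]
tag = attest(frames)

# Verification succeeds on the untampered capture...
assert hmac.compare_digest(tag, attest(frames))
# ...and fails if any frame is altered after the fact.
tampered = [b"frame-0-pixels", b"altered-pixels", b"frame-2-pixels"]
assert not hmac.compare_digest(tag, attest(tampered))
```

The same hash-chain-plus-signature idea is what "chain of custody for video" amounts to: a camera or OS that signs as it captures lets anyone downstream detect post-hoc edits.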
[01:07:32] All right. Uh here's our next story. Uh basically, World ID verification integration into Zoom. Uh and uh here it is. So the backstory I think that's important here: in 2024, the engineering firm Arup lost $25 million after
[01:08:01] an employee in Hong Kong authorized a series of wire transfers during what appeared to be a routine video call with the company's CFO and several colleagues. The problem is that everyone on the call except the victim turned out to be an AI-generated deepfake. Uh we've seen similar attacks on multinational firms in Singapore. Uh and the impact of this is huge, right? So what we saw from 2019 to 2023 was $130 million in losses due to deepfakes. 2024 was $400 million. 2025, last year, it was a billion. It's projected to reach $40 billion by 2027. And so step in uh our friend Sam uh with his device called the Orb, which takes an image of your iris, and you verify on Zoom that you're an actual human. Uh it uses World ID and a real-time face authentication from a selfie
[01:09:00] as well as video, and it says, yea verily, this person is a human. Uh so and you get a verified human badge on your Zoom link. >> Did you just say "yea verily"? That's fantastic. We're right back in Shakespeare here. That's awesome. >> Yea verily, you're a human. >> Uh you still have to go and actually scan your eyeball in one of these orbs. >> Has anybody done it? Have you guys done this yet? >> No. No. Apparently it's bouncing all over Africa. People are scanning away. But um I haven't done it. But I love it, because uh, you know, I don't know if I told you, Peter, but I was on stage uh here at a companywide meeting, and we took a little five-minute break in the middle, and our controller came up to me and said, "Dave, I'm so sorry, I only got half of those wire transfers to China out. I'll get the others out right away." >> Seriously, what are you talking about? And then so I got back on stage. I'm like, I wonder what she was talking about. And so then the whole second half of the company meeting, in the back of my mind, I'm like, wait a minute. So when I got off, she said, "Okay, I
[01:10:01] got $300,000 out." And I'm like, "What are you doing?" She's like, "Well, you told me it was an emergency and we got to get the money to China right away." Like, why would we be wiring money to China? I don't understand. So, anyway, only about 75,000 got across the border. We never got that back. The rest, the FBI got into it right away. But I'm like, man, digital transfers like this, you know, everything should be logged anyway. I really feel like the digital fraud world is going to get solved, and this is a big part of it, but everything should be logged all the time. It shouldn't be that hard to deal with digital stuff. I'm much more worried about chemical, biological, radiological >> Yes. >> stuff than I am about digital stuff, because I think we're going to get it fixed, and this is part of it. >> Yeah. Uh Alex, any thoughts here? >> This is Minority Report. This is the sci-fi future that we're catching up with. Apple with its Face ID was focused on the face, not on the retina. But if you remember the Tom Cruise Steven
Spielberg Minority Report vision, this is it. I think it's been interesting to watch as World evolved from Worldcoin, and it's been interesting to watch as the company bounced back and forth between the more crypto-focused side and the economics of it versus the identification-of-a-human-as-a-human side of it. But it seems, you know, from uh from a distance like the human identity verification side is ultimately the bigger seller than uh the crypto side. And to the extent that's the case, uh as resident crypto bear, I'm very supportive. >> We will have that debate, Salim, don't worry about it. >> There's a wild irony here that the more AI scales, the more valuable verified human identity becomes. This is kind of interesting. >> Yeah. >> Yeah. So, here's this next story that's related. So, uh, Grok creates a realistic AI French woman with a reflective ID. I'm going to play this
[01:12:01] little video here, and take a look at it very carefully as she holds her driver's license up to the camera. Look how beautiful and real this looks. So this was posted, and it went viral, by this uh gentleman Dr. David Lutke. Uh he says this AI Frenchwoman was created by Grok, uh complete with perfectly reflective ID. A few more months and video ID verification may no longer be reliable. So I mean, how many times have you taken, you know, a picture of your license or your passport and uploaded it? Um uh it is going to become more and more difficult. We're going to have white hat, black hat competitions up the wazoo here, Alex. >> Well, I would maybe just comment that the IDs themselves should be verifiable against a centralized database. That's
[01:13:00] how you can maintain a single source of truth, whether people are flashing IDs or not. Maybe on a blockchain >> no, with a centralized database, not a blockchain. But good one, Peter. >> I'm just poking you, buddy. I'm just poking you. I also think, you know, there are so many other technologies that we have to bring to bear. We can do hardware-level cryptography, for example, chain of custody for video. It's not that as a civilization we lack the technologies to ensure that any video or images actually originated from the real world without tampering. It's just that we lack the demand for it right now. And I would predict that if ever this situation of deepfaking gets so bad that it's causing real problems at a societal level, that'll just unlock all of these technological solutions, including hardware-level crypto for cameras — cryptography, not cryptocurrencies. The market will speak for itself and we'll get all that tech. >> I'm still waiting for the laws to come
[01:14:01] out that require all, you know, Grok and every other video generator to really uh identify it as AI-generated. Um, >> there's a bill working — I covered it in uh in my newsletter, the Innermost Loop. There is a bipartisan bill right now working its way through the House that will cover elements of deepfake uh fingerprinting like that. >> Yeah. Yeah. All right. Uh let's move ourselves along here. Uh we're going to talk about the economic impact of AI. There's a lot going on. Token maxing, uh, word of the year. So this is from a report from 404media.co. Startup CEOs who are token maxing are bragging that they are spending more money on AI compute than it would cost to hire human workers. Astronomical AI bills are now, in a certain corner of the tech world, supposed to be the marker of growth and success. Look how much I'm spending on my tokens, everybody. You should invest in me so I can spend more on tokens. Dave,
[01:15:00] >> come on. >> No, this is a warped story. This is a great thing. The way you get left behind is by not trying. That's the worst thing you can do right now: not get in the race, not play with AI, not try. And token maxing is fine. Like, you know, a CEO that's proud of the fact that they're consuming a ton of tokens — you can come and optimize it later in the year, >> but get every one of your people on their AI platform like now, like yesterday, and go ahead and start burning the tokens, and then you'll have no trouble making it more efficient later if you get in the game now. So, I think it's great when a startup CEO says, "I burned, you know, three million of venture money on compute." Uh, fine. You're learning a ton along the way. And, you know, nobody incinerates money for very long. They're not that irrational. So this is just sort of the backlash story. >> What was the Jensen factor? It was like half your salary in tokens. Um, >> Yeah, I'm saying I'm telling everybody: by end
[01:16:01] of year. So you have, you know, nine months. 50/50 is a good target: half payroll, half token use. And then again, you're not going to have any trouble optimizing it. You know, the token use is effectively about a 10x force multiplier. So if you're at one-to-one, it's like I've got one human and 10 AI equivalents, uh, in my, you know, bucket of endeavor. So I'm actually underinvested in tokens at that point relative to human salary. So one-to-one's a better target, I think. Salim, are you doing that? >> Well, I think the healthier question is, what's the ratio of tokens to uh reducing iterations and maximizing efficiency, rather than just raw spend? I think for now raw spend is fine, but that's kind of a vanity metric, right? You're better off looking at it as: to what extent can you compress iteration cycles? That'll be where we'll end up. >> It's what you said earlier, Alex. It's dollars per uh per token, economic value per token. A good time for a trip. I think
that's a great trip. >> If you ask a great salesperson, "How many miles did you fly this year?" — that's a terrible metric of sales productivity, but if the answer is zero, it tells you it's a bad salesperson. Like, I think it's great when a salesperson says, "Oh, I had a million-mile year last year," and they're proud of it. That's great. You know, it's not the right metric, but it tells you they're proud of what they do. And token maxing is a lot like that, I think. >> Alex, close us out on this one. >> Yeah, I've seen a variety of asset allocations uh in recent months between humans and AIs. I think tokens to humans is one interesting way of framing that. A pessimist will look at this and say this is replacement theory, this is humans being replaced by AI, how awful. An optimist will look at this and say how incredible, we're empowering fewer people to do more and achieving higher per capita productivity within an organization. What I don't hear very many people asking is, where does this
[01:18:00] end? So right now I see asset allocations — humans to AIs, or at least human labor versus AI compute budgets — ranging from one to two. At some of the frontier labs, it's an even more asymmetric ratio. The question in my mind is, is there any stationary end point? Is there a fixed point as this evolves? I tend to think it's going to trend towards one to infinity, effectively: that as we start to phase humans out of the service labor force, we're going to see all tokens and no humans. >> It has to. There's no other way around it. The capitalism will demand it. Totally agree. >> Until the humans merge with the tokens, at least. >> Token per neuron. >> All right, this was your story: UAE launches agentic AI government models. Uh this is from Sheikh Mohammed, the prime minister of the UAE, the ruler of Dubai. He says the
UAE is launching a new government model. Within two years, 50% of government sectors — all sectors, all services, operations — will be run on agentic AI. The UAE will be the first government globally to operate at this scale uh of autonomy. Salim, brief us on this one. >> Yeah. So I did a talk for uh His Highness three, four years ago and talked through where this is going, and, you know, Minister Al Olama, the Minister of AI, is a good friend of both Peter's and mine, >> and they are going full speed on this. I got to give them massive credit. This is the benefit of the uh authority you can wield when you have a benevolent uh dictatorship. You can absolutely get it done, and when you have that, you have to make sure that whoever is in charge is doing the right things for the country, and the ethos here is 100% alignment. Um and they are going uh at a massive speed on this. Just
[01:20:02] to give you an example, I was given a golden visa, right, and I was asked to be the test case. The thing was, could you get a golden visa authorized and issued within 5 hours? And they were freaking out, going, you know, Singapore takes 5 days, and His Highness said, "Okay, do it in 5 hours." >> And they got it done. And so there's an ability to cut through legacy thinking in a very powerful way. And this is such a massive competitive advantage. Uh we're actually working with a few of their folks in the prime minister's office on this. Uh and so we're very, very excited about where this goes. >> There's another quote from Sheikh Mohammed. He says, quote, "AI is no longer a tool. It analyzes, decides, executes and improves in real time. It will become our executive partner in enhancing services, accelerating decisions, and raising efficiency." So, I mean, you can do this in an absolute monarchy. You can
move this fast. I mean, what's shocking about this story is the speed at which it's moving, right? Um, there's no parliamentary approval, no public debate or consultation. And the question is, you know, can Western democracies even keep up? I mean, you're going to see this probably in Saudi, uh, maybe in Singapore, other Middle Eastern, uh, nations. Um, can we see anything like this in the US? >> Actually, yes, you can, and I think we will. You know, I tell the story: it used to take 6 months to get approval for a wind turbine in, I think it was Colorado, one of the western states. And then they finally just got together and mapped all the power lines and water mains and flight paths in a GIS, uh, plotted it on Google Maps, and made it available, and now it takes like 30 seconds, right, to get approval, um, because it knows where everything is. It doesn't need to take 6 months. And I think there's an economic impetus to this. This is the basis where I think AI can make the biggest and most incredible difference, because uh in prescriptive workflows you can absolutely, completely
automate, and almost all of government — certainly implementation of policy or policy enforcement — is prescriptive workflows. We know exactly the steps to remediate your driver's license. We know exactly what needs to take place. So there's no reason why that can't be handled automatically with AI in the very near future. >> Step one, you know, uh, give a person a super frustrating experience. Step two, make them wait in line longer than they need to. >> Yes. Anyway, uh, Dave, do you want to jump in on this story? >> Yeah. I don't think the US has ever copied a good idea back from another country since the American Revolution. You know, we stole the British legal system, but since then, I don't think there's been anything. But this is the opportunity. Well, I mean, look, you're exactly right. You know, a monarchy can move very, very quickly. The rate at which things need to be regulated and new services need to be rolled out is way, way faster than any government in history has ever run before. So only AI is going to be able to do it. So if we get a great system
together in the UAE, we're inevitably going to want to copy it back to the US. I think Peter asked the right question, though. Is the US ever going to — the way Congress works, are we ever going to take a good idea and bring it back in? Yeah, I'd bet against that. But, you know, it's the right thing to do. >> Weirdly, on this one, I'm more optimistic than you guys, which is weird. >> All right. Um let's move on. Uh we're going to have some fun here in the biomedical space. So there's a new wave of biomedical innovation that's coming. And, you know, I want this segment to give people hope. Uh we talk about longevity escape velocity on this pod. We talk about the health span revolution. Well, it's happening. Uh, you know, I was with Demis uh last Saturday at the Breakthrough awards talking to him, and he's absolutely convinced that we're going to cure cancer and solve all disease inside of the next, you know, 5 to 10 years, uh hopefully on the 5-year side. So, uh the
first story here comes out of OpenAI. OpenAI releases ChatGPT for clinicians. So, it just gave away to all US clinicians — these are physicians, nurses, physician assistants — a free AI co-pilot. And this co-pilot outperforms all human doctors. So they have a HealthBench benchmark they use. It scored 59 versus 43.7 for human clinicians. Um, pretty extraordinary. They validated this on 700,000 uh model responses. Um, and they got a 99.6% accuracy uh using their physicians evaluating the AI versus human responses. Uh pretty extraordinary, uh something that will uplevel, I think, medicine nationwide. Uh and, you know, from my standpoint, I've been saying this for a while: I think it's going to become malpractice to diagnose a patient without AI in the loop. Um there is so much going on that no human doctor can
possibly uh, you know, understand it all. You know, at Fountain Life, we upload 200 gigabytes of data about you. Um, across your genome, full imaging, full, you know, microbiome, metabolome, you know, 140 blood biomarkers. Humans can't analyze all that, but AIs can. So, um, gents, any thoughts on this? Alex, do you want to weigh in? >> Yeah, I'll chime in and say the professions are cooked. Yes. >> This was a widely expected release. This wasn't a surprise. Those of you watching early releases, leaks out of OpenAI, saw this coming months in advance. You can even know from those leaks what the next one to drop is, what the next profession is: it's law. There's also one coming for management consulting and financial work. OpenAI, thanks to uh GDPval, in some sense mapped out all of the knowledge work verticals and is in a good position, thanks to their own internal and now external benchmarking, to know the relative
strengths of their model as appropriately fine-tuned or post-trained for different verticals. So I would expect to see many, many more of these "ChatGPT for X" releases for different verticals. In the case of clinicians — thanks to OpenEvidence and work by Epic, and in the form of UpToDate and other clinical AIs — this is already a somewhat crowded market that OpenAI is coming into. If I were OpenAI, I would release this sort of product more as a reference design and a way to ensure that capabilities that are built into the underlying models, and then post-trained via a variety of evals, are broadly available, and that OpenAI maintains its status as a favored foundation model for clinical and biological work. Maybe they'll try to monetize this as best they can. Right now, it's available for free, but I tend to think it's worth more to OpenAI as a distribution channel for medical knowledge and one that they can
build on. OpenAI has released a variety of statistics over the past year on how many people are self-diagnosing or otherwise trying to treat themselves using ChatGPT. And I think offering a standard, regulatory-compliant channel for that is a very clever way to then make a sales pitch to biomedical enterprise uh and life sciences in general, which is probably where the real money is. >> It's also a data aggregation strategy, right? I mean, OpenAI is going to be getting a huge amount of data, far more verified than "I feel this way" or "I think I might have this," >> you know, bringing a million-plus clinicians into the loop. The other thing that's worth saying here is that, you know, at least current estimates are that we're going to have a shortage of 86,000 physicians in the next 10 years. But it's going to be interesting, right? You know, I have two nieces that have gone through medical school, my sister, myself, you know, lots of friends. And
you're spending, literally, between college, medical school, and post-graduate training in whatever field you're going into, well over a decade and, you know, half a million, close to a million dollars to get this degree. And will you even need it? Is a medical doctor going to need to be in the loop, or is it a nurse plus an AI that's going to be giving us all our, you know, medical advice, our diagnostics, and our therapeutics, with an Optimus robot giving you surgery? Um, there's a lot of change coming here. >> Yeah, a huge amount of change, and also it'll be a great case study. And like, we're not about replacing doctors here. We're about detecting thousands of things that were not previously detected and cutting them off early and extending longevity and making life better. And, you know, it's not a given to me at all that the number of doctors goes down. Just the number of things we want to do goes up 100- or a thousand-fold. >> Are you going to spend that much money to go through medical school and get
this little profession when the AI is doing the diagnosing? >> Of course this is about replacing doctors. I mean, let's call a spade a spade. Of course, when fully developed, this and comparable solutions are about automating away medical practice. How could they not be? And also, by the way, nursing, and also, by the way, the HMOs and uh drug design. Uh OpenAI and other frontier labs are all pursuing drug design and drug delivery. Of course it's about the full picture. If you're going to solve medicine, are you just going to leave millions of human doctors practicing as sort of meat puppets for the AI? No, this is going to be the end-to-end solution. We're just seeing the beginning of it. >> Agree. A couple comments here. One is, you know, in an ideal world, doctors are getting a cognitive exoskeleton with all of this, right? You get this amazing capability to expand your own intuitive thinking. Alex is completely right. But on the other hand, there's a huge backlash here. This is a very regulated
industry. Remember a few years ago, Texas banning telemedicine, okay? Just outright banning it, because, you know, for every spot on my hand, I must go to a physical doctor; I can never do that over video. So the immune system response is going to be very, very fierce. I expect to see this battle play out heavily over the next few years, because there are vested interests up the yin-yang, and healthcare has the third worst immune system ever, behind religion and education and academia. >> Yeah. >> I'm not so sure about the immune response. If you look at what happened with the broad transition to electronic medical records — Epic-based systems, for example — every clinician that you speak with will complain about Epic. They'll complain about EMRs, how much EMRs distract from direct interaction with the patient, all of that. And yet every major medical system has either completed or is in the late stages of, at least in this country, their EMR transition. If they can't resist EMRs, how are they going to resist strong AI that
outperforms humans? >> Wait, hold on, hold on. EMRs are kind of an add-on, a helpful aid, because it saves you in documenting the process, etc. This is the whole — Yeah. >> The clinicians hate the EMRs. They hate the interface. They hate the process. >> Of course, but they're going to hate this 10 times more, because it's a direct replacement for the cognitive ability that they've trained for 10 years to use. So my prediction is huge uh regulatory and immune system backlash on this one. >> Yeah. And AI labs have been using healthcare as the reason why they can't slow down, as well as the fight with China. Right? "If we slow this down, we're going to lose lives" has been sort of the heralding call. >> Totally agreed. Everything Alex said earlier, about how this needs to be a wholesale replacement of the medical system, is absolutely correct, but the path there is littered with stones and speed bumps. >> For the record, and this is an interesting
[01:32:00] >> Go ahead, finish up, Alex. Yeah, go ahead. >> This is an interesting micro-debate. For the record, my intuition — uh, and I interact with a lot of clinicians — is the exact opposite. The clinicians hate the EMRs, but they love the AI that helps them do a better job of what they want to do. And there may be an extent to which AI interfaces like this end up being framed as the solution to all of their EMR woes. >> Better, until it takes their job. >> All right, that's the way this works. Let's move this along here. Uh our second story here is AI to reduce wasted donor hearts. And I love, uh, you know, I just want to show a number of stories here on how AI is going to be interfacing with and changing the medical practice. So I don't know if you guys are an organ donor. I am. Anybody else? >> Yeah. >> Um so currently there are 4,000 patients who need a cardiac transplant today. There are 103,000 who need some type of a transplant: kidney, liver, lung. And when an organ donor is on the
table, end of life, the physician has to analyze the organs and decide whether they're viable uh for transplant. You know, you've got like 15 minutes, typically at 2:00 in the morning, to make that decision. And so in the heart world, only a third of the hearts are ever actually chosen for transplantation. >> Yeah, just a third make it out the door. >> So here comes something called TopHeart, uh, from NYU and Stanford, and TopHeart is able to look at 20 different variables. Right — typically the physician is looking at how old is this person, do they have a drug history, if they know, and looking at coronary artery disease, to say, should we ship this off? Um, their goal, by looking at 20 different variables, is, you know, to give that surgeon at 2 a.m. a second opinion, and they believe that they can get an additional 500 hearts uh into the organ replacement ecosystem. You know, this is on top of the fact that there's an entire uh, you know, sort of
synthetic biology world going on right now uh to provide an abundance of organs, from bioprinting and xenotransplantation — you know, pig organs, you know, the antigens being replaced by human antigens. This is the work of George Church at eGenesis and Martine Rothblatt at United Therapeutics. So, um, you know, this is an abundance story, of going from a limited number of organs to an abundant number of organs. Um, Alex, you tracking this as well? >> I'm tracking the space broadly. There are other advances as well, like trying to create a national market for organ donation, versus a bunch of state markets, that would be greatly enhanced with improvements in uh vitrification and cryopreservation. I think it's good that there is a vibrant and growing distribution channel for donor hearts. I think that's great. But I also think it's very painful that
the need for one human to die — or at least that one human dies and donates a heart to another human — is such a zero-sum type situation. It's painful to think about, and while, you know, it's great on margin to have more efficient ways of distributing donated organs, I really, really would like us to get as soon as possible to a situation where donor organs are completely unnecessary. >> Yeah, and we will, I think. Uh eGenesis, um, uh, Dean Kamen's company, uh, advanced uh organ generation: they go from your skin cell to a pluripotent stem cell to regrowing your heart, liver, lung, or kidney. A lot of this is going to be up and operating by the end of uh this decade, hopefully sooner. Um >> Can't come soon enough. >> Yeah, and of course, as we have autonomous cars having fewer car accidents, you know, the supply of organ donors is going to be reduced, though still
motorcycle accidents are probably the number one reason we get organs donated. Um let's move on to our next story, and this goes in line with the fact that we are at the beginning of the slaying of cancer, right? So uh this is a great story: pancreatic cancer mRNA vaccines show lasting results in trials. So I don't know if people have been tracking this, but we now have these cancer vaccines, um, and this is using mRNA — we used it as a COVID vaccine. This is actually the ability to create an mRNA that activates your immune system against the cancer that you have. So there are more than 120 of these trials going on uh against lung, breast, prostate, melanoma, pancreatic and brain cancer. Uh in this particular case, the 5-year survival rate for pancreatic cancer uh has just gone through the roof. Historically, it's 13%. If you have pancreatic cancer, it's a death sentence. Only 13% of
people are able to survive that. So in this report, eight out of 16 patients generated a strong immune response to the vaccine, and of those, uh, 87.5% were still alive after 6 years. So how does this work? Um, you have a surgery uh to remove as much of the tumor as you can. You sample the tumor. It's sequenced, uh, and then that sequence is used to identify 20 unique mutations in your cancer, uh, and those are then built into a personalized mRNA that activates your immune system — like killer missiles, it activates your killer T cells to go after and attack your cancer. So this is um a breakthrough in how we deal with cancer. Uh and the fact that you're durable after 6 years is pretty extraordinary. >> I remember this incredible quote from Raymond McCauley, our biotech guy at Singularity. He said mRNA vaccines are the first battle in the last war against
disease. >> Yeah. >> Amazing for me, and I think this is showing up. >> My daughter works on this over at Moderna, actually, mRNA vaccines, >> and Moderna got a bad rap on their mRNA for COVID, but this is the holy grail, right? I mean, being able to go from your cancer to here's-the-injection-that's-going-to-save-your-life is extraordinary. >> Well, and if it works, that's the amazing thing: it's a universal solution. Like, you know, when Alex talks about all of math being cooked, this is the difference between, in the old days, I solved one math problem; now I have an AI, it solves all math. This is the equivalent in biology, where if it works, it should work everywhere. >> Yeah. Alex, you're right. I mean, mRNA was, you know, Project Warp Speed. I'm just saying afterwards a lot of people were coming down on mRNA vaccines, but >> there's a lot of politicized griping over mRNA vaccines in general, but there's going to be political griping
over almost anything at any scale. I do think — I think back a quarter of a century to Eric Drexler and Engines of Creation and the National Nanotechnology Initiative, when the US Congress was sold a story that with billions of dollars of congressional and national investment, we would get medical nanobots that would swim through our bloodstreams and cure our cancers. >> Well, we're getting it, though. >> But we're not getting it with diamondoid nanobots. We're getting it with these lipid nanoparticles and Moderna- and Pfizer-style mRNA vaccines. I think it's interesting, almost as a retrospective, to say we actually got the nanobots. They're just fat. >> They're not silicon. They're not diamondoid. They're fat. >> Yeah. We're using our own machinery to do the battle for us. >> That's the other angle. I mean, do you have a prediction, Peter? Given that immunotherapies — in some sense, like really, really coarsely, immunotherapies —
[01:40:00] we've known about some form of immunotherapy for 100-plus years. People who were infected with a virus or a bacterial infection 100 years ago in some cases showed tumors shrinking. We've known at some level that some form of immunotherapy would work, and we're only now figuring out how to fully weaponize and operationalize it. Where do you think this goes? Do you think in 10 years we're all wearing Apple smartwatches that are looking for evidence of tumor DNA or RNA in our bloodstream and then sending our daily mRNA update to a programmable implant or something? >> I think that is basically it. Either there are implantables or you'll be sampled on a regular basis. I mean, the goal, of course, is to find it at the very beginning, especially if there are solutions. >> There's one more point about this that I think is really powerful. This is personalized medicine actually becoming operational, >> and that's a huge inflection point we've been waiting on for a long time. >> Here's another example, again, just to
[01:41:00] give people hope and to see the data. You know, a longevity mindset is about seeing this over and over and over again and saying, yeah, the world is changing. The things that used to kill us are being either solved or delayed. So, a single-shot CAR-T infusion shows a strong response against melanoma. And it's not just a strong response: 100% cancer-free after a single shot. This was an unexpected result. Within two months of treatment, all 20 patients in this trial had minimal residual disease, MRD-negative, right? No disease identified after they were assayed. All patients had a median follow-up of 15.3 months without any recurrence of their melanoma. So it's game-changing in timing. So how does this work? You draw blood. You identify that you have melanoma; the doctor finds it. We should all be scanning ourselves all the time. We do this at Fountain Life using visual scans. At a minimum, if you have a family history of skin cancer, please have yourself
[01:42:01] checked on a regular basis. So the doctor draws blood and extracts the T cells from the patient. The T cells are genetically engineered: a gene is inserted, giving those T cells a new receptor called a CAR, a chimeric antigen receptor, that is specifically programmed to recognize the protein from your melanoma. Your T cells are then reinjected back into your body, hundreds of millions of them, and they go identify the melanoma and they slay it. For the first time ever with this type of therapy, we're using the term cure for this particular type of cancer. I mean, it's extraordinary. >> Just another example of what's coming. >> This is amazing, but can you also see the clumsiness of it? Requiring blood extraction and then CAR-T cell creation in vitro? Why can't we do this in vivo? Why can't we do this in individual cells, even? We're seeing the beginnings. This is almost like the horse-and-buggy era of
[01:43:00] immunotherapies, but surely we should be able to do this in a fully autonomous, intracellular environment. >> Take the win, Alex. Take the win. >> Oh my god. >> Yeah. I want my FSD. >> Yes. And you shall have it. All right. Here's one more story, and this is a fun one. So, MRSA, people have probably heard about this. It's methicillin-resistant Staphylococcus aureus. It's a killer infection, right? This has typically been in hospitals; it's now getting out into the community. So 2.8 million people get an MRSA infection every year. It kills 35,000 people in the US alone. The problem is all the first-line antibiotics for MRSA have failed: methicillin, penicillin, amoxicillin, and now even vancomycin, which has been the antibiotic of last resort, is no longer working. So this particular drug, candesartan, is now being
[01:44:02] used. It's an FDA-approved medication for blood pressure, and it works to basically stop and inhibit MRSA infection. And so this is an example of taking an existing drug that's now fully usable by the scientific and medical community because it's been approved and we know its safety profile. So I love this. >> Do you remember, Salem, who we had on stage at the Abundance Summit? We had David Fajgenbaum. >> Yeah. This is similar to his story. I'll just tell his story, and congratulations to him; I'm a donor to his foundation. So, here's the story. In 2010, he's a 25-year-old medical student. He comes down with this rare disease called Castleman disease. And they throw everything they can at him, and he's literally read
[01:45:00] his last rites. He has four near-death experiences. And then, as a medical student, he starts experimenting on himself, and he discovers that his disease is caused by a hyperactivation of the mTOR pathway. And he says, well, if it's the mTOR pathway, I can probably downregulate it using rapamycin. He does that, and he finds out that it works. So he's been in remission for 12 years. And he comes up with the idea: are there other diseases out there for which an existing approved drug can be used to cure the disease? Here are the numbers: there are 18,000 recognized diseases out there, but only 4,000 FDA-approved drugs. And so he's now using AI to match the existing drugs, repurposing them against new diseases. And it's working. >> I think that's such a great example of citizen science also, right? Take a personal problem and then just start hacking your way through it. I think we're going to see hundreds of thousands
[01:46:00] of examples like this. And this is where people should understand why we're so excited about technology: because this is now possible. >> Yes. >> And this was not possible 10 years ago, five years ago even. And now it's just going to become more rampant, and any problem can now be solved by just focusing on it, attacking it with AI, and going after it. It's incredible. >> Solve everything, right? >> Yeah. Solve everything. And also, I would say, historically, before this era, off-target indications were a dirty word; drugs that had lots of off-target side effects were called dirty drugs, highly undesirable. But now, if we have amazing AI models of individual cells and the body, suddenly off-target side effects become a secret weapon. We can repurpose drugs, we can combine repurposed drugs. I'm very bullish on the space. I advise, and have as a portfolio company, Senjam Therapeutics, which is focused increasingly on AI for repurposing
[01:47:01] medications, like anti-inflammatories, for other purposes. I think the space has enormous potential thanks to AI. >> Yeah. Amazing. For folks who are interested, go to everycure.org. You can see what David's doing. It's a nonprofit. Support his work. He is brilliant. All right. Let's get into some fun conversations here. The robots are indeed coming, with a few stories to report today. The first: the ping-pong champion of the world is now an AI-driven robot. Let's take a look at a little bit of a match here, and we can discuss it.
[01:48:02] The background music is killing me. >> Sorry about that. >> Anyway, the robot's using nine cameras and three vision systems. It won three out of five games. >> Oh, let me pause here. It won three out of five games; I'm surprised it didn't win all five. And of course it will. >> It doesn't have a lot of topspin, actually. Just very nimble. >> Note that this is the worst it's ever going to be. That's kind of incredible. The speed of response is amazing. >> Yeah, this robot's called ACE. And, you know, I'm not sure if I would see this in the same lineage as Deep Blue or AlphaGo, but it's the beginning. >> It's totally not. This is a much lower-dimensional game than any of those board games. It frankly is astounding to me that it took this long to reach human performance in table tennis, because it's such a simple game. You only have a handful of degrees of freedom in the ball: you have its position, you have its linear momentum,
[01:49:00] you have its angular momentum, and I think that's about it. The rest is just modeling the trajectory and maybe doing a little bit of Monte Carlo tree search over tactics that your opponent might take. This should have been solved years ago. I don't know why it took so long. >> Let's answer that question, actually, because that's really well said, and this is very similar to many, many robotic operations in your home, in a factory, and so on. Whatever the barrier was, I think it's probably related to the vision system. >> It's not a high-margin problem, right? It's not really worth investing a billion dollars to solve. But now, because the vision systems and the feedback systems are dirt cheap and easy, I bet it was solved by one or two people in a few weeks. >> Yeah. >> And that means all these other home robots can now be built by one or two people in a few weeks. >> Similarly, a little deeper. >> Similarly, there's a tennis-playing robot also, which I'm excited to play with. That would be really cool. But it's all the same category.
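The point about the ball's low-dimensional state can be made concrete with a small sketch. This is a hypothetical, highly simplified predictor, not anything the ACE robot actually runs: it treats the ball's state as just position and velocity (spin and the Magnus effect ignored), steps it forward under gravity, and reports where it crosses the paddle plane. All the numbers are illustrative assumptions.

```python
# Minimal sketch: predict where a ping-pong ball crosses the robot's paddle plane.
# The state is just position + velocity, illustrating how low-dimensional the
# problem is compared to a board game's search space.

G = 9.81   # gravity, m/s^2
DT = 0.001 # integration step, s

def predict_intercept(pos, vel, paddle_x=2.74):
    """Step a ball forward under gravity until it reaches the paddle plane.

    pos, vel: (x, y, z) in meters and m/s, with x running down the table.
    Returns the (y, z) coordinates where the ball crosses x = paddle_x.
    """
    x, y, z = pos
    vx, vy, vz = vel
    while x < paddle_x:
        # Forward-Euler integration; only gravity acts in this simplified model.
        x += vx * DT
        y += vy * DT
        z += vz * DT
        vz -= G * DT
    return (y, z)

# A ball struck from the far end of a 2.74 m table at 10 m/s, drifting sideways
# at 0.5 m/s and rising at 1 m/s:
y, z = predict_intercept(pos=(0.0, 0.0, 0.3), vel=(10.0, 0.5, 1.0))
```

Everything hard in the real robot lives outside this sketch: estimating `pos` and `vel` from cameras, and accounting for spin, drag, and the bounce. That is why the speakers point at the vision and feedback systems, not the trajectory math, as the historical barrier.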
[01:50:02] >> Yeah. Total no-brainer. If you've ever used a ball machine, you then go pick up all the balls for like 20 minutes. A robot that does that is literally MIT course 2.70; you could have done it. So what's the barrier? I'm sure the barrier was related just to the feedback control and the vision, which you can now just do with a transformer. >> Well, also, people in robotics labs don't play tennis, so they don't have an incentive to go do that work. >> Not to generalize. >> I don't want to go too far down this rabbit hole, but there's a massive correlation between successful founding entrepreneurs and the MIT tennis team. It's basically 100%. It's crazy. Including Warren and me. >> All right. Here's our next story: the Tesla Cybercab is now in production. Take a quick look at this video. So, Dave, you and I saw this, and we saw the production line.
[01:51:00] We were in Austin in December. >> Of course: no controls, no steering wheel, no pedals, an operating cost of 20 cents per mile. And Elon's announced he's going to sell it for $30,000. I think an incredible investment, if you can afford it, is to buy 10 of these and put them out in your community, and they earn money for you while you sleep. >> If you're just listening to this podcast and not watching the video, go find this video clip. >> Yeah. >> You've got to see the interior of this to believe it. It's like you're walking into a car, but it's just a love seat. >> And it's only a two-seater. >> Yeah. >> Which, you know, the average load for an Uber is like 1.2 people, >> right? So, two seats makes total sense. >> Well, look, if you have four people, just push the button twice and two of them come. >> When is this expected, by the way? I need this to get my kid to school so I
[01:52:00] don't have to do that. >> So, I mean, production is coming off the line. >> Production officially started this past week, April 24th. And, you know, the challenge is: can they really build at the rate they want? Their goal is two million of these per year. >> Yeah. >> Regulatory hurdles, or has that been passed now? >> No, it's all the same as Waymo. >> Okay. >> Yeah. It's state by state, town by town. But if you're covered, you just go. >> The difference is, a Waymo, because of the lidar and all the camera systems and just the base vehicle, probably tops out over $100,000, maybe $150,000. I'm not sure if that will come down if they get into higher production. But at $30K, >> yeah, >> this is insane. >> Yeah. There are so many parts if you look at the parts just laid out, you know, because there was that great exploded car >> in the showroom >> for an ICE vehicle. Compare it to a consumer gas-powered car in raw part count, and it's just
[01:53:01] it's got to be an 80 to 90% reduction in components versus a gas car. >> I'll give you the statistic I always have in my head: the number of parts in the drivetrain of a combustion-engine car is about 2,000. A Tesla has 17 moving parts in the drivetrain. >> Oh, the future of transportation is so good. >> It's just better technology. >> Yeah. And it doesn't need a huge battery range, either. It can just go and hang out and recharge itself whenever it wants. Another one will come. >> And guess what? Along the transportation technology line, here's the next story: Joby Aviation. This is JoeBen Bevirt, who started Velocity11 with Rob Nail, if you remember Rob. >> Sure. >> So Joby just did its first air taxi flight in New York, from JFK into Manhattan. Let's take a listen to this news report out of New York. >> And hello. I live in New York. >> I know. Well, this is going to help you out, buddy. Check this out.
[01:54:00] >> Massive. >> Getting to New York airports is a nightmare. An electric air taxi demonstration took off from Kennedy Airport and made the short trip to the West 30th Street Heliport at 11:00 a.m. this morning. If you were to drive that 16 miles, it would take more than an hour. In this cutting-edge aircraft, roughly 7 minutes. Joby Aviation's goal is to make this type of travel the gold standard, pointing out several pluses, including zero emissions and how quiet it is: 100 times quieter than a traditional helicopter. This so-called air taxi would shuttle people from JFK to the West 30th Street Heliport, as well as the one at West 34th Street and the downtown Skyport. The aircraft seats up to four passengers. There's one pilot, room for luggage, and it will fly between 1,000 and 3,000 feet. Right now, Joby does have the green light from the FAA for this phase. And if things pan out, the company hopes to have its fleet up in the air and running within the next
[01:55:00] year. But for now, for the next week, you will see this aircraft, which kind of looks like a large drone, buzzing over our area. >> It's a time machine, gentlemen. Can you imagine? >> I'm standing at the heliport with my bags ready. >> I love it. You know, eVTOLs. >> eVTOL is a lousy name, though. I just call these things flying cars, for lack of a better term. We need a better term than flying cars, and a better term than eVTOL. >> It took too long. We were supposed to have these by 2015, per Back to the Future Part II. Here we are in 2026. Why did it take so long? >> We need Mr. Fusion. >> You think Mr. Fusion is the reason we didn't get our flying cars? >> Absolutely. That's what the movie says. >> Oh man. >> Well, in our robotics segment here, we had two back-to-back "Alex, why did this take so long?" questions. So we're very much in a "why did everything take so long?" mode. I guess: hello, regulatory. >> You think it's regulatory? Do you think regulations are why we didn't get them? >> The technology has been there for quite a
[01:56:01] while. It takes a long time. Ask Peter how long it took Zero-G to get approval to do something that NASA had been doing for 20 years. Anyway, yes, the FAA is not happy till you're not happy. That's the rule. >> Well, I think we've got to answer that question, because a lot of the AMA questions are around what the jobs of the future are going to be if white collar gets obliterated. I think a lot of the answer lies in these last couple of segments. Robotic stuff is going to be abundant imminently, but it doesn't just naturally happen. And so if we can answer Alex's two questions on what the bottlenecks are, those are jobs. >> Yeah. >> Whatever those bottlenecks are, those are your jobs. >> Those are AI models. If there's a bottleneck there, the AI will solve it. This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale code bases with millions of lines
[01:57:00] of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80% or more of the development work autonomously while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding copilot of choice to bring an AI-native SDLC into their org. Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today. >> All right, let's jump into the AMA with the mates. So guys, thank you again for all the comments you give us on YouTube. We read them all. I have Skippy read them all as well and summarize them. We
[01:58:00] pick out eight questions that we can answer every week. So, please keep them coming. And let's go to those questions. All right. So, gentlemen, pick your favorite question off list number one. Salem, do you want to go first? >> I'll go with number four, as I know that world a little bit: what's the future of large consulting firms like Accenture or Capgemini? This is from Steve Bottle501. This goes full-on into the transformational effort happening in enterprises here. Traditional consulting is in very big trouble if it remains a pyramid of junior labor producing analysis and decks. AI totally destroys that model. But consulting firms, you know, in the land of the blind, the one-eyed man is king. In a volatile world, your clients are slower than you are, and they need help. The model will have to change. The future of consulting won't be a people pyramid. It's an intelligence platform plus domain expertise plus
[01:59:01] change management. And we've been focusing on the change management side for a while. The winners are going to be the ones bringing agentic workflows, benchmarks, governance, and implementation capacity to their clients. The losers are just going to keep selling headcount. From an exo perspective, consulting moves from experts for rent to a transformation operating system, and the companies that help their clients do that will win. >> Yeah. You know, we've talked about this before: in the old scarcity model, you put a wall around all of your experts inside and you meter them out by the hour, right? And that is going to get collapsed. Alex, why don't you go next? >> I'll take question number two, which asks, "Everyone can be an entrepreneur with AI as a tool. However, what action do you take when you genuinely don't have a creative idea for a direction? For most, the answer is none." And this is from 3 billionth random user. I don't
[02:00:01] agree with the premise. One of the reasons why one of my funds, O21T Capital, backed a firm started by friend of the pod Alex Finn, called Henry Intelligent Machines, or HIM, is to solve the problem of creative ideation for starting new ventures. I think just as AI can take over as an operator of a business or a fleet of businesses, AI can also automate the process of creative ideation for those businesses. And I think in that world, in the HIM world, if you will, the role of the human is sort of a one-person owner or magnate overseeing a conglomerate of maybe hundreds or thousands of AI-run micro-businesses. The role of that human entrepreneur then becomes one of a tastemaker. You have opinions. Everyone has opinions as a consumer of goods and services. And those opinions can shape the taste of fleets of AIs
[02:01:02] that are providing the creative ideation for businesses that they bring to you. They say, "Hey, I want to start this micro-business for you. Do you like it?" Yes or no. And then the human can have an opinion. The AI performs the ideation and the generation part; the human provides the discipline, the taste, and the discrimination for which ideas pass the filter and which ones don't. And that's the solution. That's how we square this circle of humans not actually, in extremis, needing to generate all the creative ideas themselves. >> Yeah, agreed. You know, idea generation has never been the limiting factor. You just have to get around different people or just notice the problems around you. It's historically been execution that's been the issue. Go check out Pulsia. I think it's pulsia.ai, which is AI slot backwards. If you sign up for that, it will scan all of your background and
[02:02:00] it will generate ideas for you. In fact, it will generate a website for a business based upon what your passions and interests are. Anyway, fascinating stuff. >> And maybe instead of Pulsia, I'll talk my book here, since I have a financial interest in this one: check out meethenry.ai. >> Okay, fantastic. Dave, number one or three? >> I'll take three and leave you with the hard one. >> Happy to help. "If you eliminate entry-level jobs but keep experienced jobs, what happens when the experienced people retire? Isn't that like eliminating babies from humanity?" says Todd Marshall 416. I don't think it's quite that dire, Todd. Eliminating babies from humanity is about the worst thing that could possibly happen. If you eliminate entry-level jobs, well, look, this was going to happen anyway. Think about it: we have a weekend place up in Vermont, and the Simon Pearce glassblowing factory is up there, and if
[02:03:00] you want to blow glass, you have to apprentice with a senior artisan for like a decade, and then they let you make glass. It's like a page out of 200-year-old history. That mode of operation is going to go away in all forms of white-collar work, no matter what. The rate of change of the world in the singularity is so fast that the entry-level career path was kind of a dead end anyway. Now, Meta announced a 10% layoff, which is really going to be more like 30% according to the insiders I know, and they're definitely not hiring new entry-level people in the middle of doing the layoff, because AI can do all the coding. That was not the career path you wanted in the first place. So we're going to have to find a new way forward. I think AI is going to be the ultimate teacher. We're going to save a ton of time on, like Peter was saying earlier in the podcast, the four years of medical school followed by four years of fellowship and internship, eight years
[02:04:00] of your life after you're already done with undergrad. It's just way too much time. So, it's all going to move to AI-based, nimble training. And then this massively expanding economy creates huge amounts of new opportunity every day, but it's opportunity that didn't exist the prior day. So the entry-level job wasn't really likely to lead you on that path anyway. It's all got to get refactored. It's nothing like people stopping having babies. >> I think that's so well put, Dave. Really well put. All right. Question number one, which I'm left with, is from Gianluca Pacani 808, who asks: you guys say AI will create jobs, but for whom? It looks like AI is creating jobs for AI, not for people. So, Gianluca, the fact of the matter is, in the long run, yes, AI will be able to do any job. I think that is the case. But people still like working with people, still like
[02:05:00] hanging out with people. And I think it's ultimately going to come down to the fact that two things are occurring. Number one, as every technology destroys a layer of jobs, new jobs are created on top of that. You know, the internet killed travel agents, but it spawned millions of social media managers, app developers, YouTubers, and everything else. So, there are going to be new layers of jobs coming out. And yes, those may well be displaced by AI. Again, at the end of the day, the question is, what are you passionate about? And how do you use AI to help deliver that? There's going to be a human interface layer for a lot of things, because people like hanging out and interfacing with people. We're, you know, meat puppets. So it's going to be navigated; it's going to be important. And I'll just remind you of one other thing: the idea of a job is a recent creation. And most people don't love the jobs that they have. They have the jobs they have right now because they frankly need to put
[02:06:00] food on the table and get insurance for their families. So if you could do anything, what would it be? Would it be to work? I mean, in a future of universal high income, where everything is demonetized to such a point that you don't have to work, then you start doing the things that you love. So that's my take on it. All right. Let's move on to our second set of questions. Alex, why don't you go first? >> Ah, well, let's go with question number five: "Wasn't all of this originally predicted by Ray Kurzweil to be happening sometime around 2040? Are we genuinely that far ahead of schedule?" And this is from Brett Avalon. I'm not sure, Brett, what "all of this" you're referring to may mean, but I do think, broadly, we're well ahead of where friend of the pod Ray thought we'd be. As I've mentioned on numerous occasions, I think we achieved AGI, which isn't Ray's concept, but was popularized by
[02:07:02] Nick Bostrom and co-conceived by Ben Goertzel and some others. I think we achieved that by no later than the summer of 2020. And Ray, who may say I'm misconstruing his timelines, was predicting his version of AGI by 2029. So, call that a 9-year gap. Ray, and I've discussed this with him on the pod, is predicting the singularity, his version of the singularity, by 2045. My version of the singularity isn't a point in time, and it's certainly not in 2045. It's now, and it's an interval, and we're right in the middle of it. So, are we genuinely far ahead of Ray's schedule? I think we are. I think Ray would probably at this point say, and arguably has said, that we are in some ways ahead of his schedule. And I think the benchmarks reflect that. And I think the 2045 timeline that he
[02:08:02] provided, where the superintelligence would be collectively smarter than all of humanity: I think we're going to hit that so far ahead of 2045. >> Ask him next week. We'll be with him in 6 days. >> From the horse's mouth. >> Yes. All right. Dave, why don't you go next? >> All right. I'll take the hardest one, number eight: p(doom), the probability of universal destruction of all humanity. Estimates: Musk and Hinton say 10 to 20%, Amodei says 25%, Altman says nonzero (he actually said more like 10% when I interviewed him). How can any of these CEOs think it's acceptable to have a one-in-five chance of human extinction? Well, they all agree with you that it's completely unacceptable, and they all say stopping research and letting China run forward isn't going to solve the problem. And so they trust them. They each individually
[02:09:00] trust themselves. You can debate whether that's good or bad, but they do. And that's why they want to not lose the race individually, and that's why they're pushing forward at full speed. I think Musk, and I think along the way Amodei, have both suggested a six-month pause, but at the same time they say it, they say it'll never work, it won't happen in the real world, so they're just going to keep moving as fast as they can. But they 100% agree with you: this is completely unacceptable, ridiculous, and the lack of government involvement across the world is utterly insane. So that doesn't solve it in any way. That is just what's actually happening, and that's what's going to continue to happen. And I'm continually shocked, as is Alex, I know, with our inability to get any kind of government reaction to what's now obvious. We were telling them a year ago, when maybe it wasn't 100% obvious, but now it's 100% obvious, yet still so slow. So anyway,
[02:10:01] there's your answer. >> I'd be curious, Dave: do you think they believe their own estimates here? Or is this a case of revealed preference, where they think maybe it's more socially acceptable to estimate a higher number, but actually, through their actions, they're revealing a preference that suggests their internal estimate is much lower? >> I think it's lower. I don't know if it's much lower. I think they all share the same consensus that chemical, biological, radiological terrorism is the number one risk. And so I think it's probably lower, but I don't think it's like 0.001% low. >> Interesting. >> See, you have two to choose from. >> I will take number seven, which is: when white-collar jobs are erased, where does the consumer demand come from to buy from all these new entrepreneurial ventures? This is from
[02:11:00] T. Tilman. You know, this is a tough one, right? This is the central political-economy question of AI. If productivity explodes but income does not flow to the people, demand collapses and the system becomes unstable. Capitalism needs customers, right? So, we need new distribution mechanisms. We need lower costs. We need new ownership models. We need AI dividends. You need equity participation. You need sovereign funds. All of this points, and this is similar to the previous question, to the optimistic side: AI makes goods and services cheap while giving individuals more leverage to create income. That's the good side. The pessimistic side is that you get extreme concentration and then a massive collapse of the economy. The path we take is a governance and institutional design choice, not a law of nature. So governance and our institutions need to freaking wake up and smell the roses here. We have to
[02:12:00] rethink this whole thing. The social contract, which is what you're basically talking about, is essentially being wiped out, and we can be optimistic about it, but the pessimistic case has a very big downside here. >> All right, the final question in our AMA today comes from James Williams CU2Q: how can a new CS engineer get experience to become a lead AI engineer if you can't get a job in the first place? James, first of all, as we've said many times, getting a job is the old model. You know, the old model of do well in high school, get into a good college, get a diploma, get hired as a junior person, and work your way up the chain: that is vaporized, or at least being fully vaporized right now. The option right now is to build yourself outside the job. Build in public, right? Basically, go and find something that you're passionate about, based on your massive transformative purpose, something you care about. We're going to
[02:13:00] be launching an XPRIZE in this area very shortly. Use the tools available today to build and ship. Your GitHub is now your resume. If you want to get a job versus starting a company yourself, companies are increasingly hiring based upon what you've done. I remember Elon said, "I don't care if you have a college degree. I care about what you've done." That is your degree now. That is your resume. Show that you're brilliant through what you build, not what you happened to learn in some college or graduate degree or entry-level job. So, build in public. The barrier to entry has never been lower for you to build something extraordinary that shows your capabilities. And once you do that, you're probably unlikely to be going after a job. You're probably going to want to partner with a couple of friends and build a product, a company, a service yourself. >> That's my answer, and I'm sticking to it.
[02:14:00] >> Great advice. We're advising a couple of universities around this, Peter, and one of them is an engineering university, and they're asking, well, what is an engineering degree? And it's pretty clear that the engineering degree of the future will be: go build some stuff, and at the end, what did you build? >> And you get a degree granted not on what you learned, but on what you built. >> Yep. I love that. And if you haven't done anything to get started yet other than listening to the podcast, add Alex's innermost loop to your daily regimen first thing in the morning. And that alone will inspire you to shift gears and get into this. >> Oh, thank you, Dave. That's very sweet. Um, yeah, for those who want to read the innermost loop, just go to alexw.org and I provide links to Substack and X and Spotify, etc. But appreciate the promo, Dave. It's very kind. All right, our outro music today, which is beautiful, is from Hitham Said. Uh, it's Aitopia. All right, gentlemen, get ready for some beautiful video and audio.
[02:15:06] Everyone living with no needs in sight. A home made to order with fancy little lights. No need to worry, it's paid for, don't be scared. The mortgage is dead, no debt will you bear. Energy is endless, we harness the sun, combined with safe atoms, forever in fun. No need to make widgets, no punching a clock. It's all just begun. Bots and models will take on the pain for us to live heaven on earth once again, passing the time and thoughts and desires, enjoying the peace. >> All right, thank you to my brilliant
[02:16:01] moonshot mates. AWG, I wish you a beautiful week. Dave and Salem, I can't wait to see you guys next Monday. We're all together again. We're going to be physically at MIT, uh, at the book launch of We Are as Gods. We're going to be recording a podcast episode there. Can't wait to do it face to face. >> May the fourth be with us. >> May the fourth be with us. >> Yes. Yes, for sure. >> As a Star Trek fan, I'm not allowed to say that. >> By the way, check out what's right above me. It's a surveillance camera, literally right over my head. No, it's just an omni cam. >> It's just a surveillance camera. But I couldn't resist. >> And by the way, it's not easy standing in the Guadalajara airport holding a laptop at eye level. There's nowhere to put it down. >> You did pretty damn well as a mobile mate. >> I've got my exercises for the day. >> Was this a mobile mate? I saw you moving around like you were trying to avoid policemen or something, or what?
[02:17:01] >> You know, I just have to shift positions down there, shift hands, and once in a while lean on something. There's nowhere to sit here that's easy, and I didn't want to risk losing a connection that I fought so hard to get. >> Oh my god. Okay, if you've got an outro song or intro song, please send it to us at media@diamandis.com. We'd love to hear it, see it, and potentially play it. Um, and thank you for subscribing, uh, to this, and thank you to all of the fans out there. I know all four of us run into you on the street, at the airports, at events, and, uh, it's so great. >> Yeah. If you see us, do come up and say hi. >> Yeah, for sure. >> Although not too many. >> All right, take care, folks. >> Bye. If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate. Every week, my moonshot mates and I spend a lot of energy and time to really deliver you the news that matters. If you're a subscriber, thank you. If you're not a subscriber yet, please consider subscribing so you get the news as it comes out. I also want to invite you to
[02:18:00] join me on my weekly newsletter called Metatrends. I have a research team. You may not know this, but we spend the entire week looking at the metatrends that are impacting your family, your company, your industry, your nation. And I put this into a two-minute read every week. If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. That's diamandis.com/metatrends. Thank you again for joining us today. It's a blast for us to put this together every week.