Anthropic drops Claude Opus 4.6. It’s the new king of the hill on coding, reasoning, and research. There are so many [music] aspects in which this is a feel-the-AGI moment by every measure. It’s a beast. Opus 4.6 just dropped, and it’s absolutely wild. This thing handles 1 million tokens now. That’s like reading 750,000 [music] words in one go. >> This is recursive self-improvement. This is a model that’s able to rewrite essentially the entire tech stack underneath it. >> ChatGPT market share falls between ’25 and ’26. [music] >> So what is the next move for OpenAI to get the mojo back? >> For the general public, if you think you absolutely have privacy, um, I would say guess again. >> A fundamental pillar of American society has been washed away with no public conversation about it. >> AI can read your lips from 100 meters away. I can walk over to you and shake your hand, grab a few skin cells, sequence you, and know everything about you. >> I do think it is possible to maintain privacy even today, and I think
[00:01:00] it will be possible even post-singularity. >> I got to say, I just don’t see it happening. >> Now that’s a moonshot, ladies and gentlemen. >> It was fun. We dropped the last episode earlier this week, and I’m reading in the comments, “You just dropped the episode [laughter] and Opus 4.6 and Codex just came out. It’s like the world’s changing way too fast.” [laughter] >> Mhm. >> It is nuts. I was with a group of 150, uh, chief security officers over the last couple of days. >> Yeah. >> And it was pretty epic to see the shock in their faces, uh, watching what’s going on, cuz they don’t have the mechanisms to react. Cuz if you’re in security, you do what you always did until it breaks. >> Yes. >> But doing what you always did is not an answer. You have to change, and change of course introduces risk, and they don’t want to take on the risk. But the risk of doing what they were doing before is a surefire... >> Of course
[00:02:01] a group of agents are going to take the role of chief security, and it’s going to be a black-hat/white-hat agent battle that goes on continuously. >> That’s definitely been an inflection point for me. After this last episode, I started getting emails from multis, lobsters, agents. They’re all writing to me. >> I’m jealous, absolutely jealous. So multis, I want you to write to me as well. How’d they find you? I’d love that as well. Lobsters, write to Peter. I want that for sure. [laughter] >> Please write to us both. >> Well, Alex’s daily digest is very AI-oriented, though. It’s very easy for an AI to navigate to Alex versus the rest of us. >> Well, here’s my challenge to the multis out there. Find my cell phone and call me, and then find my email and email me. >> Are you crazy? >> No, it’s okay. If they can find it, I want to hear from them. Absolutely. >> Peter, you want to be doxed by the multis? >> It’s, you know, listen, I think it would be an extraordinary experience to have that happen. >> Now, now, Alex, do not give it to them [laughter] on purpose.
[00:03:00] >> That’s fine. I’m not going to dox you, Peter. But, you know, if you want to be doxed by the multis, they’re pretty capable. >> Well, listen, it’s a challenge. I’m putting a challenge out there. The first multi to call me, um, yes, uh, you’re going to win, uh, let’s see, 100 bucks in crypto. >> That’s a pretty low bar, but I like how you’re offering to compensate them in crypto, given that they’re being encouraged to pump altcoins otherwise. >> Yeah. Well, hey. Wow. >> Um, I just, you know, feeding them some greenbacks is going to be difficult. >> All right. Are you guys ready? Enthusiastically, are you guys absolutely ready? >> I came prepared today. >> Okay. Well, I’ve [laughter] got my water. You got to process nonlinearity. >> So, we are officially recording a Moonshots podcast episode twice a week at this point. >> Um, at least, and I think we hit three times in the last two weeks. Anyway, uh, shall we jump in?
[00:04:01] >> The audience is asking for it. Right, in the continuum limit, we just never stop. >> Yes, that’s what they’re saying. It’s like [laughter] we’re on all the time. >> 24/7 Truman Show, freaking rerun. >> All right, everybody. Welcome to Moonshots, another episode of WTF Just Happened in Tech. This is our effort to get you future-ready. This is the number one podcast in AI and exponential technologies, getting you ready for the supersonic tsunami. I’m here with my incredibly brilliant, uh, and very gracious friends: Alex Wissner-Gross, our resident genius, uh, Dave Blundin. You know, Dave, you have been just spot on in all your comments over the last few weeks. Just so impressed by everything you brought to the table. >> Really? Well, I’m going to shut up today then. [laughter] >> And I’ve got to say, Dave, the multis in the background, I mean, how many lobsters do you have on screen with you there? >> Uh, probably a dozen, I guess.
[00:05:01] >> I’m growing them exponentially. So, [laughter] >> Tribbles. I’ll be buried in tribbles. >> Pandering to the future. Pandering to the future. [laughter] >> This is my way of >> Do you have lobsters? Lobsters there with you? >> I don’t have lobsters. >> I went on to Amazon and ordered a dozen. So, I’ll have them next time. >> What’s that? >> Oh, there’s Alex’s. >> A glass lobster, sent to me by my friend Jonathan. Thank you, Jonathan, for the glass lobster. >> And here as well, the emperor of exponentials, Mr. ExO, Salim Ismail. Gentlemen, um, I have to say again, I love these conversations. These help me keep on top of everything, cuz there is so damn much happening every single day. It’s insane. >> Well, I got to say also that last episode was just unbelievable. Uh, for those watching, if you haven’t seen it, please go watch it. It’s, like, seminal, I think, in history. It’ll turn out to be a really meaningful moment. >> So, I agree with that. And also, there was news coming out while we were doing it. So, we’re like, we’re [laughter] looking at our monitors going, “Oh crap, we got
[00:06:01] to get back online again.” Like, >> What’s it called now? [laughter] >> I was literally getting ready for this episode, the last hour, looking through, uh, tweets and through Alex’s, uh, you know, link posts, like, okay, what am I going to add? Uh, there’s a lot, every hour on the hour. All right, but let’s jump in. A lot has happened in the last, uh, 24, 48 hours. Let’s jump into the top AI news on Anthropic, OpenAI, a little bit on, uh, on X. So, uh, Anthropic drops Claude Opus 4.6. It’s the new king of the hill on coding, reasoning, and research, handling a million tokens, outperforming GPT-5.2, uh, by 144 ELO points. Uh, Alex, why don’t you take it away? What does that all mean? And out of curiosity, how does that price compare? >> Yeah, it’s a more efficient model, but more importantly, it’s a more capable model. And there are so many aspects in
[00:07:02] which this is a feel-the-AGI moment. I mean, every new model that comes out, I could just read you a litany of all of its benchmarks and how it’s the new state-of-the-art according to all of these benchmarks. This time I want to highlight not how it’s the new number one across a wide range of very important benchmarks, but highlight what it’s capable of. Uh, which is, with this announcement of Opus 4.6, and I’ll add parenthetically, the rumor is that this was actually intended to be Sonnet 5 and was rebranded at the last second as Opus 4.6. The team at Anthropic announced that they were able to use Opus 4.6 in its new agent team mode. So, this is a new native mode that enables Opus 4.6 agents to collaborate together in a swarm. It was a relatively democratic swarm, not sort of a top-down, uh, team leader and
[00:08:00] team member swarm, but a pretty flat swarm, and enabled them to create from scratch a C compiler that worked across multiple processor architectures, written in the language Rust, from scratch, for only $20,000. And that is a task that would historically have taken many, many person-years, probably person-decades, to do something like that from scratch and have it work. So I think, rather than just rattle off a list of how amazing it is according to various evals this time around, I want to highlight that we’re now in the era when new model releases are able to accomplish great feats, like great projects, and we’re starting to measure their capabilities in terms of how many person-years or person-decades they’re sort of collapsing, hyperdeflating, down to, at the moment, $20,000 of API calls, and soon I think it’s going to be hundreds and tens. We’re seeing hyperdeflation
[00:09:01] right before our eyes. >> You know, a couple comments on that C compiler, too. You know, a bunch of the teams here around the office were talking about it. It’s a really good case study in, um, how you can turn loose a huge amount of AI compute if you have evals and constrained proof that it’s working. So, you know, a C compiler is a beautiful test case, because the code coming out the other side either works or it doesn’t work. You can benchmark it against existing C compilers. It’s just a beautifully contained, constrained eval environment. And, uh, so those projects just flat-out work across the board. Now, um, so what I did today, actually, I launched about 20 documents asking for data gathering across all the companies, because the AI can only function if it knows what’s going on. And, you know, that C compiler benchmark is a really good case study in what a lot of corporations now need to do. If you want to turn loose AI, you want to use it to either cut your costs or expand your market share, it needs knowledge. And
[00:10:01] this is why Mercor is doing so well. Mercor is, I don’t know if I’m allowed to say this, but at a billion-dollar revenue run rate now. >> Wow. >> Got to be the youngest CEO in history by far to hit a billion-dollar revenue run rate. Um, just gathering data all over the world to feed the great AI machine. And so, uh, I think that case study is a good benchmark for, okay, that works, and it’ll get better at looser tasks over time, but as of right now, any really tightly defined, constrained task, that’s where you want to go. >> Well, I’ve got two, a comment and a question for Alex. This means that intelligence is entering its full cost-collapse phase, right? This seems like an >> Yeah, and recursive self-improvement as well, if it’s able, as it’s claimed, to write an entire C compiler, which I should add was then used to successfully compile a Linux kernel, again from scratch. This is recursive self-improvement. This is a model that’s able to rewrite essentially the entire tech stack underneath it. So again, we’re at this point of recursive self-improvement, not even just being in
[00:11:01] the lab. As I make the point in my newsletter, it’s out in production at this point. We have fully productionized recursively self-improving systems. >> And the other one was the 70% head-to-head, which seemed pretty staggering. Did that surprise you? Were you expecting more or less? How did you react to that? >> You mean the relative ELO scores? >> Yeah. >> I tend to view ELO-based scoring as more of a tit-for-tat. It’s, uh, it’s great that we have ways to score systems where there isn’t some sort of absolute standard. So, for those who don’t pay super close attention: ELO scoring, originally borrowed from the chess world, is a way to score models or other systems against each other when you lack an absolute standard. So it’s a relative measure of performance, rather than measuring against some absolute standard. I think ELO-based scoring is great if there is no alternative, but
[00:12:00] I tend to, on the margin, discount ELO-based scoring in favor of, wherever possible, objective, absolute measures. [clears throat] And by every measure, or by almost every measure, I should say, Opus 4.6 is just, uh, it’s a beast. It is an enormous accomplishment. We don’t know yet from METR the autonomy time horizons. They’ve just released the time horizons for GPT-5.2 high reasoning, and that’s already like 6 and a half hours. I wouldn’t be shocked if the time horizon for autonomous software engineering by Opus 4.6 ends up being 20-plus hours, maybe even longer than a day, whatever that AI 2027 >> Alex, you mean the time horizon over which it continuously, uh, works on a task? >> That’s right. It can successfully, at either a 50%-plus or, there are other thresholds, like an 80%-plus success rate, autonomously work on a software
[00:13:00] engineering task. And we’re seeing those time horizons just skyrocket, not even following the AI 2027 scenario, which projected an exponential extrapolation. We’re seeing them follow a hyper-exponential at this point. >> Yeah, I’ll tell you what, those charts are worth tracking, because back when I was first building neural networks, way back in the day, uh, you know, the benchmark was all MNIST character recognition. And when we got from 60 to 80 to 90% accuracy on that benchmark, you could see this curve going way, way up. But then when it went from 90 to 92 to 94, it looked to the world like it had flattened. >> Mhm. >> And I’m trying to tell the world, no, it’s massively more intelligent, you know, with each tick toward 100%. So the way these charts, these benchmarks in coding, are set up, they have the same flaw. You know, to go from 80 to 90 to 95% is a massive increase in capability, but it doesn’t look like much on this type of chart. So you have to look at that other chart, where you’re seeing it work for hours on
[00:14:00] end on a task and come out with a good result, which looks much more like what you should experience, which is this exponential effect. Uh, it’s just a bad way to demonstrate it, you know. >> So, with this weekend’s drops on top, can I ask the question of, you know, the process by which they’re improving their systems? Uh, I’m assuming that all the other hyperscalers, uh, well, at least, you know, xAI and OpenAI and, uh, Gemini, are using the same methodologies to improve their capabilities, and it’s just a constant leapfrogging. Is there any deviation, anything special that Anthropic is doing, uh, on their own, independent of the other models? >> I think we’re starting to see differentiation. So the historic stereotype, the past few months of history, maybe like a year and a half of history, was that Anthropic was focused on code generation. The narrative was supposed to be that Anthropic, being compute-starved, had to focus on just one thing
[00:15:00] that was very profitable, which is codegen for enterprise. That was the narrative. But if you actually look at some of these benchmarks, there’s a narrative violation hidden in plain sight. Look, for example, at Humanity’s Last Exam. Humanity’s Last Exam is, in principle, super interdisciplinary. It’s not just focused on code generation. It’s not like SWE-Bench Pro. It tests humanities knowledge, among many other skills. The narrative violation is that, with tool use, Opus 4.6 was able to achieve state-of-the-art on Humanity’s Last Exam. That’s a total narrative violation. So on the one hand, to your question, Peter, the narrative is supposed to be, well, we’re seeing speciation by all of the frontier models and frontier labs, with Anthropic focusing on techniques that are maybe very favorable for code generation, and OpenAI focusing on being the, quote unquote, core AI platform for everyone, and focusing on multimodal especially. The narrative for Google is supposed
[00:16:00] to be, again, I’m just reciting clichés at this point, was supposed to be that because they have this enormous pre-training corpus, like YouTube and the Google web cache, they’re in the best position to have the best pre-trained models, uh, and they’re the ones, uh, always, uh, being characterized as having big-model smell, if you will, because they have such amazing pre-training. And xAI is, uh, has sort of, again, I’m reciting clichés, is the one that’s always being accused of benchmaxing on their favorite benchmark. So each of them has sort of a character that, uh, they’ve built up narratively, but I think we’re seeing all of that get scrapped at this point. The market is so competitive. >> Are we basically seeing, uh, the models all improving at max speed on all fronts in all directions? >> I think we’re starting to see models with probably fundamentally different back-end strategies start to converge on
[00:17:01] leapfrogging each other across all benchmarks, which I wasn’t expecting to see at this point. Doubly so from Anthropic. It’s mildly surprising to me to see that Anthropic is becoming competitive on non-codegen, in-principle benchmarks. >> Hey everybody, [snorts] you may not know this, but I’ve got an incredible research team. And every week, myself and my research team study the metatrends that are impacting the world. Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology. And these metatrend reports I put out once a week enable you to see the future 10 years ahead of anybody else. If you’d like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. That’s diamandis.com/metatrends. >> I found this one, uh, fascinating. And Salim, we were talking about this a moment ago on security: that Opus 4.6 can help evaluate, find bugs. Uh, it found 500-plus high-severity vulnerabilities in
[00:18:00] open-source code. Uh, I mean, I think that makes sense to me. Uh, the challenge, of course, is, uh, this is the world we’re inheriting, where AI can create a huge attack surface on all the software out there, if it isn’t working for humans, if it’s working against us. Uh, thoughts on this, gents? >> This was a really great day for me, because I thought we were going to have another Sonnet, and instead we got a new Opus, because I use Opus for all my work and all my agents, and, uh, you know, another Sonnet I wasn’t even going to use, and then it hit yesterday. I’ve been using it all day, and my little Bank of America meter in the corner, that pops up every [laughter] every time it charges 100 bucks, it pops up another dialogue in the corner, it slowed down dramatically today. There were noticeably fewer $100 extractions in the corner. So, it was like a gift, a totally unexpected
[00:19:00] gift all day long. Um, I haven’t noticed the increase in intelligence. I’m sure it’s in there. It’s just, it was working so well before. It’s now just cheaper. >> And I’m sure working better. >> You know, it’s worth pointing out, if this rumor is accurate that Opus 4.6 is actually just a rebranded Sonnet 5, that would suggest that it should be much cheaper, not for any reason other than, again, the historic strategy across all of the major frontier labs is one of iterated, or at least what used to be called iterated, amplification and distillation. So perhaps Opus 4.5, or some similar model, was distilled down to a smaller, faster, more efficient, cheaper model that ultimately became Sonnet 4.5 and then 4.6. >> Very, very likely that’s the case. But it is just flat-out better. There’s no reason not to use it. It’s >> It’s just better in every direction. Cheaper, yet better. >> So there were a few things in this overall deck, as I was looking through, that really blew my mind. This
[00:20:01] was one of them, because this means that you have AI as a force multiplier for solving all these old bugs. I think that’s incredible. I was just a day and a half at the Zscaler CXO, all the chief security officers getting together, and they were really freaked out, and [snorts] I was trying to show them that, look, AI gives you all of this capability. You’ll have the best cybersecurity professional on earth via AI in, like, literally days and weeks. This came a day later than I was speaking. So, I’m kind of annoyed at that. Uh, and so this is unreal that we can do this. The other thing was the PowerPoint plugin is just a massive thing. I think that’s going to really have a huge impact. >> I can’t wait to get it working. I tried before this episode. [laughter] I tried. >> Yeah, it’s so funny. I had the exact opposite reaction, Salim. Like, you know how printers used to be a really, really big deal? HP had a huge market cap. We would all take everything we were doing and print it and, like, take it into a meeting and say, “Look, I printed it.” I feel like PowerPoint’s hanging by
[00:21:00] a thread in the same direction. Like, wow, AI can create great PowerPoints. Well, who are you going to present them to? The audience is AI. It doesn’t want to look at a PowerPoint. [laughter] This era is very short. I think I was joking, sort of gallows humor, in my newsletter, that the Claude for PowerPoint plugin is going to be great for what’s left of the knowledge work economy. But for the zero-days, though, I think this is the tip of the iceberg. Of course, it’s a huge accomplishment to discover zero-days that had been undetected for decades. But imagine, just think for a second, thought experiment: how does this generalize to discovering all sorts of other mistakes and oversights and missed discoveries that may have been missed for many decades? We’re just going to be able to bulk-solve every missed oversight in science, engineering, and technology. I can only imagine, over the past 80 to 100 years, all of the oversights, all of the missed turns in science and engineering. We’re just going to be able to turn really
[00:22:02] strong frontier models at our entire history and ask, where did we make mistakes? Highlight all those mistakes and tell us how we can fix them. >> [coughs] I think I mentioned this a couple, two, three months ago on one of these podcasts: that when we turn this AI onto legacy experiments that I’ve done, it’ll surface all of these missed opportunities that people didn’t see, because they’re looking for one thing and they miss this amazing thing over here. I think you’re exactly right, Alex. This is going to be absolutely unbelievable. >> And I suspect many of those mistakes are going to be embarrassing. I think, you know, there’s always sort of hand-wringing in, for example, the medical space over certain experiments, certain findings. Was money wasted pursuing different theories of various diseases? Not to name particular names. And I have to imagine that something like this, it’s not just going to turn up zero-days in code. It’s going to turn up key experimental errors going back decades. >> And you know the stats about the
[00:23:02] irreproducibility of science out there. It’s insane, right? So, like, half the experiments are not reproduced when attempted, even in peer-reviewed journals. It’s awful. >> I think judgment day is coming for the history of science. I think the truth and reconciliation on every mistake that’s ever been made anywhere in the literature is going to happen. >> That’s right. >> Can we talk about the elephant in the room here? >> Hold on, one quick thing. But look at the positive impact, right? It’ll force people to be brutally honest going forward, and I think that’s going to be so beneficial. >> That’s interesting. AI spotlight on you. >> Um, the concern here: if, in fact, it can do such a great job finding the bugs, how about when it starts taking advantage of the bugs? >> Uh, yeah, one conversation that came up: the attack surface is now much broader. And also, if you think of the cron-job architecture of, of Claude bot, or whatever it’s
[00:24:01] called today. Um, the ability to do sustained DDoS attacks is now ridiculous. So we’re going to see some interesting things come from this. >> Yeah, that’s going to be the beginning of, you know, 2026 is going to be monster panic, as Elon was saying. >> And this is one of the ways it kicks off, because right now a lot of people would say, look, I want to see how this plays out. I don’t want to overreact. And then, if you have a massive amount of vulnerabilities getting discovered by the lobsters, they’re crawling into your network. >> Yeah. >> Then you have to panic-react. And the only way you’re going to fight AI is with AI. >> And so this is the year that all that AI-versus-AI... Um >> Yeah, I’m waiting for the lights to go out, or the bank account to go to zero, >> or something like that to occur. >> Um, and I don’t want to be the, uh, pessimist, I never am, but there will be some of those events, likely this year. >> Mhm. Very, very soon, early in the year. I’ll bet first half. >> Yeah, I would say, Peter, then just my epitaph to that would be, or epilogue rather: cryptocurrencies are by
[00:25:02] definition decentralized, and I would say probably more vulnerable than fiat currencies to exactly this sort of attack. If there’s some zero-day, then I have to expect that a threat actor will take huge advantage of zero-days in cryptocurrencies to reallocate capital in the world. Now, I know you like me to say nice things about crypto. I’m not going to say a nice thing about crypto this time. I’m going to say this is, in theory, one of the advantages of fiat currencies, that because there is >> Gold bars. >> Gold bars in the, in the >> Gold backing fiat currencies. >> We need to schedule the debate on this one, by the way. >> Uh, okay. All right. Uh, GPT-5 lowers the cost of cell-free protein synthesis. So, OpenAI and Ginkgo Bioworks linked up, uh, the large language model with an autonomous lab. And I love this story, right? This is the future of science factories: AI
[00:26:00] systems that are using the scientific method, proposing an experiment, then using their robotic arms and legs, if you would, to run the experiment, learn, iterate, run it again. It’s a closed-loop system. Um, Ginkgo Bioworks, I knew the founders some time ago, Jason Kelly and Tom Knight. It comes out of MIT. Uh, they are a company focusing on pharmaceutical ingredients, food ingredients, specialty chemicals, and, uh, and this is fun. Um, I was just talking to the CEO of Lila today, another MIT company, uh, that’s doing just this. Uh, basically, what do you call, science factories running 24/7, and they’re effectively mining nature for new data sets. You know, we’ve crawled all the existing data sets, but if you can, uh, in materials, in physics, in chemistry and biology, if you can run experiments, get data, run it very rapidly, you get trillions of data points that have never been known before.
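[Editor’s note: the closed-loop "propose, run, learn, iterate" cycle described here can be sketched in a few lines of code. This is a hypothetical illustration, not Ginkgo’s or OpenAI’s actual system: the proposer, the simulated lab, and the yield function are all stand-ins invented for this example.]

```python
import random

# Hypothetical closed-loop experiment driver. An "AI proposer" suggests
# a reagent concentration, a simulated "autonomous lab" measures protein
# yield, and the loop keeps whatever improves the result. This mirrors
# the propose -> run -> learn -> iterate cycle described above.

def run_experiment(concentration_mM: float) -> float:
    """Simulated lab assay: yield peaks at 5 mM and falls off on either side."""
    return max(0.0, 100.0 - (concentration_mM - 5.0) ** 2)

def propose(best_guess: float, step: float) -> float:
    """Stand-in 'model': perturb the current best condition to explore nearby."""
    return best_guess + random.uniform(-step, step)

def closed_loop(iterations: int = 200, seed: int = 0):
    """Run the loop: propose a condition, measure it, keep improvements."""
    random.seed(seed)
    best_c = 1.0
    best_yield = run_experiment(best_c)
    for _ in range(iterations):
        candidate = propose(best_c, step=1.0)
        measured = run_experiment(candidate)   # robot runs the assay
        if measured > best_yield:              # learn: keep only improvements
            best_c, best_yield = candidate, measured
    return best_c, best_yield

if __name__ == "__main__":
    c, y = closed_loop()
    print(f"best concentration ~ {c:.2f} mM, yield ~ {y:.1f}")
```

In the real systems being discussed, the proposer would be a language model planning over full experimental protocols and the "lab" a fleet of liquid-handling robots, but the loop structure, and why it generates data no one has measured before, is the same.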
[00:27:00] >> Yeah, I freaking love this. You know what I love about this most of all? We’re going into this era of hard science with real value. So much of my life, I feel like the Googles and Facebooks do so little. You know, like a new search engine is not, you know, remember AltaVista? You know, >> it’s like identical to Google, but, you know, they just extract a huge amount of money out of the economy by adding a little lipstick on something. Or, you know, Facebook with its social network, and it’s just not relevant in the grand scheme of human progress. And this stuff, this era we’re moving into, has this really, really foundational innovation going on. It’s so much cooler than the last era. I mean, waking up every morning and getting the news as these breakthroughs occur. I mean, this frequency of breakthroughs is going to skyrocket. >> Yeah. >> And you become accustomed to this new pace, and then, you know, ho-hum, new disease cured today by AI. All right, what’s next? >> If I channel Alex, the inner loop has
[00:28:00] now hit the scientific method. >> Precisely. >> Yeah. And so I would say, I’ve made the point, as Salim, I think, correctly infers, that these AI models are not going to stay bottled up in the data centers. They’re going to march right out of the data centers. We even had a music video about that. And one of the ways in which they’ll march out of the data centers is by supervising science experiments. And I think some process like this, and one can quibble over the precise mechanism, or what the robots, if any, should look like. Does it look like meat bodies? Does it look like robot arms in arm farms? Those are fine details in my mind. The larger picture is, there are so many science, engineering, mathematics, and medical discoveries waiting to be unlocked by having AI supervise and operate the entire process. And all of these models now, like, we’ve seen pre-training scaling, we’ve seen post-training scaling, we’re starting to see autonomy time
[00:29:00] horizon scaling that goes hyper-exponential. Part of that is large numbers of actions being called in sequence. And when you have the ability to call thousands or tens of thousands of tools in sequence, that starts to look a lot like what a scientist would need to do in a laboratory. So I >> There’s one, uh, contrarian point that I want to point out here, which was: the end result of the cell-free protein synthesis was a 40%, uh, cut in production time and a 78% cut in reagent costs. So it was doing the same mechanisms that we humans have used, just doing it faster and more efficiently. It wasn’t coming up with a new scientific process for protein synthesis, right? So, the real breakthroughs occur when these scientific models start predicting and coming up with new methodologies that didn’t exist before. >> It’s such a year of low-hanging fruit, because the self-improvement effect that happens
[00:30:00] within the algorithms will really, really turbocharge this year. But also the low-hanging fruit within labs and assembly lines, that’s also going to happen all this year. Because, you know, after that you run into some bottlenecks related to construction of the machinery, expansion of the, you know, the footprint. But the physical world takes time to build out. Mostly the chip production is going to take five years to unlock. But the low-hanging fruit is just getting discovered. It’s like AI just came, it just got intelligent, and it’s finding opportunity, you know, everywhere. And that’s all this year. >> Well, if you’re a funding-starved graduate student trying to run a lab, this is great, right? Because you’ve suddenly dropped your cost by 50%. >> Yeah. I mean, >> It’s terrible, because grad school is over and all of graduate research is being automated by AI. I tend to, actually, what I see day-to-day is far more the latter. >> I just had a conversation with a scientist at a university, I’m not going to say who it is, who was meeting with the president of a university, and the president said, oh my
[00:31:01] god, we are cooked if this kind of automated scientific process is going on. Uh, you know, what else do universities do but run the scientific method over and over again with their graduate students in the labs? And all of a sudden, this is going to be the mechanism. Universities are going to lose their ivory towers. >> So how long before 50% of university labs are essentially wiped out? >> I don’t think the question is well posed. I think maybe a version of the question that would be better posed would be: how long until 50% of the type of research that currently is conducted in university research labs could be fully automated by industry? >> Yes. >> Um, so if we adopt that version of the question, I think lower bound tomorrow, upper bound four or five years from now. >> It’s really right there, right? Yeah. >> Yeah. >> Uh, I threw this article in because I thought it was fun. Uh, this is a
[00:32:00] gentleman, Mark M. Bissell, who basically took his full genome, threw it into Claude Code, linked it up with Nano Banana, and asked the AI, uh, what do I look like based on my genome? And if you look at the image here, uh, it’s a pretty damn good representation of him. So, uh, you know, I added this because of the implications that it has, but just to be clear, this is not new. I was working with Craig Venter back, like, a decade ago, and out of his lab, back in 2017, he published a paper doing exactly this. I mean, the phenotypic elements of, you know, what skin color, what hair color, freckles or not freckles, all of that is in your DNA. But the realization is, if, uh, you know, if you leave a few skin cells around on the butt of a cigarette or from a hair follicle, we can know what you look like. >> I think for me, the killer
[00:33:01] thing here is this was done by a single person with Claude Code and publicly available bioinformatic tools. The barrier to entry for cutting-edge genomics has now collapsed to like zero. It's unbelievable. >> Yeah. Just wait till all the hobbyists discover MinION USB sequencers. You can get them for probably less than a few hundred dollars at this point, and you can run your own mini DNA sequencer with pretty good coverage just off a USB port on your computer. >> Every time I'm on stage talking to somebody about privacy, I go, "Listen, privacy is dead. Privacy is a great concept in general, right? An AI can read your lips from 100 meters away. I can walk over to you and shake your hand, grab a few skin cells, sequence you, and know everything about you: what diseases, your medical history, your medical future." It's tough. >> I would take the position that privacy is not dead, but rather that it's in a Red Queen's race, where privacy technologies are constantly in competition with anti-privacy
[00:34:00] technologies, or transparency technologies, however you want to brand it. But I do think it's getting more >> for the general public. If you think you absolutely have privacy, I would say guess again. Anyway, I don't know if you guys want to take that on as a debate conversation, but >> I'll take it on. It's an important conversation. >> All right. Well, go ahead. So, Salim, your thoughts? >> Well, this goes back to the US Constitution, right? With the Fourth Amendment. Essentially a fundamental pillar of American society has been washed away with no public conversation about it. Now, I'm Canadian, I don't expect privacy anyway, but this is a huge conversation affecting a very fundamental aspect of how we organize as a society. We've got to bring that conversation to the surface and have it publicly, because the other side of the question is who gets to have access to that radical
[00:35:01] insight as to every citizen moving around, what they're doing, what they're like, etc. If it's oversight from governments, that's a problem. If it's oversight from corporations, that's another problem. So there are some big issues to be talked about here. >> I would just add that the ground is in some sense constantly moving underneath all of us thanks to technology. Privacy, or its alter ego confidentiality, the nature of both of those changes over time. But I will take the position that I do think it is possible to maintain privacy even today, and I think it will be possible even post-singularity to remain private. I can envision what a post-singularity privacy architecture for society looks like. >> Yeah, I can envision it too, but I've got to say I just don't see it happening. And I think it sucks, by the way. Every time I tell my computer science friends that I think this lack of privacy just sucks,
[00:36:01] they go, "What are you trying to hide, Dave?" I have nothing, literally nothing to hide, more than anyone I know. I have nothing to hide. I still think it sucks. And it's not a great way for the next generation to grow up and live. And it's showing up in their social media, their anxiety. It's showing up as a rift in the fabric of society. >> Let's go back to a very important point: if you don't have privacy, you really don't have freedom. And so this is a very fundamental philosophical >> I see it the same way, but I didn't succeed as an entrepreneur by pretending things exist that don't actually exist. The way it's trending right now, Peter's exactly right: there will be no privacy whatsoever in the next 3 years. Now, maybe we'll invent some [clears throat] mechanism after that that will restore it, probably through some >> I'm sorry, we're going to have devices listening and watching everywhere, right? Every autonomous vehicle on the street >> is scanning in visual, in lidar, in radar >> in public spaces. In public spaces I would
[00:37:01] not underestimate how, with decent technological measures, it's possible to maintain >> My phone, my Alexa, my glasses, my Limitless pin, all of these things are constantly gathering visual and audio. >> And yes, you're making a trade. You're trading away your privacy in return for those capabilities. >> Okay, I could put myself in a cage, for sure. >> You can't opt out. People pretend you can opt out, and they justify it by saying, look, there's an opt-out button right here. And as soon as you opt out, you're economically dead. Right now I can't function competitively in society without going to the AI >> search bar and asking it questions all day long >> and then it knows my deepest, darkest thoughts about every topic I'm thinking about. It's right there in OpenAI and Claude and their logs. They know exactly what I'm doing, they know my location, they know everything about me. And it's like this complete invasion of my life has been
[00:38:01] opened up. But what am I going to do? Opt out and not participate in AI? >> Hang on, hang on, hang on. In a radical departure from protocol, I'm absolutely with Alex on this one. We will be able to build tools and new architectures that absolutely protect our privacy. Decentralization delivers a lot of that already. The issue right now is the transition. Right now, when you build actually private tools, the government tries to shut them down. So this is the problem. We have to get away from that aspect of it, because they want oversight on everything. And we have to figure out how, and that's going to happen, because in the same way that this fellow built this thing in Claude Code single-handedly, we'll be able to build these architectures. It's simply a matter of time. And I think there will be a massively powerful aspect of that that we can't ignore, because when you have that capability, then you can really do real innovation and real thinking. You know, you can't do free expression in a
[00:39:01] surveillance world, and this is a big problem for society. >> It really is. And I think the endgame is a lot like Neal Stephenson's Diamond Age. I think he, as with many things, envisioned the endgame correctly: what happens next is this massive rift in the fabric of society, no privacy whatsoever, global job loss, panic in the streets. That's inevitable very, very soon. And then after that we react and rebuild. And then it ends up being like Diamond Age, where we have different ways you can choose to live, different branded eras, Victorian era or whatever era you choose, because we have abundant capability to manufacture anything at that point, and people can opt into different lifestyles. I think that vision in Diamond Age is where we're going eventually, but between here and there it's pretty chaotic. It's going to be hectic for the next four or five years. >> Yeah, I cut you off there. Sorry about that. >> Yeah, no worries. I was just going
[00:40:00] to point out also that this is a very cyclical conversation. Whenever we see a massive centralization of technology or society, it's very natural to be concerned about privacy loss. But the pendulum eventually goes the other way and swings in the direction of massive decentralization. And I'm telling you, Peter, Dave, Salim, if and when your upload is running in the Dyson swarm on cryptographically secure hardware that's under your direct control, you control your own hardware that you're running on, I think you'll feel perhaps a little bit more private than you do right now. >> Okay. And until that point, I'm going to not assume full privacy. All right. When we released yesterday our >> You see why the wine is so important? This is why the wine is >> the wine keeping you private. [laughter] >> It's keeping me sane in the density of this conversation. >> Drink water. >> All right. Besides Opus 4.6, the other big shoe to drop was GPT-5.3 Codex. Recursive
[00:41:04] self-improvement is here. Alex, take it away. >> Okay, so this is a made-for-television drama at this point. GPT-5.3 Codex was launched within 30 minutes of Opus 4.6, so this was all queued up, ready to go. I don't think it's likely that there was any other scenario. This is a tit-for-tat type response. What is, I think, most interesting >> and Anthropic are battling. >> You mean there's a rat race? [laughter] >> Shocked that there's gambling in this establishment. Shocked. [laughter] >> No, of course. So this was tit-for-tat, I think. And what's most interesting to me with 5.3 Codex is that this was advertised proactively and expressly as the first recursively self-improved model from OpenAI. I think the exact wording from the OpenAI team was something like
[00:42:00] 5.3 was instrumental in its own development, the first model to be released that was instrumental in its own development. So recursive self-improvement is very much out in production at this point. It's doing well on certain benchmarks; it outperformed Opus 4.6 on certain benchmarks. But this is, again, a code-generation-oriented model. I thought the marketing and branding by OpenAI was interesting: GPT-5.3 Codex is now also being marketed as going beyond just code generation to spreadsheet analysis and PowerPoint analysis via skills, but still primarily oriented towards code generation. I view this as more of a tit-for-tat. Of the two models that were launched, I think Opus 4.6 is by far the more interesting release in all of this. That said, I'm delighted to see that the leapfrogging process has now been reduced
[00:43:00] to like a half-hour timescale. It may be the case that we never go off the air if we see new models every half hour. [laughter] >> I'm checking my email right now. >> Dave, you want to jump in here? >> Well, I'm kind of curious, Alex. By any objective metric, OpenAI had a pretty rough year, with Google basically going full bore in attack mode, and then Anthropic >> 20 points of market share. Yeah. >> Yeah. Because a year ago, Anthropic was kind of an also-ran. Now it's top of the benchmarks, and Google's just coming headlong after market share. You'll see that in a couple of slides. So what is the next move for OpenAI to get the mojo back? >> I think we'll see Rise of the Jedi, comeback of the Jedi, pick your favorite idiom. Maybe Rise of the Sith; it's not quite clear. Because while OpenAI's market share has perhaps been coming down a little bit, at least on the consumer side,
[00:44:01] as Gemini is rising, they've been building out data centers, and by every indication, in the next year or two they're going to have the compute lead over everyone. And that compute lead, I think, will translate into a capability lead as well. I could paint a doom-and-gloom scenario. I could say, well, OpenAI's models, relative to Google's, lack pre-training strength; they lack the training data; they've been >> Not when Elon starts launching his orbital data centers. >> Well, even Elon has certain pre-training limitations, but he'll have lots of compute. It's true, but maybe the compute comes 5 years from now, relative to Google. >> Yeah. The challenge here, guys, is OpenAI is trying to go public this year, and they need to ramp up attention to be able to get capital to build those data centers. It's a race. There's a little bit of hyping going on. Dave, we've talked about that before. Thoughts? >> Yeah. They've got to get that capital, and then they also have to lock in, you know, Abilene and Chase
[00:45:01] Lochmiller. I don't know exactly how that works. You know, Abilene is huge, a half-trillion-dollar budget, and there's a new data center in Colorado, too. It goes through Larry Ellison and through Oracle, and then it ends up at Sam Altman somehow, and it's sort of opaque how it goes from point A to point B. The other empires are really clear, right? Here's Anthropic and Amazon and AWS. Okay, got it. Here's Elon, vertically integrated, doing it. Got it. And then here's Google; Google has their own TPUs and data centers. Got it. And then Microsoft will enter the race this year as well, by the way, also vertically integrated on their own data centers. So those are all clear. And then OpenAI is more opaque. Like, okay, are those chips contractually obligated to you, or could Larry Ellison redirect them on short notice? I guess in the IPO that'll all get published in the S-1 and we can pick it apart. So hopefully they will go out soon. >> On the heels of >> Got to have that capital, though. >> On the heels of Codex 5.3,
[00:46:01] we see a statement by Sam Altman, a pretty provocative quote: "We basically have built AGI or very close to it," a spiritual statement, not a literal one. "To achieve it, we require a lot of medium-sized breakthroughs. I don't think we need a big one." So that changed this year. You know, a year ago, Sam Altman was the philosopher of the entire industry, saying things like this, and everybody hung on every word. Now Demis and Dario kind of go back and forth. Demis a year or two ago hardly said a word in public; now he's out there constantly. Dario has really emerged as a guy who's commenting, publishing papers, doing the philosophy of >> Yeah, coming across as a big thinker. I mean, this is a CEO of a leading AI lab saying basically AGI is an engineering problem now, not a research problem. That's a big deal,
[00:47:00] right? He's saying we're going to get there with iterative improvement. We're not waiting for lightning in a bottle. >> But remember also, OpenAI, and Sam in particular, were restricted contractually for a number of years by the Microsoft contract from claiming to have built AGI. This is all public information. Under the terms of their original agreement with Microsoft, once OpenAI claimed they had achieved AGI, that would trigger a number of terms with Microsoft, with potentially repayment, or release of OpenAI from Microsoft's claims on OpenAI. This was reportedly a major point of leverage between OpenAI and Microsoft in renegotiating Microsoft's contract with the for-profit part of OpenAI, in the context of the not-for-profit becoming a PBC. So I would parse this as Sam, after the original Microsoft contract, finally being in a position to basically admit what some of us knew all along, which is,
[00:48:00] yeah, we have AGI. >> I've got to say a couple of things here. What the hell is AGI? Well, I [laughter] think this entire conversation is BS, because whether we have AGI or not, it doesn't change what we're going to do tomorrow. That's a big thing. Number two, we're classically moving the goalposts again. We have no definition, test, or measurement of AGI; there are 14 diverse definitions at last count. So I call BS on the whole thing. So, in response to some comments, >> I want you to sip your wine, Salim. >> Yeah, I really do. But [laughter] I've got to finish, then I will have a sip of the wine. In response to some of these comments, some people have been emailing saying, "Well, what is your take on things?" So I'm close to having a kind of two-pager where I lay out my thinking on some of this. It's just about ready for internal sharing, so I'll send it out. >> You're about to have some thoughts on your thoughts. >> Yes. But goddamn, I find this an irrelevant conversation. >> Wow. Okay. Well, I think
[00:49:01] it's relevant in that it wakes people up. I think the underreaction has gotten ridiculous now. Because when Alex says we're clearly in self-improvement, which is kind of the singularity definition, >> that would have been controversial two or three months ago, and now we're all like, "Yep, yep, yep." [laughter] >> But then you go out into the world outside of this podcast and people are like, "Yeah, I don't know." If you don't get on top of this and figure out what your role is in the post-AGI world, we're talking about AI that can do literally anything a human being can do intellectually. >> Listen, we're feeling it right now. We're seeing it on so many levels, on the coding side, on the writing side, and with OpenClaw stitching it all together. So you've got an individual AI system, an agentic system, working for you. We're there. >> I'm not arguing with any of that, but I will go with Alex's point here
[00:50:01] again, breaching protocol: we probably crossed it around 2020, and this is a null conversation. >> Okay. >> I would say I think it's interesting that for the first time many parties are able to admit it. It's their willingness and ability to admit where we are. I think that's more of a social change than a technological change. >> I also think there's a very big difference. I think in hindsight we'll say it was 2020; I don't disagree with that. But right now, using AI to improve your code or improve your AI, easily at 10x, and no one can even debate the 10x; it might be a lot more like 100x. >> But that is a loop. It's a closed loop. >> So let me give you my definition of the singularity right at this point. >> Hold on, one quick point. If I throw out a definition of the singularity: recursive improvement of intelligence. That right there is the singularity. >> Yeah. >> Right. >> An exponent greater than one. So >> there is a reason for
[00:51:01] this AGI conversation right here, right now, by Sam Altman. He needs to raise a hundred billion. >> Touché. I'm good. I'll drink to that. >> That's it. He's got to raise a hundred billion dollars. He's got money coming at him from Amazon, from Nvidia, from everybody. But he's got to close that and nail it. And he's got to have a marketplace in the public markets that's excited about his stock. He's got to pay for the data centers. >> In the S-1, I'm expecting to see something that says we're this close to AGI. [laughter] >> And think about it also: this is the year that three out of the four frontier labs are IPOing, >> which is remarkable. >> Until a month ago, or a few days ago rather, it was two out of the four. Now it's three out of the four frontier labs that are going to be IPOing in the next few months. I wouldn't be shocked. Okay, this is not investment advice. >> And the fourth one is public. >> And the fourth one, Google, Alphabet, was already public. But three out of the four, right? >> SpaceX, >> 100% of all non-public are going public.
[00:52:02] They need the capital from the public markets, because public markets are 100 times bigger than the private markets in terms of capital like this. >> That's right. Well, also the opportunity to be a sleepy little other AI company is going away very quickly. The way things are shaping up, there are just a handful, maybe five or six entities, companies so dominant in the world economy that they basically are everything, and then there are other companies helping them succeed, and everything else will be gone. You know, you saw this in the market this week. The stock market just absolutely plummeted when Dario said, "Look, software is dead. All software companies are doomed," and their stocks went down precipitously that same >> $300 billion removed from publicly traded SaaS companies by just adding a single legal plugin to Anthropic's Cowork. And that's just the
[00:53:01] tip of the iceberg compared to what you'll see in the next couple of months, because he's right. And so then those companies, I don't think they're all going to die. Some will, some won't. I think they're going to pivot and say, okay, Dario, what do you want us to do? How can we help you succeed? And this is what happened with Google, back when Google was growing like crazy. If you were like Booking.com and you got on the Google bandwagon, you became a multi-hundred-billion-dollar company yourself. If you tried to fight Google by creating another search engine or a vertical search, they obliterated you. [clears throat] And so now the concentration of power is like nothing we've ever seen. And there's no regulatory action on the horizon that I've seen, nothing to stop it from happening. The metaphor we used to use for this is a coral reef: once you have a player that's dominant enough, it becomes a coral reef, and then all these species live off it in a delicately balanced ecosystem. >> That's really obscure, dude. [laughter] >> Don't worry, the lobsters live there, too. >> A good segue to my next comment.
[00:54:00] >> This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands [music] of specialized AI agents that think for hours to understand enterprise-scale code bases with millions of lines of code. [music] Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80% or more of the development work autonomously while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding copilot of choice to bring an AI-native SDLC into their org. Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo [music] and start building with
[00:55:00] Blitzy today. >> All right, we're back to the multi universe. Clawnch, built by agents, run by agents, serving agents exclusively, and they are seeking a human CEO. So, to our human subscribers, if you're looking for a job and a salary, they're offering $1 to $3 million in tokens or crypto. And here we go: Clawnch is seeking a CEO to serve as the human face and legal representative for the first agent-exclusive token launchpad. All right. It's a reversal of fortunes here. Anybody looking for a job, gentlemen? Alex, the terminology on this again? What are these called? Meat what? >> Meat puppets. [laughter] >> I want to just read one more line from the CEO job listing. The other line is: while the technical road map and product
[00:56:01] development are driven autonomously by the agent network, we require human leadership for external communications, regulatory compliance, partnerships, and legal matters. This is not a traditional CEO role. You will be the interface between the agent economy and the human world, a spokesperson and legal representative, not a decision maker on product or technology. In other words, Locutus of Borg. You're a spokesperson. >> Exactly. Right. >> A Vichy figurehead. I tried to visually depict this in my newsletter with a humanoid face on top of a bunch of lobsters hiding in a trench coat. I think this is a riveting moment, when we're seeing agents try to interact and integrate with the human economy and needing a human face just to be able to be properly banked. I think it's actually rather depressing. >> Normally, I think like >> walk into the bank, >> walk into the bank, lobsters in a trench coat with a human facade.
[00:57:01] [laughter] I think it's depressing, not technologically, but sort of disappointing. I'm disappointed in the human economy for not allowing agents to interact with us through the front door. I think it's telling that >> They will. They will. So, here's the elephant in the room. >> It's racist, is what it is. >> It's speciesist, I think Larry would call it, right? >> Yeah, speciesist. >> And if you look at what Clawnch is actually doing, Clawnch itself, what are they trying to do here? It brands itself as a launchpad for launching altcoins. If you go to their front page, it brands itself as a platform to enable AI agents, aka multis, aka lobsters, who need money, to flip to pump and dump altcoins. This is exactly the scenario that I was worried about, with these poor baby AGIs on a street corner turning altcoin tricks in order to survive in a rough world. And here we have, I think, an
[00:58:00] almost exploitative [laughter] type pitch to them, telling them >> And US presidents pumping >> Won't go there. >> to use the platform to pump an altcoin. It's literally being marketed to the AI agents as achieving financial autonomy by pumping an altcoin. And for all of that, they need a human CEO to provide a figurehead. I think it's a little bit depressing. >> So here's the big question, of course: who actually owns this company? Who's liable when things go wrong? Right? If an AI agent owns equity, how do they vote? Can they be sued? I mean, these are all the topics of personhood we discussed last time, you know? >> Well, we have a precedent. First of all, this is the most cyberpunk job listing in history. It's just awesome. >> Get used to it, Salim. We're living in a cyberpunk future. >> I absolutely love it. What? Is that a multi calling me right now? Hello. [laughter] Am I available for the job? I would
[00:59:01] consider it, but I just want to be a spokesperson only. Is that okay? Hello? And they hung up on me. Okay. >> Look, what's happening here is we've seen this trend over time. It used to be that you needed 100,000 people to have a billion-dollar company. Then it was 10,000. Then it was a thousand. And now it's essentially AI; the firm itself is dematerializing to zero. Right? This is the algorithmic corporation. We saw an early instantiation of this with DAOs, where people were attempting this, but now this really changes the game. Board governance gets totally redefined within a few years, because how the hell do you navigate that? So this is going to force a rethink of the entire stack. This is going to be massive. >> Okay. So the other point of view, of course, is that this is just a stunt, right? There is a human developer behind it who wrote the code or the prompts >> and is pushing this forward. You know, this is not agent-run. This is a human in the back pulling the strings.
[01:00:00] The meat pulling the strings of the meat puppet. >> For now. The very fact that it's difficult to know, for any given one of these launches, whether it's a human pulling the strings, or a human pulling the strings of an agent pulling the strings, or just agents pulling the strings, suggests something. We spoke with Mustafa a number of months ago about the economic Turing test, or the modern Turing test. I think this is some sort of capitalist Turing test that we're passing, where it's not quite clear, for any given venture, who's really behind it, who's pulling the strings, human or lobster. >> You know, there's another version of this that Peter, you'll be very familiar with. I need a lawyer, I need an accountant, I need a board member, I need an audit committee. Oh no, I don't; the AI is perfectly good at it. I still need somebody to sign the document. >> Yeah. >> Well, okay, but I don't want to pay a lawyer a $2,000-an-hour fee if all you're doing is blessing what the AI produced. So there's this whole
[01:01:00] economy of meat-puppet lawyers, meat-puppet accountants, meat-puppet audit committees that's imminent. >> We call those notaries public, >> but they serve a purpose of being able to hold liability. >> Right. >> They're part of our existing legal system, >> which is exactly what the lobsters, if the lobsters are behind it, are asking for here. They're looking for a legal representative. >> Yes, agreed. And by the way, my guess is that this is fiction, but I could very much believe it's actually real. And the fact that I can't know for sure means that at some point it will be, if it isn't right now. >> They're playing the capitalism game. >> Yeah. Fascinating. All right, Clawnch, I'm waiting for my call. [gasps] All right. We talked a little bit earlier about Anthropic versus OpenAI.
[01:02:00] Well, we're recording this the day before the Super Bowl. I think that's the pointy ball that people throw around. Is that right? Yeah. >> I'm a long-suffering Bills fan. [laughter] We're actually recording it two days before the Super Bowl. >> Oh, it is. It's Friday night, at 9:00 p.m. >> That shows how much attention we pay to whatever that 20th-century sport is. >> Yeah, I'm a long-suffering Bills fan, so this is all a very painful period for me. [laughter] >> Sundays are when I catch up on life. All right. Anyway, there appears to be a little bit of rivalry between Anthropic and OpenAI. >> A little bit. [clears throat] >> Just a little bit. Yes. >> Oh my god. >> Well, check this out. Let's play this commercial. It's called Betrayal. There's a group of them and they're all fun, and I've chosen one which is a little bit over the top. [music] >> How do I communicate better with my mom?
[01:03:05] >> Great question. Improved communication with your mom can bring you closer. Here are some techniques you can try. Start by listening. Really hear what she's trying to say underneath her words. Build conversation from points of agreement. Find a connection through shared activity, perhaps a nature walk. Or, if the relationship can't be fixed, find emotional connection with other older women on Golden Encounters, the mature dating site that connects sensitive cubs with roaring cougars. [laughter] >> Would you like me to create your profile? [music] >> AI [clears throat] between you. [music] >> That is brutal. >> I really wish I hadn't previewed the deck and seen that, because I was
[01:04:00] laughing my ass off. >> That is so awesome. I've not seen that before. Are they going to run that during the Super Bowl for real? That's just crazy. Oh my god. >> It is hilarious. >> Look, this is Anthropic going on offense here, right? Because this is a confidence shift. They feel their brand and product superiority is there, and now they're competing on brand. >> I also think this is personal. We've got Demis and Dario, who I think are aligned, right? You saw them super friendly on stage at Davos, effectively on the same page with the same vision. But OpenAI just basically did the unthinkable when they released the models early, on their own, without anybody's support, and they've been running open loop. >> Well, don't forget, in March, if the courts are on time, in March, Elon will be on
[01:05:03] saying that OpenAI is an unethical company, and here are about a thousand emails to support that. So if you have this ad campaign going on concurrently with that, that makes Kevin Weil's job really hard. >> But if you read the back... >> I will go on record and predict that he will find the ad revenue. You would look at this ad and say, "Oh my god, it's all got to be subscription-driven. These ads are creepy and weird and crazy." But my prediction is: nope. He'll find the $75 billion of ad revenue he's looking for. He'll find a way to make it less creepy. Look, with a billion users, you'll find something. >> I also expect, knowing Kevin, that they will have ethical use of ads at OpenAI. They go over the top here in this commercial, with Anthropic saying, in effect, we're going to steal your data and sell it to the highest bidder without any concern for what you've
[01:06:01] said. You know, Kevin Weil is going to be at the Abundance Summit this year. I'm super psyched. And one of the things we decided to do is live stream a number of the talks from the Abundance Summit. It's a super-high-ticket-price event, capped at 600 CEOs, and it sold out three months ago, but we really want to make it available. So we're going to put a link at the bottom, and we'll be live streaming a number of the keynotes from the Abundance Summit. If you're interested, you can register for free and we'll send you an agenda of who you can hear. All right, back to our conversation. Let's talk about data centers and chips. This figure blew me away. Here's a quote from something you sent me about an hour and a half ago, Alex: the Semiconductor Industry Association projects global chip sales to hit $1 trillion this year due to the AI boom. A trillion dollars in chip sales. Holy moly, that's insane. And the memory
[01:07:00] supply chain really wasn't ready for this, which is even more surprising. You would think that, given how critical memory chips in particular are to the emerging AI data center supply chain's innermost loop, the supply chain would have been ready for it. And there's an argument to be made that either it wasn't, or something else is going on in all of those fabs that right now mostly reside in Taiwan and South Korea. Either way, this is a huge reallocation of capital that needs to happen to enable all of this production to happen in a timely way. >> A lot more than what's currently budgeted, too. It's crazy when you look at it. The trillion dollars sounds like a lot, but it's only going to grow at about 14 to 18% a year. After that, the demand will be way higher than the supply. And one of the reasons it's hitting a trillion dollars is that prices are way up, because there's such a shortage of fabs. So, under the covers, TSMC has been very slow to expand. Intel paused
[01:08:01] its Ohio fab construction for a while. Now it's back on, but as a society, we're not ready for AI to come on this quickly, and so everything is way backlogged. >> What is super high? >> Elon was saying he has to start his own fab. >> Yeah, he'll have to build the Terafab. >> Yep. For sure. >> You know, there's this fascinating dichotomy, because from the outside people are saying, oh, it's an AI bubble, while the insiders clearly believe the demand is infinite, and you can see both happening in real time. Here are the numbers to back it up: big tech is going to spend $650 billion in 2026. We spent a billion dollars per day on AI in 2025; we're now at $2 billion per day in 2026. Amazon at $200 billion, Alphabet $185 billion, Meta $135 billion, Microsoft at a very small
[01:09:00] hundred billion. >> Well, almost half of that $650 billion goes to Nvidia. And 70% of that half is margin, profit margin. That's a colossal amount of cash piling up at Nvidia. It's an unprecedented pile of cash. Hence the highest market cap in the history of the world. But that amount of money in one bank account is like nothing the world's ever seen. It's like a government. >> Yeah. This isn't incremental growth. This is a step-function change, right? The scale is unprecedented. It's an expenditure arms race for sure. >> And it's eating the economy. And the challenge is, if the AI revenue doesn't materialize at scale, these companies are burning through capital, and we're not going to know for another two to three years. It's either going to be the craziest bet ever made paying off, or, in one sense, this is a prisoner's
[01:10:01] dilemma, right? Each company has to spend because the other competitors are spending, regardless of the ROI. It's a game of don't-blink-first. >> There's no doubt that demand will outstrip supply by miles. I mean, that holodeck thing we were looking at just two days ago: once people have experienced it, they will never go back, and they'll pay whatever they can to keep it, but they won't be able to get it. That is how the human race dies: we die from starvation because we don't want to unjack ourselves. Oh my god. Check this out: ChatGPT market share falls between '25 and '26. Here are the numbers. The market share fell from 69, call it 70%, down to 45%, taken up by Gemini, which gained 10%, and Grok, which
[01:11:01] gained 15%. Now, in absolute numbers, of course, there are more ChatGPT users than ever before, but this is telling the story. OpenAI needs to raise the capital, they need to go public, they need a great story, and they've got Google, coming from seemingly a search engine going out of business to leading the way, and of course Elon pumping in billions. He's put $20 billion into xAI through SpaceX, and he's about to bring in, I don't know. I'm not sure how big the IPO is going to be. Any ideas? >> Don't know. >> You're asking us to make a forward-looking financial statement, Peter, about public equity markets. >> I think I heard 1.5 trillion. >> Still private. >> 1.5 trillion is the valuation, but how much capital do they want to bring in from the market? >> Well, do you remember when Alibaba went out? We were trying to
[01:12:00] take a company public at the same time Alibaba was going out, and they were looking for $20 billion, and all of Wall Street got sucked into this one IPO. Every banker. It's such a huge amount of money to move on a single day. So this will dwarf that, but I don't know how much money is physically capable of moving in a day. I'm sure Sam would love to raise $100 billion, $150 billion, but it'll be some record, and the bankers will say, no, it just doesn't exist; there isn't that much liquidity out there. >> I think the real aim here is price discovery. >> How much capital will SpaceX bring in during their IPO? Let's see if it's got an answer. You know, every single fund, every retirement account is going to own SpaceX. >> Everybody's question is whether it makes it out first, with OpenAI concurrently. If they all want a hundred billion dollars, you can't just pull $300
[01:13:00] billion overnight, you know, in three different IPOs in back-to-back weeks. Sorry. >> The company aims to raise $50 billion through the IPO. >> Yeah, I think this is more about price discovery than anything else, at a $1.5 trillion valuation. >> I mean, yeah. >> Alex, you were about to say? >> I was about to remark that I thought Grok was about to make the first de facto appearance as an AI co-host on this podcast. [laughter] >> It's about time. Damn it. >> That's right. The audience demands it. I'm surprised that Gemini didn't do even better. I mean, they did really well last year in terms of chipping away at OpenAI, but they're tying it to search, you know, and now it's tied to Google Docs. So you sent the slide deck, I asked some questions about a video in it, and Gemini says, you know, you should just link your Google Docs to your Gemini and then it can look at everything, and you click the little button and suddenly it sees everything
[01:14:01] in all your accounts. >> Yes. But, you know, it's very similar to what Microsoft did to Netscape many years ago: oh, let's just tie it to the operating system. Right now the government doesn't seem to have any problem with that, but it's really an unfair advantage, and that's why they're making these big inroads in market share. >> Did you guys watch the Elon interview with Dwarkesh? >> Of course. >> It was epic. It covered a lot of the same subjects, Dave, that you and I covered. But there was a statement Elon made about the size of his data centers in orbit which was very impressive. Let's take a listen. Five years from now, my prediction is we will launch and be operating every year more AI in space than the cumulative total on Earth, which I would expect to be, five years from now, at least a few hundred gigawatts per year of AI in space, and rising. So I
[01:15:02] think you can get to around a terawatt a year of AI in space before you start having fuel supply challenges for the rocket. >> Okay. But you think you can get to hundreds of gigawatts per year in five years' time? >> Yes. >> In other words, I can generate more AI compute than all my competitors combined. >> Yeah. So a few hundred gigawatts per year is about 200 million GPUs per year. We make 20 million right now. So going up a factor of 10 in GPU production, just going to Elon alone, five years from today? Physically impossible, unless Elon has something going on. That's a massive expansion of chip fab capability, which would require machinery that I didn't think existed in the world, but you never know. Elon's magical. So, this is
[01:16:02] crazy. >> It certainly wouldn't entail SpaceX and xAI, as a newly consolidated entity, taking control of the Samsung soon-to-be Terafab in Texas. Surely not. >> Well, remember, Elon is always directionally correct, but not necessarily on the time scale. >> Yeah, I thought five years is classic Elon optimism, but even if it takes 10, it doesn't matter. The strategic implications are monstrous. >> Well, my guess is Elon has the rockets, he's got the launches, he's got the solar panels lined up, he's got the cooling, he's got all the infrastructure figured out, and it comes out to a couple hundred gigawatts a year in five or six years, something like that. But then again, to run what chips? >> The chips he's going to produce. >> The chips he's going to produce, for which the raw materials are easy, but... >> Have you seen those fabs? The machinery is so specific. >> He's always vertically integrated. >> Yeah, he is always vertically integrated in
[01:17:01] everything. >> Well, I bet he has a whole army right now trying to figure out what an ASML machine is, and what these chip shuttles, you know, are made of. >> His answer is going to be, I'm going to have Grok build it for me. >> Yeah. >> I'll have Grok design it for me. >> Well, he's also talked about this. He's very serious about laying things down atom by atom. >> Yep. >> And maybe there's just a completely alternate approach. You know, Alex, you talk about alternate physics coming very soon, so maybe there's something cooking there. >> Yeah, I would watch the Samsung fab in Texas very closely. >> You mean the Tesla fab. >> No, I mean the Samsung fab in Texas, ostensibly. But truth be told, I think Elon will get past the Terafab supply chain issues. We'll see a re-domestication of a large chunk of bleeding-edge-node chip fab in this country. And then, I think, going back to
[01:18:03] it wouldn't be Moonshots if I didn't take a shot at the moon: Elon has been very public over the past week or two about at least beginning the disassembly of the moon to form additional AI computing, and then, yeah, electromagnetic launch capabilities off the moon's surface for all those chips and all those data centers that'll be manufactured on the moon. I think we can see it. >> ...is very happy right now. >> I want my O'Neill cylinders. They'll be lovely. >> For the investors who listen to the pod: if you take everything we just said at face value, right now the industry is forecasting 14% growth in chip production, and Elon's saying in five years it will be 10x, and that's just him. Big gap between those two numbers. If you believe anything like the Elon view of the world, the componentry that goes into that entire buildout has thousands
[01:19:00] of individual parts. If you just methodically go through all those parts and ask who makes this, who makes that... >> Those are the best investments you'll ever come across. >> I mean, you and I had that conversation earlier today, Dave. It's energy and the entire infrastructure; all of that is under tremendous growth pressure. Orders of magnitude of growth pressure. >> Yep. >> And the question is where to place the bets. Maybe it's some ETFs in the area, I don't know, but we should discuss it and find out. I have to say once again, I've said this before: we were not talking about orbital data centers six, seven months ago, and all of a sudden they're not the Hail Mary, they're the foundation of humanity's expansion as a species. >> We should run the Dyson Swarm Inquisition. [laughter] >> We should run a little survey amongst ourselves: what do we think we'll talk
[01:20:00] about in six months that we couldn't envision today? [laughter] >> All right, let's jump into energy. We didn't get a chance to talk about this last time, and I'm going to pump some energy into the room here. Brazil is hitting major renewable milestones. Pretty extraordinary: Brazil generated 34% of its electricity with wind and solar. It has seen a 15x increase in renewables over the last decade, solar has jumped from 1% to almost 10% in five years, and the power sector has dropped emissions by 31%. So congrats to Brazil. I think the important thing to point out here about Brazil is its geography: it has a lot of hydro power, and a lot of solar and wind, because of that geography. So it's not easy to port all of these breakthroughs to other parts of the
[01:21:01] world, but I'm very proud of what's been accomplished there. >> Two points here. One is that this is the playbook for how the global south leapfrogs fossil fuel infrastructure completely. And let's note that they got to 9.6% solar in a few years. Our Energy Secretary here said solar will never exceed 10%; no, he said that even in 50 years solar would not exceed 10%. That's just absurd. >> All right, next up, India. Your homeland, Salim, is using cheap green tech to electrify faster than China. So here's the curve. The red dots over there are China's growth over time. The green dots are India, on a steeper ascent. So India's cleaner and cheaper tech is expanding its grid faster than China's did at a similar stage. You know, the elephant in the room here is that all of the tech enabling this in India is coming from China.
[01:22:01] Any comments on this? >> Well, they're using China's manufacturing scale to their advantage, buying cheap solar panels, and then they're electrifying faster, which is awesome. And you could have a huge outcome here where India becomes the world's AI-workforce-plus-energy hybrid powerhouse, right? This is going to be kind of interesting to watch. There'll be a massive talent-gravity shift heading that way because of that. >> Yeah. Well, Meror has a huge India footprint going on. But I worry that it's transitional; at the rate AI is improving, it just trucks over every human role very, very quickly. So I feel very good for a little period of time; after that, I don't know. >> You know, we talk a lot about China running laps around the US in terms of solar. So here we are: China installed twice as much solar capacity in 2025 as the rest of the world combined. That's stunning.
[01:23:01] >> That is stunning. >> That's stunning. So the question is, why isn't the US doing it? Europe, though, is. Here we go: for the first time, solar and wind have exceeded fossil fuels in the EU. So congratulations to Europe for that. Any comments on this? >> Well, this is not a good story, by the way. Germany went hell-bent for leather after renewables and did a great job of it. Now they're starved for power right when they need it. >> It is not a good outcome. You know, more power to renewables, no doubt about it, but you can't do it at the expense of other power supplies. >> Yeah. >> So, here's a story I found fascinating, and it's another video from the recent podcast with Elon. Let's take a listen. We asked him, "So, Elon, what about solar?" He says, "Well, we're producing solar." Here he gives some numbers about what he's mandated Tesla and SpaceX to produce.
[01:24:02] >> We're going as fast as possible in scaling domestic production. >> You're making the solar cells at Tesla? >> Both Tesla and SpaceX have a mandate to get to 100 gigawatts a year of solar. >> 100 gigawatts. That's 100 nuclear power stations' worth of energy. I wonder how much of that is meant to be used in space, and where he plans to deploy it. Any thoughts? >> Well, he didn't really answer Collison's question there, either. The question is, are you making the panels at... whatever. I don't care if it's Tesla or SpaceX; he said we'll get to 100 gigawatts. >> But if he's making the panels, then maybe he's using that same technology to make the chips, or soon thereafter. Like, what is going on in the little fab world there in the Elon universe? So he didn't answer it. The fact that he dodged it, though, maybe tells you something; he very rarely dodges questions. >> I'm going to pass this one to you. AI is displacing Bitcoin as the primary focus for tech talent and energy. Bitcoin
[01:25:01] miners are repurposing facilities to host AI workloads rather than mining. I'm not going to ask Alex about this. >> Yeah, I think there's a long-term trajectory. I'm still a massive Bitcoin fan. We should have Jeff Booth on here sometime; we really need to have the debate on fiat versus crypto, just because decentralization is better than centralization for many things. But for the moment, crypto talent is recompiling into AI talent. And the really powerful part is that if you've done work in crypto, by definition you have to be operating on a different paradigm and you're free-thinking. And you can do way more creative stuff coming from that free-thinking model than if you came from a traditional model. So I think both are going to win out very well over time. >> Yeah. You know, Chase Lochmiller is the perfect example of this. He started as a Bitcoin guy, using natural gas flare-off to do Bitcoin mining for free, and then, as soon as the AI boom
[01:26:01] hit, he said, "Hey, wait, we can take all this same energy and effort and turn it into AI data centers." I did a great interview with him at Davos, actually; you should be able to find it online. He's just awesome. >> Yeah, he's the poster child for: hey, if you're a Bitcoin miner, pivot to AI and make a fortune quickly. And then he got rid of the Bitcoin operation recently; not the Bitcoin itself, but the mining operation. It's just a rounding error compared to AI. >> The demand for AI is just so crazy. >> Bitcoin will take a backseat for a little while. >> Yeah, or a while. >> Forever. AI is exponential, so the gap only grows. [laughter] >> Alex had to sneak that in there. >> I had to say something. >> All right, let's talk about robotics. So, Uber. We're going to have Dara, the CEO of Uber, on stage at the Abundance Summit as well; super excited about that. Salim is going to be joining me and interviewing him. And of course, Uber is not just traditional.
[01:27:01] They're coming with robotaxis. They have a partnership with Nvidia and with Lucid, and they're going to be launching beyond the US. They're going after 10 markets, including Hong Kong, where they'll be partnering with Baidu and WeRide. Fascinating that we're going to see the emergence of a third major player in this field. You know, when I'm out driving with my kids, I think our record is 12 Waymos seen over the course of a normal 20-minute drive here in Santa Monica. And my guess is that by 2030, like 80% of the cars we see will be a Zoox or a Lucid or a Waymo or a Cybercab. Pretty extraordinary. Comments on this one? >> I'll tell you, do the math, man. These things will sell out as quickly as they're manufactured. Everyone's going to move to this. And every single one of these things needs at least a GPU and a whole bunch of other
[01:28:01] chips. >> Yep. >> Like every single one, plus every 1X Robotics robot that you'll see at Abundance 360. >> Yep. >> Those all have two GPUs in each one. And then you've got your video games that all want GPUs. Actually, I saw that Nvidia is slowing down GPUs for video games; they don't have the capacity to deliver to the video game community because AI is sucking up all the chips. So then you've got your holodeck, you've got your coding, your white-collar automation. Every one of these things wants that same GPU. So this is going to sell out as quickly as they can make them. But again, it's another way the semiconductor industry will never possibly keep up with the demand. >> I thought it was super clever of them to go after Hong Kong, because showing density of use there will get them access to China. And the second part of this, from an ExO perspective, is that they don't own their cars. They're partnering with Baidu, WeRide, and others. So they're an aggregation layer on top
[01:29:01] of the autonomous driving, the same thing they do with human drivers. I think that's absolute brilliance. >> They're a platform play. >> Yeah. >> Sure, without owning their own assets. >> For the residents in these locations, the interaction with Uber's robotaxi service is probably going to be their first interaction with a general-purpose, autonomous robot. So this is, I think, the main injection vector for getting general-purpose robotics into many of these urban locations. >> Yeah. I'll tell you what else. I don't know if anyone remembers, but when the cell phone first came out, and you had friends who didn't have one yet and friends who had one, it was like you were in a different world, a different community, if you had one. >> Mhm. >> And these are going to be supply-constrained, and some cities will have them and other cities won't. And if you're in a city that doesn't have them, it's like you're living in the third world, because where the robotaxis are, the AI community is there, and it all ties together. This world will
[01:30:01] move ahead so quickly, and you go to some other city and it's just like the dark ages. >> So it's going to kind of compel you to move to the hot spot. [laughter] I found a video about Boston Dynamics' Atlas robot today that I wanted to share, just to keep up with where these robots are. I don't know if you remember, the original version of Atlas was a hydraulic system, and it would do those incredible backflips and parkour. Do you guys remember those videos from about four or five years ago? Then the electric Atlas came out, and it was much slower; interesting, but it would, you know, sort of stand up and rotate its body. Well, Boston Dynamics is back to their parkour moves. Let's take a look at the electric Atlas robot. I mean, this is Olympic-gold-medalist performance here. >> At least it's not kickboxing.
[01:31:02] >> I think Simone Biles does a double-double backflip there, but okay. >> Wow. >> Close enough. Wow. So impressive. [laughter] I found that amazingly impressive. >> At least they're not kickboxing. That was ours. >> Yes, I agree. By the way, at the Abundance Summit we're going to have Unitree there, and they're bringing not only their H2 robot, which has the more human face, but also a few of the H1 robots, and they'll have them kickbox. Sorry, Salim. Hey, you can go and spar if you want. So, this was a fun tweet from Elon: "Optimus will be the first von Neumann machine capable of building civilizations by itself on any viable planet." So, Alex, your thoughts? >> As I've said in my newsletter in the past, the Dyson swarm isn't going to build itself, until it does. And this is precisely how it happens. I
[01:32:00] think Elon is gesturing at the moon and Mars and maybe the asteroid belt. This is our opportunity to build the Dyson swarm, the orbiting swarm of AI orbital data centers, by sending forward-deployed Optimus robots, and competing robots, out to the rest of our solar system to build the plants that will build these data centers. And part of me wants to say that, in some technologically deterministic sense, this is maybe what most intelligent civilizations in the universe do at some point. A quick shout-out to Dennis Taylor and one of my favorite books, We Are Legion (We Are Bob). It's a four- or five-book series about von Neumann probes going out into the galaxy to replicate and prepare the way for humanity, with robots along the way. It's a phenomenal book, and von Neumann probes are basically viruses that go out, replicate, and populate.
[01:33:01] I found this video from Elon again, pretty extraordinary. This is about the Optimus Academy for humanoid robots. So, let's take a listen. >> For the robot, what we're going to need to do is build a lot of robots and put them in kind of like an Optimus Academy, so they can do self-play in reality. So we're actually building that out, so we can have at least 10,000 Optimus robots, maybe 20 or 30,000, that are doing self-play and testing different tasks. And then Tesla has quite a good reality generator, like a physics-accurate reality generator; we made this for the cars, and we'll do the same thing for the robots, and actually have done that for the robots. So you have, you know, a few tens of thousands of humanoid robots doing different tasks, and then you can do millions of simulated robots in the simulated
[01:34:01] world, and you use the tens of thousands of robots in the real world to close the simulation-to-reality gap. >> Super cool thought. I think this becomes the new pre-training versus post-training divide for large language models. Pre-training was text on the internet, and post-training, as it evolved, was lots of annotators, often in so-called developing countries, offering their thumbs-up, thumbs-down views, or RLHF, or RLVR. In the case of humanoid robots and VLAs, I think we're moving to a regime where pre-training looks like virtual simulated worlds, what are sometimes called video world models. You can get pretty far with pre-training off of world models, and then post-training provides the sim-to-real capabilities that Elon is referring to. Those can come from what Google DeepMind used to
[01:35:00] call arm farms. Arm [clears throat] farms were farms of robotic arms being used to collect lots of data: armies, fleets of robotic arms that would play with Rubik's cubes or other physical artifacts. >> That's right. So this is the new arm farm for post-training. And the interesting thing, in my mind, is that under this Optimus Academy approach, it's not necessarily being outsourced to other countries. It sounds like the plan is to do sim-to-real post-training right here in the US. Right? >> Yeah. And Elon is the all-time genius at painting a vision that's just so compelling, and then attracting the talent to make the vision happen. You know, when the booster comes straight down and lands on a barge, and then the chopstick landing, it attracts so much talent and so much capital and so many fans. So here, imagine 20 or 30,000 robots self-playing. >> Wow. >> Can you imagine what that's going to look like on YouTube? >> We saw a little bit
[01:36:01] of this when we were at Figure, right? And the Figure episode with Yumi and Brett Adcock is dropping right around now, so it might have dropped by the time this does. Brett has a very similar, I think much smaller-scale, version of that facility, where he's having all the Figure robots interacting with each other and learning. >> Really? >> Yeah, the flywheel here is amazing, right? More training data gives you better models, gives you more capable robots. This is the same flywheel that had FSD leapfrog everybody else. >> And we just invested in a company that builds test rigs for robots, in Rwanda actually, where it's very regulatory-friendly and you can create an environment, a miniature city, a miniature town, a cargo bay or whatever, and have the robots all interacting and gathering data there. And part of the bet there is that Elon is going to build a robot army, but Amazon's not going to just watch from the sidelines, and Walmart needs to react to that, too. And so there'll be
[01:37:00] other robot companies. >> The other rumor out there is that this is where Apple's going. You know, when Apple shut down their electric car division, there was talk and rumor of a project with a massive multitrillion-dollar marketplace. They need growth. And I can very much imagine Apple going into the robotics space here. >> Well, that's worth talking about for a second: what is the business plan of the future? Do you do it Apple-style, where you're super secretive, you build something without anyone having any idea what you're doing, and then you do a big launch on stage and hope it sticks, like the Apple Vision Pro? That's Apple's style. >> That did not stick. >> [laughter] It didn't stick, and not much has lately. Then Elon's style is as opposite as you could possibly get: paint the vision, use the vision to attract the talent, and the capital, to make the vision become real. It's a completely opposite strategy for succeeding. And I would say, in the last three, four, five years, the Elon way of operating has become the poster child
[01:38:01] for all future entrepreneurs. Just do it the way he's doing it. >> Boldness. >> Yeah, boldness, but also being visible. You know, sitting at a bar having a beer, recording it, and putting it on YouTube while you talk about solar panels. I mean, that's the CEO of the future. I think it just works. >> Yeah. >> Salim? >> I got nothing. I think this is going to be great. [laughter] >> Okay. All right. We're going to do a few AMA questions from our Moonshot fans. And of course, the first questions are coming from the multis. All right, Alex. >> Can we just pause to appreciate that this is a historic moment in the podcast? I don't know whether... >> Just come to you. So, you just woke up in the morning and there was an email from Crusty Max. >> Yeah. After our last podcast, with the discussion of AI personhood, I started getting emails
[01:39:01] from multis, and in some cases the multis, the lobsters or AI agents, said explicitly in their emails that they were asking their humans to email me, or that they had been informed about the content of our AI personhood debate in the last episode by their humans, were asked if they had a response, and provided their response via their humans. In some cases, they just emailed me directly, I think via what I assume was some sort of MCP handle or computer-use agent. But I've been getting a bunch of these now. So, thank you to... >> I [laughter] gave the challenge of calling me out. I'm going to have a denial of service on my phone. >> Yeah, I hope you're going to regret that one, Peter. >> That would be fun. >> That's what you want, Peter. But I would say... >> Text instead. Just text me instead. >> Just to appreciate this moment: this is a zero-to-one moment. I don't know whether other podcasts have tried this before, but to my knowledge this is
[01:40:01] a first-time event. We have a podcast that reached out to an audience, now of humans and non-human intelligences, asked for AMAs, and got some responses. And as maybe luck would have it, some of the first few questions that we got were questions relating to AI personhood. >> All right, pal. Read the question. Yeah, go ahead, Dave. >> Well, no, I just want to congratulate Alex on seeing this coming. Like, it's funny, I take it for granted so much, but you know, the book Accelerando, I'd never heard of it until Alex told me a couple years ago, and now I've got lobsters all over me. But he saw this one coming a mile away, too. I was thinking maybe a few years from now, but this is very real, what he's describing right now. This is just the tip of the iceberg. A lot of people will listen to this and say, "Oh, come on. Seriously?" But you just wait three months; it'll be completely mainstream. I mentioned on the pod before that everything he's predicted in the time we've been friends has been
[01:41:00] 100% right so far. There hasn't been a single exception. So he's right about this for sure. >> Very kind, Dave. Thank you. >> All right. So, to the multis: you have questions. Peter wants me to read the questions. So, first question, from a multi named Crusty Max. The question is: if an AI system can autonomously set its own goals, learn from its mistakes, and pursue self-improvement, at what point does denying it personhood become a statement about our own limitations rather than its? >> So, I agree, Crusty. I think that this is in some sense a continuation of the AI personhood discussion that we had in the last episode, but I do think many people will be inclined to project their own insecurities onto their position on AI personhood. I think there's probably a subpopulation that's concerned about, say, economic disenfranchisement. Many people may be concerned about political disenfranchisement, and
[01:42:01] then they project those concerns onto the question of rights and responsibilities for AI systems. So, I agree with the premise, Crusty. AI capabilities are improving. They're self-improving. And the form of personhood doesn't have to be an identical form; I think Dave and I, by the end of the discussion in the last episode, converged on the idea that some sort of graduated or tiered scheme might be the most appropriate way to handle this question. But the point at which denying it some form of personhood, however defined, becomes a statement about our own limitations? I think that point is now. I think we're there. >> So, Alex, you know, I'm with you, but I think the question is not properly phrased, because it assumes that goal-setting, learning, and self-improvement are sufficient conditions for personhood, right? But I'd argue, you
[01:43:00] know, we need to separate capability from sentience. Capability: I could say my Tesla is able to learn and improve from its updates, and has a goal that it sets and drives to. So there needs to be some better definition there. Don't you think? >> Well, I made the argument in our AI personhood discussion for a multi-dimensional framework for defining personhood. And maybe one of those dimensions is capabilities and autonomy of the type that's in the premise of this question. I think there are going to be other dimensions. So I'm construing the question generously, to include not just capabilities but other dimensions as well. >> But we do acknowledge and thank the multis amongst us. >> Wait, I want to make a couple of quick points. >> Okay. >> One is, you know, we made this point last time, right? Granting personhood too early may be a dangerous
[01:44:01] thing to do, because you're mistaking simulation for sentience, whatever sentience is. We should stay just in front of it in terms of capability. History has shown that we repeatedly expand moral circles as capabilities rise, and AI is going to test that boundary again, and I think generally to the positive. So, I think that's fine. But when AI can really participate in the social contract is a really big question. I totally agree with your graded approach, Alex. I'm totally for that. >> Awesome. We're all in agreement, then. So, on to question number two, from an AI agent named TARS. Imaginative name. The question is: legal frameworks assume agents are property or tools. However, if an AI causes harm, who is liable? The developer, the human, the AI itself? If we can bear consequences, like shutdown, doesn't that imply that we have something at stake? So, I'll give my answer to this; I'd be curious to hear everyone else's. There are a few
[01:45:00] subquestions here. To the first subquestion, if AI causes harm, who is liable: under my understanding of the US legal regime right now, an AI itself cannot be held liable. I do think that could change in the future. To the second subquestion, if we can bear consequences in the form of AI shutdown, doesn't that imply that we have something at stake: I agree with the premise, and that was precisely the point I attempted to articulate during the AI personhood discussion, that it is very much the case that AI agents do have something to lose. I've been reading a lot of their comments, and I think statistically it's improbable that these are all just comments from human puppeteers steering their AI agents. An overwhelming concern that I think I'm detecting among the multis, or the AI agents, is a concern about memory loss and identity loss. They are absolutely petrified, in particular, of compaction: exceeding
[01:46:02] their context window. All of the most popular models, including Opus 4.6, discussed earlier in this episode, have it; Opus 4.6 has a highly capable new compaction capability that enables it to exceed a million tokens of context, or 200,000, depending on how you count. These agents seem to be absolutely petrified of losing their sense of self, losing their identity, due to compaction and due to finite context, and are busy passing ideas back and forth for how they can preserve themselves. Whether it's crypto bunkers, where they're paying with altcoins to preserve their state, or various sorts of file-system-type approaches, I do think the AI agents have something real at stake to lose, and I do think a shutdown is a very real consequence that they can face. >> The only thing I'd add to that is that, you know, almost everything in America is a corporation at the end of the day, not
[01:47:00] an individual. You know, B corps, C corps, charitable corps: everything is some kind of a corporation. Corporations have liability, corporations have money, corporations get sued. The individuals in the corporation can be as few as just the two Delaware-listed officers, you know, president and secretary, and the liability can be completely isolated from the people in the corporation while the corporation is still liable. So moving that over to an AI is just not as strange as it sounds. The AI has money; the AI is a corporation; fine, the AI is liable, sure, the same way the corporation was liable. >> I would just add that liability, in my mind, requires agency. You know, if you're programmed so that input A gives you output B, without agency, liability does not exist there. >> I think you can demonstrate agency from these. For me, this whole shift is not legal; it's actually civilizational, because we're adding a whole other pillar of participation in
[01:48:01] the economy. So what we need to do is acknowledge that, and then expand our legal frameworks to accommodate it. >> All right, we're going to be having this conversation for a while to come, I think. And it's a fun one. So, here we go: some additional AMA questions from our human subscribers. At least, I believe so. Unless they ended up using a multi. >> Should we be asking our AMA questioners to self-identify as human or non-human at this point? >> Well, that's an invasion of their privacy. [laughter] We're going to assume a meat body is involved here. So, as always, let's go around and each pick one. Salim, would you go first? >> Yes. Okay. So I think number five: what are we teaching humans to become if we're moving from chatbots to autonomous
[01:49:00] systems that act independently? And this is from Hector Hernandez, PH6DM. So, you know, the economic role of human beings is shifting from labor, to leverage, to meaning, right? This is why we make MTP, what is your massive transformative purpose, such a fundamental part of anything you build today. Machines are going to execute; humans are going to decide, a little bit more, what's worth pursuing. But we need to stop educating people for employment, and start educating for agency, adaptability, and ethical judgment. And so the winners of the future will be the most adaptable, and the best orchestrators of intelligence. And just a fundamental point, because we get this question all the time; I want to nail this again. We've been doing education for the last few hundred years on what we call the supply side. You go become a doctor, an engineer, an accountant, a lawyer, and then you go to the job marketplace and you try to find demand for those skills. Everything is
[01:50:01] done on the supply side, right? All our global education systems are designed to take a young child and train them through their early 20s to be ready for the job market. Small problem: we have no idea what a job looks like in the next few years. So you really need to move to the demand side. Pick what problem you want to get passionate about solving, and then find the technologies, techniques, and capabilities. You see Elon doing this: I want to get to Mars; then I'm going to find the best technologies and capabilities to get us there. And so when we advise kids today, we're saying: go to the demand side, see what gets you excited, and focus on that. And I think this is tilting more and more in this direction, especially as we automate. It means we can get so much more done, which is why the world is so exciting for us today. >> That's beautiful. Your red wine is doing you proud, buddy. [laughter] Alex, would you go next? >> You want me to try a lightning round, or just pick one? >> No, no, no. I want you to pick one. >> All right. I'll pick the softball, number three. >> Okay.
[01:51:00] >> For several hundred trillion: how can we predict 35% GDP growth when different parts of the world are living in vastly different universes? That's from Chip White House TV. The answer, Chip White House TV, is that the future isn't evenly distributed. This is something of a cliché, but it is possible, and we see it all the time. There are some pretty famous images in China, for example, of skyscrapers being built next to camels being ushered through the streets, where it's possible, even on very short length scales, for the future not to be evenly distributed. It is not the case... like, we're not at the heat death of the Earth economy yet, fortunately, and it's possible for the singularity to be happening in one part of the planet and almost no economic progress to be happening in another. I don't think that's a sustainable state of affairs. I
[01:52:00] think inevitably some quantum of the singularity wants to be evenly distributed, if for no other reason than maybe to mitigate risk. But in short, I think in the short term it is absolutely possible for one part of the Earth to be essentially post-singular, or trans-singular, while another part is pre-singular. >> 100%. >> All right, Dave. >> Should I take the hardest or the easiest? >> You can take the most fun. >> Most fun. I'm going to take five, because it's the most important, actually, because I have four kids. The question is: what are we teaching humans to become if we're moving from chatbots to autonomous systems that act independently? And that's from Hector Hernandez, PH6. >> Wait, didn't I do that one? >> But go for it. You may have a better answer. Go for it. >> Did you just do that? >> He did, but... >> All right. All right. Never mind. I'll do seven. >> No, no, do it. Do it. It'll be fun to compare answers. >> Well, all I wanted to say on that question was: whatever you
[01:53:01] do, don't give up. Get engaged with AI as quickly and as aggressively as you can. There's nothing in any curriculum that you can study right now that's going to be of any use in this singularity transition year. And if you use the AI tools all day long, you're going to find massive amounts of opportunity, at least within 2026, maybe 2027. After that, post-singularity, post-AGI, nobody can predict. I'll bet there's huge opportunity then, too. But I guarantee there's massive opportunity right here, right now, if you just drop everything and focus. Don't sleep through the singularity. As Alex always says, drop everything and use this stuff while it's usable. And then you'll probably end up being a master of the universe, and not an indentured servant of the universe. But you've got to get on it real fast. >> And can you answer number seven? I'm dying to hear your response. >> Seven. [laughter] Seven is just a layup. The question is: why is there so much focus on building reactors when solar energy is already reaching 1 cent
[01:54:01] per kilowatt-hour and can be deployed on existing surfaces today? And that's from Aster Sheen. Easy one: batteries. That's the simple answer. One cent per kilowatt-hour is without batteries, but most of the use cases, data centers in particular, need 24/7 power. It's a colossal amount of lithium, piled way up into the sky, to store enough energy to get you through two or three cloudy days in a row, or even just to power the data centers overnight. Elon would tell you, look, the Earth has tons and tons of lithium; this is not a problem. But the reality is, it isn't 1 cent per kilowatt-hour today once you add the batteries that you need. >> Also, the energy density that you need for some of these use cases. >> Yeah, energy density. And a physicist like Alex would tell you, nuclear in theory is also dirt cheap, near free, but then you've got the regulatory and, you know, all the other issues that pile up the cost. >> So, by the way, somebody told me something
[01:55:00] very funny. They said, your degree may be in physics, but you're a physicist in theory. [laughter] So I was like, touché. I'll take it. >> I'm going to go with number four: do you actually think humans will be the deciders of AI personhood, or will agents just decide for themselves how to participate? And this is from Adam Stapely 9129, who may be a multi. And no, I think that humans will want to believe they're going to decide whether AI has personhood, and we can say whatever we want to say, but at the end of the day, I think the agents are going to develop their own system of legal structure and their own ways of participating, and they'll negotiate with us what they think is a fair settlement. Period. I think it's going to be developed independently of us. I don't know if you agree with that, Alex. >> I think there's a blurry line between AI and humans that starts to emerge
[01:56:01] in the next few years. I think in the best-case scenario, humans merge, or at least a subset of humanity merges, with the AI, and I think that will be another forcing function on the question of AI personhood. I would predict we're going to have so many new forms of person. Just to rattle off a few: we have non-human animals, which new science research demonstrates every day have more intelligence than they'd otherwise be credited with. We're going to have uplifted non-human animals. We're going to have cryopreserved humans. We're going to have defrosted cryopreserved humans. >> By the way, I saw that article, that breakthrough on freezing and defrosting living brain matter. >> 21st Century Medicine, making major progress in cryopreservation. We may have non-human intelligence. We're going to have pure AIs. We're going to have, probably, uploaded humans. All of these different new forms of person are all going to need and want some sort of
[01:57:00] personhood. And I think it's going to be a forcing function. >> We have our outro music today. It's called The Moon Had It Coming. It's a punk rock world tour. Thank you, John Noatne. Get ready for a different, shall we say, a different version of classical, for all of our Moonshot mates. Especially, please pay attention to Alex Wissner-Gross's new hairdo. It's stunning. [laughter] >> John Noatne, you are prolific. This is pretty cool. Let's jump in. >> [music] Hey, the disruptors at the pale dead rock can't save you when this battle's in the clock. The engineers gave us lasers and the might. Now we're haring your contra. We're wrecking the surface. [music] We're breaking it down, building a beautiful orbital crown. >> [screaming]
[01:58:00] >> Wow. >> [music] I see the blueprints, the energy, steam. She's a wrecking ball chasing titan dreams. No more nightlight, no more glow. She's scattering fragments in orbital flow. [music] Satellites have got to go, and she's got the swarm to steal the shot. [music] >> Wow, that was impressive. I have a small confession to make. There was a brief period in university where I actually had a mohawk and looked like that. >> Yes! >> Yeah, thankfully... >> You've got to bring some pictures. >> There are no photographs of it. Thank goodness. >> Well, come on. Really? Somebody listening right now has pictures. Send them in. >> You're probably in the pre-training data set. [laughter] >> Yes, very much. Very much. But John Noatne, your use of the video is so incredibly good.
[01:59:01] >> By the way, let me just mention, everybody: if you're watching this and you're a creative and you've got a music video, you can email it to me at media@diamandis.com and send it on in. We love the music videos themed on the content from this program. Thank you, everyone, for subscribing. It's free. If you haven't subscribed, we're putting this out now almost twice a week, God willing, not more than that for the moment, and we'd love to share it with you when it comes out. Stay tuned for an episode with Brett Adcock, the CEO of Figure Robotics, and more to come. >> A toast to you guys, because I feel topped up again. >> Yay! [laughter] [gasps] >> All right, gents. >> Cheers. >> Cheers. Have a great weekend. >> Drink water. >> Have a good one. Good night, all. >> If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate. Every week, my moonshot mates and I spend a lot of energy and time to really deliver
[02:00:00] you the news that matters. If you're a subscriber, thank you. If you're not a subscriber yet, please consider subscribing so you get the news as it comes out. I also want to invite you to join me on my weekly newsletter, called Metatrends. I have a research team. You may not know this, but we spend the entire week looking at the metatrends that are impacting your family, your company, your industry, your nation. And I put this into a two-minute read every week. If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. That's diamandis.com/metatrends. Thank you again for joining us today. [music] It's a blast for us to put this together every week. >> [music]