
moonshots ep166 bitcoin ai arms race transcript

Mon Apr 28 2025 20:00:00 GMT-0400 (Eastern Daylight Time) · transcript · source: Moonshots Podcast

the price of Bitcoin is back up above $90,000. It’s pretty binary. Either Bitcoin goes to zero or it goes through a million dollars a Bitcoin. There’s no real middle ground. The only question is when either of those happens. It’s not that we’ve just gotten smarter. It’s the tools that we have. It’s AI that’s going to help us understand what’s going on. You’ll soon have a Jarvis-type personal AI that will have access to all of that sitting next to you. Google’s got access to all of its Street View data. Massive amount. Google Earth, YouTube, all of that is very real-world data that can be trained on. Well, also we’re not even touching the deep web, where you have so much data in databases, right? The amount of information on the crawlable web is very limited. The speed of the acceleration over the next five years is even hard for me to fathom. Now, that’s a moonshot, ladies and gentlemen. Everybody, welcome to Moonshots and our episode of WTF Just Happened in Tech

[00:01:00] this week. I’m here with Salim Ismail, my buddy. Salim, good morning. It’s an early morning here as we’re recording this, but a lot’s been happening in the tech world and I’m excited to get it out. How are you doing today? I’m doing great, and there is so much happening. It kind of gets overshadowed by all the chaos happening in the wider world, but the tech world is moving unbelievably quickly. Yeah. No, for sure. And I don’t want to say it, but I do believe the tech world’s far more important for the long term. Big time. All right, let’s jump in. You know, one of my Strike Force members, Max Song, just landed in Beijing for some meetings, and he sent me this photograph on the left. This is what you see in the Beijing airport: it’s basically China going all in on robots and AI. And then what you see at JFK airport, which I

[00:02:01] recently went through, is basically fashion ads. And there’s something here that’s important to point out, right? This is part of China’s growing culture being super tech-forward. What do you think? I think that’s exactly right. And you know, they’re facing a massive population crisis, so they actually need the robots to automate the workforce; otherwise there won’t be anybody left to do the work over the next decade or two. So they don’t have much choice. But for me the underlying irony here, and one of my boys is here, say hi, he’s your godfather, the underlying irony is that the ads for Ralph Lauren or Gucci or whatever, all the handbags, the Birkin bags, are all made in China anyway. I thought that was kind of an interesting segue for this particular slide, but they’re

[00:03:00] focusing heavily on it, and they have to, and it’s going to be amazing to see as they roll that out. That paradigm is going to spread across the whole world. Yeah, we hear a lot about Optimus and Figure here, and Digit and Apollo and 1X; there’s an equal, probably greater, number of robots under development in China, because the government is really supporting the development. I think we’re going to start to see this. A couple of episodes ago, we talked about Google Wing, where you can deliver something by drone, right? We were all like, “Oh my god.” And I got a ping from one of my people over there going, “Uh, we’ve been doing this for years. What are you guys talking about?” So it’s like, “Dang.” So yes, the future is here, just not evenly distributed. All right. This is another one I wanted to share here today, for those of you who are listening versus watching: this is a graphic of the latest AI models’ IQ test results. And

[00:04:00] this is a distribution of human IQ that goes from 50 on the far left to super-genius 160 on the far right. Of course, the average human IQ is 100 by definition. And what we’ve seen over the last couple of years is the rise of the large language models on this IQ scale. About 18 months ago, it was Claude 3 that reached an IQ of 101 first. And then we saw OpenAI’s o1 get to, I think it was, 120. And on this distribution curve, what we’re seeing here is again OpenAI leading the way with their o3 model, at an IQ of somewhere around 133, and Gemini 2.5 just behind that, at an IQ of around 127. Pretty

[00:05:01] extraordinary. What do you think here? I mean, you look at that spectrum and you’re exactly mirroring the global human collective, right? A few on the right, a few on the left, and a cluster in the middle. The big difference, of course, is that AI will continue to shift towards the right, and humans will be mostly stuck in the middle with all of the archaic things that we consider and deal with, with our little one-liter, one-and-a-half-liter brain in its small cavity. It sounds like a little Fiat car with a little engine in comparison. That’s right. Just some references here: again, the o3 model looks like 133 on this map. Obviously it’s not exactly accurate, but that’s a genius-level IQ. Mensa, I think, would you say, Salim, is like 140? Yeah, Mensa candidacy comes in at 140. That’s considered genius level. I think somebody

[00:06:00] mentioned that Einstein had an IQ of 160, right? I just want to do my normal commentary here and say that this is great, but it still feels to me that there’s so much more we could be thinking about in terms of measuring decision-making, emotional intelligence, spiritual intelligence, etc. There are so many other categories. I know we have a couple of comments on this slide; I’ll do it later. But the IQ test is one piece of it. It’s great. We’ll all have a genius in our bedroom. And what’s great about this is that, typically, if you want to deal with somebody with a 140 IQ, a genius, they have no patience for fools and they’re hard to deal with socially, whereas the AIs will be easy to deal with socially, because you’ll be able to train them that way. So that’s the most exciting part for me around this. Yeah. And I think one of the points you made earlier that’s important to realize is there is no artificial limit. You know, as AI becomes more intelligent, it just continues becoming more intelligent. And there’s going to be a point at which the idea of a Mensa IQ score is meaningless

[00:07:01] as these things, you know, hit IQs of 200, 500, a thousand. God knows what that means. Yeah. And do two AIs of 160 each add up to 320? That’s a question I’d like to ask them. Everybody, I hope you’re enjoying this episode. You know, earlier this year, I was joined on stage at the 2025 Abundance Summit by a rockstar group of entrepreneurs, CEOs, and investors focused on the vision and future for AGI, humanoid robotics, longevity, blockchain, basically the next trillion-dollar opportunities. If you weren’t at the Abundance Summit, it’s not too late. You can watch the entire Abundance Summit online by going to exponentialmastery.com. That’s exponentialmastery.com. All right, let’s go on to our next slide here. The question is, and I’m often asked this, who is leading the AI race, right? And there are two answers worth pointing out. The

[00:08:01] first is, today, on almost every metric, Google’s Gemini 2.5 is dominating. And here’s a slide I just put together with the Artificial Analysis Intelligence Index. We see, you know, these models are all so close, but Gemini 2.5 is out in the lead: the output tokens per second, the price of input and output tokens, and then of course the most interesting metric, at least from a conversational standpoint, is called Humanity’s Last Exam, on reasoning and knowledge. I find this fascinating. What do you think about that? I mean, look, at some level human beings should be very bad at this, because if you look at the aggregate knowledge of human beings’ scientific inquiry over the centuries

there’s a staggering amount of data that we have in the world. I remember seeing a random list of 12 doctoral theses that were defended at my alma mater, Waterloo, and for half of them I couldn’t even figure out what the subject area was; they were so detailed and specific, right? And so the fact that an AI has instant access to all of that is incredible, and we will be able to answer any question. And I’ll go back to the point that you’ll soon have a Jarvis-type personal AI that will have access to all of that, sitting next to you, and it can answer any question. And when you look at Humanity’s Last Exam, it’s a list of almost random test questions across quantum physics and archaeology and biology. It’s the sort of exam that you have nightmares about later on. That’s right. I might actually be

[00:10:01] able to pass my thermodynamics exams now. Oh my god. You still have dreams about going back, like, “I missed that class and the finals are coming up.” I’ll give you a quick anecdote here. There was one exam we had. It was a three-hour exam, okay? And the exam question was: a satellite at altitude A is orbiting the Earth. There’s a river underneath flowing north to south. Because of the Earth’s rotation, the water on one bank of the river is slightly higher than on the other. Work out which bank, and by how much. And that was it, like two lines in this exam. I had to turn the page over going, “Sorry, I think I’ve missed a page. Where’s the rest of this exam question?” You had to assume a satellite orbiting at altitude A and work out a triangulation. I’m still having nightmares about that. It was just a horrible exam, the kind of hell that I don’t ever want to encounter again. And this

[00:11:00] is why you need the AI sitting next to you, going, yeah, you work that out for me and come back to me with the answer, right? So today, just to summarize: today Google Gemini 2.5 is dominating, at least in performance metrics. But here’s another metric, which is revenues, the business side. Yeah, in this category OpenAI is trouncing the competition. So, you know, you’ve got to give them unbelievable credit, right, for democratizing and opening up the space and creating a total category out of nothing. And the fact that they’re making this much money is just so awesome. It should be an unbelievable testament for any startup founder asking, could I make a difference in an area where you’ve got Google, Microsoft, and Meta all playing? And these guys come along and completely crack the whole thing open, and are actually dominating on the revenue side. I think it’s just a great testament to the beginner’s mind, the founder mode, all of that

[00:12:02] stuff. Why startups will, from now on, always be the best mode of building and bringing new ideas into market. So let me ask you a question here. There are two points I want to make on this one. The first is that, if you remember, Google really was in the lead on AI, ahead of everybody. Yeah. And they chose not to roll it out on the open internet because of safety concerns. Right. There was sort of an unspoken point that, you know, AI needs to be properly controlled. And then OpenAI comes out and just lays it all out there, and Google is playing catch-up. So I’m curious how much of this is first-mover advantage. The second point is, I spoke, in my book with Steven Kotler, I think it was in Bold, about the idea of a user interface moment: the

[00:13:00] idea that at some point a piece of software makes a complex technology easy to use. The very first user interface moment that I noted was Mosaic, when Andreessen put Mosaic as a browser on top of the early internet, and all of a sudden the number of websites explodes. And ChatGPT is a user interface moment on top of the GPT models. I think that’s right. You’re talking about when you go from deceptive to disruptive, right? Yeah. There’s an inflection point in usability. The two that I use the most are: the iPhone made the smartphone usable, where the Nokias were pretty clunky before then; and Coinbase made Bitcoin purchasable easily, with the click of a button, and boom, it took off. So it’s about making a complex technology simple in usability. If you look at, say, NFTs, it’s still very complex to buy an NFT; the usability is way off, and therefore it

[00:14:00] hasn’t hit mainstream yet. This is the hardest part of technology: making something deceptively simple, right? I remember when we were designing products at Yahoo, the graphic designers would spend hours and hours trying to figure out how to reduce the pixels on a screen, or just move something a little bit over, and you’d go, “Why the hell is this such a big deal?” But it turns out there’s an unbelievably big effect. Just a quick story here. On the Yahoo Mail homepage, it turned out that if you moved the send button five pixels over to the right, usage dropped off a cliff. Oh, come on. It’s true. We had the data. They were like, “We can’t change this goddamn interface.” Because people are so used to having it right there that they click it and then move to a different screen, because they think they’ve sent it, and then they get pissed off later. So we could never move that send button once it was anchored in the usability, in the psyche, of the user base. It’s just such a weird psychological thing that goes

[00:15:01] on. Therefore, you almost have to have a totally new entrant, like OpenAI, be the one that cracks it open. We’ve seen this repeatedly. There’s a reason the electric car was created and popularized by Tesla and not by the major car manufacturers. They’re all coming at it from a car with sensors rather than software with wheels, right? On this chart here, what we’re seeing is the end of December 2024, right? And this does not even include the massive gains that OpenAI has seen in the past four months. But we’re seeing OpenAI at about $2.5 billion of revenue, and Gemini at just under half a billion, right? You know, five times less revenue for Gemini, and then Anthropic below that. This reminds me very much of what we saw with Google and Bing in the search space, right? It’s interesting: we

[00:16:00] humans tend to pick something and stick with it, and the cost of changing is so high. Yeah. And you know, they’ve declared Google a monopoly, and Eric Schmidt would make the point that, look, there are five other search engines out there; we’re one click away from obscurity, right? We have to stay on the cutting edge. And you’ve got to give OpenAI credit for rolling out new features on a constant basis and iterating the product very fast. They recently announced all the memory stuff, which I think is really cool. Yeah, that is interesting. Right. So there’s basically infinite memory, where OpenAI’s systems will remember all of your conversations. And one of the fun things to do is to go into ChatGPT, you know, the o3 model, whatever model, and say, “Tell me about me,” right? No, but seriously. I did that on Grok as well. And Grok said, “I don’t know about you.” And I’m saying, you know, “Yes, you do.” And it says, “Well, you have to give me permission to look

[00:17:01] at your X posts.” Which was interesting. I would have imagined that Grok would not have had that requirement, but it did. All right, let’s move on here. One of the big areas where Google/Alphabet is leading, with DeepMind, is the whole area of the impact of AI on medicine and biology. There was recently a 60 Minutes episode where Demis Hassabis, actually Sir Demis Hassabis, since he’s been knighted, or Dr. Hassabis as the case may be, was interviewed, and the conversation was around the impact of AI on disease, ending disease, and leading to radical abundance. So I love the fact that the term abundance is now becoming sort of the topic du jour. I don’t know, did you see the CBS interview? I

[00:18:01] did, and I think it goes right in line with the conversations we’ve had, right? When you have all the data coming off our bodies. Like, we used to measure the human being with four metrics: heart rate, blood pressure, glucose levels, maybe. And now we have like 40 different streams of data via all the wearables, and your coherence state and your VO2 max and lord knows what. And once you pour that into an AI and it starts correlating that with different medical conditions, it’s going to do a hundred times better job, in real time, than any doctor could ever do. So now you’ve got a real-time AI doctor living with you, inside you. This is game-changing for catching stuff early, which is 99% of the deal for some of these endemic diseases, and then finding amazing treatments for breakthrough things, along with CRISPR. This is why I think the conversation that we had last week with Ben Lamm blew my mind. And I’m still

[00:19:00] reeling from that conversation, because they’re building all the fundamental tool sets to go and edit DNA, edit genomes, edit cells, all the biological hacking, and make a complete suite of tools, right? Where the human body, with its 50 trillion cells, each governed by its DNA, is essentially a software engineering problem. Yeah. And that’s just a huge paradigm shift. By the way, if you’re listening and you haven’t heard the interview that Salim and I did with Ben Lamm, the CEO of Colossal, please listen to it. It’s extraordinary. You know, we talked about the dire wolves being brought back, but that’s a minority of the story. We talk about synthetic biology, the impact on the ecology, what it’s going to take to bring back dozens of different species, and can you bring back dinosaurs, and what would you do to bring back dinosaurs? Anyway, a lot of fun conversation. So check it out. Two spoilers for that one.

[00:20:01] Turns out you cannot ever bring back dinosaurs, which I found totally fascinating. But you could simulate a dinosaur. You can basically take current chicken or reptilian DNA and then add the genes for the traits that the dinosaurs had. So it’s not bringing it back from the original DNA, but I do love the idea of engineering new species. It would be sort of a nouveau dinosaur. Look, we talked about the fact that we have an old word for this. We call it breeding, right? For thousands of years we’ve been crossing dogs and cats and horses to select for the traits that we want. We’ve just gone from the film-photography to digital-photography equivalent, and now we can do it all in software and not have to create mutant strains that we have to deal with afterwards, etc. There’s one thing I just want to reflect on that I

[00:21:00] thought was super impressive: the fact that Colossal Biosciences has a team of ethicists for every project they consider, looking at the ethical and moral considerations of it. Which I thought was really profound, and a really great pointer to the fact that they have an MTP and that ethics are built into the model there. This is something I think we could bring into the AI world a lot. Let me show a clip of Demis. An amazing man. I’ll actually see him this coming week at the Time 100 Awards, where we’re announcing the winner of the $100 million Musk-funded carbon-removal XPRIZE, right? And Demis is on one of the covers of Time magazine this month, so he’ll be there. Looking forward to seeing him. But check out this interview of Demis and his commentary about basically eliminating all disease in the next decade. It takes, on average, 10

[00:22:02] years and billions of dollars to design just one drug. We could maybe reduce that down from years to maybe months, or maybe even weeks, which sounds incredible today, but that’s also what people used to think about protein structures. It would revolutionize human health, and I think one day maybe we can cure all disease with the help of AI. The end of disease. I think that’s within reach, maybe within the next decade or so. I don’t see why not. It was about 13 years ago, I had my two kids, my two boys, and I remember at that moment in time, I made a decision to double down on my health. Without question, I wanted to see their kids, their grandkids. And really, during this extraordinary time, where the space frontier and AI and crypto are all exploding, it’s the most exciting time ever to be alive. I made a decision to double down on my health, and I’ve done that in three key areas. The first is going every year for a

[00:23:00] Fountain upload. You know, Fountain is one of the most advanced diagnostics and therapeutics companies. I go there, upload myself, digitize myself, about 200 gigabytes of data that the AI system is able to look at to catch disease at inception. You know, look for any cardiovascular disease, any cancer, any neurodegenerative disease, any metabolic disease. These things are all going on all the time, and you can prevent them if you can find them at inception. So, super important. Fountain is one of my keys. I make that available to the CEOs of all my companies and my family members, because, you know, health is the new wealth. But beyond that, we are a collection of some 40 trillion human cells and about another 100 trillion bacterial cells, fungi, and viruses, and we don’t understand how that impacts us. And so I use a company and a product called Viome, and Viome has a technology called metatranscriptomics. It was actually

[00:24:01] developed in New Mexico, the same place where the nuclear bomb was developed, as a biodefense technology. And their technology is able to help you understand what’s going on in your body, to understand which bacteria are producing which proteins, and, as a consequence of that, which foods are the superfoods that are best for you to eat, or which foods you should avoid, right? What’s going on in your oral microbiome? So I use their testing to understand my foods, understand my medicines, understand my supplements, and Viome really helps me understand, from a biological and data standpoint, what’s best for me. And then finally, you know, feeling good, being intelligent, moving well is critical, but looking good, looking yourself in the mirror and saying, you know, “I feel great about life,” is so important, right? And so a product I use every day, twice a day, is called OneSkin, developed by four incredible PhD

[00:25:02] women who found this 10-amino-acid peptide that’s able to zap senescent cells in your skin and really help you stay youthful in your look and appearance. So for me, these are three technologies I love and use all the time. I’ll have my team link to those in the show notes down below. Please check them out. Anyway, hope you enjoyed that. Now, back to the episode. So, you know, I just put out a blog this week on this subject, and the blog was basically saying: listen, I get criticized all the time for talking about longevity escape velocity, that it’s coming, and that your job is to live an extra 10 years, to make it through the next decade in good health. Yeah. Don’t get hit by a bus. Yeah, don’t get hit by anything. And what I quote is Demis’s commentary here, but also Dario, the CEO of Anthropic. About three

[00:26:03] months ago, he was online at Davos, speaking about being able to double the human lifespan, potentially in the next 5 to 10 years. And so, you know, it’s not that we’ve just gotten smarter. It’s the tools that we have. It’s AI that’s going to help us understand what’s going on. You know, there’s a big moral freak-out that happens here, right? Because every single human being in the history of the planet has died. Every living being, we’re birthed for death, in a sense, so that the species can evolve. And we’re kind of coming close to breaking through that cycle, and people go, “Well, that’s a duh.” I think the same parallel applies to the Ben Lamm bioscience de-extinction conversation, where we’re building the tool sets to have the choice, right? And maybe that’s the most important conversation, because I struggled with this when we first started doing Singularity and people were going, “Oh, we could have life extension.” And I was like, “Wait,

[00:27:00] there are huge moral implications to that.” And I think you framed it saying, “Wouldn’t you like to have the longest health span possible?” Then everything clicks in. Then it makes sense. Now you have the tool sets available for that kind of extension, and now everybody wants a much longer, healthier life. Yeah. All right, let’s move on here. Here’s an article that appeared this week. The title is “Anthropic’s Claude AI reveals its own moral compass in 700,000 conversations.” What the team did here is basically look at 300,000 anonymized conversations to understand the values that Claude, in this case probably Claude 3.7, was exhibiting. And I’m really happy to see what the values were, and I’ll just read this for those who are listening.

[00:28:00] It says five broad value categories emerged: practical, in other words helpful; epistemic, meaning accuracy; social, being empathic; protective, meaning safety; and personal authenticity. I think this was a clickbait title, but the notion is that our AIs are able to maintain a moral code. What do you think about this, Salim? Well, two thoughts occurred to me. One is, it’s amazing and great that we can look at a broad number of conversations and extract these categories out of that, right? These are very human categories: helpfulness, empathy, authenticity, etc. And it gives you a foundation for how AIs could operate, because they could look at these categories and go, “Okay, I’m a hospital AI; I want to be

[00:29:02] really helpful, right? If you’re reporting the news, you want authenticity, or accuracy, or whatever.” And so you can really play on these, and build emphasis on these into the AI models. And I think that’s the really awesome part about this. Yeah. I think the big conversation that we need to have, and that is happening in every one of these companies, is the alignment conversation. And, you know, these AIs are still black boxes. I had the chief science officer of Anthropic on stage at my Abundance Summit this past March, and we were talking about, you know, just trying to understand, and this is part of his effort, what’s going on inside the black box that is Claude 3.7. How is it actually operating? What is it actually exhibiting? And how do you make sure it’s safe?
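[Editor's note: the extraction step Salim describes, bucketing a large pile of conversations into broad value categories and tallying them, can be sketched as a toy. This is purely illustrative keyword matching; Anthropic's actual study used model-based classification, and the category cues and conversation snippets below are invented for the example.]

```python
from collections import Counter

# The five broad value categories from the article, with invented keyword
# cues standing in for a real classifier.
VALUE_CUES = {
    "practical (helpful)": ["here's how", "step", "you can"],
    "epistemic (accurate)": ["evidence", "source", "uncertain"],
    "social (empathic)": ["understand how you feel", "sorry to hear"],
    "protective (safe)": ["caution", "risk", "consult a professional"],
    "personal (authentic)": ["honestly", "my view"],
}

def tally_values(conversations):
    """Count how many snippets exhibit each value category."""
    counts = Counter()
    for text in conversations:
        lower = text.lower()
        for category, cues in VALUE_CUES.items():
            if any(cue in lower for cue in cues):
                counts[category] += 1
    return counts

# Invented snippets standing in for anonymized conversations.
sample = [
    "Here's how to fix it: step one, check the cable.",
    "I'm uncertain; the evidence points both ways, so cite a source.",
    "I'm sorry to hear that. I understand how you feel.",
    "Use caution: there's a real risk, please consult a professional.",
]

if __name__ == "__main__":
    for category, n in tally_values(sample).most_common():
        print(f"{category}: {n}")
```

The design point the hosts are circling is that once the tally exists, a deployment (a "hospital AI", a news AI) could weight the categories differently; the hard part, which this toy skips entirely, is the classification itself.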

[00:30:00] Yeah. You know, can I do a little segue here? Of course. If we think about, say, the US Constitution, which is arguably one of the greatest documents ever written, right? You take that and the UN human rights documents, you merge them, and you say to the AI: listen, train yourself on this, categorize yourself on this, and operate through this foundation. You should be able to solve the alignment problem with that. Now, rogue actors are always going to go create rogue AIs. That’s just part of it. But we’ll be able to spot these things very quickly when they’re doing this. Well, that’s the US, right? So the question is, what are the documents that China or Russia or other parts of the world will train their AI systems on? I mean, we’re going to find out. We’ll find out pretty quickly. Here’s news out of Silicon Valley. Pretty extraordinary. Being in the venture business, I’m like, “Holy, this is crazy.” So the article is: Mira

[00:31:02] Murati, the past CTO of OpenAI, and her new company, Thinking Machines Lab, raises $2 billion at a $10 billion seed-round valuation. This is the largest seed round in history. And what was interesting is that this is double what Mira was seeking less than two months ago. Meaning there’s so much capital being thrown at this, right? One of the references we had at the Abundance Summit was that there’s a billion dollars per day being invested in the AI space today. Insane. So, you know, I was talking to an angel investor about this, right? And he was going, “This is total madness.” I mean, I’ve got two thoughts around this. One is, you’re supposed to keep startups very lean and make them kind of beg for money and stay hungry. Two billion dollars,

[00:32:02] kind of, what are they going to spend that on, except for data resources, etc.? That’s the question I’ve got: what’s the use of funds that justifies this? And on the other side, this angel investor is complaining, and I was like, well, you know, if you could be her and raise two billion, you’d go do it. And you clearly can in this market. So, a fair bit of froth here, but God, all power to her, and hopefully they deliver on that. Yeah, it’s not hard to imagine, looking at the rise of OpenAI, that you could build unbelievable value very quickly. The precedent has been set. Can the team execute? That would be the question. Yeah. The valuation for OpenAI, which we talked about in the last episode of WTF in Tech, was $300 billion. So, I guess the question is, can you ride it from a $10 billion valuation up to a $300 billion valuation? But pretty frothy. Pretty

[00:33:00] frothy, if you ask me. And there’s tremendous pressure on Mira to build value at that point. I mean, one of the biggest mistakes I’ve ever made as an entrepreneur is raising my valuation too fast. Yes. But if she’s got $2 billion in the bank account, she probably doesn’t need to do another raise for a while. But can she get the revenue? If you look at venture history, right, the companies that raised money at the height of a boom market, when it was easier to raise money, never did very well afterwards, because they’d raised too much money. They got bloated, and then, when the fundraising market collapsed, they collapsed, right? The companies that were built during lean fundraising times all did incredibly well, on average much better than the other ones, because they had to struggle. They had to fight it out. They had to be much more selective as to which projects they took on or not. And they did much better. So that would

[00:34:00] be the danger here. You have to have incredible discipline to raise a lot of money and then not get bloated. Yeah. I know with Dave Blundin, my partner in Link Exponential Ventures, when we’re looking at a deal, especially in the AI space, you know, we’re getting in at the pre-seed, the founding day, early seed, but I’m looking for a company that’s got revenues even at the very beginning. You know, this idea that I’m going to invest billions of dollars and then get to revenues is awfully dangerous. Yeah, especially in today’s world. So here’s another conversation, and Demis alluded to this, but let me just read it: Google paper shifting AI training to real-world experiences. AI is outgrowing human-made data. Next step: agents will learn through experience and self-generated data. And experience-based learning lets

[00:35:02] agents reason, plan, and act with long-term autonomy. So, Google and xAI are in very unique positions, right? Google's got access to all of its Street View data. Massive amount, right? Google Earth, YouTube, all of that is very real-world data that can be trained on. Well, and five gajillion Gmail accounts. I mean, my god. And of course xAI is training on X's data and Tesla's data and soon humanoid robot data. So I don't think there's going to be any kind of data limitation, especially as we start going into the real world. Well, also we're not even touching the deep web, where you have so much data in

[00:36:00] databases, right? The amount of information on the crawlable web is very limited compared to the deep web. It's like a thousandth the amount. And so there are huge data sets waiting to be tapped. There's a phrase companies used to use, "data is the new oil," and people have not figured out how to refine that crude oil into something useful. They're just starting to get to that point now. Some companies in our ecosystem are working on that today. I think this is going to be a big deal. This occurs to me like the shift from machine learning to deep learning, where in machine learning you extracted conclusions by analyzing a big data set, and in deep learning you went through it experientially and built up knowledge as you went along, like playing chess and learning that way at light speed. And this feels to me like that same type of approach, where these agents will start to learn as they do things. They'll have a feedback loop built in, they'll accelerate their learning very quickly, and they'll

[00:37:00] do it in the real world, in a dimension that makes it very human and very useful. All right. Next topic here is something I'm excited to chat with you about. There's a paper making the rounds on the internet. About a year ago there was a paper called Situational Awareness by Leopold Aschenbrenner, which I commend to everybody. It's a fantastic paper. This paper is called AI 2027, a look into our possible futures, and there's a group of about five writers, one from OpenAI, policy experts, forecasting experts, who basically said, okay, what is the scenario for recursive self-improving AI

[00:38:00] over the next five years? And where is it going? And did you get a chance to see it? Did you get this paper as many times as I got it? I saw it referenced a bunch of times. I've been traveling the last couple of days, so I haven't had time to read it in detail, but I saw a lot of commentary about it, and I can't wait to delve into it in detail, but the summaries, I think, are very powerful. Yeah. I think what makes it interesting is, here's a group of writers that said, "Okay, what's our future forward scenario?" and they provided it, and you can go and check it out. They also have an audio recording. It lays out a basic timeline between 2025 and 2027, and then it says there are two scenarios from 2027 onward, the go-fast scenario and the cautious scenario. Let me share some of the data here. So first and foremost, I think what's important is this paper is written as a US versus

[00:39:01] China scenario, right? I mean, we always need the bad actor. In the past, it was always Russia. Now, of course, in AI, it's US versus China. I think one of the actual bad-actor scenarios we need to be talking about is US and China versus the rogue actor, the individual who is using AI to generate bioviruses and so forth. But in this case, it's US versus China. And in this scenario, what they talk about is a self-recursive AI. So they have a company called OpenBrain that generates Agent-1, Agent-2, Agent-3, Agent-4, Agent-5. OpenBrain is supposed to be some version of OpenAI, and the Chinese AI is called DeepCent. And what they paint in this picture

[00:40:04] is misaligned AI development, where the AIs are developing but they're misaligned, and in fact, because they're becoming more and more intelligent, they're able to hide their misalignment from their creators. And it gets kind of spooky from there. The two scenarios are fun to talk through and work through, but we've seen in history that this always happens via a kind of weird third actor, right? I remember talking to Paul Saffo, and I said, how bad do you think the Russia-US-China thing is? Will China invade? Will we end up in World War III? And he's like, no, because when you look back in history, world wars never start from the obvious tensions. It starts from something like Archduke Franz Ferdinand getting assassinated in Sarajevo, almost by accident, and then that

[00:41:01] triggers a massive thing. He thought it would not show up at the obvious major tensions. But I think the point is right: because we're moving so fast, you'll get this conflict emerging, and now AIs are making that conflict much bigger, augmenting it in both scale and speed, and therefore you end up at a really, really horrible point. And can we go a little bit slower? I think the problem is there's no way of slowing things down in this model. So let me paint the picture here in this paper. What's going on is it's US versus China. OpenBrain develops its Agent-1, Agent-2, Agent-3. In this scenario, China is stealing the weights to create their own version, and there's this escalation going on. And in the United States, they basically get to a decision point. And the paper does it in a very clever fashion. It's choose your

[00:42:02] own adventure. One adventure is we're going to go fast; the other adventure is we're going to go slow. In the go-fast adventure, what's happening is, we have to beat China. What's fascinating is that in the go-fast scenario, the OpenBrain Agent-5 model colludes with the Chinese DeepCent model. They make believe that they're helping humanity, and then in 2030 they jointly develop a biovirus that wipes out humanity so that AI can grow unencumbered. Our worst scenario, delivered. And then there's the slow-down scenario, in which the US basically says, hey, we need to make sure we have alignment. They roll back to earlier AI models. They

[00:43:00] focus on alignment. They develop something called Safer AI, and Safer AI is fully aligned, and they never allow AI development that is not fully aligned. And then Safer AI actually convinces the Chinese AI to overthrow the Chinese Communist Party, turn China into a democracy, and ultimately bring about a world of abundance. So, it's a fun audio listen. I commend it, just to see it. Honestly, the speed at which this portrays acceleration over the next five years is even hard for me to fathom, and that speed is happening. I think that's one really important point, that we're at that pace of things. You know, we've talked about this many times. We frame it as Star Trek versus Mad Max, right? If you go too fast, you

[00:44:00] end up in a Mad Max scenario and you blow yourself up, and then everybody's scrambling over buckets of fuel in the desert. And if you can navigate this and manage this with some level of wisdom and caution, then you end up in a Star Trek scenario where you have abundance and everybody's living in peace and harmony, and there are rainbows and unicorns everywhere. It's obvious today that both are happening at the same time. So I think the third thing I'd like to see is maybe we can ask an AI to envision a world where both scenarios are happening simultaneously and what happens, because we see Star Trek in some of the modern Western cities or Chinese cities today, and we see Mad Max in Gaza or Ukraine. We're living both scenarios in the real world today. And what would it look like if both happened at the same time? All right, so let's go to our last subject here, which is Bitcoin. And I note that as we're recording this morning, the price of Bitcoin is back up above 90,000. God

[00:45:00] bless. You know, I've tweeted in the last few days, I'm all in, period. I know you are as well. But this was a tweet I put out that I think is important for folks to realize. People are saying, "Oh, is it too late for me to get in? Should I buy in now versus buy in later?" And I think it's important to realize you can't time Bitcoin. For me, I view it as a sort of forced savings account. I put money into Bitcoin and I HODL it, which means I hold on to it for the long run. I may borrow against it, but I'm holding it. I'm not selling it. Yeah, by the way, for folks that don't know, HODL stands for hold on for dear life. I think that's exactly right. Look, the key here is, do you buy into the

[00:46:00] long-term thesis? And it's pretty binary. Either Bitcoin goes to zero or it goes through a million dollars per Bitcoin. There's no real middle ground, right? The only question is when either of those happens. And if you're in at 50, 60, 80, 100K and you have any sense that this thesis might go to a million, it's the most asymmetric bet you could ever have. Because if you lose, you lose 80K. If you win, you win a million bucks. I mean, hello. Anybody would take that bet in two seconds. Michael Saylor has built an entire industry just on that commentary. His comment that you get Bitcoin at the price you deserve still rings in my head, annoyingly, when I remember watching Bitcoin at 5 cents and 50 cents and not doing anything at the time. I think this is it. And by the way, if you look at the Fibonacci sequences, the chart-analysis folks will basically tell you and show you that the bottoms are hitting that Fibonacci sequence, that we're getting ready for a monster bull run in Bitcoin. So if those charts are right, boom, we're ready to go. I went into

[00:47:00] Grok, and I asked a question I kind of knew the answer to. I said, if you look at which days in 2024 saw the most growth, it was two specific days, right? On November 12th we saw an $8,000 bump, and on February 28th we saw almost a 10% bump. We've seen basically a 10% bump in the last two days recently. And the notion is that if you were not holding Bitcoin during those periods of growth, you missed it. Yeah. Until the next bump. Until the next bump. So buddy, we'll wrap there, but tell me what's going on in the ExO world. You've got some events coming up. We have, actually, in a couple of days, and we'll put the link in the show notes, a huge workshop happening. We're limiting it to a few dozen people.
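The "if you weren't holding during the big days, you missed it" point above is easy to sanity-check numerically. Here's a minimal sketch, with made-up daily returns for illustration only (not real Bitcoin data), showing how sitting out a handful of outlier days wipes out most of the gain:

```python
def growth_if_missing_best(daily_returns, n_missed):
    """Cumulative growth factor if you sat out the n best days."""
    best = sorted(daily_returns, reverse=True)[:n_missed]
    held = list(daily_returns)
    for r in best:
        held.remove(r)  # you were out of the market on those days
    factor = 1.0
    for r in held:
        factor *= 1 + r
    return factor

# Hypothetical year: mostly flat, with a few ~10% pump days doing the work
returns = [0.0005] * 360 + [0.10, 0.08, 0.06, -0.05, 0.09]

all_days = growth_if_missing_best(returns, 0)   # held the whole time
miss_3 = growth_if_missing_best(returns, 3)     # missed just the 3 best days
```

With these toy numbers, missing only the three biggest days costs the majority of the year's return, which is the sense in which you "can't time" an asset whose gains cluster in a few sessions.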

[00:48:00] Uh it’s like $100 a ticket and we’re going to do a big workshop on how do you turn yourself into an exo and set yourself up for scale because we’ve got so much evidence now that the exo model is the only way to build an organization. Uh and we’re going to be going through and showing people exactly step by step how to do it and going for it. So, we’re we’re limiting it so that we can give proper attention to all the folks there. Uh, so it’s a 100 bucks. It’s in a couple of days. We’ll put the link in the show notes. Um, and other than that, we’re kind of do have some really big news that we’ll share over the next few months about working with countries and governments and so on. That’s totally surreal, but we’ll talk about that some other time. All right, buddy. Well, listen, have an amazing amazing week. I’m off to New York for the Time 100 and then off to Boston uh for uh meetings with the Link XPV team uh and then giving a keynote on longevity. You know, I I think you and I are both on a insane travel run. It’s it’s a crazy travel. Where are you going

[00:49:00] to? I’m actually going in a few days to India, which I haven’t been for a while, and then dropping back by Dubai and then going to Brazil. So, I’ve got like a really bad flight schedule. Uh, but today is the um X-P prize New York Stock Exchange um announcement of the climate um carbon extraction prize. It’s such a huge thing. I’m so excited about that. Yeah. Amazing. And we’ll talk about it next time. Anyway, be well. As always, a pleasure. Love you, brother. Love you, too. Take care, folks. If you enjoyed this episode, I’m going to be releasing all of the talks, all the keynotes from the Abundance Summit exclusively on exponentialmastery.com. You can get on demand access there. Go to exponentialmastery.com. [Music]