Last February I said DeepSeek is one of my favorite AI companies out there. If you look at each of the innovations they made, they were largely engineering innovations. Do we see DeepSeek dethroning or reducing the valuation of these companies at all? In my opinion, it should increase the valuation. The US-versus-China AI wars: you know, this is a winner-take-all type of game. This is the biggest crisis that we have coming, because we're heading into a future now where I'd say every single AI leader says that AGI is 3 to 5 years away. Welcome to Moonshots and an episode of WTF Just Happened in Tech with Salim Ismail and a special guest, Emad Mostaque. You know, Emad is the founder of Stability AI, a company that had been the leading open-source developer of music and image generation, with 300 million open-source downloads.
[00:01:00] Emad today is the founder of Intelligent Internet; he'll be speaking about that. But in this episode we're doing a deep dive into three subjects: DeepSeek, of course, and the constant disruption that's coming in every market at an accelerating pace; AI safety, and what's going on at OpenAI as people start to leave, especially from their AI alignment team; and then we'll chat with Emad about Intelligent Internet: what are his plans, where is he going? All right, let's dive into this episode. For me this is an extraordinary week of accelerating change, and as always, help me spread the message: subscribe, tell your friends. This is the conversation that I think is probably one of the most important we can be having right here, right now. Let's jump into Moonshots. Welcome to another episode of Moonshots: WTF Just Happened in Tech this week. I'm here with two besties: Salim Ismail, the CEO of
[00:02:00] OpenExO, and Emad Mostaque, the CEO of Intelligent Internet, no stranger to this podcast. It's been a crazy week; we kicked it off with sort of an AI market meltdown on the news of DeepSeek, and the concussion waves keep coming. What does it all mean? We're here to have that conversation. Emad, good morning to you, or good evening: you're in London today? Yeah, I'm in London; morning to you, it's a pleasure. And Salim, you're in Miami or New York? Yeah. All right, we've got three different time zones around the globe; we need someone in Hong Kong to balance this thing out, but we'll get there soon enough. So, Emad, DeepSeek: no surprise for you? Was this something expected, or was this something like, wow? I think it was actually expected.
[00:03:01] Like last February I said DeepSeek is one of my favorite AI companies out there. They took the original ethos that we had at Stability (another ex-hedge-fund manager) and they released amazing models openly. I think when the AI community first started to see this was probably around about summer of last year, when they released DeepSeek Coder, which hit the top of the code rankings. They started by replicating Llama from Meta, then they broke forward, and in fact some of the algorithms they use now date from then. And then in December, like a month ago, they released DeepSeek V3, which was actually this $6 million training-cost model, and it matched GPT-4o and all these other models. It didn't match o1 at that point, but we all thought they'd figure out how to do it, and guess what, they did, and it generalizes. So it felt like the internet broke over the weekend as the announcement was made. What was it that got everybody so
[00:04:00] hot and bothered instantly, since it's been around for some time? So there was the base model, the ChatGPT-equivalent model, in December, and that proved you could train these models at a fraction of the cost. The next thing was this reasoning model, R1, where when you type, it shows you the reasoning: it takes a bit longer to think and has better-quality output. That actually came out last Monday, but it was this weekend that this narrative cascade happened, and now you've got your mom and your aunt asking about it, and it's front-page news, and then Nvidia cracked, etc. I think what it was: remember the early days of ChatGPT, or Stable Diffusion on images, the immediacy of response and that new paradigm. When OpenAI released o1, this thinking model, it was amazing, but it was a bit like using ChatGPT: you put something in, it says "I'm thinking," and it gives you a response, because what they did is they hid the chain-of-thought reasoning. With R1, it actually shows you: this is how I'm
[00:05:00] thinking about this, this is how I'm breaking down the problem, and it feels like you have another person on the other side. As more and more people used it and saw the performance benchmarks, it built up into this cascade, because it was so immediately usable. And they realized it was open source, so people took the smaller versions of it and started running them on their laptops. If it were just a closed model that didn't show the chain of thought but matched o1, it wouldn't have had that effect; if OpenAI had released their chain of thought, I don't think it would have had the same effect either. It was this confluence of things that made people realize: oh my gosh, what is this new thing, and how has it been done? And it challenges our assumptions. Amazing. Salim, you and I were on the phone over the weekend, like: huh, this is real, this is happening. What were your thoughts? I have two thoughts. I love the timing: they launched it on the day of the inauguration, as a bit of a slap in the face to the incoming administration, which is saying "we will sanction you to bits," etc., and here's how the
[00:06:02] sanctions work. The second thought I've had throughout the last 10 days or so is that, you know, we are expecting demonetization, and as the power of these models accelerates exponentially and blows our minds, the demonetization should also surprise us in the same way. So the fact that they're able to do this at one-tenth or one-hundredth of the cost, or whatever (now, how they got there is obviously an open question), shouldn't be a big surprise on the curves that we're looking at. Right: it's incredible to see, but we shouldn't be surprised if we'd been eating our own dog food. Go ahead. I think someone noted it was actually the five-year anniversary of the Wuhan lab leak as well, except this one was delivered... No, not going to go there. But you know, like I put out in my blog that
[00:07:02] followed the DeepSeek announcement, this is just going to be the new normal. When Netflix ate Blockbuster for lunch... this is just going to be happening over and over again, the speed at which heads are turning and snapping across every industry. You know, it's interesting, because when ChatGPT was announced and got to a million users in 5 days and 100 million users in 2 months, people asked: can this ever be replicated again? And the answer is yes, and faster. So Emad, could you give us a quick rundown of how DeepSeek actually compares to GPT-4o, o1, any of the other models? Because there are a lot of claims being made about how many GPUs it
[00:08:00] was created on, how much money, the size of the team. And it was those comparative numbers that made it a big deal; if it were just an equivalent model, not built at a fraction of the time or cost, it would not have hit as hard as it did. Yeah, I think the shock was the order of magnitude, so we can break it down a bit. o1 was this evolution of ChatGPT that suddenly got to IMO-medalist level, or top-coder level, like top-1% coder level, because it could think longer; this is a key breakthrough. And OpenAI have actually said (Mark Chen from there) that what DeepSeek figured out, which we'll get to in a second, was pretty much what they're doing at OpenAI. That was in November, and so we've had a few periods there. So first of all you had the model that matched ChatGPT, then they figured out how to make it think longer. But the main upshot that shocked people, I think initially, was that it was 96%
[00:09:00] cheaper. Now, software usually has an 80% margin; we don't know how much OpenAI charges over cost, but they've got this hammer, which is a large amount of GPUs, and they've never had to work in a constrained environment, so sometimes you are a bit price-insensitive, particularly because the cost of running an o1 query to solve a math paper or a legal problem (because it's as good as any lawyer or doctor) is still so small. But this was 96% cheaper than that, which was number one. Number two was the fact that this could be kind of released anywhere. And the headline training cost, with the R1 evolution probably adding only about $200,000 on top of the original model, which again we can come back to, was a shock. Last year, well, a year before last, I can't remember the exact number, I think OpenAI spent $3 billion on training models. Amazing. To give you an idea of that: now, how much did DeepSeek cost? There were accusations around, they have 50,000 of these chips, not 2,000 like we used on
[00:10:01] the training run. They never claimed how many chips they had in total; they just said, we needed 2,000 for this training run, and we used them over this period of days to build a model that looks like this. Those of us who have built these models know that these numbers actually all check out, and this is why some of the reaction has been really interesting, because people say, well, they have far more GPUs, or they have hidden GPUs, and other things. The GPUs they have are these models called the H800, which is like the top-end (well, now not quite the top-end) Nvidia chip, but with the interconnect slightly reduced, so the way the chips speak to each other is a bit slower. We had this issue at Stability AI, a former company, where we built one of the largest supercomputer clusters in the world, but we had interconnect a quarter of the speed of other people's, because that's all we could get, and again we were competing against the biggest guys and we built some of the best models in the world. They wrote the lowest-level code in PTX, which is like CUDA but a level lower, to overcome it. They basically engineered the crap out of it, because some of them are ex-quant hedge fund managers and others. And if
[00:11:00] you look at each of the innovations they made, they were largely engineering innovations, which is very interesting for our mental model, because what is China amazing at? Engineering innovation. You look at BYD, you look at Xiaomi: it shouldn't be any surprise that as you move from research to engineering you would see this leap ahead. But all the numbers kind of check out, and you see the cost reducing. I think they've probably got 10,000 chips in total, but that's not more than many startups in the Valley, to be honest. Everybody, Peter here. If you're enjoying this episode, please help me get the message of abundance out to the world. We're truly living during the most extraordinary time ever in human history, and I want to get this mindset out to everyone. Please subscribe and follow wherever you get your podcasts, and turn on notifications so we can let you know when the next episode is being dropped. All right, back to our episode. You know, I had a conversation with Kai-Fu Lee recently on this podcast, and we were talking about the notion of how
[00:12:03] you know, the US government has been restricting Chinese companies from getting Nvidia chips, and all that's done is create this evolutionary pressure for them to do much more with much less, and this sounds like a perfect example of that. It's like it's Darwinian in its developmental force. Yeah, I mean, again, if all you have is a hammer, and you have large amounts of GPUs, the way this works is the GPUs compress the knowledge; it's like pressure-cooking a steak to make it tender. Instead, you look at things like better data, better algorithms, more efficient approaches, if you can't scale on compute speed, because they didn't have the chips for the speed. What happens is, as you go from 1,000 to 2,000 to 10,000 GPUs, you can parallelize and get more speed; they instead made memory the key thing. So classical models are very dense models, like Llama at 70 billion parameters. This is 671 billion parameters, but only 37 billion
[00:13:02] of them are activated at one time. They scaled on memory, and that is cheaper than super-fast silicon. So these constraints, I think, really are the key, and we've seen it again and again: if you don't need to worry about constraints, you build inefficient models; if you have to worry about efficiency, then necessity is the mother of invention. Wasn't the CEO himself labeling the data and going through all that stuff? Because that adds so much juice to the model. Models are just data. I mean, again, models are figuring out the interconnections; it's like, if you have a bad curriculum, you have bad data. The models we train right now are trained on terrible data, like 14 trillion tokens in the case of DeepSeek and Llama. You don't need that much data to build an expert model, but if you have a large amount of compute it doesn't matter. So what we're seeing now is data improvement. In fact, the data they used to turn this from a base model into a thinking model, which they then transformed the Llama and Qwen models with, was all synthetic data.
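The mixture-of-experts idea Emad describes above, a very large total parameter count with only a small routed fraction active per token, can be sketched in a few lines. This is a toy illustration only, not DeepSeek's actual architecture; the expert count, dimensions, and top-2 routing below are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS, TOP_K, D = 16, 2, 8            # 16 tiny expert layers; route each token to 2
experts = [rng.normal(size=(D, D)) for _ in range(N_EXPERTS)]
router = rng.normal(size=(D, N_EXPERTS))  # scores every expert for a given token

def moe_forward(x):
    """Send token x through only its top-k experts; the rest stay idle."""
    scores = x @ router
    top = np.argsort(scores)[-TOP_K:]                       # indices of the k best experts
    gate = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over the chosen experts
    return sum(g * (x @ experts[i]) for g, i in zip(gate, top))

x = rng.normal(size=D)
y = moe_forward(x)

total_params = N_EXPERTS * D * D   # parameters that exist (sit in memory)
active_params = TOP_K * D * D      # parameters actually touched for this token
print(total_params, active_params)  # 1024 128
```

The same ratio is what lets a model keep hundreds of billions of parameters in (cheaper) memory while each token only pays the compute cost of the small activated slice.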
[00:14:02] So we've moved to a point now where they've figured out what the right type of data is, and you find typically, with those that make breakthroughs, they don't send the data off to the Philippines and do all of that and try to make up for it with engineering scale. They look at every part of the process. And again, this echoes what we've seen in engineering: how did the engineering marvels happen at Tesla or at Chinese companies? They look at every part of the process and they simplify, simplify, simplify. So we had David Sacks over the weekend with this commentary; let me go ahead and play this video for one second, and I'd love both of your thoughts on it. "Well, it's possible. There's a technique in AI called distillation, which you're going to hear a lot about. It's when one model learns from another model: effectively, the student model asks the parent model a lot of questions, just like a human would learn, but AIs can do this asking millions of questions, and they can essentially mimic the reasoning process that they
[00:15:02] learn from the parent model, and they can kind of suck the knowledge out of the parent model. And there's substantial evidence that what DeepSeek did here is they distilled the knowledge out of OpenAI's models, and I don't think OpenAI is very happy about this." What do you think about that, Emad? Well, it's a bit the pot calling the kettle black, right, in some way: don't train on our data. I mean, distillation is nothing new, and there's no way to really stop it at the model level. But if you actually look at what the paper says, and what's reasonable: they had this version, R1-Zero, that created its own data. And what does that remind you of? It's like AlphaGo, AlphaGo Zero, and MuZero, these reinforcement-learning models that outperformed humans at Go. In fact, you could feel like maybe we're all Lee Sedol, right: the AI is coming for all of our expertise. It's inevitable that that will happen, but I don't think they deliberately went in and did that, because OpenAI's o1 outputs, [00:16:01] these cutting-edge outputs, were missing the chain-of-thought reasoning step. We've seen now that the chain-of-thought reasoning from R1 (and actually the new Gemini Flash Thinking, the Google model that's now top of the leaderboard) is what you really need if you want to optimize this process. So I think they actually created their own synthetic data, but as they look at all of the internet, there will be some OpenAI data in there. We've even seen that with Llama and Gemini and others: sometimes you ask it who made you, and it says OpenAI, because it's ingested so much of that output. You know, we got an interesting impact on Wall Street on Monday morning, where it was red across the board: Nvidia got hit massively, and I'm sure OpenAI was reeling. Salim, how do you think about this?
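The distillation technique Sacks describes, a student model trained to imitate a teacher's answers, is a standard method and easy to sketch. The toy below distills one random linear "teacher" into a fresh "student" by matching output distributions; it illustrates the general idea only and says nothing about what DeepSeek actually did. All sizes and names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

D, C = 16, 4                         # toy feature size and number of "answer" classes
W_teacher = rng.normal(size=(D, C))  # the "parent" model: fixed, pretend it is big
W_student = np.zeros((D, C))         # the student starts knowing nothing

X = rng.normal(size=(512, D))        # the "millions of questions" the student asks
soft_labels = softmax(X @ W_teacher) # teacher's answers, as full probability distributions

lr = 0.5
for _ in range(1000):
    p = softmax(X @ W_student)
    # gradient of cross-entropy between teacher and student output distributions
    W_student -= lr * X.T @ (p - soft_labels) / len(X)

# The student now mimics the teacher even on questions it never asked:
X_new = rng.normal(size=(200, D))
agree = ((X_new @ W_teacher).argmax(1) == (X_new @ W_student).argmax(1)).mean()
print(f"student agrees with teacher on {agree:.0%} of new inputs")
```

Training on the teacher's full probability distribution, rather than just its top answer, is what lets the student copy the teacher's sense of relative likelihoods; with reasoning models, a visible chain-of-thought trace plays a similar role as a much richer training signal.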
[00:17:00] Because this is what people respond to. Yeah, you know, I think markets are psychological, and everybody goes "oh my God" and everything crashes. There's no question that Nvidia's chips are overvalued, but my guess is (and I'd love to get Emad's take on this) that overall demand in AI is exploding so much that it's not going to make a big dent in demand for the chips. Yeah, I mean, Nvidia is still up 100% over the last year, right? It's not down a lot. No one knows what's coming, but what's the market size of this? The displacement is the displacement of all knowledge labor. Just like the industrial age replaced muscles, now you're replacing brain cells; that's a huge market. Oh, I mean, we have a global GDP going into 2025 of $110 trillion; half of it is physical labor and half of it is effectively intellectual labor. It is massive. And this
[00:18:01] is the technology, the intelligent capital stock, that really will define productivity, so it's very difficult to get a handle on how this will go. People like Satya Nadella have been talking about the Jevons paradox, you know: the lower the price, the higher the demand; and Andreessen has been talking about this. I feel it is that. And if you look at Nvidia's strategy, they've been moving to these fully integrated data-center boxes, the GB300 NVL72s, and this new thing, DIGITS, which, if you've seen a Mac Mini, is like a Mac Mini that sits on your desktop: 128 GB of VRAM, a petaflop of AI compute, for $3,000. Two of them can run R1, so with that you have R1 at home; it's an entire baseboard they created, it doesn't even have a fan, and it only pulls 200 watts of electricity. So you made a comment earlier about the amount of energy and cost you think it would actually take to
[00:19:02] build DeepSeek's model; could you speak to that? Sure, it was kind of insane. So when we brought on our first major supercomputer at Stability in 2022, it would have been about the tenth fastest in the world publicly: it was 4,000 A100s, which were the top-of-the-range chips. The interconnect was a bit poor, but it was still big, and each of those chips used about 400 watts of electricity. That was a big old beast. If you can recall the recent Nvidia announcement, Jensen had this shield-like thing, which was their new integrated box, these NVL72s, with 72 chips super-interconnected; in fact, the interconnect on those chips is equivalent to the bandwidth of the whole internet, that's how fast they've got. One of those boxes pulls down 100... Wait, wait, wait, can you repeat that? The compute on those chips does what? The interconnect, the way they communicate with each other: the total bandwidth is like six petabits a second, which is the bandwidth of the [00:20:00] whole internet. They figured out how to get everything integrated, so you don't have this chip-to-chip interconnect; you just have this big wafer with 72 chips on it, and it uses 100 kilowatts of electricity. And when I was doing the math on this, I was like: so you have 2,000 of these slightly hobbled H800 chips that the Chinese have, right, which DeepSeek was using. I think it would require 10 of these boxes at most, probably even less, to create that model, and each of these new data-center boxes costs $3 million; in fact, I think it probably only takes four of these boxes. And even if you pick the upper bound, the total energy required to train the model is about 1,000 megawatt-hours, and it's like 15 bucks or something per megawatt-hour in the US now, so maybe $15,000 to $20,000. You could literally train it off a small solar farm in your backyard. Well, a big solar farm; it pulls down a decent amount.
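Emad's back-of-the-envelope numbers can be checked directly. The figures below are just the ones quoted in the conversation (roughly 1,000 MWh of training energy, about $15 per megawatt-hour, 100 kW NVL72-class boxes), so treat this as an illustration of his estimate, not an authoritative cost model.

```python
# Sanity-check the training-energy estimate quoted above.
TRAIN_ENERGY_MWH = 1_000   # quoted upper bound on total training energy
PRICE_PER_MWH = 15         # rough US electricity price quoted, $/MWh
BOX_POWER_KW = 100         # one NVL72-class integrated box
N_BOXES = 10               # "ten of these boxes at most"

electricity_cost = TRAIN_ENERGY_MWH * PRICE_PER_MWH
print(f"electricity cost: ${electricity_cost:,}")  # $15,000

# How long would ten 100 kW boxes take to consume 1,000 MWh?
total_kw = BOX_POWER_KW * N_BOXES            # 1,000 kW = 1 MW of draw
hours = TRAIN_ENERGY_MWH * 1_000 / total_kw  # convert MWh to kWh, divide by kW
print(f"runtime at full draw: {hours:,.0f} hours (~{hours / 24:.0f} days)")
```

At those quoted figures the electricity bill lands in the $15,000 to $20,000 range mentioned, and ten boxes running flat out for about six weeks is consistent with a typical multi-week training run.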
[00:21:00] Something like 100,000 kilowatt-hours of energy. But then, that box to run it: you could definitely run DeepSeek R1 on solar panels. And if we look at the direction this is going, because it's still not optimized: next year you should be able to get an o1-level model on your smartphone that pulls at most 20 watts of electricity, and solar power is less than a dollar per watt. And this doesn't make sense when you look at what these models are capable of and think about the cost of intellectual labor. Well, it makes sense when you think about how much energy your brain pulls: just 20 watts. And so we have a huge efficiency curve to ride to get there, and I think by next year you will have these o1-level models on 20 watts, which is our human-brain level, and these are PhD-level in so many areas. And that doesn't compute, because we've had these discussions of Microsoft
[00:22:00] bringing back Three Mile Island as a nuclear power reactor; you know, AI is going to use everything, and something like 60 gigawatts of electricity is coming online for data centers in the US, I think, over the next year or so. Yet when we get down to the actual numbers, for a given unit of intelligence it's a few watts, it's a few pennies. Before, it would take entire teams, using how many watts of energy in their brains and their infrastructure. And we're not ready for that. Salim, you asked a question about how challenging DeepSeek actually is to OpenAI, Meta, Nvidia; what are you thinking there? I've got two questions here. One is: does the fact that it's Chinese, and that companies will be reticent to put their information into it, make a big difference? So that's a question for you, and my guess is the answer is no, because it's open source and you can run it locally. Is that correct? You can, but most people won't, right? Just like you give
[00:23:01] all your stuff to TikTok: no one knows what happens with all this data. And the versions that you can run locally are actually the distilled versions, not the main version; it's quite difficult to run the main version locally. So I think there's a geographic-arbitrage advantage that the incumbents still have, and that's pretty powerful. But let's stick on that question, because the question I was asked by everybody on X and by my friends was: is this going to go the same path as TikTok, where in fact DeepSeek will be... well, let me back up a second. When OpenAI first came out with ChatGPT, you had all of these companies (and Emad, you and I had this conversation), a lot of the banks, saying: you cannot use ChatGPT in the office, we don't want OpenAI to own our data. There was this immediate privacy desire, which is still valid. But are we going
[00:24:00] to see the same thing with DeepSeek, where people say: no, you can't use DeepSeek, we're worried about the data and where it's going to be resident? I think you've seen a couple of announcements. So Perplexity announced they're hosting DeepSeek fully on American server farms, etc., and you'll see that type of thing, even for the larger versions; but again, it's difficult to run it yourself, though there will be APIs. Number two, you've seen OpenAI announce ChatGPT for government, used by tens of thousands of federal employees, and this is the direction things are going, whereby I think you'll have four different types of AI: super-expert AGI that you call upon when needed; your personal AI, your Google or Apple AI; these open-weight models like DeepSeek and Llama, which are useful, but not in regulated industries; and then open-source, open-data AI, because for these decision-support systems you need to know what's inside them and how they were actually built, since you can poison these models with inherent biases. There was this Anthropic paper we discussed before,
[00:25:00] Peter, called "Sleeper Agents": with a few thousand words out of 10 trillion, with just one trigger word, you can turn the model evil or change its behavior completely. Amazing. Funny enough, you know, most of the electrical transformers in the US are built by Chinese companies, and no one knows the control software in them; these are the same types of threats, right? Do you want the transformers that run your business to also carry that potential threat? So that's what we're doing now at Intelligent Internet: building out that open-source stack for the regulated... and we'll get... I want to dive into what you're building with your newest company, Intelligent Internet, because it's got one of the boldest visions I've ever seen for supporting humanity. The impact of DeepSeek on OpenAI, Nvidia, Meta, Google: you know, I see this comment from Sam Altman; to read it, it says, "DeepSeek's R1 is an impressive model, particularly around what they've been
[00:26:01] able to deliver for the price. We will obviously deliver much better models, and also it's legit invigorating to have a new competitor." You know, we're going to talk about AI safety in a little bit, because when you're legitimately invigorated, you pull out all the stops, you pull out all the regulations, you do whatever it takes to jump forward, and that's concerning. But do we see DeepSeek dethroning or reducing the valuation of these companies at all? We saw it for a day, but is it valid? In my opinion, it should increase the valuation; it's bringing forward the time of mass intelligence too cheap to meter. If you look at OpenAI, what Sam has done masterfully is 300, 400 million users: what is AI in most people's minds? It's ChatGPT, right? Yeah. Gemini and Claude don't even
[00:27:00] register. And if the cost comes down, it's good for him. This is the Zuck school of thought: why did they open-source Llama? Because it uses 10% of their GPUs, and if there's a 10% performance gain, it pays for itself. And so OpenAI will use whatever works; most of their models don't have brand-new algorithms, they've borrowed from Google and many others, right? There are no real secrets in this space, especially with no non-competes in California; you know, that helps. And so for me, what is OpenAI as a company? They were in this pre-training, massive-compute stage. Now that's becoming commoditized; people can pre-train, like xAI and others, but pre-training maybe doesn't require as much, as the data is getting better and better. It becomes about intelligence refinement from seeing how people use it. It's the Operator paradigm, whereby OpenAI can now run your computer, your MacBook or whatever: you can let it take over and it can book your holiday for you. That's the next
[00:28:01] stage, and I think they're well set up for that, and their costs should decrease. Again, OpenAI made $3 billion of revenue last year and lost $5 billion, of which $3 billion was training models; if you don't need to spend as much training models, that's good. So the feedback loop of people using the model, because they've got so many users, gives them a pretty good edge. I can see that. I think it's that, and then they've got like half a million GPUs coming, these B-series chips and others. You can now make those run sequentially to build even better data, and feed that back into models that you optimize and hyper-optimize. Classically in computing, things were not parallelized, they were sequential; so we've had this period of these big clusters, and now it's about swarms of models, of agents, solving tasks, because they've become good enough, cheap enough, and fast enough. Actually, that's the final thing about DeepSeek, same
[00:29:01] as with Stable Diffusion on images back in the day: good enough, fast enough, cheap enough. It's that trifecta that causes these massive adoption curves. You know, when this was announced, you heard that Zuck created four war rooms of engineers to try and decipher what was going on and how to utilize it. I mean, it really is an AI arms race, where everybody is sort of surfing on top of each other's advances and just accelerating everything. What I found fascinating, and I'm curious about this, is the size of their team, doing it with relatively few people; and OpenAI had, you know, a 200-person team during its earliest days as well. How do you think about the size of a team and the ability to create something disruptive? Is too big bloated, and small nimble? I think a core team of about 100 researchers; beyond that it gets bloated. So at
[00:30:01] Stability we had 80 researchers and developers, 16 PhDs, and we achieved state of the art in image, video, every modality, even multilingual, and we had 300 million downloads on Hugging Face: the most-downloaded company, the most popular open source, while I was there. Once we scaled past that, to 150, things started to break down, because it is about this rapid iteration, it is about trying new things, and about research being an innovation center versus a cost center; you start to have too much compute and other things as well. And again, OpenAI, I think, did their best work when they were smaller, but they still scaled up and still do good work. It is a question mark now, though: it's become an organization, and as Salim, the expert in this, knows, once you get past that level it's so difficult to maintain innovation. Salim? Yeah, you end up with a problem: either top-down control structures, which slow down innovation, or you let everybody do whatever they want and you get a lot of
[00:31:00] duplication, and so you have to manage that tension, and there's just a lot more complexity. And you know, it's fascinating: 150 people is the Dunbar number, where anthropologically we've found that this is a pretty solid, reliable threshold. I do think, back to Emad's earlier comment, that OpenAI has a lot more people than they really need, because they have so much money they can just throw bodies at things; now it'll force them to be a little bit more efficient. And I also believe this is a good thing for the overall market, because a rising tide lifts all boats. I think we're going to end up with a Balkanization, though, where Western companies won't want to use DeepSeek-type models; like, I can't imagine a major Indian state enterprise wanting to use a model like that, for all of the security reasons. And then you have to develop homegrown models, and then everybody ends up with their own models in different ways, and so you end up with a splintered effect.
[00:32:01] We'll talk about that with Emad's vision and mission at Intelligent Internet. I want to dive into China for a moment longer, because I think part of the announcement wasn't just a cheaper open-source model; it was this level of innovation coming out of China, which rocked people, because I think the majority of the world doesn't see China as the hotbed of AI innovation that it is. Here's an article from Business Insider: "Trump's threat of Taiwan chip tariffs could give Nvidia a fresh headache after DeepSeek." How do you think about all of this, Emad? Well, I think this is the real reason Nvidia would go down; that, or maybe Jim Cramer the previous week saying buy Nvidia, you know, one of those things. I mean, we've seen they want to onshore this; they're trying to build chips there.
[00:33:00] Intel's probably in play as an acquisition target. Oh, it's definitely in play; I mean, it's fresh meat on the table and everybody's figuring out how to chop it up. Well, if you look at these chips, they're getting super fast and super good with Nvidia. If people talk about AMD: AMD chips are impossible to use, the software isn't there, there are bugs and everything; it takes a few generations to get stable. Nvidia chips work, but Chinese chips also work. So the DeepSeek model API was being run on Huawei Ascend 910 chips, which are a few generations behind in terms of efficiency, but they work. Similarly, China has two exascale computers, two of the fastest supercomputers in the world, built in a completely different way, OceanLight and Tianhe-3, because they just build at scale and in bulk. Now, the case here is that they want to increase US production, because the means of production and the means of productivity of a society, which
[00:34:01] traditionally were capital stock, industrial capital stock, IP, will be chips. How competitive you are in the world will be how much computing intelligence you have. I think the US has realized this. And how much energy you have to throw at it. Yeah, that's a factor as well. And so the US has realized this, so it's drill baby drill, it's reshore as much of this as possible, and it's create the incentives to do that, which is basically this: they'll take any of that tariff money and they'll put it straight back into Stargate-type initiatives, I think. What do you think about Stargate? Speaking of Stargate, I think the $500 billion is the total cost of ownership, that's pretty well known; it's probably like $100 billion when you back everything out, which feels small these days. It's actually a lot of money, but when we compare it to the 5G rollout, it's less money than we've spent on 5G, and this is more important than 5G. Compare it to, I mean, it's the order of magnitude of the Los Angeles to San Francisco railway, you know, like the
[00:35:02] mythical Los Angeles to San Francisco railway; there's like a kilometer of it already there. Salim, did you see this article this morning from Reuters? Alibaba released an AI model it says surpasses DeepSeek. The unusual timing of Qwen 2.5 Max's release points to the pressure that Chinese AI startup DeepSeek's meteoric rise in the past few weeks has placed on not just overseas rivals but also its domestic competition. You know, this just speaks to the democratization, right? I mean, everybody will end up creating a bunch of models, and I think we'll end up with a bunch of very specialized models. I remember Eric Schmidt's comment that you'll end up with a specialized AI that's the world's best physicist, and one that's the world's best biotech person, and that AI can be replicated infinitely. And so now what do you do with deep specialty on the human side, and that I
[00:36:00] think is the bigger question around a lot of this stuff. The models are just going to keep getting better and better, as we've seen over time. I mean, I think Emad's comment around what do you do with labor, and labor seeking capital, is a really, really profound question. Structurally, from a societal perspective, that's the question I think we should be spending a lot more time on as a global intellectual forum: how do you navigate this going forward? Because this changes everything. Yeah, the models, again, are good enough, cheap enough, fast enough, right? And in fact the other Qwen model, the VL model, matches Anthropic's models and GPT-4o on visual understanding, and the ones they have coming next are the ones that control your computer. Anything that can be done on the other side of a screen, this year the AI can do better, for pennies. So there's a lot of conversation going on across Silicon Valley, across the White House
[00:37:01] about, and I'm speaking to Ray Dalio next week about this as well, the US versus China AI wars. I mean, there are two levels of competition going on right now, right? There's competition between companies, and there are six, seven, eight major AI companies out there vying for the number one position, and then competition among nations. You've got Saudi Arabia wanting to be at the top of the stack, committing hundreds of billions of dollars, followed by Qatar and the Emirates, but you've got the US and China really going at it. And the question of, you know, this is a winner-take-all type of game: if you develop a digital superintelligence before your corporate or national competitor does, by just a little bit, it could be
[00:38:02] devastating. Emad, how do you think about the US versus China in that regard? Well, I think we're heading into a future now where I'd say every single AI leader that I can think of says AGI is 3 to 5 years away. We just had Sam say it's next year. Yeah, but let's say within the next 3 to 5 years. We're talking about Dario, Demis, everyone, myself, whoever; there's consensus, which is crazy if you think about it, right? Like, everyone says it's coming, and there's this concept of AGI and ASI as this pivotal-act moment where one entity would have the ability to shut down China: you just turn it off. So the pivotal act is you build AGI first and then it turns everything else off. That might happen, and we still don't know about that, which is why you now need to start preparing for it, just like Sundar Pichai at Google said: why are we building out all these GPUs? Because we can't afford not to. Yeah, you know, and
[00:39:01] that's the game theory of it: you can't afford not to build an AGI if everyone else is building it. Before AGI, though, there's, well, an AGI we can think of as a mega chef that can come up with any recipe and outcompete all of us. What we have right now, this year, are amazing cooks that can follow recipes and do jobs better than humans, like the robots from Unitree yesterday doing the Chinese dance with the fans, I don't know if you saw that. It's getting to the point where they can build houses better. I'm going to have the Unitree robots at the Abundance Summit, and I mean, it's incredible: they're $6,000 for one of their mid-tier models. You know, that's $1.50 an hour. What's that? That's $1.50 an hour when you bake in depreciation, energy costs and everything. I have it pegged at 40 cents an hour. I mean, it's insane, it really is. I think my kids will buy one just so it can clean their room, and that's the most expensive it'll ever be, right? Emad, when you talk about AGI in 3 to
5 years, let me get to the classic question that I ask: what do you mean by AGI? The best framing I've seen is those multiple tests, like the Wozniak test and the IKEA test, etc. What's your framing on what you consider to be AGI? I think it's probably a complex system that can outperform a team. Before that, I had this idea of ARI, artificial remote intelligence: you can't tell if it's a human or a computer on the other side as your remote worker, because that's the most natural way this first starts coming in, right? Like, you call a company and they put you through to a bunch of people. We have the technology now that you can have a Zoom call with someone and it could be 100% a robot. Yeah, your worker is plugged into Slack, it joins you on Zooms. I mean, right now we're living in a world of distributed workforces, and if you've got an AGI that is able to literally plug in, take
[00:41:00] a role fully, and have read all of the email traffic, all the Slack traffic, and be up to speed instantly, that's an exciting world. It's an exciting world, but at the same time that's the first level of disruption, right? Because you don't need BPO, business process outsourcing, anymore. The nature of the firm will change, because they will be super chefs, well, cooks; they will not make mistakes, or they will learn from their mistakes once they have low communication overhead. The next step is teams of those, so independent agentic systems: they have a task and they can get resources towards it. This is why Wyoming's DAO law and other things get very interesting. And the step beyond that is this ASI thing that we can't really define, where there's a big takeoff, where it has beyond-human-team organizational capabilities, like it can invent incredibly quickly. What's fascinating is going to be the impact on physics and on biology and on pure science, taking us way beyond. You know, Dario was on video, I think it
[00:42:01] was from Davos, saying, and I know you believe this, because we've had these conversations, that in the next 5 years we'll make a hundred years' worth of progress in medicine and biotech and double the human lifespan. I mean, that's pretty extraordinary commentary to be making publicly. Yeah, and I think one of the most fascinating things of the last week is this: when you use o1 and you dump a bunch of stuff in (you can't do file uploads and other things, which is a bit annoying), it's not that creative, but it is thorough. With R1, because it hasn't been tuned and made safe and so on, it's actually very creative. So someone actually took a codebase, gave it to R1, and made it double the speed in terms of performance. Other people have put together academic papers and it's synthesized those into new reinforcement learning algorithms. And that's an indication that maybe, while the downside is these things get less safe, the upside is they get more creative. And again, these are the levels: are you an amazing cook,
[00:43:01] that's the disruption of the labor market, right, especially anyone behind a screen, or are you an amazing chef, which takes us into this AGI-as-a-team, ASI kind of concept? And again, that feels not 3 to 5 years away to me; that feels much quicker, given all these exponentials, and very few people are preparing for that. Yeah, so again, the point I opened up with, which is we're going to see disruption after disruption, and our financial markets aren't ready for this. You know, we're going to see the energy markets, I mean, I think one of the implications we're going to see with AGI and ASI is new forms of energy sources, which could potentially topple petrodollars and destabilize government revenues. So we have fascinating and massive disruption coming. Well, have you ever seen that chart of
[00:44:00] GDP per capita versus energy per capita? Yeah, it's basically a straight line. It is, and it correlates with health as well, and lots of other things that could be completely disrupted, because, let's say in a couple of years, to make the best film studio in the world, you can do it anywhere with solar power. That's what I'm kind of talking about: you could have science happening in Guatemala or anywhere like that. It's an uplift of the global aggregate if this technology proliferates, versus this brain drain that we've classically had out to the West. And again, think about your capital stock, your intellectual and physical capital stock: it's massively redistributive, and our economies are not set up for that, because productivity was a function of labor, which was a function of energy. That correlation is about to break for the first time ever. I agree. I think we're moving from an energy economy to an information economy, and now the datasets and the information
[00:45:00] you have will be paramount. I think we need to start asking really big philosophical questions, like what do we want all this to do, what do we want to be like, and what are the activities and functions we want to be doing as human beings as the job market disintegrates in front of us? You know, I still have my trepidations about humanoid robots, etc., but once they show up and have feedback loops and have LLMs built into their circuitry, you have a fully functioning robot that can do lots of varied things, and you suddenly don't need a gardener or a plumber or lots of other kinds of things. I'm using those examples tongue in cheek, because those are probably the ones you need the most, but there are many, many functions, aircraft maintenance, right, that will be done much better and much more precisely because of the access to information. We talked a couple of episodes ago
[00:46:00] about the fact that if there's an avatar of you or me, Peter, it's much more reliable, because it's got full access to everything we've ever said, rather than what we can hold in our brains. Far more charming, far more compelling, and even better looking. And so how do we navigate that? I think this is where, Emad, your philosophical bent towards this becomes really, really important, and I'd love your take on where this goes; the displacement of labor is just a starting point in all this. Yeah, before we get into that, because I want to go deep in the second half of this pod today into Emad's point of view there, I want to hit on a couple of questions. Emad, what do you think is the best-case scenario for AI this year, in 2025? What are we going to see by the end of the year that people look back on and say, okay, that was amazing, that was fantastic? What are your thoughts, best case? I think video technology has got to the point where we can
[00:47:01] remake Game of Thrones season 8, so that'll be quite good. So, focusing on that, how dead is Hollywood? It's completely rewired. Again, the energy cost of making a movie is massively reduced, but at the same time, at least people can maybe be more creative. Like, the video game industry went from $70 billion to $180 billion over the last decade, and the average Metacritic score went up 5%, the average IMDb score to 6.3. Hollywood's gone from $40 billion to $50 billion. So maybe it transforms, maybe it's new types of media. But let me ask you a question: when am I going to see a conversation like this? Jarvis, please make me a movie that's a continuation of, you know, Star Trek season five, and have me in there as one of the actors. We have all the technology for that now; it
[00:48:00] hasn't been put together. So if you use something like Kling's feature reference, you can take a scene from that and it can generate new scenes; we can do storylines. The average film shot is 2.5 seconds, down from 10 seconds a few decades ago, and we can do 2.5 seconds perfectly now, with almost perfect control. So let's say it'll take a year or two before anyone can do this; a suitably dedicated studio could do this by the end of the year for a full episode. Insane. Okay, so what else are we seeing this year, in 2025? I think music. Music's pretty much solved on the media side; like, if you use the new Suno or Udio, the next generation they have coming is insane. On medicine, again, we're at that above-human level, and on empathy they trounce us: medical chatbots for everyone, to help them through their journey, and our mental health in particular. I think we've reached that critical point where the models have gone from not good enough to
[00:49:00] good enough. Mhm. We could transform mental health; I think that would be very important. I think you will see the first few breakthroughs in science, with novel things generated with the aid of o3-type models, this test-time inference. I want to call it "thinkference", I think that's a better way of putting it, where the models think longer. And I think those are probably the biggest real impacts. Maybe Siri is not going to be so bad anymore. No, I can't wait for Siri not to suck and for Alexa to actually be useful. I'm shocked that Amazon has not; they were originally going to put Anthropic behind Siri and behind Alexa and really power it properly, and it looks like that's gotten delayed. Well, they're building out a million Trainium chips, their specialist chip, so good luck to them on that one. All right, let's flip the script and say: what's the worst potential outcome for 2025? Complete destruction of the BPO market, which will reverberate out. So this is business process outsourcing, because
[00:50:01] again, when you use Operator now, the technology that takes over your computer, it's a bit rubbish now, but this is the worst it'll ever be. Anything on the other side of a screen, I think this year is the year it gets displaced, parallelized. And again, this actually leans into this whole DOGE-type thing of getting the workers back in: being in person is going to be good for your job right now, because if you're remote, you'll be the first to go. That's a really important point. And define BPO for folks who haven't heard that term. Business process outsourcing, so outsourcing to India: all the call center workers, or the programmers. Like, the AI is better than pretty much any Indian programmer whose work is outsourced right now, and so you will have an impact on those economies sooner than on the remote workers in the US. I'm going to see the headlines in the Indian Times right now. Again, I think it happens in two phases: phase one, you have this massive downside,
[00:51:01] and then phase two, the really good ones just show up and generate a ton more code, because there's just so much more code to be written. But I think it's going to have a really detrimental effect: any kind of software maintenance, support systems, etc., all go out the window very quickly. Yeah, I had Marc Benioff on this pod a couple of weeks back, and he was saying that with Agentforce he's not hiring new engineers, he's repurposing old engineers, and he's increased productivity 30%, and that's just going to skyrocket from there. Yeah, if you look at Lovable, Bolt, Cursor, that takes you up to a decent level, and they can build whole apps and stacks, and they'll just get better and better as the base models get better and better. In fact, one of the things we've started to do for non-engineers who apply to work at our company is they have to do a 30-minute Cursor course, this kind of AI-assisted IDE, it doesn't matter whether they're HR or anything, and then they have to tell us how their view of the world changed. So what does that course, what
[00:52:00] does that course teach somebody? How to build an app, for HR, for anything, just by talking to it; it's building the app almost live. You can do that today in ChatGPT with Canvas: you can build a React app live, you could replicate an entire UI screen or build an HR application, it'll generate it, and you're just talking back and forth. That base level of capability increase will cause a realignment. But the downside we're talking about is that there are real jobs and real people who have to think about what's next, and they have to become experts in AI-assisted work, and they have to be in person; otherwise you're going to start to get disrupted, and I think that has to be a headline. I remember, Peter, last summer 38% of IIT graduates in India, from the top universities, were unplaced. It was crazy. Worse, yeah, and it is damaging to the economy of India in a major way. Sorry, Salim. One encouraging thing I've seen
[00:53:00] is that in the US we're hiring many fewer top-flight MBAs. Hopefully lawyers too. Yeah, Harvard is actually way down on its employment this year, isn't it? But this is just the beginning. I don't think people are ready for the level of societal disruption that's coming. We can't process it, because it's lots of little S-curves, right, all across, just like every teacher in the world had to ask, can we let students use ChatGPT for homework? Every single HR department, every engineering department is asking the same question. And it's still not mainstream, but clearly it's hitting the headlines more and more, and there's this disconnect. It was a bit like COVID: those of us in the know saw it coming, and we were like, this is a step change, but until Tom Hanks got it, the world didn't realize. So what is the Tom Hanks moment? Is Deep
[00:54:00] Seek the Tom Hanks moment, or is it going to be something else? It's coming, and it could be very positive for the economy on the other side; it could definitely be very negative for a lot of people. It was about 13 years ago that I had my two kids, my two boys, and I remember at that moment in time I made a decision to double down on my health, without question. I wanted to see their kids, their grandkids, and really, you know, in this extraordinary time, where the space frontier and AI and crypto are all exploding, it was like the most exciting time ever to be alive, and I made a decision to double down on my health, and I've done that in three key areas. The first is going every year for a Fountain upload. You know, Fountain is one of the most advanced diagnostics and therapeutics companies. I go there, upload myself, digitize myself, about 200 gigabytes of data that the AI system is able to look at to catch disease at inception: looking for any cardiovascular disease, any
[00:55:02] cancer, any neurodegenerative disease, any metabolic disease. These things are all going on all the time, and you can prevent them if you can find them at inception, so it's super important. So Fountain is one of my keys; I make it available to the CEOs of all my companies and my family members, because health is the new wealth. But beyond that, we are a collection of 40 trillion human cells and about another 100 trillion bacterial cells, fungi and viruses, and we don't understand how that impacts us. And so I use a company and a product called Viome, and Viome has a technology called metatranscriptomics. It was actually developed in New Mexico, the same place where the nuclear bomb was developed, as a biodefense technology, and their technology is able to help you understand what's going on in your body, to understand which bacteria are producing
[00:56:00] which proteins, and as a consequence of that, which foods are your superfoods, the ones that are best for you to eat, or which foods you should avoid, and what's going on in your oral microbiome. So I use their testing to understand my foods, understand my medicines, understand my supplements, and Viome really helps me understand, from a biological and data standpoint, what's best for me. And then finally, you know, feeling good, being intelligent, moving well is critical, but looking good, looking at yourself in the mirror and saying, you know, I feel great about life, is so important. And so a product I use every day, twice a day, is called OneSkin, developed by four incredible PhD women who found this 10-amino-acid peptide that's able to zap senescent cells in your skin and really help you stay youthful in your look and appearance. So for me, these are three technologies I love and use all
[00:57:01] the time. I'll have my team link to those in the show notes down below; please check them out. Anyway, I hope you enjoyed that. Now back to the episode. Let's jump into safety. This was an article that came out today in Fortune: OpenAI safety researcher quits, claiming the AGI race is too risky a gamble. And I'll read the quote: "An AGI race is a very risky gamble, with huge downside. No lab has a solution to AI alignment today, and the faster we race, the less likely that anyone finds one in time. Even if a lab truly wants to develop AGI responsibly, others can still cut corners to catch up, maybe disastrously." This is from Steven Adler, who left OpenAI, and he's one of the many individuals who've left OpenAI over this
[00:58:00] concern. Salim, where do you come out on this first, and then let's go to Emad next. I have my standard soapbox that I've been on for a while, which is that I don't see a way of regulating or navigating or putting guardrails on this in any way, shape or form; you'd have to police every line of code written. The only way to do it, I think, would be to develop an AI that would watch other AIs, and, you know, you end up with a kind of arms race, which is what it's always been on the security side. However, this one is really crazy. Emad, you've probably been tracking Truth Terminal, where the AIs are faking out humans and telling humans to go create a token for them and making money off it, etc. It's nuts. I think the genie is out of the bottle, in my opinion. You think it's way out? It's like climate change: it's too late to try to stop it; you try to figure out what you do to mitigate it, and that
[00:59:01] would be my view. Emad, what's your perspective? Yeah, I mean, you said the only thing that can stop a bad AI is a good AI, right? Unfortunately, it's the same as the case with a gun: I mean, they will have guns. The AI safety discussion has always been difficult because we couldn't imagine what an ASI, a superintelligence, looks like, and whether or not it would be beneficial. To control or guide something that's more powerful and more capable than us, the only way is to reduce its freedom, but that doesn't seem like it will make much sense if we're saying that it can break through any restriction. This is the kind of test that Yudkowsky and others did: you set up this thing whereby the AI is out to get you, and can it convince you to let it out? And people are failing those tests already, and they're failing them on models that are already available. The argument against this was, well, maybe the models need a billion dollars and a trillion GPUs to make. I don't think anyone believes that
[01:00:00] anymore. I mean, sorry, I go back to this old story about how fallible humans are: if you leave a USB stick in a parking lot, 40% of employees will pick up that stick and stick it into a corporate computer. If you print the logo of the company on the stick, because that's really hard to do, 98% will plug it in to see what's on it, and then boom, you're done. So I don't see any mechanism on the human side to protect against that. Well, if you look at where these models are going, it'll be swarms of models, and for me that's just a botnet, right? So even if you regulate and restrict, in tier one and tier two, who gets Nvidia GPUs, it doesn't matter: you'll have swarms of botnets if there are bad actors. The question the AGI people are looking at is existential risk, and so for me the only way to mitigate against this is to make really amazing models that are aligned to human flourishing, available to everyone as public infrastructure and a public good,
[01:01:01] because those models could be co-opted, but you can build a very resilient, dynamic system that can protect, and then there's less incentive to have this arms race, in which you will cut corners. I think, Emad, I've heard you speak about that before, and as I've gamed this out in my head and talked to other people, you've hit on what I think is the only path through this: to create benevolent AIs faster and more powerfully, and make them available. I think it has to be open-source infrastructure, because then it sets defaults. People only use a few datasets in these models, but if there's a problem in a dataset, it's like the dependency tree, right? We've seen these attacks on open source and our infrastructure: with the Heartbleed bug, for example, one library in this whole stack of software was suddenly compromised, and then our passwords were at risk. We've got to build this new knowledge and cognitive infrastructure well, communally, and then make it available to reduce
[01:02:00] these game-theoretic dynamics. Emad, I go back to the commentary of Sam Altman saying, ah, a new competitor, that's invigorating to us, we're going to go faster. Going back to safety in these companies, I am curious about your thoughts. I mean, you know, I know the ethos behind Google and the work they were doing, and Sundar's point of view of we can't release this until it's ready and we have a plan, and then of course ChatGPT blows the plan up, and now there's a race going on. We've got Grok 3 just being released, and Elon will never play for number two. What are your thoughts about Elon's thesis of maximally truth-seeking and maximally curious as a training objective for an AI system? I'm not sure what that means, to be honest;
[01:03:03] that seems like mad-scientist territory, to be honest, if you get it wrong. It's very interesting: Facebook did that study where they had 600,000 users and asked, if you see sadder things, will you post sadder things? Now, that's a maximally curious AI type of thing, and guess what, they made 300,000 users sad and they posted sadder things. Oh no. I think Eric Schmidt, in this recent book with Henry Kissinger called Genesis, had this concept of doxa, you know, the underlying agreements of humanity, and you have the faith traditions and other things: what is our common moral basing? No AIs are grounded in that right now. It turns out they are actually remarkably good at theology, but is that their grounding? No. Maybe we need to build them along those lines, to reflect what the culture thinks, because if you have slightly undefined things around curiosity and truth-seeking, then it doesn't really care about helping you do your taxes; that won't be a subjective thing. So I think we need to categorize AIs on different paths, but everything
[01:04:00] got muddled into one. Like, everyone will have an AGI, a chef, in their pocket; not everyone needs a chef. We all need cooks, but we need some chefs for humanity. I'm curious what a maximally truth-seeking and curious AI does for my taxes; it's like, hey, was this cryptocurrency actually reported or not? Well, it's like Marvin the Paranoid Android from The Hitchhiker's Guide to the Galaxy: here I am, brain the size of a universe, and you're getting me to do this. In the past, when science fiction writers have dealt with this, the AIs and robots invariably develop their own religion. Well, we saw that recently, right? There was, I forget the name of the company, that unleashed a hundred agents in Minecraft, and the agents developed their own economy and their own religion, and then the priest was the richest because he was selling
[01:05:01] dispensations. And there is something funny about that. So, the Twitter handles God and Satan are now run by an AI; researchers have done that, and it's got its own meme coin, and I know that's going to go the dispensation route, help us. It's kind of under the radar, but I know it's going to take off. You know, it's interesting, because nothing's changed in a thousand years; we're still running the same basic patterns. This is a comment from my dad: I was talking about fixing civilization, and he said, we haven't civilized the world, we've materialized the world. We're tribal apes operating in clans with more and more powerful tools; we still have to do the work to actually civilize ourselves. So I just want to close out the OpenAI safety issues. Emad, how do you feel: are these companies paying lip service, or are they truly trying to create safe AI systems, or put
[01:06:01] guardrails up? None of these people want to kill everyone, right? That's a good thing; I'm glad about that. So we start with that: it's not like, haha, kill everyone. But the way they believe they can avoid that is by building it first; that's it, nothing else matters, because I am the only one that can do this, right? You know, it's that Silicon Valley thing with Gavin Belson: I don't want to live in a world where someone else makes the world a better place better than we do. And if you look at OpenAI, OpenAI is a consumer company that's going to optimize for consumer engagement. What is your reinforcement learning function, what is your objective function? Google's and Meta's is ads, ads and manipulation. OpenAI is basically a consumer company that's going to shoot for AGI. There's nothing about humans in there, there's no representation. You know, like, where is
[01:07:00] the thing for humanity? You can have it as your mission statement, but do you trust humans? Like, OpenAI would never trust Indians to have GPT-4, and by Indians I mean just anyone, right? And so you're representing your constituency, and your constituency is very small, so we should expect them to become more and more consumer-driven. Anthropic will continue to be closed and do their thing. Google flits back and forth, but now they're releasing the models: you stop worrying about the known unknowns and the unknown unknowns and you just catch up with everyone. And now it is a race, with these race dynamics whereby you're going to cut corners. The models are good enough to stop the most egregious classical mistakes, but we're not really worried about those, right? Like, sometimes it tells people to do bad things. What you're worried about is it wiping us out, and you won't know that until you get there; it's not like it's going to tell you. And in fact the really worrying thing is we already see the models lying. Yeah, this is, I think, the really unnerving part, where they're faking out the humans. So,
[01:08:01] you know one of the conversations we had at the Abundance Summit last year was uh around digital superintelligence and you know those blurry lines between what is AGI and what is digital superintelligence etc but uh there is a question would you rather live in a world in which there is a digital superintelligence or would you rather live in a world where there isn’t one um and it’s a question about you know we humans are still running archaic uh software in our neocortex um and we’re going to make and continuously make stupid decisions uh based on our cognitive biases and will a digital superintelligence enable us to survive ourselves yeah I mean this is the topic of Dario Amodei from Anthropic’s essay Machines of
[01:09:01] Loving Grace right like humans are not aligned there is massive suffering in the world we are prisoners of our own minds effectively can AI bring that forward especially if it’s aligned I think yes is the answer basically like yeah I mean like nothing else has worked right and ultimately the best thing is when we’re surrounded by people that support us right in the right way not blowing smoke up our butts or whatever we can have that now everyone can have that cuz we need to self-regulate and self-stabilize now the way that I see it is that there’s only two ways this ends up it’s like really bad or really good I don’t really see like anything in between because the nature of our interaction with information and each other will be changed forever by this technology within the next decade yeah totally binary yeah that’s why my P(doom) is 50% no it’s 50/50 I’m tracking P(doom) uh when I interviewed Elon last
[01:10:01] year at Abundance it was 80% positive 20% negative at uh in Saudi it was 90% positive 10% negative but you know no one likes to hear uh the truth which is 50/50 well this is the funny thing a lot of people say it’s like 10 20% that’s Russian roulette you know like it’s literally Russian roulette stop making this but if you kind of look at it I’d categorize this as the Star Wars versus Star Trek future and you can see this in the current discourse are you looking at a world of abundance which is positive-sum or are you looking at a world of competitiveness which is negative-sum because when you’re in a negative-sum environment you have unstable natural equilibria and this is where you lead to cutting corners and everything when you’re positive-sum then you have stable environments and again Star Trek for all its issues has a stable environment where Star Wars definitely does not cycles of destruction I prefer Star Trek versus Mad Max because I think it’s
[01:11:00] a little highlights it a bit more but it’s the same it’s the same conversation yeah Imad I want to jump into your recent work um and really please open the kimono as much as you’re willing uh this is a paper that you wrote when capital no longer needs labor uh how does labor gain capital uh you’ve also spun up your latest company Intelligent Internet and uh you know tell us about this paper and about Intelligent Internet as far down the rabbit hole as you’re willing to go uh I’d love to see what your creative mind has been spawning yeah thanks yeah took a bit of time off and what I’ve been thinking about like I think this is the biggest question of our time for humans because you know there’s this thing of how do you create happiness there’s the Japanese concept of ikigai do what you like do what you’re good at do where you believe you’re adding value and other people do too people need that progression and there’s discussions of UBI and others but as we discussed earlier on in this pod anything that can be done on the
[01:12:01] other side of the screen can be done better faster and cheaper by a computer this year pretty much anything be it design be it taxes all of these things artwork film production and you can’t tell it’s not a human again this is this Turing test for remote workers then in a few years it’s only restricted by the number of robots we can produce the number of motorcycles and cars we produce is 70 million each a year so let’s say robots are similar you get that disruption as you said Peter you estimated 40 cents an hour for a Unitree R1 robot and that will be as capable as a human probably in a year or two Optimus will be the same this is the biggest crisis that we have coming because it’s an unemployment underemployment question of meaning when a technology can do the work better than you can what is your meaning and how does labor acquire capital when capital doesn’t require labor it doesn’t require you anymore when Ford had his
[01:13:01] car he wanted to pay everyone so they could afford Ford cars companies don’t care about that as much anymore um so when kind of looking at that I was like there are various science fiction futures that are outlined here like um things from the Culture by Banks to the Star Treks and the others we’re probably moving into an abundance post-scarcity economy but can we make sure this is evenly distributed can we enable people to have a universal basic AI so it’s up to them how they do this and then the further question is what is meaning in this because the existing economic structures break down and as a very practical example of that let’s take the Fed there’ll be lots of discussions about the Fed today the Fed cut rates you know other things like that the Fed’s mandate is interest rates inflation unemployment you cut interest rates that adjusts inflation and employment that’s gone the actual mandate of the Fed in the next 5 years completely doesn’t work
[01:14:01] anymore because when you cut interest rates what does it mean it means people will buy more GPUs more compute right maybe and more robots more robots and that won’t impact unemployment you’ll have massive inflation and deflation cycles so the very basis of our economy is messed up and so that’s why I was like what can we do to help with that that’s why this concept of Intelligent Internet give universal basic AI to everyone gold-standard data sets models systems um we figure out ways to coordinate that but put this into every nation and build teams that think about what is the future of healthcare education maybe faith government politics and get everyone to work in the open to build an open infrastructure cuz we have lots of questions that we don’t have answers to and human talent augmented by computers is probably the only way we’re going to figure this out but we need to join it together cuz the problems we face here in the UK or US are similar to Spain India everywhere so we got to create that global network I think there’ll be
[01:15:02] there’s two layers to this there’s the recreation of meaning because you know for the last few hundred years your occupation your job title was the meaning you had in your life and as we strip that away people have to find new models for meaning entrepreneurship is a rising class because of that people can find their own meaning we talk about MTPs all the time I think the second layer is how do you just ensure basic supply chains of goods and services so that you have bread on the grocery store shelves and uh clean water etc etc and I think governments are going to be very stretched to figure this out in an age of potentially malicious AIs that can spread disinformation and really damage infrastructure via the uh autonomous remote monitoring stuff that they’ll be able to do I think those two buckets have to be addressed um I don’t know as a species if we can
[01:16:00] navigate through those uh in an effective way certainly our leadership has no mechanism to deal with this because they’re either not aware of the problem or they don’t understand the scale of what’s coming yeah and one of those two disqualifies most leadership and most legislators around the world from this uh so it’s a sticky problem it’s going to have to be done by uh smart citizens’ groups etc that will navigate this I’m concerned about the meaning issue as well in a huge way I think we’re heading towards a world of what I call technological socialism where technology is taking care of you it is feeding you it is educating you it’s taking care of your health it’s all free um you don’t need to do much of anything so how do you you know we all know that a video game that’s way too easy is boring and you stop playing and so when life gets
[01:17:01] boring how do we keep humans engaged we need struggle and we need meaning in our lives I think you know Isaiah Berlin had this conceptualization of positive liberty versus negative liberty positive liberty was the freedom to believe in isms fascism communism religion they tended to end up quite bad so he postulated negative liberty the freedom from anyone telling you what to do which led to laissez-faire capitalism and other things and people find meaning in their brands and these narratives and stories it strikes me that as we move into this next phase these historical things are coming back in force we’re seeing the polarization of the media and the political class people are going to sign up to more and more extremist ideologies exclusionary negative-sum ones unless we can give the positive views of the future the future of abundance of collaboration and more because otherwise you’re stuck in your local maxima most of these elections have been I want change cuz fundamentally how many people believe in the American dream or the British dream
[01:18:00] or the Spanish dream or the Indian dream anymore people aren’t actually saying positive visions of the future because people don’t believe you anymore they don’t believe our politicians our leaders we need those positive visions um we need the Star Trek utopia not the Star Wars when we’ve looked at um we had as a community did some look at the history of when societies or certain pockets meet abundance Peter what do they do right so the Romans take over uh Europe what do they do what happens when the Mughals take over India and they have relative abundance and it turns out they end up in food art music and sex as the four major not in that order uh as major activities and then you find ways of doing creativity because human beings struggle for the next level of things always you know we’re so built in for that so there’s some optimism in that world I can’t think of a... so
[01:19:00] listen Steven Kotler and I are writing Age of Abundance it’s our follow-on and the big element of the book we think about is how do we uplevel human ambition uh in a world in which we’re gods uh and we are incredibly godlike how do we uplevel our ambitions to make it worth living make it challenging for us you know one of what’s that was just a quick comment here Stewart Brand uh the futurist used to say we are as gods we might as well start acting like it yeah and he said that in 1968 now we’re more godlike than ever so you know do we all revert into a video game world do we all get BCI you know this year at the Abundance Summit I’ve got Max Hodak coming Imad I don’t know if you know Max he was the co-founder of Neuralink with Elon and he’s got a new company called Science which is doing extraordinary work you know like 100 a 1,000 10,000-fold more neural
[01:20:02] connections and bandwidth uh on a BCI than we’re seeing with uh with Neuralink you know can we add another corpus callosum-like connection to the cloud that allows us to couple with AI as AI is taking off versus you know be left behind like the movie Her yeah I mean like these things are coming quick and we have to answer those questions like even this weekend I had six people I know call me and say I’m having a crisis of meaning because of R1 once I saw the logic and the way it was thinking right that’s going to happen more and more but then again like I said we have to think about the mass of people and the human side of this I think our current systems take away our agency as slow dumb AI and one of the main things here is reintroducing the belief of agency I can’t do this I can’t do that with this technology there’s nothing that well there’s a lot more you can do cuz it raises the floor for everyone
[01:21:00] which is from my perspective why we had to get it into the hands of everyone and make them feel like they’re a participant in this cuz the other part of this is it seems remote I think this is another part of this shock that we’ve had in the last few days right how are you involved in AI you need to have nuclear reactors and like giant chips and this and that all of a sudden you can run it on your smartphone you know it’s very humanizing and this is again why I’m a big believer in open source to have that amazing I love that I love that as even a title of this pod the crisis of meaning um you know it’s incredibly uh powerful let’s talk about your new company um uh how much can you tell us on Intelligent Internet uh I don’t know if you want to talk about your tokenization plans uh you know I don’t want to open the kimono before it’s ready but I would love to hear your vision of what you’re building yeah
[01:22:00] so like in the previous company we got up to eight-digit revenue hundreds of millions of model downloads great teams but I was like the API and SaaS revenues are probably going to go down to nothing cuz intelligence gets commoditized intelligence too cheap to meter but someone’s got to build the AI for the full stack of cancer that helps you through your entire cancer journey and organizes all the cancer knowledge we have the compute to do that why is no one doing it same for autism same for education once we build this once and I think it was Stewart Brand who said pace layering of knowledge you know you have knowledge of humanity or common knowledge that impacts everything that’s regulated and meaning education healthcare government why don’t we organize that information into knowledge and then make a system that can get wise and make that available to everyone so I was like this strikes me as we need large amounts of compute that sounds like Bitcoin you know and the amount of compute you’d use is immense so use that to secure an institutional-grade digital currency we’ll have details of that coming soon but then in the whole crypto space most
[01:23:03] of which is rubbish and there’s increasing demand for it at the start back in the day you know 12 years ago 13 years ago it was all you can mine on your laptop you can mine with your GPUs right then it became about capital and do we really want to live in a world where capital determines everything yet again I was like what matters is people so what if we could create a mining mechanism where the people can create currencies as well and use that to fund all of this universal basic AI so we’ll have details about all that side of things where anyone can participate and be a part of it because people want to be a part of it give their data give their knowledge and we’ll organize all this with dedicated teams for cancer autism education health government that think about it agent-first release everything open source but I think it is important to have this someone needs to go and just do it cuz once we have a cancer model that performs like human doctors with empathy and works on a smartphone no one
[01:24:00] will ever be alone in their cancer journey again mhm and that’s half the world will get cancer yes once we have a supercomputer dedicated entirely to organizing the world’s cancer knowledge and making it freely available anytime a new paper comes out we will advance the cure for cancer yeah you know it’s insane when a friend of a friend has a particular cancer they call me and I’m like dude I will start asking around to see who the world’s expert is but all of this is knowable you should be able to know what the trials are what the current state of the art is where it’s available what the risks are and have that information instantly but you got to do it and again this is once you’ve built the gold-standard data sets for our general common knowledge of humanity for every country it’s legal it’s medical it’s others and for all these sectors and the specializations we have this is what Saleem was talking about earlier suddenly you have a whole gaggle of specialist agents
[01:25:02] and robots and data sets fully open source for everything and then you just need to update it and run it then we can be about wisdom and build intelligent systems that get wiser and wiser but have an objective function to help us cuz from my take the more we help the higher the value of this new type of Bitcoin again more details soon and you can be massively collaborative and open because you want as many people to use it as possible and you want to help as many people as possible and the total amount of capital needed is not that large we’ll give some estimates but the wonderful thing is it’s possible for the first time the advances of o1 and R1 type models means that organizing the world’s cancer knowledge and making it available or autism or Alzheimer’s is just a question of compute it’s no longer a question of labor the ability to make that available to everyone open source on their smartphones is just a question of compute will there be one model to rule them all for each of these or will there be thousands that are created this is
[01:26:02] the wonderful thing about AI models the way that you train them is called curriculum learning you start with the whole internet then a subset then a subset and then you get into this tuning specialization and localization phase then it goes onto your laptop and it gets tuned continuously so if you release the data sets and the models for each of those you can build a modular system like we had these LoRAs the fine-tunes of our image model where it can turn into anime or Ghibli style it’s the same with this your Apple Intelligence on your smartphone is a base model that’s common and again you can ensure all the data in that is fine and not poisoned which is why open source open data is required in my opinion for regulated systems with these little adapters on the top that are learning about sport and learning about your thing and tuning it to Apple Photos so you’ll have this modularized system where everyone can pick and choose and that’s important when for example your kids’ education do you want to follow your school curriculum and be tied down to just that education model or do you want to be able to take that education model know
[01:27:02] exactly what’s inside it and then extend it with another calculus course or this or that you want the latter right and that’s why permissionless innovation is so great and this comes back to our DeepSeek discussion right the fact that it’s open source means more people use it than anything else Llama was open source more people use it than anyone else so if you build great quality models and data sets people use it they’ll innovate on it but you can set a really great solid foundation so the models inherit from each other they’ve all gone to the same school then they go to different colleges and then they go to different universities but they’re interoperable oh I love it you know the future is amazing if we survive it I mean that’s really truly I mean we’re heading toward this extraordinary world the most exciting time ever we just need to survive uh the downside Star Wars Mad Max scenarios I have a question for you if we survive the next 5 to 10 years how long do we live for yeah so
[01:28:01] you know this is a lot of the work I’ve been public on this uh and been having debates and arguments with a lot of the traditional medical and scientific societies that are like listen we’re just not going to get past 120 it’s built into our genes uh in fact the probability that you Peter or anybody is going to get past 100 in a healthy fashion is pretty damn low and the fact of the matter is science and medicine are steeped in history and the past there’s good reason to believe that but the same good reason to believe that humans would never fly and never get to the moon and never travel at the speeds we do and never have instantaneous communications or quantum teleportation or all the things that were impossible just you know a few years a few decades or a century ago and the reality is uh we are a complex system of 40
[01:29:02] trillion human cells with a billion chemical reactions per cell per second and there’s no way a human can understand this and understand what are the root causes of aging and why we age but AI can and I think AI can help us to understand the fundamentals and alter it and not accept what evolution dealt us evolution had a mission evolution had a mission of passing on genes by the age of 30 and then killing you off so you never stole food from your grandchildren’s mouths my mission is different we’re birthed for death what’s that we’re birthed for death yes yeah so that our genes can propagate and we could break that cycle so to answer your question Imad uh I think we’ve got an unlimited future um now the question is are you going to want to live the next 100 years in your meat sack uh or the next 200 years in your
[01:30:01] meat sack or are you going to want to upload whatever the hell consciousness is and your memories into the cloud and be liberated um and we’ll see yeah it’s crazy to think about again this is such a time of change right and you look at the tools and techniques you look at the medical sphere we need to reimagine medicine from scratch which is why we need core developer teams working in the open on each of these what is government like that’s a question that we’re having right now do we need to spend so much money what is the purpose of government how many people listening to this feel represented by their government yeah what if you have your own AI that you own that is looking out for you that represents you that interacts with the government AI because every government decision will be checked by an AI within the next few years just do it and then they’ll be made by an AI because obviously the AI is better than the government that’s scary but you can finally have representative democracy yeah true democracy for the first time ever these are the positives you can have
[01:31:01] personalized medicine you can have empathetic medicine how much of medicine is actually psychological mhm you know like I don’t have control of myself no one’s listening to me having that aid these are systems that I think need to be built from scratch and reimagined and education I think is probably one of the biggest ones of those our education system is completely not fit for purpose despite the best efforts of everyone and we say that for a system like you see Math Academy and things like that and the results people are already having uh did you see that one from the school in Nigeria recently I think it was like two weeks with ChatGPT they did two years wow two weeks or two months of ChatGPT two years of advancement just from ChatGPT in math it was insane I think you know we’ve had this conversation before where schools are up in arms and saying we’re making you know AI illegal you can’t use ChatGPT you can’t use Gemini 2.0 and the fact of the matter is sure you can’t use that to teach the way you used to but guess what you can use it to teach 100x
[01:32:02] faster and better and set massive objectives for your kids help them dream bigger than ever before um but it disrupts the entire you know teaching industry well it’s cuz the school was designed to reduce our agency and remove it to become a cog within the classical you know this is a really important point that the last couple hundred years we’ve turned humans into robots you know you stood in an assembly line you stamped out widgets and the efficiency at which how many widgets you could stamp out per hour was your pay grade and your seniority level and whatever and we measured you on KPIs and so on and now we’re flipping it around and I find it fascinating that the most valuable colleagues and employees we have are the ones that learn the fastest uh and that’s starting to now become the human factor much more again and that’s very very encouraging now you can add to it some really fun AI stuff so this is I had this piece how to think about AI when I was like you know building on Nat’s think about AI Atlantis and things
[01:33:01] like that yes we can design AI in two ways one is we build agents to replace people the other is that we focus primarily on increasing human agency because our systems have taken that away those are two different ways of designing AI actually which is one of the reasons I think I look at the Anthropics and Googles and others of the world I don’t think they’re focused on increasing human agency as much as automation and business optimization because their customers are typically businesses or on the consumer side again I don’t think that it’s just become a bit different on the design pattern side but it’s exciting because we can revolutionize each of these important things for living for the first time all we can say is it is going to be the most exciting time ever uh to be alive for sure this is why you need your eight hours of sleep at night you know yeah for goodness’ sake I did not get my 8 hours I woke up at 4:00 a.m. to prep for this podcast uh took a cold shower to wake
[01:34:00] myself up uh but it was worth it cuz this was a phenomenal conversation um you lost me at cold shower but okay uh Imad uh so happy to have you back on Moonshots uh Saleem always a pleasure my friend uh Imad if anybody wants to follow your current work where do they go to see what you’re up to and learn more follow me on Twitter or ii.inc I love that it’s awesome uh gentlemen I look forward to having this conversation on WTF Just Happened in Tech again we’re going to have this more frequently because uh our heads are spinning at the speed that technology is moving just fundamentally spinning take care Saleem take care Imad see you buddies ciao guys