
Moonshots: Emad Mostaque on AGI Governance (transcript)

Nov 22, 2023 · Source: Peter H. Diamandis (YouTube)

Hey Emad, good to hear you, it's always a pleasure. So where are you today? I'm in London. Good, other side of the planet; I'm in Santa Monica. It's been quite the extraordinary game of ping-pong out there these last four or five days. I didn't think the first thing that AI would disrupt would be reality TV, right? Yeah, it's been fascinating how X has become sort of the go-to place to find out the latest on where Sam is working and what's going on with the AI industry. So you follow the notifications and that's the way it goes. The thing is, what else will move at this speed? As I was saying to someone

[00:01:00] recently, AI research doesn't really move at the speed of conferences or even PDFs anymore, right? You just wake up and you're like, oh, it's 10 times faster. So I think that's why X is quite good. I actually unfollowed just about everyone; I just let the AI algorithm find the most interesting things for me, so there are about 10 people that I follow, and it's actually working really well, it's getting better. Well, I've been enjoying the conversation; it really feels like you're inside an intimate conversation among friends as this is going back and forth. I think this entire four or five days has been an extraordinary, up-close, intimate conversation around governance and around, you know, the future of AI, because honestly, as it gets faster and more powerful, the cost of missteps is going to increase exponentially. Let's begin here:

[00:02:02] I mean, you've been making the argument about open source as one of the most critical elements of governance for a while now. Let's just hop into that. Yeah, I think open source is the difficult one, because it means a few different things: is it models you can download and use? Do you make all the data available and free? And then when you actually look at what all these big companies do, all their stuff is built on an open-source basis. It's built on the Transformer paper; the new model by Kai-Fu Lee's 01.AI is basically Llama, it's actually got the same variable names and other things like that, plus a gigantic supercomputer. Right, and the whole conversation has been: how important is openness and transparency, and what are the governance models that are going to allow the most powerful technology on the planet to enable the most benefit

[00:03:01] for humanity, and the most safety. So you've been thinking about this, and speaking to transparency, openness, and governance for a while. What do you think we need to be focused on? Where do we need to evolve to? So yeah, it's a complicated topic. I think most of the infrastructure of the internet is open source, Linux and everything like that. With these models, it's unlikely that our governments will be run on GPT-7 or Bard or anything like that; how are you going to have black boxes that run these things? I think a lot of the governance debate has been hijacked by the AI safety debate, where people are talking about AGI killing us all, and then this precautionary principle kicks in: it's too dangerous to let out, because what if China gets it, what if someone builds an AGI that kills us all? It would be great to have this amazing board that could pull the off switch, you know. Whereas

[00:04:01] in reality, I think you're seeing a real social impact from this technology, and it's about who advances forward and who's left behind, if we're thinking about risk. Because governance is always about finding, as you said, the best outcomes and also mitigating against the harms, right? And there are some very real, amazingly positive outcomes now emerging that people can agree on, but also some very real social impacts that we have to mitigate against. So let's begin: how is Stability governed? Stability is basically governed by me. I looked into foundations and DAOs and everything like that, and I thought that to take it to where we are now it needed very singular governance, but now we're looking at other alternatives. And where would you head in the future? Actually, let's jump away from Stability in particular: what do you recommend

[00:05:01] for the most powerful technologies on the planet? How should they be governed, how should they be owned? Where should we be in five years? I think there need to be public goods that are collectively owned, and then individually owned as well. So, for example, there was the tweet storm, the kind of "I am Spartacus" moment from the OpenAI team, saying OpenAI is nothing without its people, right? Well, at Stability we have amazing people, 190 of them and 65 top researchers; without its people, with open models used by hundreds of millions, it continues. And if you think about where you need to go, you can never have a choke point on this technology if it becomes part of your life. The phrase I have is: not your models, not your mind. These models, again, are just such interesting things: you take billions of images or trillions of words, and you get this file out that can

[00:06:01] do magic, right? Trained on magic sand. I think that you will have pilots that gather our global knowledge on various modalities, and you'll have co-pilots that you individually own that guide you through life, and I can't see how that can be controlled by any one organization. You've been on record talking about having models owned by the citizens of nations; can you speak to that a little bit? Sure. We just released some of the top Japanese models, from visual-language to language to a Japanese SDXL, as an example. We're training models for half a dozen different nations now, and the plan is to figure out a way to give ownership of these datasets and models back to the people of that nation. So you get the smartest people in Mexico to run a Stability Mexico, or maybe a different structure, that then makes decisions for Mexicans, with the Mexicans, about the data and what goes into it,

[00:07:00] because everyone's been focusing on the outputs, while the inputs are actually the things that matter the most. The best way I've found to think about these models is as very enthusiastic graduates, so hallucinations are just them trying too hard. A lot of the worries like "oh, what about these bad things that models can output" come down to what you input, and so what you put into that Mexican dataset, or the Chinese or Vietnamese one, will impact the outputs. And there's a great paper in Nature Human Behaviour today about that, about how foundation models are cultural technologies. So again, how can you outsource your culture and your brains to other countries, to people from a very different place? I think it eventually has to be localized. Yeah. I think one of the points you made originally is that we have to separate the issue of governance versus safety and alignment. Are they actually different?
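The "inputs matter most" point above can be illustrated with a toy sketch: two character-bigram "models" with an identical algorithm and identical compute, trained on different corpora, generate visibly different text. Everything here (the corpora, the names) is invented for illustration and is nothing like a real foundation model:

```python
# Toy sketch, not any production system: same algorithm, same compute,
# different data in -> different behaviour out.
import random
from collections import defaultdict

def train_bigram(corpus: str) -> dict:
    """Count next-character frequencies for each character in the corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return counts

def generate(model: dict, start: str, n: int, seed: int = 0) -> str:
    """Sample n characters by following the bigram frequency table."""
    rng = random.Random(seed)
    out = start
    for _ in range(n):
        nxt = model.get(out[-1])
        if not nxt:
            break  # no continuation observed in training data
        chars, weights = zip(*nxt.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

corpus_a = "hola mundo hola amigos " * 50   # hypothetical localized dataset
corpus_b = "hello world hello friends " * 50

model_a = train_bigram(corpus_a)
model_b = train_bigram(corpus_b)
print(generate(model_a, "h", 20))
print(generate(model_b, "h", 20))
```

The architecture never changes; only the "curriculum" does, which is the graduate analogy in miniature.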

[00:08:00] So I think a lot of the safety discussion, or this AGI risk discussion, exists because the future is so uncertain, because it is so powerful, right, and we didn't have a good view of where we're going. When you go on a journey and you don't know where you're going, you minimize for maximum regret, you apply the precautionary principle, and that means you basically go towards authority; you go towards trying to control this technology when it's so difficult to control, and you end up not doing much, you know, because anything can go wrong. When you have an idea of where we're going, like "you should have all the cancer knowledge in the world at your fingertips," or climate knowledge, or "anybody should be able to create whole worlds and share them," then you align your safety discussions against the goal, against the destination, again just like setting out on a journey, and I think that's a big change. Similarly, most of the safety discussion has been on outputs, not inputs. If you have a high-quality dataset without knowledge about anthrax, your

[00:09:01] language model is unlikely to tell you how to build anthrax, you know. So I think that's it, and transparency around that will be very useful. So let's dive into that safety and alignment issue for a moment, because it's an area I've been talking a lot about. Mustafa Suleyman wrote a book called The Coming Wave, in which he talks about containment as the mechanism by which we're going to make sure we have safe AI. You and I have had the conversation that it's really about how you educate, raise, and train your AI systems, and making sure there's full transparency and openness on the datasets that are utilized. Do you think containment is an option for safety? No, not at all. A number of leaders say, "what if China gets open-source AI?" The reality is that China, Russia,

[00:10:00] everyone already has the weights for GPT-4; they just downloaded them on a USB stick, you know. You know there have been compromises, right? There's no way they couldn't; the rewards are too great. And there is an absolutely false dichotomy here, and I think a lot of the companies want you to believe that giant models are the main thing and that you need these gigantic, ridiculous supercomputers that only they can run. Look, we run gigantic supercomputers. The reality is this: the supercomputers and the giant trillion-zillion datasets are just a shortcut for bad-quality data. It's like using a hot pot, or stewing a steak that's bad quality: you just cook it for longer, and it organizes the information. With Stable Diffusion we did a study and showed that basically 92% of the data isn't used 99% of the time, you know. You're now seeing this with, for example, Microsoft's Phi

[00:11:00] release: it's trained entirely on synthetic data. DALL-E 3 is trained on recaptioned, essentially synthetic data. You are what you eat, and again, we cooked it for longer to get past that. But the implication of this is that I believe within 12 to 18 months you'll see GPT-4-level performance on a smartphone. How do you contain that? And how do you contain it when China can do distributed training at scale and release open-source models? Google recently did a 50,000-TPU training run on their new v5e chips; their TPUs are very low-powered relative to what we've seen, but again, you can do distributed, dynamic training. Similarly, we funded Hivemind, and we've seen Google DeepMind just do a new paper, DiLoCo, on low-communication distributed training. The models are fast enough and cheap enough that you can swarm them, and you don't need giant

[00:12:01] supercomputers anymore, and that has a lot of implications. How are you going to contain that? So coming back to the question: do you mandate training sets? Does the government set out what all companies should be utilizing, and mandate that if you're going to have an aligned AI it has to be trained on these sets? How do we possibly govern that? Look, we have food standards for ingredients, right? Why don't you have data standards for the ingredients that make up a model? It's just data, compute, and some algorithms. And so you should say, here are the standards; then you can make it compulsory, which will take a while, or you can just have an ISO-type standard: this is good-quality model training, this is good-quality data, you know. And people will naturally gravitate towards that and it becomes a default. Are you working towards that right

now? Yeah. Look, we spun out EleutherAI as an independent 501(c)(3) so they could look at data standards and things like that, independent of us, kind of the opposite of OpenAI. And this is something I've been talking to many people about, and we're getting national datasets and more, so that hopefully we can implement good standards, similar to how we offered opt-out and how a billion images were opted out of our image dataset, because everyone was just training on everything. Is it required? No. But is it good? Yes. And everyone will benefit from better-quality data. So there's no reason that for these very large model training runs the dataset should not be transparent and logged; again, we want to know what goes into that. If we use the graduate analogy: what was the curriculum the graduate was taught, which university did they go to? It's something we'd want to know. But then we talk to GPT-4 when we don't know where it went to university or what it's been trained on. A bit weird, isn't it? What do you think the lesson is going to be from the last four

[00:14:04] days? I'm just confused. I don't know who was against who, or what. I just posted: are we against misalignment, or Moloch? I think probably the biggest lesson is that it's very hard to align humans, right, and the stakes are very large; why is this so interesting to us? Because the stakes are so large. You tweeted something that was serious and, unfortunately, funny, which was: how can we align AI with humanity's best interests if we can't align a company's board with its employees' best interests? Yeah, well, the thing is, it's not about the employees' best interests; the board was set up as a lever to ensure the charter of OpenAI. If you look at the original founding document of OpenAI from 2015, it is a beautiful document talking about open collaboration and everything, and then it kind of changed in 2019, but the charter still emphasizes cooperation,

[00:15:00] safety. And fundamentally, I posted about this back in March when I said the board and the governance structure of OpenAI is weird: what is it for, what are they trying to do? Because if you say you're building AGI, in their own road to AGI they say this will most likely end democracy. I remember reading that. Because there's no way democracy survives AGI: either, obviously, it'll be better and you get it to run things, or it sways everyone, or we all die, or it's utopia forever, right? Abundance, baby. Yeah. But regardless, there's no way democracy survives AGI; there's no way capitalism survives AGI. The AGI will be the best trader in the world, right? And so it's: who should be making the decisions on AGI, assuming they achieve those things, and that's in their own words. So I think people are kind of waking up to: oh, there's no real way to do this

[00:16:00] properly. And previously we were scared of being open and transparent, of everyone getting this, which again was the original idea of OpenAI, and now we're scared of "who are these clowns?", to put it in the nicest way, because this was ridiculous; you see better politics in a teenage sorority, right? And it's fundamentally scary that unelected people, no matter how great they are, and I think some of the board members are great, should have a say in something that could literally upend our entire society, according to their own words. I find that inherently anti-democratic and illiberal. At the end of the day, you know, capitalism has worked; it's the best system we have thus far, and it's built on self-interest and built on continuous optimization and

[00:17:01] maximization. I'm still wondering where you go in terms of governing these companies: at one level internal governance, and then governing the companies at a national and global level. Has anybody put forward a plan that you think is worth highlighting here? Not really. I mean, organizations are weird artificial intelligences, right? They have the status of people, and they're slow, dumb AIs, and they eat our hopes and dreams; that's what they feed on, I think. But this AI can upgrade them, it can make them smarter. Again, how do you coordinate? From a mechanism-design perspective it's super interesting. In markets, I think we will have AI market makers that can tell stories: the story of Silicon Valley Bank went around the world in two seconds, the story of OpenAI goes around; AI can tell better stories than humans, it's inevitable. I think that gives hope for coordination,

[00:18:01] but then also dangers of disruption. I want to double-click for one second on the two words that you use most, openness and transparency, and understand fully what they mean, because the question is not only what they mean but how fundamental they need to be. So openness, right now, in your definition, in terms of AI, means what? It means different things for different things, unfortunately. I don't think it means open source. For me, open means more about access and ownership of the models, so that you don't have lock-in: you can hire your own graduates as opposed to relying on consultants. Transparency comes down to, I think for language models in particular (I don't think this holds as much for media models), you really need to know what it's been taught; that's the only way to safety. You should not engage with something or use something if you don't know what its credentials are and how

it's been taught, because I think that's inherently dangerous as these gain more and more capabilities. And again, I don't know if we get to AGI; if we do, I think it'll probably be like Scarlett Johansson in Her, you know, just a goodbye and thanks for everything. Assuming we don't, you still need transparency. So again, how can any government or regulated industry not run on a transparent model? They can't run on black boxes. I get that, and I understand the rationale for it, but now the question is: can you prove transparency? I think that, again, a model is really only three things: the data, the algorithm, and the compute; they combine and the binary file pops out. Then you can tune it with RLHF or DPO or genetic algorithms or whatever, but that's really the recipe, right? And so, on the algorithms, you don't need algorithmic transparency here, versus classical AI, because they're very simple. One of our fellows recreated the PaLM 540-billion-parameter model; this is

lucidrains on GitHub. You look at that as a developer and you want to cry: it's on GitHub, it's crazy, 206 lines of Python, and that's it. The algorithms are not very complicated; running a gigantic supercomputer is complicated, and this is why they freaked out when Greg Brockman stepped down, because he's one of the most talented engineers of our time and built these amazing gigantic clusters. And then the data, and how you structure data, is complicated. So I think you can have transparency there, because if the data is transparent, then who cares about the supercomputer, who really cares about the algorithm, you know? Now let's talk about the next term, alignment. Alignment is thrown around in lots of different ways; how do you define alignment? I define alignment in terms of objective functions. YouTube was used by extremists to serve ads for their nastiness, right? Why? Because the

[00:21:00] algorithm optimized for engagement, which then optimized for extreme content, which then optimized for the extremists. Did YouTube mean that? No, they were just trying to serve ads, right? But it meant the system wasn't aligned with its users' interests. And so for me, if we're going to outsource more of our minds, our culture, our children's futures to these technologies, a few of which are very persuasive, we have to ensure they're aligned with our individual, community, and societal best interests. And I think this is where the tension with corporations will come in, because whoever licenses Scarlett Johansson's voice will sell a lot of ads, you know; they can be very, very persuasive. But then what are the controls on that? No one talks about that. The bigger question of alignment is not killer-robot-ism, making sure the AI doesn't kill us; again, I feel that if we build AI that is transparent, that we can test, that people can build

[00:22:01] mitigations around, we are more likely to survive and thrive. And I think there's a final element here, which is: whose alignment? Yes. Different cultures are different, different people are different. What we found with Stable Diffusion is that when we merge together the models that different people around the world have built, the model gets so much better. I think that makes sense, because a monoculture will always be more fragile than a diversity; I'm not talking about it in the DEI kind of way, I'm talking about it in the actual logical way. We have a paper from our reinforcement-learning lab, CarperAI, on QDHF and QDAIF, quality-diversity through human and AI feedback, because you find these models do get better with high-quality and diverse inputs, just like you will get better if you have high-quality and diverse experiences, you know. And I think that's something important that will get lost if all these models are centralized, you know. You and I

[00:23:02] have had a lot of conversations about timelines here. We can get into a conversation of when, and if, we see AGI, but we're seeing more and more powerful capabilities coming online right now that are going to cause a lot of amazing progress and disruption. How much time do we have, Emad? We had a conversation when we were together at FII about the disenfranchised youth coming off COVID, so let's talk for one second about timelines. How long do we have to get our act together, both as AI companies and as investors and governors of society? We don't. I mean, the speed here is awesome, and frightening.
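The merging idea Emad describes (community-built fine-tunes combined so "the model gets so much better") is, in its simplest form, plain weight averaging, in the spirit of "model soups." A minimal, hypothetical sketch; the parameter names and values are invented, and real merges operate on checkpoint tensors rather than Python lists:

```python
# Hedged sketch of uniform weight averaging across fine-tunes of one base
# architecture. This is an illustration, not Stability's actual pipeline.
def merge_models(models: list[dict]) -> dict:
    """Uniformly average parameters across models with identical keys/shapes."""
    assert models, "need at least one model"
    keys = models[0].keys()
    assert all(m.keys() == keys for m in models), "architectures must match"
    merged = {}
    for k in keys:
        vals = [m[k] for m in models]
        # element-wise mean over each parameter vector
        merged[k] = [sum(col) / len(vals) for col in zip(*vals)]
    return merged

# Two hypothetical community fine-tunes of the same (tiny) base model:
finetune_jp = {"w1": [0.2, 0.8], "w2": [1.0, -1.0]}
finetune_mx = {"w1": [0.4, 0.6], "w2": [0.0, 1.0]}
soup = merge_models([finetune_jp, finetune_mx])
print(soup)
```

The averaged model sits between its parents in weight space; the empirical claim in the transcript is that such merges of diverse fine-tunes often outperform any single parent.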

[00:24:03] How long do we have? It's almost Everything Everywhere All at Once, right? We don't have long. AGI timelines, for whatever definition of AGI, I have no idea; it will never be less than 12 months, right, because it's such a step change, so let's put that to the side. Okay, right now, everyone that's listening: are you all going to hire the same number of graduates that you hired before? The answer is no; some people might not hire any, because this is a productivity enhancer, and we have the data for that across every type of knowledge industry. You just had a great app where you can sketch something and it builds a whole iPhone app for you, right? I went on record saying there will be no programmers as we know them in five years. Why would there be? They're just interfaces. You had a 50% drop, I just posted it on my Twitter, in hiring from the Indian IITs; that's crazy. So what you're going to have in a couple of years is, around the world at the same

[00:25:02] time, these kids that have gone through the trauma of COVID, highly educated in STEM, programming, accountancy, law, and simultaneously people will hire massively fewer of them, because productivity improves and you don't need as many of them: why would you need as many paralegals? And that, for me, is a gigantic societal issue, and the only thing I can think of is open innovation and the generative jobs of the future through open-source technology, because I don't know how else we're going to mitigate that. Because, you know, Peter, you're a student of history: what happens when you have large amounts of intelligent, disenfranchised youth? Historically, we've had that happen a few times; we just had the Arab Spring not long ago. Revolt, revolution, civil war, if not international war; war is a good way to soak up the excess youth. Yep. But it's not pleasant, it's not pleasant for society. And

[00:26:01] fundamentally, the cost of information gathering and organization has collapsed. Again, look at Stable Video, which we released yesterday, right? It's going to get so much better so quickly, just like Stable Diffusion. The cost of creating movies decreases, the demand for quality work increases, but there are a few years where demand and supply don't match, and that's such a turbulent thing to navigate. That's one of the reasons I'm creating Stabilities for different countries, so the best and brightest from each can help navigate them. I loved your idea, and people don't talk about it, that the Stability models and systems will be owned by the nation. In fact, one idea I heard you say, which I thought was fantastic, was: you graduate college in India, you're an owner in that system; you graduate in Nigeria, you're an owner in that system; basically to incentivize people to complete their education and to have ownership in what is

[00:27:01] ultimately the most important asset that nation has. And talk about it as infrastructure as well; I think that's an important analogy that people don't get. This is the knowledge infrastructure of the future. It's the biggest leap forward we have, because in a few years you'll always have a co-pilot that knows everything and can create anything in any modality, but it must be embedded with your cultural values, and you can't let anyone else own that. So it is the infrastructure of the mind, and who would outsource their infrastructure to someone else? That's why I think Nigerians should own the models, of Nigerians, for Nigerians, and it should be the next generation that does it; that's why you give the equity to the graduates, that's why you list it, that's why you make national champions, because it has to be that way. This is far more important than 5G, and that gives you an idea of the scale; we're just at the start, the early-adopter phase. A trillion dollars was spent on 5G; this is clearly more important, so more than a trillion dollars will be spent on this, and again, it flips the world. And so

[00:28:01] there is a huge threat to our societal balance, and again, I think open is a potential antidote, to create the jobs of the future, and there's huge opportunity on the other side, because no one will ever be alone, and we can use this to coordinate our systems, give everyone all the knowledge they need at their fingertips, and help guide everyone, if we build this infrastructure correctly. And again, I don't see how that can be closed AGI. You know, the conversation and the definition of AGI have basically been all over the place. Ray Kurzweil's prediction has been, for 30 years, that it's 2029; again, that's a blurry line of what we're trying to target, but Elon's talked about anywhere from 2025 to 2028. What are you thinking? What's your timeline for, you know, even digital super-

[00:29:01] intelligence? I honestly have no idea. People are looking at the scaling laws and extrapolating, but as I've said, data is the key, and it's clear that we're already there in places: you could build a board-GPT and it would be better than most corporate boards, right? So I think we're already seeing improvements over the existing systems. One of the complications here is swarm AI; it's the whole question of a duck-sized human or 100 human-sized ducks, right? We're just at the start of swarm intelligence, and that reflects and represents how companies are organized; Andrej Karpathy has some great analogies on this in terms of the new LLM OS, and that could take off at any time. But the function and format of that may not be this whole Western anthropomorphized consciousness that we think of, but just incredibly efficient systems that displace existing human decision-making, right? And so there's an entire range of different AI outcomes depending on your

[00:30:00] definition, and I just don't know. But again, I wake up and I'm like, oh look, it's sped up 10 times, the model, you know; no one can predict this. But there is a point at which, I mean, we're heading towards an AI singularity, using the definition of a singularity as a point after which you cannot predict what's coming next, and that isn't far away. How far out is it for you, a year, two years? I think we're heading towards it in the next few years. But like I said, every company, organization, and individual has an objective function. My objective function is to allow the next generation to navigate what's coming in the optimal way and achieve their potential. So I don't want to build an AGI; I don't want to do any of this. Amplified human intelligence is my preference, and trying to mitigate against some of the harms of these agentic things through data transparency, good standards, and making

[00:31:00] it so people don't need to build gigantic models on crap, which I think is a major danger, even if not from AGI itself. But again, we just don't understand, because it's difficult for us to comprehend superhuman capabilities; yet we're already seeing them in narrow fields. We already know that it's a better writer than us, you know; we already know that it can make better pictures than us, and be a better physician and a better educator and a better surgeon, better at everything. And again, I think it's this mythos of these big labs being AGI-focused, whereas you can be better than us at, like, 5% of the stuff humans can do and that's still a massive impact on the world, and it can still take over companies and things like that, right? If you take over a company, then you can impact the world, and clearly, with a GPT-4, or a thousand of them orchestrated correctly, that can call up people and such, you wouldn't know it's not a CEO, you know. I could make an Emad-GPT and then I wouldn't have to make all these

[00:32:00] tough decisions. We're nearly there, and most of my decisions aren't that good, so it would probably be better. So I think we're getting to that point; it's very difficult, and the design patterns are moving fast. We're at the iPhone 2G/3G stage: we just got copy and paste. And we just got the first stage of this technology, which is the creation step: it creates stuff. The next step is control, and then composition, where people are annoyed because ChatGPT doesn't remember all the stuff you've written; that won't be the case in a year. And the final bit is collaboration, where these AIs collaborate together, and with humans, to build the information superstructures of the future, and I don't feel that's more than a few years away, and it's completely unpredictable what that will create. Let's talk about the responsibility that AI companies have for making sure their technology is used in a pro-human and not a disruptive fashion. Do you think that is a responsibility of a company, of a

[00:33:00] company's board, of a company's leadership? How do you think about that? Again, within the corporate capitalist system it typically isn't, because you're maximizing shareholder value and there aren't laws and regulations, which is why I think there's a moral, a social, and a legal-regulatory aspect to this. Companies will just look at the legal and regulatory, and in some cases will just ignore them, right? But I do think we have a bigger moral and social obligation here. This is why I don't subscribe to EA or e/acc or any of these things; I think it's complicated, and it's hard given the uncertainty and how this technology proliferates, and you've got to do your best and be as straight as possible with people about doing your best, because none of us are qualified to understand or do this, and none of us should be trusted to have power over this technology. You should be questioned, you should be challenged on that; and again, if you're not transparent, how are you going to be challenged? When I think of the most linear organizations on the planet, I think of

[00:34:01] governments, maybe religions, but governments, let's leave it there. Let's talk about Western governments, at least the US; I would have said Europe, but I'll say the UK and Europe. What steps should they be taking right now? If you were given the reins, how would you regulate; what would you want them to do, or not do? It's a complicated one. I signed the first FLI letter; I think I was the only AI CEO to do that, back before it was cool, because I said: I don't think AGI will kill us all, but I just don't know, and I think it's a conversation that deserves to be had, and this is a good way to have it. And then we flipped the wrong way, where we went overly towards AI death risk and other things like that, and governments were doing that at the AI Safety Summit in the UK, and then we had the King of England come out and say this is

[00:35:00] the biggest thing since fire. I was like, okay, that's a big change, right? The King of England said it, so I must be on the right track. But I think if you look at it, regulation doesn't move fast enough. Even the executive order will take a long time; the EU things will kind of come in. Instead, I think governments have to focus on the tangibles. AI-killer-ism, again, can be addressed by considering this as infrastructure: what infrastructure do we need to give our people to survive and thrive? The US is in a good initial place with the CHIPS Act, but I think you need national data sets, you need to provide open models to stoke innovation, and you need to think about what the jobs of the future are, because things are never the same again. You don't need all those programmers when Copilot is so good, and you move Copilot to the level above, which is compositional Copilot and then collaborative Copilot, right? You'll be able to talk, and computers can talk to computers better than humans can talk to computers. So we need to articulate the future on that side. But then on the other side, one of the examples I give is a loved one had a recent

[00:36:02] misdiagnosis of pancreatic cancer, right? You know, I talked about this and the loss of agency you feel, and many of you on this call have had that diagnosis; it is huge. And then I had a thousand AI agents finding out every piece of information about pancreatic cancer, and after that I felt a bit more in control. Why don't we have a global cancer model that gives you all the knowledge about cancer and helps you talk to your kids and connects you with people like you? Not for diagnosis or research, but for humans. This is the Google Med-PaLM 2 model, for example, that outperforms humans on diagnosis but also on empathy. And what if we armed our graduates to go out and give support to the humans being diagnosed in this way? That makes society better, and it's valuable, you know, and that's an example of a job of the future, I think. I don't believe in UBI; I believe in universal basic jobs and universal basic opportunity, right? Universal basic opportunity, universal basic

[00:37:01] jobs. But then policymakers need to think about it now, because the graduate unemployment wave is literally a few years away, and it will happen all at once. Yeah. I mean, I parse the challenges we're going to be facing in society into a few different elements. What we have today is amazing, and if generative AI froze here we'd have an incredible set of tools to help humanity across all of its areas. And then we've got what's coming in the next zero to five years. We've talked about patient zero perhaps being the US elections and the disinformation. I think you had said it was Cambridge Analytica that was required for interference; now it's any kid in the garage that could play with the elections. That's a challenging period of time, and this graduate unemployment wave, as you mentioned, is coming right on its heels. The question becomes, is

[00:38:00] is the only thing that can create alignment and help us overcome this AGI at the highest level? Meaning it is causing challenges, but ultimately it's a tool that will allow us to solve these challenges as well. I mean, that's a crazy thought, right? Like all this stuff is crazy, the sheer scale and impact of it. And you know, these discussions, we had them last year, Peter, and now everyone's like, yeah, that makes sense, and you're like, oh wow, right? It may be AGI, it may be these coordinating automated checks and balances from the market, right? Next year there are 56 elections with four billion people heading to the polls. What could possibly go wrong? What could possibly go wrong, you know? Oh my God. But again, the technology isn't going to stop. Even if Stability puts down things, if OpenAI puts down things, it will continue from around the world, because you don't need much to train these models. Again, the supercomputer thing is

[00:39:00] a myth. You've got another year or two where you need them; you don't need them after that, and that is insane to think about. You just released Stable Video, congratulations, or Stable Video Diffusion, and I'm enjoying some of the clips. How far are we away from me telling a story to my kids and saying, let's make that into a movie? What, two years away? Two years away. So this is a building block. It's the best creation step, and then, like I said, you have the control step, composition, and then collaboration and self-learning systems around that. So we have ComfyUI, which is our node-based system where you have all the logic that makes up an image. You can take a dress and a pose and a face and it combines them all, and it's all encoded in the image, because you can move beyond files to intelligent workflows that you can collaborate with. If I send you that image file and you put it into your ComfyUI, it gives you all the logic that made that up. How insane

[00:40:01] is that, right? So we're going to step up there. And what's happened now is that people are looking at this AI as instant, versus, again, the huge amount of effort it took to take this information and structure it. But the value is actually in stuff that takes a bit longer. When you're shooting a movie, you don't just say do it all in one shot, right? Unless you are a very talented director and actor. You have mise-en-scène, you have staging, you have blocking, you have cinematography, and it takes a while to composite the scenes together. It will be the same for this, but a large part of it will then be automated for creating the story that can resonate with you, and you can turn it into Korean or whatever. And there'll still be big blockbusters like Oppenheimer and Barbie, but again, the floor will be raised overall. Similarly, we had a music video competition, check it out on YouTube, with Peter Gabriel, who kindly allowed us to use his songs, and people from around the world made

[00:41:00] amazing music videos to his songs, but they took weeks. I think it's somewhere in the middle here. Again, we're just at that early stage, because ChatGPT isn't even a year old, you know, Stable Diffusion is only 14, 15 months old, and I think you'd agree that neither of them is the end-all and be-all. It's the earliest days of this field. I had this conversation with Ray Kurzweil two weeks ago. Just after a Singularity board meeting we hung on on a Zoom and chatted, and you know, the realization is, unfortunately, the human mind is awful at exponential projections, and despite the convergence of all these technologies, we tend to project the future as a linear extrapolation of the world we're living in right now. But the best I can say is that in the next decade, right, between now and 2033, we're going

[00:42:00] to see a century's worth of progress, but it's going to get very weird very fast, isn't it? I mean, there are two-way doors and there are one-way doors, right? In December of last year, multiple headmasters called me and said, we can't set essays for homework anymore, and every headmaster in the world had to say that same thing. It's a one-way door. Yes, and this is the scary part, the one-way doors, right? When you have an AI that can do your taxes, what does that mean for accountants? All the accountants, at the same time. It's kind of crazy, right? It is. And one of my biggest concerns, so listen, I'm the eternal optimist, I'm not the guy for whom the glass is half full, the glass is overflowing. And one of the challenges I think through, when I think about where AI, AGI, ASI, however you want to project it, goes, is the innate

[00:43:01] importance of human purpose. Unfortunately, most of us derive our purpose from the work that we do. If I ask you, tell me about yourself, you jump into your work and what you do. And so when AI systems are able to do most everything we do, not just a little bit better but orders of magnitude better, redefining purpose and redefining my role in achieving a moonshot or a transformation, it's the impedance mismatch between human societal growth rates and tech growth rates. What are your thoughts there? Yeah, I mean, again, exponentials are hard. If I say GPT-4 in 12 to 18 months on a smartphone, you'd be like, well, that's not possible. Why? You

[00:44:00] know, like GPT-4 is impossible, Stable Diffusion is impossible, right? Now they've almost become commonplace. But why would you need supercomputers in these things? I do agree this is a mismatch, and that's why we're in for five years of chaos. That's why I called it Stability. I saw this coming a few years ago and I was like, holy crap, we have to build this company. And now we have the most downloads of any models of any company, like 50 million last month versus 700,000 from Mistral, for example, and we will have the best model of every type except for very large language models by the end of the year. So we have audio, 3D, video, code, everything, and a lovely, amazing community. Because it's just so hard for us to imagine this mismatch, there's a period of chaos, but then on the other side there's this p(doom) question, right, the probability of doom. I can say something: with this technology, the probability of doom is lower than

[00:45:00] without this technology, because we're killing ourselves, and this can be used to enhance every human and coordinate us all. And I think what we're aiming for is that Star Trek future versus that Star Wars future. Yes, right, I'm into that. And I think that's an important point: given the level of complexity that we have in society, we don't need AI to destroy the planet, we're doing that very well, thank you. But there's the ability to coordinate. One of the things I think about is a world in which everyone has access to all the food, water, energy, healthcare, and education that they want, really a world of true abundance, which in my mind is a more peaceful world, right? Why would you want to destroy things if you have access to everything that you need? And that kind of a world of abundance is on the backside of this kind of awesome technology. We have to navigate the next

[00:46:01] period. I believe we'll see it within our lifetimes, particularly if we get longevity right, and that's so amazing, right? But then we think about, as you said, why peace? A child in Israel is the same as a child in Gaza, and then something happens, a lie is told that you are not like others and the other person is not human like you. All wars are based on that same lie. And so again, if we have AI that is aligned with the potential of each human, that can help mitigate those lies, then we can get away from war, because the world is not scarce. There is enough food for everyone; it's a coordination failure, and that can be addressed by this technology. One of the most interesting and basic capabilities of generative AI has been the ability to translate my ideas into concepts that someone with a different frame of thought can understand, right? But that's

[00:47:00] what this generative AI is, it's a universal translator, for sure. It does not have facts; the fact that it knows anything is insane. Hallucinations is a crazy thing to say; again, it's just like a graduate trying so hard. GPT-4 with 10 trillion words in 100 gigabytes is insane. Stable Diffusion has like 100,000 gigabytes in a 2 GB file; 50,000-to-one compression is something else. It's learned principles. Yes, and this is knowledge versus data. Yeah, it's knowledge versus data, and with some experience you get the wisdom, right? Because it's learned the principles and contexts and it can map them to transform data, because that's how you navigate. You don't navigate based on logical flow; we have those two parts of our brain. You navigate sometimes based on instinct, based on the principles you've learned. So Tesla's new self-driving model is entirely based on, I can't say which

[00:48:00] architecture, I don't know if they said it publicly, but it is based on this technology. It doesn't have any rules; it just learned the principles of how to drive from massive amounts of Tesla data, and that now fits on the hardware without internet. And so they went from self-driving being impossible to, you know, hey, it works pretty well, because it's learned the principles. And so that's why this technology can help solve the problem, this is why it can help us amplify our intelligence and our innovation, because it's the missing part, the second part of the brain. You know, next, I can't give more details yet, but next week we're announcing the largest XPRIZE ever, it's $101 million. It's 101 because Elon had a $100 million prize that we got him to fund a few years ago for carbon sequestration, and the first funder of this prize wanted it to be larger than Elon's. I said, okay, you add the extra million. It's for luck. It's for luck, we did our seed round at 101 million. Oh really? Okay,

[00:49:00] that's great, that's a good popular number. Anyway, it's in the field of health, I'll leave it at that. Folks can go to xprize.org to register to see the live event on November 29th. We're going to be debuting the prize, what it is, what it's going to impact: eight billion people. Long story short, it's a nonlinear future, because we are able to utilize AI and make things that were seemingly crazy before likely to become inevitable, and it's an amazing future we have to live into. Yeah, I mean, again, because it's one-way doors, the moment we create a cancer GPT, and this is something that we're building, we have trillions of tokens and new Google TPUs and things like that, that organizes global cancer knowledge and makes it accessible and useful, even if it's just for guiding people that are being diagnosed, the world changes. The

[00:50:02] 50% of people that have a cancer diagnosis in their lives, in every language and at every level, will have someone to talk to and connect them with the resources they need and other people like them, and to talk to their families, you know. How insane is that? And so these positive stories of the future need to be told, right, because that will align us to where we need to go, as opposed to a future full of uncertainty and craziness and doom. In our last couple of minutes here, buddy, what can we look forward to from Stability in the months and years ahead? We have every model of every type, and we'll build it for every nation and we'll give back control to every nation. So coming back to governance here, again, is the nation state the unit of control? Is it? My thinking is the Stabilities in every nation should have the best and brightest, because what you've seen is there are amazing

[00:51:01] people in this sphere. The best and brightest in the world know this is the biggest thing ever and they all want to work in it, and it's just finding the right people with the right intention. The brightest people go back to Singapore or Malaysia or others because of the future of their nations. And again, now we're doing a big change, and we don't talk about all the cool stuff we do, we've just taken it, because we need to articulate that positive vision of the future. Because the only scarce resource, actually, is human capital. It's not GPUs, it's not data, it's about the humans that can see this technology and realize that they can play a part in guiding it for the good of everyone, their own societies, and more. And that's again what I hope Stability can be. Well, I wish you the best of luck, pal. Thank you for joining me in this conversation, it's been a crazy four or five days, and I wish Sam and Greg and the entire OpenAI team stability in their lives. Yeah, yeah, they

[00:52:02] have a nice Thanksgiving. They're absolutely an amazing team building world-changing technology; it's such a concentration of talent. I think, again, I really felt for them over the last few days, you know, much as I kind of post memes and everything, and I posted that as well. I think this will bring them closer together, and hopefully they can solve the number one problem that I've asked them to solve, which is email, right? And then we'll crack on from there. All right, cheers, my friend.