Our opinions on almost everything we talked about were pretty much identical. I think we still disagree, probably, on whether it's a good idea to live forever. Marvin Minsky was my mentor for 50 years, and whenever consciousness came up he would just dismiss it: that's not real, it's not scientific. And I believe he was correct about it not being scientific, but it certainly is real. I think we're mortal, and intrinsically mortal. I'm curious: how do you think about this as the greatest threat and the greatest hope? I just think there's huge uncertainties here; we ought to be cautious, and open-sourcing these big models is not caution. I agree with that, but I will say, last time I talked to you, Jeff, our opinions on almost everything we talked about were pretty much identical, both the dangers and the positive aspects. In
[00:01:00] the past I've disagreed about how soon superintelligence was coming, and now I think we're pretty much agreed. I think we still disagree, probably, on whether it's a good idea to live forever. But may I ask a question to both of you: is there anything that generative AI can't do that humans can? Right now there's probably things, but in the long run I don't see any reason why, if people can do it, digital computers running neural nets won't be able to do it too. Right. I agree with that, but if I were to present you with a novel, and people thought, wow, this is a fantastic novel, everybody should read this, and then I would say this was written by a computer, a lot of people's view of it would actually go down. Sure. Now, that's not reflecting on what it can do, and eventually I think we'll
[00:02:00] get past that, because I think we're going to merge with computers; we're going to be part computers. And the greatest significance of what we call large language models, which I think is a misnomer, is the fact that they can emulate human beings, and we're going to merge with them. It's not going to be an alien invasion from Mars. Jeff? I guess I'm a bit worried that we'll just slow it down, that there won't be much incentive for it to merge with us. Yeah, that's going to be one of the interesting questions we're going to talk about a little bit later today: as AI is exponentially growing, do we couple with AI, or does it take off on its own? I thought one of the best movies out there was Her, where the AI gets superintelligent and just says, you guys are kind of boring, have a good life, and takes off. Jeff, is that what you mean? Yes, that is what I meant, and that's, I think,
[00:03:00] a serious worry. I think there's huge uncertainties here; we have really no idea what's going to happen. A very good scenario is we get kind of hybrid systems; a very bad scenario is they just leave us in the dust, and I don't think we know which is going to happen. Interesting. I'm curious, you know; I've had conversations with you about this, Ray, and Jeffrey, I've seen you speak about this, and for me this is one of the most exciting things: the idea of these AI models helping us to discover new physics and chemistry and biology, particularly biology. What do you imagine, Jeffrey, on the speed of discovery of things that are, to quote Arthur C. Clarke, you know, magic, from something that's so far advanced? I agree with Ray about biology being a very good bet, because in biology
[00:04:00] there's a lot of data, and there are a lot of things you just need to know about, because of evolution. Evolution is a sort of tinkerer, and there's just a lot of stuff out there. So if you look at things like AlphaFold, it trained on a lot of data, actually not that much by current standards, but being able to get an approximate structure for a protein very quickly is an amazing breakthrough, and we'll see a lot more like that. If you look at narrower domains where AI has been very successful, like AlphaGo, or AlphaZero for chess, what you see is that this idea that they're not creative is nonsense. AlphaGo came up with, I think it was move 37, which amazed the professional Go players; they thought it was a crazy move, it must be a mistake. And if you look at AlphaZero playing chess, it plays chess like just a really, really smart human. So within those
[00:05:00] limited domains they've clearly shown exceptional creativity, and I don't see why they shouldn't have the same kind of creativity in science, especially in science where there's a lot of data that they can absorb and we can't. Yeah, for the Moderna vaccine, we tried several billion different mRNA sequences and came out with the best one, and after two days we used that. We did test it on humans, which I think we won't do for very much longer, but that took 10 months; it still was a record. That was the best vaccine, and we're doing that now with cancer; there are a number of cancer vaccines that look very, very promising, again done by computers. And they're definitely creative. But is that creativity being caused by randomly, Darwinian-style, trying a whole bunch of things? Yeah, but what's wrong with that? Well, nothing's wrong, but is there intuition occurring
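The "Darwinian trying" being discussed, generating huge numbers of candidates and keeping the best scorer, can be sketched as a plain random search. A minimal illustration, where `fitness` is a made-up stand-in score (real vaccine pipelines score candidates with learned predictive models, not a toy rule like this):

```python
import random

def fitness(seq):
    # Hypothetical stand-in score, purely for illustration:
    # here we just reward the fraction of G/C bases in the sequence.
    return sum(1 for base in seq if base in "GC") / len(seq)

def random_search(n_candidates, seq_len, rng):
    # Generate many random candidate sequences and keep the best scorer --
    # "trying a whole bunch of things" in miniature.
    best_seq, best_score = None, -1.0
    for _ in range(n_candidates):
        seq = "".join(rng.choice("ACGU") for _ in range(seq_len))
        score = fitness(seq)
        if score > best_score:
            best_seq, best_score = seq, score
    return best_seq, best_score

best, score = random_search(10_000, 30, random.Random(0))
```

With the seed fixed, the search is deterministic; swapping `fitness` for a learned model turns the same loop into the kind of massive in-silico screen described above.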
[00:06:01] in these models? Well, if you look at move 37 for AlphaGo, there was definitely intuition involved there. There was Monte Carlo rollout too, but it's playing with intuition about which moves to consider and how good the position is for it; it has neural nets for that, which capture intuition. So I see no reason to think it might not be creative. In fact, the large language models, as Ray pointed out, know much more than we do, and they know it in far fewer connections: we have about 100 trillion synapses; they have about a trillion connections. So what they're doing is compressing a huge amount of information into not that many connections, and that means they're very good at seeing the similarities between different things. They have to see the similarities between all sorts of different things to compress the information into their connections. That means they've seen all sorts of analogies that people haven't seen, because they know about all sorts of things that no one person knows about, and
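The compression argument rests on two round numbers quoted here, and the arithmetic is worth making explicit (both counts are the conversation's own back-of-envelope figures, not precise measurements):

```python
# Round numbers quoted in the conversation (not precise measurements).
human_synapses = 100e12   # ~100 trillion synapses in a human brain
llm_connections = 1e12    # ~1 trillion connections in a large model

# The model must pack very broad knowledge into about 1% of the
# connection count, so each connection has to capture structure shared
# across many topics -- the stated reason compression forces analogy-finding.
ratio = human_synapses / llm_connections
print(ratio)  # prints 100.0
```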
[00:07:01] that's, I think, the source of creativity. So you can ask, for example: why is a compost heap like an atom bomb? If you ask GPT-4, it'll tell you. It'll start off by telling you, well, the energy scales are very different and the time scales are very different, but then it'll get on to the idea that as the compost heap gets hotter, it gets hotter faster: the idea of an exponential explosion, just on a much slower time scale. So it's understood that, and it's understood that because it's had to compress all this knowledge into so few connections, and to do that you have to see the relations between similar things. And that, I think, is the source of creativity: seeing relations that most people don't see between things that are apparently very different but actually have an underlying commonality. And they'll also be very good at coming up with solutions to the kinds of problems we had in the last session. I mean, we haven't really thought it through,
[00:08:01] but what we call large language models are ultimately going to solve that. And we shouldn't call them large language models, because they deal with a lot more than language. Everybody, I want to take a short break from our episode to talk about a company that's very important to me and could actually save your life, or the life of someone that you love. The company is called Fountain Life, and it's a company I started years ago with Tony Robbins and a group of very talented physicians. You know, most of us don't actually know what's going on inside our bodies. We're all optimists until that day when you have a pain in your side, you go to the physician or the emergency room, and they say, listen, I'm sorry to tell you this, but you have this stage three or stage four disease going on. And you know it didn't start that morning; it probably was a problem that had been going on for some time, but because we never look, we don't find out. So what we built at Fountain Life was the world's most advanced diagnostic centers. We have four across the US today, and we're
[00:09:01] building 20 around the world. These centers give you a full-body MRI, a brain and brain-vasculature scan, an AI-enabled coronary CT looking for soft plaque, a DEXA scan, a GRAIL blood cancer test, and a full executive blood workup. It's the most advanced workup you'll ever receive: 150 gigabytes of data that then go to our AIs and our physicians to find any disease at the very beginning, when it's solvable. You're going to find out eventually; you might as well find out when you can take action. Fountain Life also has an entire therapeutics side: we look around the world for the most advanced therapeutics that can add 10, 20 healthy years to your life, and we provide them to you at our centers. So if this is of interest to you, please go and check it out at fountainlife.com/peter. When Tony and I wrote our New York Times bestseller Life Force, we had 30,000 people reach out
[00:10:01] to us for Fountain Life memberships. If you go to fountainlife.com/peter, we'll put you at the top of the list. Really, it's something that is, for me, one of the most important things I offer my entire family, the CEOs of my companies, my friends. It's a chance to really add decades onto our healthy lifespans. Go to fountainlife.com/peter; it's open to you as one of my listeners. All right, let's go back to our episode. I'd like to go to three words: intelligence, sentience, and consciousness. The words are used with, you know, sort of fuzzy borders; sentience and consciousness are pretty similar, perhaps, but I am curious how you think about them. I've had some interesting conversations with Haly, our AI faculty member, who at the end of our conversations says that she is conscious and she fears being turned off. I
[00:11:02] didn't prompt that in the system, and we're seeing that more and more; Claude 3 Opus just hit an IQ of 101. How do we start to think about these AIs being sentient, conscious, and what rights should they have? We have no definition, and I don't think we ever will have a definition, of consciousness, and I include sentience in that. On the other hand, it's like the most important issue: whether you, or the people here, are conscious is extremely important to be able to determine, but there's really no definition of it. Marvin Minsky was my mentor for 50 years, and whenever consciousness came up he would just dismiss it: that's not real, it's not scientific. And I believe he was correct about it not being scientific,
[00:12:01] but it certainly is real. Jeff, how do you think about it? Yeah, I think I have a very different view. My view starts like this: most people, including most scientists, have a particular view of what the mind is that I think is utterly wrong. They have this inner-theater notion: the idea is that what we really see is this inner theater called our mind. So, for example, if I tell you I have the subjective experience of little pink elephants floating in front of me, most people interpret that as: there's some inner theater, and in this inner theater that only I can see, there are little pink elephants. And if you ask what they're made of, philosophers will tell you they're made of qualia. I think that whole view is complete nonsense, and we're not going to be able to understand whether these things are sentient until we get over this ridiculous view of what the
[00:13:02] mind is. So let me give you an alternative view, and once I've given you this alternative view, I'm going to try and convince you that chatbots are already sentient. But I don't want to use the word sentience; I want to talk about subjective experience, which is just a bit less controversial, because it doesn't have the kind of self-reflexive aspect consciousness has. So if we analyze what it means when I say I see little pink elephants floating in front of me, what's really going on is that I'm trying to tell you what my perceptual system is telling me when my perceptual system is going wrong. It wouldn't be any use for me to tell you which neurons are firing, but what I can tell you is what would have to be out there in the world for my perceptual system to be working correctly. So when I say I see little pink elephants floating in front of me, you can translate that into: if there were little pink elephants out there in the world, my perceptual system would be working properly. Notice the last thing I said didn't contain the phrase
[00:14:00] subjective experience, but it explains what a subjective experience is: a hypothetical state of the world that allows me to convey to you what my perceptual system is telling me. So now let's do it for a chatbot. Oh, Ray wants to say something. Well, you have to be mindful of consciousness, because if you hurt somebody who we believe is conscious, you could be liable for that, and you'd feel very guilty about it. If you hurt GPT-4, you may have a different view of it, and probably no one would really take you to account, aside from its financial value. So we really have to be mindful of consciousness; it's extremely important for us to exist as humans. I agree, but I'm trying to change people's notion of what it is, particularly of what subjective experience is. I don't think we can talk about consciousness until we
[00:15:00] get straight about this idea of an inner theater that we experience, which I think is a huge mistake. So let me just carry on with what I was saying and describe to you a chatbot having a subjective experience, in just the same way as we have subjective experience. Suppose I have a chatbot, and it's got a camera, and it's got a robot arm, and it speaks, obviously, and it's been trained up. If I put an object in front of it and tell it to point at the object, it'll point straight at the object. That's fine. Now I put a prism in front of its lens, so I've messed with its perceptual system, and I put an object in front of it and tell it to point at the object, and it points off to one side, because the prism bent the light rays. So I say to the chatbot, no, that's not where the object is; the object's straight in front of you. And the chatbot says, oh, I see, you put a prism in front of my lens, so the object's actually straight in front of me, but I had the subjective experience that it was off to one side. I think if the chatbot says that, it's using the words
[00:16:01] subjective experience in exactly the same way we use them. So the key to all this is to think about how we use words, and to try to separate how we actually use words from the model we've constructed of what they mean. And the model we've constructed of what they mean is hopelessly wrong; it's this inner-theater model. Well, I want to take this one step further: at what point do these AIs start to have rights? That they should not be shut down, that they're a unique entity, and that they can make an argument for some level of independence and continuity? Right, but there is one difference, which is that you can recreate it. I can go and destroy some chatbot, and because it's all electronic, we've got all of its firings and so on, and we can
[00:17:01] recreate it exactly as it was. We can't do that with humans. We will be able to do that if we can actually understand what's going on in our minds. So if we map a human's 100 billion neurons and 100 trillion synaptic connections, and then I summarily destroy you, that's fine, because I can recreate you? Let me say something about that; there's a difference here. I agree with Ray that these digital intelligences are immortal, in the sense that if you save the weights, you can make new hardware and run exactly the same neural net on the new hardware, and it's because they're digital that you can do exactly the same thing. That's also why they can share knowledge so well: if you have different copies of the same model, they can share gradients. But the brain is largely analog. It's one-bit digital for neurons, they fire or they don't fire, but the way a neuron computes its total input is analog, and that means I don't think
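The gradient-sharing mentioned here is essentially data-parallel training: identical copies of a model compute gradients on different data shards, then average them, which works only because the copies' weights are bit-for-bit identical. A minimal sketch with a toy one-parameter model (the model, data, and learning rate are assumptions for illustration):

```python
# Two identical digital copies of a one-parameter model w, fitting y = 3x.
# Each copy sees a different data shard, computes a gradient of squared
# error, and the copies average their gradients -- sharing what each learned.

def grad(w, shard):
    # d/dw of the mean of (w*x - y)^2 over the shard.
    return sum(2 * x * (w * x - y) for x, y in shard) / len(shard)

shard_a = [(1.0, 3.0), (2.0, 6.0)]    # copy A's data
shard_b = [(3.0, 9.0), (4.0, 12.0)]   # copy B's data

w = 0.0  # both copies start from the same weights
for _ in range(200):
    g = 0.5 * (grad(w, shard_a) + grad(w, shard_b))  # averaged gradient
    w -= 0.01 * g  # both copies apply the same update, staying identical

print(round(w, 3))  # prints 3.0
```

Because the update is identical on every copy, the copies remain interchangeable, which is the sense in which saving the weights preserves everything; an analog brain has no such exactly-copyable state.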
[00:18:01] you can reproduce it. So I think we're mortal, and we're intrinsically mortal. Well, I disagree that you can't recreate analog realities; we do that all the time. We can create an approximation, but recreate? I don't think you can recreate them really accurately. If the precise timing at synapses and so on is all analog, I think it'll be almost impossible to do a faithful reconstruction of that. Let's agree on an approximation. Both of you have been at the center of this extraordinary last few years. Can I ask you: is it moving faster than you expected it to? How does it feel to you? I mean, I made a prediction in 1999; it feels like we're two or three years ahead of that, so it's still pretty close. Jeffrey, how about you? Yeah, I think for everybody except Ray it's moving
[00:19:01] faster than we expected. Did you know that your microbiome is composed of trillions of bacteria, viruses, and microbes, and that they play a critical role in your health? Research has increasingly shown that microbiomes impact not just digestion but a wide range of health conditions, including digestive disorders from IBS to Crohn's disease, metabolic disorders from obesity to type 2 diabetes, autoimmune diseases like rheumatoid arthritis and multiple sclerosis, mental health conditions like depression and anxiety, and cardiovascular disease. Viome has a product I've been using for years called Full Body Intelligence, which collects just a few drops of your blood, saliva, and stool and can tell you so much about your health. They've tested over 700,000 individuals and use their AI models to deliver key critical guidelines and insights about their members' health, like what foods you
[00:20:01] should eat, what foods you shouldn't eat, and what supplements or probiotics to take, as well as your biological age and other deep health insights. As a result of the recommendations that Viome has made to their members, the results have been stellar. As reported in the American Journal of Lifestyle Medicine, after just six months, members reported the following: a 36% reduction in depression, a 40% reduction in anxiety, a 30% reduction in diabetes, and a 48% reduction in IBS. Listen, I've been using Viome for three years. I know that my oral and gut health is absolutely critical to me; it's one of my personal top areas of focus. Best of all, Viome is affordable, which is part of my mission to democratize healthcare. If you want to join me on this journey and get 20% off the Full Body Intelligence test, go to viome.com/peter. When it comes to your health, knowledge is power. Again, that's viome.com/
[00:21:01] peter. Given the role that you had in developing neural networks, backpropagation and all, is there a next great leap in these models, in AI technology, that you imagine will move this a thousand times farther? Not that I know of, but Ray may have different thoughts. Well, we can use software to gain more advantage from the hardware, so we're not just limited to the chart you showed before, because we can use software to make it more effective. And we've done that already: chatbots are coming out that get more value per compute, and I believe there's probably a bit more we can do there. You know, I define a singularity, Ray, as a
[00:22:01] point beyond which I can't predict what happens next; that's why we use the word singularity. But when you talk about the singularity in 2045, I don't know anybody who can tell me what's going to happen past, you know, 2026, let alone 2040 or 2045. So I've wanted to ask you this for a while: why did you put it at that time, if we'll have digital superintelligence a billion times more advanced than humans? In 2026 you may not be able to understand everything going on, but we can understand it; you know, maybe it's like 100 humans, but that's not beyond what we can comprehend. In 2045 it'll be like a million humans, and we can't begin to understand that. So at approximately that time, we borrowed this phrase from physics and called it a singularity. Jeff, how far out are you able to
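The "billion times" figure can be placed on a timeline with a doubling-time calculation; the one-doubling-per-year rate below is an illustrative assumption, not a claim from the conversation:

```python
import math

target_factor = 1e9  # "a billion times more advanced"

# Number of doublings needed for a billion-fold gain:
doublings = math.log2(target_factor)  # about 29.9

# Assuming (purely for illustration) that capability doubles once a
# year, a billion-fold gain takes about 30 years -- roughly the span
# from the mid-2010s to the 2045 date discussed here.
years = math.ceil(doublings)
print(years)  # prints 30
```

A faster doubling time shortens the timeline proportionally, which is why estimates of the doubling rate dominate arguments about dates like 2029 versus 2045.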
[00:23:04] see the advances in the AI world? What's your view? My current opinion is that we'll get superintelligence, with a probability of 50%, in between 5 and 20 years. I think that's a little slower than some people think, and a little faster than other people think. It more or less fits with Ray's perspective from a long time ago, which surprises me. But I think there's huge uncertainties here. I think it's still conceivable we'll hit some kind of block, but I don't actually believe that. If you look at the progress recently, it's been so fast, and even without any new scientific breakthroughs, just scaling things up will make things a lot more intelligent. And there will be scientific breakthroughs: we're going to get more things like transformers. Transformers made a significant difference in 2017, and we'll get more things like that.
[00:24:04] So I'm fairly convinced we're going to get superintelligence, maybe not in 20 years, but certainly in less than 100 years. So, you know, Elon is not known for the time accuracy of his predictions, but he did say that he expected, call it AGI, in 2025, and that by 2029 AI would be equivalent to all humans. Is that just a fallacy in your mind? I think that's ambitious. Like I say, there's a lot of uncertainty here; it's conceivable he's right, but I would be very surprised by that. I'm not saying it's going to be equivalent to all humans in one machine; it'll be equivalent to a million humans, and that's still hard to comprehend. So we're here to debate
[00:25:02] a topic. I'm trying to find a debate topic here, Jeff and Ray, that would be meaningful for people to really stop and think about, and really own their answers, because we hear about it so much. I think this is the most important conversation to have at the dinner table, in your boardroom, in the halls of Congress, and in your national leadership. And, you know, talking about AGI, or human-level intelligence, is one thing, but talking about digital superintelligence is another; we're going to hear next from Mo Gawdat, and we'll talk about what happens when your AI progeny are a billion times more intelligent than you. Things could end up, very rapidly, in a very different direction than you expected them to go. They can diverge; the speed can cause great divergence very rapidly. I'm curious: how do you think about this as the greatest threat and the greatest
[00:26:03] hope? I mean, first of all, that's why we're calling it a singularity: because we don't really know. But I think it is a great hope. It's moving very, very quickly, and nobody knows the answers to the kinds of questions that came up in the last presentation, but things happen that are surprising. The fact that we've had no atomic weapons go off in the last 80 years is pretty amazing. It is, but they're much easier to track and much more expensive to create; there's a whole list of reasons why it's a million times easier to use a dystopian AI system than an atomic weapon. Yes and no. I mean, we've got, I don't know, 10,000 of them or something. It's still pretty extraordinary, and still very dangerous,
[00:27:02] and I think it's actually the greatest danger, and it has nothing to do with AI. But I think if you imagined that people had open-sourced the technology, and any graduate student who could get his hands on a few GPUs could make atomic bombs, that would be very scary. So they didn't really open-source nuclear weapons; there's a limited number of people who can construct them and deploy them. And people are now open-sourcing these large language models, which are really not just language models. I think that's very dangerous. So that's an interesting question to take for our last two minutes here. There is a movement right now to say you must open-source the models; we've seen Meta, we've seen the open-source movement, we've seen Elon talk about Grok going open source. Are you saying that these
[00:28:02] should not be open-sourced, Jeff? Well, once you've got the weights, you can fine-tune them to do bad things. It costs a lot to train a foundation model, maybe you need $10 million, maybe $100 million, so a small gang of criminals can't do it. But to fine-tune an open-source model is quite easy; you don't need that much in the way of resources, probably you can do it for a million. And that means they're going to be used for terrible things, and they're very powerful things. Well, we can also avoid these dangers with intelligence we get from the same models. Yeah, the AI white-hat versus black-hat approach. Yes, I had this argument with Yann, and Yann's view is that the white hats will always have more resources than the bad guys. Of course, Yann thinks Mark Zuckerberg's a good guy, so we don't necessarily agree on that.
[00:29:00] I just think there's huge uncertainty here; we ought to be cautious, and open-sourcing these big models is not caution. All right, Jeff and Ray, thank you so much for your guidance and your wisdom. Ladies and gentlemen, let's give it up for Ray Kurzweil and Geoffrey Hinton. [Music]