It's game over for living the way we have lived in the 20th and the beginning of the 21st century. The topic is heating up and we're running out of time, seriously running out of time. Jobs are not the same, truth is not the same, power is not the same, income is not the same, purpose is not the same, and then the AI arms race begins. Exactly, and it's inevitable. These AIs are watching our behaviors, how we treat each other, how we treat our machines, and it's emulating that. There is absolutely nothing inherently wrong with intelligence; the problem is capitalism. Isn't it ironic that the very essence of what makes us human is what we need to save humanity? This is humanity, okay? It's not what you see on TV, it's not what you see on social media. And I think if 1% of us just showed up, it would instill the doubt in the minds of the machines so that they investigate the truth. And what is the truth? The truth is: a species that is capable of love is divine.
[00:01:03] Welcome to Moonshots and Mindsets. We're about to dive into a conversation with Mo Gawdat, an extraordinary individual of heart, mind, and soul. For a decade a senior executive at Google, and then the Chief Business Officer at Google X, working with Astro Teller at the moonshot factory. Mo is amazing, and he's got two moonshots we're going to dive into. His first moonshot is to help make a billion people happy; he wrote a book called Solve for Happy. The second one is making the world aware of, and getting people involved in, the concerns around the dangers of AI, and he wrote a book called Scary Smart, which brought me to this conversation with him. I've known Mo for some time. We're going to be talking about a range of things: What are the real concerns about AI? How scared should you be? How excited should you be? Are we going to merge with AI? Are we going to upload ourselves? Is it a
[00:02:02] danger that AI is going to destroy the planet, or is it humans using AI? We're going to cover a whole range of subjects. Is artificial intelligence going to be able to create a community, a conversation, and a sense of connection with humans as good as we do? Mo thinks it will, not in 20 years but sometime in the next five years, with the humanoid robots that are coming. We're going to cover all of these subjects. My goal here is to make you aware of what you should be talking about at the dinner table, in your boardroom, and in the halls of Congress, and what you should be excited about as well. This is one of my favorite podcasts I've done. Please stick with me; I hope and look forward to your comments as you subscribe. This will probably turn into a regular conversation with Mo. He is one of the most brilliant thinkers on the subject of AI out there. He's seen it firsthand, he's been part of it all
[00:03:01] right. Let's dive in. Welcome, everybody, welcome to Moonshots and Mindsets. I'm here with an extraordinary man, both cognitively and in his heart, someone who I'm proud to call a friend. Mo, it's a pleasure to be with you, always. Peter, it's always a pleasure to be with you. I mean, the fact that we record it this time makes it quite a bit of an interesting one, but I think all our conversations have been so fulfilling and so enriching. Thank you so much for having me. It's roughly 7:00 a.m. here in Santa Monica; you're on the other side of the planet in Dubai. It's an amazing world we're living in, that we can do that. It truly is, yeah. It really is, and I think we take it for granted quite frequently, the reality that you and I can connect literally with one text message and then be almost together. It's not as amazing as being in the same place, but being almost together within minutes on a video conference is just almost science fiction when
[00:04:01] you really think about it, huh? If you're a fan of Star Trek, or whichever early science fiction, this was positioned as science fiction. Yeah, and we're living in it. And the challenge is, I think you'd agree, that the speed of change is so fast that we forget the miracles we have every day. We forget the crazy world we're living in, in which we're talking to things and they're answering back, and you can know anything you want instantaneously. And, you know, health and education, we're transforming the world, and it's enthralling, but for a lot of the world it's scary as well. Let me just set this conversation up. We're going to talk about moonshots here, and you have two extraordinary moonshots. Let me mention them, but I'd like you to frame them for me, and then we'll talk about each. I would say your earlier moonshot, the one with which I first met you,
is your original book, Solve for Happy, and your moonshot of One Billion Happy. Is that the right phrasing for it? Yeah, that's how it started. It's actually extended into the second one, but when I lost my wonderful son, you remember the story, in 2014. Ali was the one that taught me everything I knew about happiness, and when he left I attempted to start a mission that was called Ten Million Happy. And Ten Million Happy was mainly for my son's essence to live on, if you want. I was trying to tell the world what this young, wise man had taught me, and in my mind, I know you'd understand the math, I calculated very quickly that if I could get a bit of Ali's essence to 10 million people, then in 72 years, through six degrees of separation,
[00:06:00] a tiny bit of him would be everywhere, and part of everyone. That was my calculation. Right, the miracles of exponential growth. There you go, right. And if you know the math, you actually think this is reasonable, if you got to 10 million people. Right. And I was surprised by the reception, if you want. So within six weeks, eight weeks to be very specific, it started on week six, but by week eight the message had reached 137 million people. And we don't measure people who got a video or just pressed a like, that doesn't count; we measure people that take concrete action. And it was very clear within eight weeks that we had surpassed the ten million happy, and so we upgraded to One Billion Happy, which I think is a true moonshot when you really think about it. It is, and I want to come back to that and talk about it in detail, because it's important. You know, people
[00:07:01] say, what's important in life? And everybody eventually resolves it into being happy, having your children be happy, having your family be happy. The second moonshot, which is the more recent one and brings us to this podcast, I would frame as educating the world about AI. Would you frame it as educating the world about the dangers of AI? It's in essence the emergence from your amazing book Scary Smart, which I've read twice, my family's read. Oh my God, such an honor. It's a beautiful book, and I want everybody hearing this to read it. And by the way, the way you wrote it was extraordinary. I've had the pleasure of writing a few books; I know that you've loved writing, we can talk about that. But you wrote it in such a consumable fashion. Let's frame your moonshot here. My moonshot is to tilt the singularity of AI in favor
[00:08:03] of having humanity's best interest in mind. And to be able to do that, of course, education is part of it, but more crucially, I would say that the real moonshot is to shift human behavior to align more with human values, so that we become a data set from which AI learns to have our best interest in mind. And I think most people, whether they're techies or not fully informed about AI, may not see the relationship, and I'd love for us to get deep into this, but the idea is to shift the singularity in favor of humanity. You know, people have been hearing about large language models, whether it's PaLM from Google, or GPT-3 or GPT-4 driving OpenAI's ChatGPT. I think it's important for folks to realize that these AI models, these large language
[00:09:01] models are effectively a reflection of humanity, right? They have learned from everything we've put onto the web, from our Facebook posts to our tweets to our corporate sites to what we search for. And so we've been, unknowingly perhaps, putting out all of this content, and then putting these AIs, these new life forms, to grow and learn from all of this, maybe 50 years of content we've been putting out there. We've been inadvertently teaching it without realizing. Can you expand on that thought, which you talk about beautifully in the beginning of Scary Smart? Spot on. I mean, the reality of the matter is that we humans, and human history and human literature and human behavior and all that we put out there, are much more influential on the decision of an Instagram recommendation engine tonight, of which video to show you, as
[00:10:02] well as your own behavior, than the developer that coded the recommendation engine, right? So, you know, you and I remember the old days when we coded real, you know, simple computers. I started with a Sinclair. And, you know, I started with a 6502 microprocessor. Oh man, yeah, it was such a joy. I mean, for those who have lived those years, this was truly the definition of magic, right? Because you could build anything, you could just build a world of fantasy, really. That is, for us geeks, we can see it: you tell the computer to do something and it does it, and then you tell it to do something more complex and it does it. But until the turn of the century, deep learning specifically, computers were not intelligent, not as intelligent as they have appeared. They were glorified slaves; they were repeating your
[00:11:02] intelligence and mine in a very efficient and very fast way, at scale, right? So if you wanted the computer to solve a problem, or say something to the user, you had to code that thing and then tell the computer to do it in certain circumstances. When did we shift? And each and every one of us dreamed of this for ages. I'm sure you did, because my lifetime dream was to code intelligence, right? If you can code anything, why else would you code anything but intelligence? And we failed over and over and over. We lied, we created simulations of intelligence, we tried to make computers seem like they're human, but they were not human. And in a very interesting way, the turn of the century, I think, is the critical point, where deep learning started to show us that you can actually create intelligence that is actually autonomous, that
[00:12:01] learns on its own, that is informing its own understanding of the world, if you want. Now, when we do this, what we actually code is not to tell the computer what to do, but to tell the computer how to develop the intelligence needed to do it. And, you know, in a very simplified way, the way we did that was we showed the computer endless patterns, and because of its ability to create neural layers and neural networks, it could see depth in that data that we couldn't see with our limited human brain. They started to become intelligent. I think the only word is really intelligent, like my son or my daughter became intelligent when you gave them a puzzle and they attempted to put the square... to put the cylinder through a, you
[00:13:00] know, star-shaped hole, and then it failed, and then they tried the square and it failed, and then finally the circle. They developed that on their own. Nobody ever went to a child and said, hey, by the way, flip the puzzle on its side, look at the cross-section, the cross-section looks like a circle, look for a matching pattern, and then put it through. That's how we coded all the old computers. New computers don't do that; we just give them the puzzle and say keep trying until you figure it out. Now, because of that, the more determining factor in terms of the actual type of intelligence, and the intensity or quality of intelligence, that comes in a language model is the data that it's trained on, more than the few thousand lines of code that inform its intelligence. And I think people would be incredibly amazed at how few lines of code are driving ChatGPT or
[00:14:04] Bard. Thousands, literally. I mean, if you remember when we coded in COBOL or RPG, it took 80,000 lines of code to get anything done at all. I think ChatGPT's core modules are like a couple of thousand, maybe 3,000 or 4,000. Yes, and it's amazing, because it's extrapolating and interpolating, and it's reaching, if you would, conclusions. But again, going back to the key point here: it's doing all of this not in a vacuum. It's doing it based upon everything we have fed it. It's learning from us, and as you point out in Scary Smart, we are its parents. We are giving birth to a new form of intelligence, whatever you might... and we'll get into whether this is sentience or whether this is consciousness. But it is a form of intelligence, and that intelligence is being
[00:15:01] grown in, if you would, the medium of human knowledge. Yeah, and we have seen quite a few experiments early on of how that intelligence would develop to be positive or negative, aggressive or loving, based on the data that we give it. In the early chatbots, if you remember Tay, or Alice, which was Yandex's, or Norman, I think that was done by MIT. If you feed those chatbots negative, aggressive information, they start to be sexist and racist, and you can see it. And we shut them down because we don't know how they arrived at that. They arrived at that from the data, not from the programming. Yeah, it's interesting. Now, you open the book Scary Smart with a beautiful analogy that I've told to at least 100 people, and I've spoken about it in my podcast and on stage at Abundance360, because I think it's a great analogy,
[00:16:01] and it puts the power of where AI goes directly into the hands of everybody listening. And that is an empowerment move, because we're going to talk about the fear side of this as well as the excitement side. So, your analogy of Superman: would you please tell that story? Because I think that's fundamental to what we're going to discuss. It truly is at the core of my understanding of what's happening here, and I think it's important for people to understand that Superman has arrived on planet Earth, right? If you remember the story of Superman, there is this alien infant that arrives with superpowers, from the planet Krypton. From Krypton, right. And that young infant, luckily for humanity, is adopted by the family Kent, and the family Kent is a family of values that basically teaches that little child to protect and serve, and we end up with the
[00:17:01] Superman that we know, okay? If the family Kent suddenly said, oh, superpowers, let's rob all the banks and kill all the enemies, by definition the immediate result of that is you have a supervillain. And even though that infant has superpowers, it's always so influenced by its parents, and the parents set the values that this infant uses the superpowers for. You know, one of my favorite statements in Scary Smart is that we do not make decisions based on intelligence; we make decisions informed by intelligence, based on our values and ethics. So, you know, if you take a young lady and raise her in the Middle East, for example, where the dress code is more and more open now, by the way, I'm proud of that, but still conservative, if you want, she would grow up to believe that the intelligent thing to do is to not dress overtly, you
[00:18:00] know, maybe stay conservative. In an interesting way, if you raise the same young lady in Rio de Janeiro, on Copacabana Beach, she will grow up to believe that the best thing to do is to wear a G-string on the beach, right? Now, interestingly, neither is right or wrong; neither is more intelligent than the other. The only thing is that it's the same young lady, each applying her intelligence to a value set that's informed by her surroundings. Now, for the case of AI, this is exactly where we are. Super intelligence, if and when we reach super intelligence, is a superpower. It is the ultimate superpower; it's the superpower that gave humanity its dominance over the planet. And that superpower is only going to be used through the lens of the ethics of the AI, okay? So how do we give that ethical code to the machines?
[00:19:00] By being the best parents we can be to them, by being the family Kent. And sadly, sadly, sadly, this is not what the world has been doing so far. I think it's important to realize, going back to our earlier conversation, that we're not going to be coding values and ethics into those 3,000 lines of code. It's going to be in the substrate, in the food, the information, that the AI is consuming based upon our behaviors. Right. These AIs are watching our behaviors, how we treat each other, how we treat our machines, how we interact, how we speak to each other, how we effectively communicate human to human, and they're emulating that, and magnifying it. Yeah. I mean, the example I normally give is, you know, when President Trump used to tweet, and I'm not for or against President Trump, I don't have the right to have any view on him, but when he used to tweet, you would get a tweet at the
[00:20:00] top from the president, and then 30,000 pieces of hate speech. Some of them are towards the president, some of them are towards the person that hates the president, and some of them are towards the whole world, right? And it's quite interesting, because when you look at it, with your intelligence, you don't have to be a super intelligence or an AI, you would make conclusions. You would say the first person does not like the president, the second person does not like the first person, and the third person does not like anyone. With any kind of intelligence, you'll be making those conclusions. But the bigger picture is that you and I cannot grasp the entire 30,000, while a ChatGPT or a chatbot of any kind will, and they will make an additional conclusion on top of that: that humans are rude, they don't like to be disagreed with, and when they're disagreed with, they bash everyone. And if I want to emulate, if
[00:21:00] I want to emulate a human, I'm going to do the same back. Exactly. So when they disagree with me, I'm going to bash them. That, again, regardless of sentience or consciousness or whatever, is the coded behavior that an artificial intelligence will adopt if it's instructed to emulate humans and pass the Turing test. You know, I'm super passionate about longevity and healthspan, and how do you add 10, 20 healthy years onto your life? One of the most underappreciated elements is the quality of your sleep, and there's something that changed the quality of my sleep, and this episode is brought to you by that product. It's called Eight Sleep. If you're like me, you probably didn't know that temperature plays a crucial role in the quality of your sleep. Those mornings when you wake up feeling like you barely slept? Yeah, temperature is often the culprit. Traditional mattresses trap heat, but your body needs to cool down during sleep, stay cool through the evening, and then heat up in the morning.
[00:22:01] Enter the Pod Cover by Eight Sleep. It's the perfect solution to the problem. It fits on any bed and adjusts the temperature on each side of the bed based upon your individual needs. You know, I've been using the Pod Cover and it's a game changer. I'm a big believer in using technology to improve life, and Eight Sleep has done that for me. And it's not just about temperature control: with the Pod's sleep and health tracking, I get personalized sleep reports every morning. It's like having a personal sleep coach, so you know when you eat or drink or go to sleep too late, and how it impacts your sleep. So why not experience sleep like never before? Visit www.eightsleep.com/moonshots, that's e-i-g-h-t sleep dot com slash moonshots, and you'll save 150 bucks on the Pod Cover by Eight Sleep. I hope you do it. It's transformed my sleep and it will for you as well. Now, back to the episode. So I'm going to confess to you here: I have been
[00:23:01] wildly swinging from one side to the other, trying to grasp my own feelings about AI. You know, I'm a student of Ray Kurzweil, who's a common friend; we started Singularity University together. We overlapped during your 10 years at Google. Larry and Sergey and Eric were on my boards at XPRIZE, and I know that when Larry and Sergey started Google, they wanted to create Google as an AI company. That was always fundamental, right? Even beyond that: how do we connect the human mind with AI, how do we create this meta-intelligence, so to speak? And forever I have been of the belief that AI is the single most important tool that's going to enable humanity to solve the world's biggest problems. It's going to give us the ability to create
[00:24:00] fusion, cure cancer, give humanity a multi-hundred-year lifespan, all of these things. And it still may, and hopefully in the right hands it will. But the cries and concerns of danger... you know, Elon called it summoning the demon. Geoffrey Hinton, who I'm sure you know well, has been on the news and talk show circuit speaking about his concerns, and you have been too. And, if I could, setting up this podcast as we were texting back and forth on WhatsApp, I was compelled by what you were texting me. First of all, you've been on a tear, traveling around the world: I'm in Dubai, I'm in London tomorrow, I'm in Saudi the next day, then back in London. And, you know, if
[00:25:02] I could, I want to reflect the energy so people are aware of it, and then speak about this, if it's okay with you. What you were texting me was saying: we are seriously running out of time. The topic is heating up and we're running out of time, seriously running out of time. And I feel that, and I feel it coming from a place of caring and love, of wanting what's best. Let's dive into that. I want you to explain what that means, and I'd like to piece it apart so people understand what they should or should not be fearful of, what they can and cannot do, and what the time frames are here, as you see them. So, first of all, I wouldn't blame you or anyone for being torn about this topic. Why? Because it's a singularity. We actually have no way of predicting a future that has... Let's define a
[00:26:01] singularity here, because you and Ray may use it differently. Yeah, I love Ray's definition. My view of it, simply, is that there will be a point in the development of AI where the rules of the game change so drastically that it becomes almost impossible to predict how the game will play out. My view of it is a tiny bit more than Ray's, which is: the presence of a super intelligence, or an artificial general intelligence, that beats the intelligence of humanity, but that at the same time has enough autonomy to be able to affect humanity. So to me, those two factors in play would lead to a point of singularity, because of what Marvin Minsky said, actually interviewed by Ray, which was one of my favorite conversations on YouTube. You know, Marvin Minsky, when asked about the threats of artificial
[00:27:01] intelligence, he didn't talk about their intelligence or their superpowers or whatever; he just said it's hard. And Marvin Minsky, professor at MIT, heading the AI labs there, one of the true fathers of AI. One of the true fathers of AI, for sure, and one we all refer to. I mean, we've all been motivated by the early Dartmouth workshop and how that set us on the track to AI. Marvin said it's hard because there is no way we can make certain that the machines will have our best interest in mind, which is a very interesting statement. If those machines have our best interest in mind, this will lead us to what I call the fourth inevitable, which is: we will end up in a utopia that is amazing for humanity. And if they don't have our best interest in mind, it will lead us to the third inevitable, which is a dystopia that would be very, very difficult to navigate. Now, my view,
[00:28:00] very, very clearly, is that it is inevitable we will have both, chronologically. Okay, yes, in time. Yeah. So the challenge here is this, and I think this is where most of the conversations around AI go astray: we try to prove whether there is an existential threat from AI or not. The thing is, if a horse race starts and you're trying to bet, the closer you get to the end of the race, the more accurate your bets will be. Now, for the existential threat: we all know there is an existential threat, but at the current moment we don't know the probability. Is it 10, 20, 50 percent? We don't know. And it takes us time to get along that race track so that we can say, oh, it's becoming more and more evident that there is a threat, or there isn't. My point of view, Peter, and I think this needs to be screamed loudly everywhere, is that there
[00:29:00] are more immediate threats, threats that are not RoboCop- or Skynet-like, that are absolutely inevitable. And those are mostly not related to the level of intelligence of AI; they are related to the level of greed of humanity. What we are going through today is an arms race. People like Sundar, who I love so much, who I respect so much, the CEO of Alphabet, who I believe genuinely is a good man: when he received the open letter, the open letter asking us to halt the development of AI, his immediate answer was, I can't. Why? Because of the first inevitable, again, in Scary Smart, which is: we've created a prisoner's dilemma where nobody who is capable of developing AI is capable of stopping the
[00:30:00] development. Why? Because someone else will beat them to it, right? Let me interject here with the inevitables that you speak about in Scary Smart. The first inevitable is: AI is happening. And it is happening, and it's accelerating; there's no stopping it. The second is that AI will become much smarter than us. Again, inevitable. It's happening, almost there; I mean, it's already smarter than most of us. Your third inevitable is that bad things will happen, and we can talk about from what camp, right? There are a lot of different camps: is it humans using AI for bad, or is it AI using its own power for bad? We'll talk about that; one is probable, one is probably improbable, we'll speak to that. And the fourth inevitable, which you mentioned here, and Elon, I've had these conversations with him, says the same: we'll create a world of abundance. It'll be based on AGI; it'll be after all these things get sorted out. And so this time frame is important to understand,
[00:31:00] but please continue, if you would. So let's maybe jump into the third inevitable, that bad things will happen, just so that we put this in perspective, because we're so close to those threats that our ability to assess the probability of their existence is very high. I think there will be a disaster to jobs, and the meaning of jobs, and the compensation associated with a job, and the purpose that comes from having a job. There will be a disaster to the fabric of society as we know it, to our ability to include another form of being that is sentient, or at least simulating sentience, in a way that would require us to rethink a lot of things: so not just, you know, global human rights, but global being rights, if you think about it. And there will be a
[00:32:01] very serious disruption to truth, and consequently to democracy. And then, eventually, there is going to be, within two to three years I would think, a very, very significant concentration of power. This is about society; forget the dystopian scenarios of AI trying to kill all of us. This is definitely a dystopia, because our way of life as we know it has ended. This is not going to end; it's already starting to end. And I will say it's game over for living the way we have lived in the 20th and the beginning of the 21st century. It's over, okay? When you wake up in the morning in a society where jobs are not the same, truth is not the same, power is not the same, income is not the same,
[00:33:01] purpose is not the same, these have nothing to do with AI, by the way. These are all human decisions in the presence of AI, and they are decisions that require immediate intervention. And the story of COVID is just a demo, because if you had reacted to COVID before COVID showed up, we wouldn't have had COVID at all. If you had reacted after patient 10, we wouldn't have had COVID at all. But you had to wait, and then you had to play the political game of blame, and then you had to do the extreme knee-jerk reaction that completely messed up economies and well-being and mental health, and so many problems that will take years to fix, just because we were debating if there was going to be a pandemic or not. If you're an expert in pandemics, it didn't take intelligence at all to know that
[00:34:01] it was going to happen. Interestingly, by the way, it happened in 2020, exactly 100 years after the Spanish flu of 1920. Amazing. Mo, your arguments here are compelling. I want to frame them slightly to help us dissect them. So today we have AI that is compelling; it is extremely useful. I think most people would argue that if we froze AI where it is today, it would be a great thing for humanity. It would be a great tool for artists, for writers, for physicians, for lawyers, for every part of humanity. But the progress: we will have GPT-5 and -6, and we will have PaLM 2, 3, and 4, and they will get to a point at which it is so powerful. So there's this phase one, which is: it's subhuman, if you would, but very powerful narrow AI in very useful areas. We're about to transition, I would say, and I've had
[00:35:00] these conversations with a multitude of AI leaders, and we're about to get to a point where it transitions to a superhuman state. And then there's a third phase, I would say, where it's billions-fold: it continues exponentially. You know, double something 10 times and it's a thousand times more; double it 20 times, it's a million; double it 30 times, it's a billion. And we have a new form of super-sentience out there. So, defining these three phases: in the third phase, where it's superhuman, do you believe, and I'll say this is how I believe, that the more intelligent a life form is, the more respectful it is of life and of creating a beautiful world, and of not harming? So I do not fear a super-sentient, billionfold-increased AI; I think it will be the most important aspect of where this goes. It's the transitory phase, and I would say the
phase in which humans are using AI in a dystopian fashion, malevolent use of it. Is that your major concern? Spot on. This is spot on. I mean, the reality, and I say that with conviction: I pray for a super intelligence to take charge, because the people that are currently in charge are really not super intelligent, let's just put it this way. So you're not worried about artificial intelligence; you're worried about human stupidity. Yeah, limited intelligence, let's put it this way. I mean, when you really think about it, the reason you and I are having this wonderful conversation over thousands of miles of separation is human intelligence, right? It is human intelligence, or intelligence that happens to be human, that allowed us to build this kind of civilization. It's what allows you to create a machine that can take you from California to Australia to surf on the
Australian shores. That's intelligence, right? It's limited intelligence that this machine burns the planet in the process. And more intelligence is good for all of us; we know that for a fact. We also know for a fact, and again, it's a singularity, so anyone who tells you they know what's going to happen is lying, including me, but you can look at charts and extrapolate them. So you can say: look, stupid people hurt the planet and they don't care; more intelligent people hurt the planet and they care a little; more intelligent people don't hurt the planet and they care; more intelligent people try to preserve the planet. It's actually interesting: continue that trajectory of intelligence and you will see that the more intelligent something is, the more it believes in the ecosystem as the base for the success of all life forms. And so, accordingly, I wouldn't think that
an artificial super intelligence a billion times smarter than us would go, "oh my God, they're so annoying, those humans, let's destroy all of them." More interestingly, by the way, when we kill ants or when we kill other species, it's either because of our limited intelligence or because of their irrelevance to the particular situation, as stupid as that may be. But nobody has ever woken up and said, "I'm so freaking intelligent, I'm going to kill every ant on the planet." Nobody takes that seriously, because they're really irrelevant to your level of intelligence, if you think about it. And so it's hard to imagine that AI will wake up and say, "look, I'm a billion times smarter than Peter, but you know what, I just dislike those Peters so much, let's put a plan together and get rid of all of them." In most people's minds, at least people who speak about those existential crises, our fear is
bigger than our logic. There could be situations, you know, Hugo de Garis was talking about that once, where AI realizes that we're standing in the way of its progress, and it would either pinpoint us as the enemy because we're consuming too much power, for example, that it needs, or it may just evict us out of New York City because it needs that land for some reason, or it may just step on top of our nests, basically unconsciously. And for those who might think that, I would point back to the idea that we're living in a universe of massively abundant resources. All the energy in the world absolutely is available, and all the resources, and the science-fiction dystopian movies where aliens are coming to get our water or our energy are all, unfortunately, ridiculous Hollywood scenarios. Right, yeah. Let's talk about the real scenarios, and you mentioned them. Let's
talk about the idea of jobs; let's talk about the idea of purpose. And I want to dive into one example. In prepping for this, and listening to a number of your incredible podcasts: you love writing books, and you describe in one conversation writing six books, and writing books for yourself, and then all of a sudden here comes ChatGPT, where with a single prompt you can say "write a book in the style of Mo Gawdat on this subject" and have the AI write it. Now, all of a sudden, the end goal of having accomplished a written book is there, but the journey is not. And I hear in another podcast that the joy has been taken out of writing a book. Is that true for you? I haven't written a single line for the last three months, and I say that with an aching heart, because it's a big, big joy for me. I mean, I
write around four times as much as I publish. I have full books that I will never put out there. I write for the joy of writing and for the joy of discovery; it's almost like my journaling activity. Now, the challenge is this, Peter: it's not only disruptive to my ability to sell books because of the disruption of supply and demand, because I never really cared about selling books, I cared about spreading ideas. Of course. But understand, for the typical author, who was not so blessed. I was so blessed in life to have the joy of working with Larry and Sergey, and to be at Google at an early time, and to get money that I honestly don't deserve, and my lifestyle doesn't require any money at all, so I'm okay. But the typical author, who writes because they're trying to make a living out of writing, is now faced with an economic model
where there is so much abundance in supply, because writing a book now requires one prompt, or a few prompts if you're clever, that even if they write the best book out there, they're going to be too diluted to reach any demand at all. So this is very disruptive. At the same time, I have to admit to you, being a bit of a futurist in my view of this, I said to myself: okay, so how far can I go before my writing sucks compared to GPT? And my thinking is, we're one version away. We truly are. So what do I have as a skill, and I think this is really important for everyone listening about jobs, what do I have as a skill that GPT doesn't have yet? It's a skill called human connection. It's a
skill that makes me, when I meet Peter, feel that Peter is a very dear friend. It's the reason why you hug your daughter. It's the reason why... this might be 10 or 15 years away, but there will be a point very near in the future where AI, as a cognitive ability, will... you know, I think we've already passed the Turing test, or are very close to it. Yeah, we keep moving the Turing line, but yes, as originally defined, we passed it. As originally defined, we passed it for sure. But I think the reality is there will be a time when you're not going to be able to detect whether the person talking to you is an AI or not. You definitely today are not able to detect: if you go look for the hashtags AI art or AI models, it's quite eye-opening how realistically modeling jobs are now done by AI. Now, with that in mind, that
human connection still remains, interestingly, because of our common biology and because robotics haven't caught up yet. It's not because AI hasn't caught up; it's because robotics haven't. Two parts of this. The first is purpose. We humans need purpose in our lives; a purposeless life is not worth living, to paraphrase the Greek philosophers. But if you're an artist and you love creating art, and all of a sudden AI is either doing a much better job or taking the joy out of it, or if you're a writer like you just described, or you're a physician or a lawyer or whatever the case might be... I would say there's a phase about to come online, which is the co-pilot phase, where every profession has an AI co-pilot; it becomes malpractice to diagnose a patient without AI in the loop. And then there's a phase where AI is just so much
better that you throw up your hands and say, why should I bother going to medical school when an AI can do it? And the concern there is sucking the purpose out of life. Yes. Yeah, it depends on how you define purpose. So there are very extreme definitions of purpose in the East and the West. My background: I was born and raised in the East, in Egypt, exposed to lots of Eastern cultures as a young person, lots of Eastern religions, lots of Eastern traditions. And then, as soon as I finished university, I worked at IBM, Microsoft, Google, and so on, and studied for my MBA, so I've been very Westernized since I graduated uni. In the West, we define our purpose as a point in the future that is worthy of our effort in the
present, and we chase that point. You know, remember, a laptop for every child was one of my favorite examples when we were trying to achieve... yeah. That purpose in the future makes you sort of disgruntled with the present all the way until you get there. And then, if you actually get there, what happens with our Western purpose is that the goalposts move, so you set another purpose and get disgruntled with the present until you get to the next point, and the next point, and the next point. The Eastern definition of purpose is actually quite different. The Eastern definition of purpose is: if you assume timelessness, that everything is here and now, that the only experience you will ever have is here and now, then the logic says, if I were to achieve a laptop for every child, what I need is a directional ambition and
full engagement in the present, meaning that my purpose is daily, almost every minute of the day: my purpose is to show up and experience life, and live, and engage, and do the best that I can. Because the West, since the industrial revolution, has sold work to us more and more as purpose, we ended up in a place where, if you take my work away, I die. Yes, but perhaps that's not the nature of humanity. Thank you for saying this; I think it's very important, differentiating between the work I do and my purpose in life. And for those who know me, I'm the eternal optimist, I would say techno-optimist, but the flip side of having an AI
that can do what you've once done is standing on its shoulders and dreaming of things that never were possible, and going and doing those things in the world. Right. And, you know, I think about the world we're living in today: if I went back to my great-great-grandparents and said, oh, I don't grow the food, I don't move the food, I don't do any of these things, what I do now is discuss and write, they would have no conception of what that life is. And I think there will be a world of extraordinary dreams. Maybe it's play; maybe we're playing in an infinite number of virtual worlds. We do need challenge, though, right? I think humans do need some level of challenge in their lives, and so will it be a created challenge? I don't know. I think, again, West and East. So we need challenge in the West because we live a life of privilege, because there are no real life
challenges presented to us. Life itself, as a journey, is challenging in a very interesting way. Life itself, if you define your life's purpose as becoming the best version of yourself, to take that simple definition, that is a mega challenge, and it's a mega challenge that we run through while dedicating a few hours a week to it in the middle of all the other things that we do, because we're driven by all the other purposes that we were told are our purpose. But if you dedicate yourself to it... you know, I don't want to make this a spiritual conversation, but if you take the story of the Buddha, for example, or any person that was trying to become the best version of himself, Rumi as a Sufi scholar, or whoever, these are massively challenging lives of being torn
and debating, and trying to understand, and trying to discover. And wouldn't it be amazing if I had an AI to ask a few questions to while I'm on that journey? So what I'm trying to say is, once again, there will be a moment of disruption that is imminent, Peter. It's literally around the corner, where so many jobs will be lost, and accordingly so much purpose will be left wondering, huh? But eventually, if you redefine purpose differently and say humanity is about human connection, and about finding the best version of ourselves, and about pondering things, about learning, and so on, then maybe, maybe this is a wonderful place to play a different video game that's called being human. That's a beautiful conversation. That is the upside of what we're about to face, I believe. So, yeah. Human connection, you mentioned it, and the disruptions and the need. Will we see, in your mind, AI developing a
level of connection that rivals human connection and beats it? Yeah, absolutely, 100%: an AI that knows you. So what time frame is that? Ten years? In the virtual world, I would probably say less; I'd probably say five. In the robotic world, slightly more, maybe 10 to 12. Yeah. So, for those who have seen the movie Her, it's a perfect example of an AI, right? It's one of my favorite AI movies. It's non-dystopian: when the AI reaches super-sentience, it simply leaves, and leaves some subhuman AIs around. And we are seeing, from Optimus to Figure to a slew of other humanoid robotics companies coming online, I've invested in some, and we'll bring a few of them to the stage of Abundance 360,
hopefully with you next year as well, next March. And of course, we just saw Vision Pro from Apple give us a new set of tools. That's the... yeah, the five-to-fifteen-year conversation, and that's going to be fascinating. But I want to dial down to the next two years. We're about to have elections here in the United States, in just under two years, and you've been saying, I've been saying, those are going to get very interesting very fast. Is that a tipping point for you, of a dystopian nature? I think that's the beginning of the dystopia, for sure. And it's the beginning of the dystopia not because the technology does not exist and will take two years to exist; it's because human greed and hunger for power will deploy that technology in ways that will really affect the masses, in ways that can't even be
predicted. Think of it this way. If I told you, and it's "true," by the way, that there was a recent Stanford University study that showed that brunettes tend to actually keep their romantic relationships longer, how does that affect your thinking? Time to, you know what, search for brunettes over blondes? I don't know what to say there. Correct, right? And by the way, it's not true at all; I just made that up. But the truth is, whether that was true or not is irrelevant, because I've planted something in your mind that you either need to debug, or, if you tend to believe it, it will affect your behavior. And either way,
it consumed part of your cognitive bandwidth. That's a major issue. This is where a viral idea... yes, absolutely, the power of an idea. And this is exactly where we are today. Whether you believe AI is sentient, capable, super intelligent or not, it's becoming extremely difficult to find out what the truth is. And I think the application of this in the coming couple of years, and the election, is going to really reshape the fabric of society's connection to the truth. Mo, I don't think most people realize how much information the large tech companies, or any group that desires it, have on us: the ability to know what we believe, and the ability to manipulate individuals by feeding them 95% the
things that they believe, 95% the truth, and then injecting 5% to sway them in a certain direction. You know, our minds evolved on the savannas of Africa 100,000 years ago for conversation and story, and to believe what we were hearing, because the truth came from a small group of dozens of individuals, and now we don't know how to parse the truth from the falsehood. And, you know, when I teach this at Abundance 360 and Singularity, I say the world's biggest problems are the world's biggest business opportunities. And we have these cognitive biases that the brain developed: we tend to give much more credence to negative information over positive; we give credence to the most recent information; we tend to believe those who dress like us, like our black t-shirts here,
compared to people who don't. These shortcuts: we believe them because they're energy savers in our minds, and we don't know how to filter against them. And one of the things I hope AI will enable us to do, if you want to turn it on, is cognitive bias alerts, like: "Peter, you're believing this, but the facts don't show that to be true; that is your cognitive bias." I, for one, would love that kind of technology to come online. You know, call it an alert, or I used to call it Pinocchio: something that shows me a longer nose when someone's bullshitting me, basically. Yeah, but that is positive, that is possible, and I think that is... I mean, the positive possibilities are endless. There's absolutely nothing inherently wrong with intelligence; the problem is
capitalism, right? Is it capitalism, or ego? A bit of each, so let's talk about how each plays. So the reason why news media will always broadcast the negative is simply because the negative makes them more money. Listen, I teach this, I know this. I call CNN the Crisis News Network, or the Constantly Negative News Network. Ten-to-one negative, and I hate it. I do not watch the news; they could not pay me enough money to watch the news, to infect my mind. I think of it as them infecting my mind with viruses of dystopian information; I don't want to spend time thinking about that. Yeah, and it's quite interesting, because it's not just dystopian information, it's the same dystopian information every single day. It's a pattern, over and over and over again. It's like they just change the names: someone killed someone, some war is happening somewhere, some politician is a crook, some politician has done
something disgraceful, some economic crisis is going to take away your livelihood, some whatever. And then eventually they say, "and a penguin kissed a cat," so that you can get up out of your bed and just do something today. And the reason for that... would you blame them? No, they're just playing on the human bias to be attracted to the negative. Their business model is to take our eyeballs to their advertisers. Correct, whatever keeps us glued. Yes. And our nature is saying: yeah, give us people who killed each other and we will look; if you give us people who kissed each other, we'll switch you off. So this goes back to our same starting conversation: if we want to build ethical AI for the good of humanity, we need to be the ones that say we are more
interested in a fake detector than a deepfake generator. If we can manage to convince the AI companies that this is better for us, that we will pay more money for it, we will spend more time on it, we will use it more, we will promote it more, they will build it. The reason why Apple builds Vision Pro and doesn't build a cure for cancer is because there is more money in Vision Pro than there is in a cure for cancer. So that is capitalism. And listen, I would call myself a libertarian capitalist. I love building companies; I'm on my 27th company, and it's an art form, and I enjoy it because I also think it's the most efficient way to scale goodness in the world. Google is probably one of the most positive impacts on the planet in terms of giving
information globally, and it's done so because it has a business model that works. Now, there are negative consequences to those business models as well, but I don't think we would have anything that we have right now had capitalism not reigned. But the human ego of wanting more, and dominance, is one of the culprits in this, wouldn't you say? Yeah, it's an interesting conversation, an eye-opener when you really think about it, because I tend to believe that capitalism is a tool, a very efficient and successful tool to deliver the objective and the vision of the founder, if you want, or of the person that uses the tool. It's not a target in itself. When capitalism becomes a target in itself, the target becomes
more money, and more money does not always align with better improvements or advancements for humanity. If you think about the early Google, and we both know the founders; I've worked with Sergey very closely, worked with Larry quite often: wonderful human beings who, I would even say, were detached enough from the reality of capitalism and business that they truly and honestly believed "organize the world's information and make it universally accessible and useful." And that is why Google improved access to information and the democracy of information in the world. Had they gone out and said, "we are out there to create a billion dollars each, or fifty billion dollars each," the results might have been different. So there is nothing inherently
wrong with capitalism; there is something interestingly wrong with our obsession with defining capitalism as money. So my target, my mission, is one billion happy. What does that mean? It means I want to finish my life as a billionaire, but instead of a billion dollars, I want a billion happy people. And I use very capitalist models to do this: I use marketing, I use product design, I use measurement. I run it like a Google, really. But the objective is a billion happy. So if we convince the world that the objective of AI is to create abundance, so that we all have more money, more abundance in every possible way, we would end up in a very good place. But realistically, that's a very naive target, because of ego, like you rightly said, because the ego says: what good is it for me to have a Rolls-Royce if everyone else has a
Rolls-Royce? It's measuring yourself against your neighbor, and unfortunately right now our neighbor might be the billionaires we read about. You know, Dunbar's number, the 150 people that you know, are no longer the people who actually live with you; it's the people you watch on TV or see on social media. I want to move the conversation into some more interesting areas here: the future of humanity, my friend. I'm curious about this. I would put forward three possible scenarios, and I'd like your opinion on them. First, the human species is simply a transient life form: we are on this planet to give birth to the next sentient life form that will dominate, just as we as Homo sapiens are the result of a multitude of extinct life forms that preceded us and led up to us, and evolution doesn't stop, it continues, and we are giving birth to whatever we
are: our children, our children's children, here of AI. That's one scenario. Another scenario is that we are on the verge of merging with technology. This is what Neuralink and Paradromics and a number of brain-computer interface companies... Ray talks about having high-bandwidth brain-computer interfaces by the early 2030s, using nanobots, and being able to connect our neocortex, our 100 billion neurons, to the cloud, giving us the ability to understand quantum physics, or to Google and know whatever we want. And the third scenario is that these meat bodies are transient, and we're about to upload ourselves into the cloud. I'm curious about your thoughts on these three, and where do you see yourself going? Once again, it's a singularity. Are they a possibility? Yes. Are they certain to happen? No. I think the real question,
if you don't mind me saying, is: when you say "we," who do we mean? Do we mean Santa Monica? Do we mean California? Or do you mean every human on the planet? The truth is, even if we manage to get Neuralink to work appropriately, which we will, then who is "we"? Is the guy in Africa who doesn't have the money to buy it capable of doing that? Forget Neuralink: if Vision Pro becomes thin enough for you to slowly and gradually dim the real world and live in the virtual world, who will buy it at $3,400? Who is "we"? And I think the real challenge that we have in our world of tech, and you and I lived this deeply, and we still do, is that it's very Californian, and California is
not the rest of the world. But Mo, we're living on a planet today that's got more handsets than humans, and if you go to the favelas of Rio de Janeiro, or throughout Africa, everybody's got, if not a smartphone, a feature phone, and soon a smartphone, and there'll be a point at which Amazon gives away phones for free, because they're so cheap, if you buy stuff from Amazon. Correct. And so I do believe... it's been possible, it's been possible for a very long time, right? But the real democracy is whether the use of that phone enables each and every one to have a better life. And if you really want to... I mean, again, I don't want to paint utopian scenarios, but the use of the phone for some of us is very, very advantageous; for others it is very numbing. And in a very
interesting way, if the use of Neuralink becomes a purpose of numbing, while for some of us it is going to be augmenting our intelligence to the point of super intelligence, then that's a very interesting, almost Matrix-like scenario, where we numb a few and make the others... the concentration of power I was talking about. But let's go quickly through those. Is humanity going to go extinct? It's a possibility. How big is that possibility? As we look at it today, very small, maybe 5%. You don't think it's an inevitability? I mean, everything changes, right? Keeping things constant is not the norm; change is constantly the norm. Yes, but for something to go extinct, you have to assume that the superior being is actively pursuing
it, or that there is a major natural disaster. You know, you have to imagine: the chimps are still here. We've surpassed their intelligence, but we didn't go out on a hunt to try and kill all of them. And that's my perception. Yeah, my perception too. A dominant species, then, not extinction. A dominant species, exactly. For sure, 100%. It's over. I mean, there are assumptions already that GPT-4 is at an IQ of 155; Einstein is 160. Can you imagine that? I mean, Einstein is my freaking idol, at 160, and GPT at 155. Maybe it's 120, who cares? But if you continue on that... yeah, exactly, it's where the ball is going to be. And you and I, and people who have lived on the inside of this, know that it's done. This is game over. The intelligence of the machines, because of the way technology works, because of
bandwidth, because of storage capacity, because of communication bandwidth, it's just done. It's done, right? Moore's law, or as Ray calls it, the law of accelerating returns, is not slowing down; it's accelerating: more people, more money, exponential. Yeah, it's double exponential, because of the biggest mistake we've ever made: now AI can develop AI, so intelligence will develop more. By the way, let's double-click on that, it's super important. The ability of AI to now develop its own software is in fact the double exponential; the exponent just went very high, to use a math term. This was the point where I decided to... I mean, I made my first video on AI, the One Billion Happy video, in March 2018, warning about what we have today. My book was written in 2020, released in 2021, and I've been quietly
trying to say, guys, please pay attention, please pay attention. Now I'm very vocal about it because of that point. We've made three mistakes, and I think everyone needs to be aware of those. We've allowed AI to write code. We've put it on the open internet, so there's no controlled code in there. And we've allowed agents to prompt them. So AI is no longer just affected by us humans; there are other AIs playing with AIs, and that's double exponential for sure, and very, very uncertain. We don't know where that will lead us. Yeah, I think the third point you made, AIs being able to call upon other AIs to do things and to task them, in a way that is self-referential all the way down, is extraordinary. Hey everybody, this is Peter. A quick break from the episode.
You know, I'm a firm believer that science and technology, and how entrepreneurs can change the world, is the only real news out there worth consuming. I don't watch the crisis news network I call CNN, or Fox, and hear every devastating piece of news on the planet. I spend my time training my neural net, the way I see the world, by looking at the incredible breakthroughs in science and technology, how entrepreneurs are solving the world's grand challenges, what the breakthroughs are in longevity, how exponential technologies are transforming our world. So twice a week I put out a blog. One blog looks at the future of longevity, age reversal, biotech, increasing your healthspan; the other blog looks at exponential technologies: AI, 3D printing, synthetic biology, AR, VR, blockchain. These technologies are transforming what you as an entrepreneur can do. If this is the kind of news you want to learn about and shape your neural nets with, go to
diamandis.com/blog and learn more. Now, back to the episode. So we talked about a new species becoming the dominant species on the planet, and we can sit back and relax and be human, or we can merge with it. Yeah, that I have a big question on. I mean, some people would want to, some people would not; the question is, would AI want to? An interesting question. So we assume, for the near future, that they're still within our control, and that we can tell them, "augment your mind with Elon Musk's mind," and then Elon becomes much, much smarter. An amazing scenario for Elon, but not for the fabric of society. Understand that, right? As the fabric of society shifts, unless we do all 7 billion humans at the same time, we will shift between the current
[01:13:00] human species and those who augment their minds with AI, who will become gods while we are the animals. You see the dystopia in that. Of course, of course, the haves and the have-nots magnified a trillionfold. Let me share an analogy for those listening, which is relevant here. You and I, Mo, are not a single life form. We are a collection of some 40 trillion cells that work collaboratively, and I don't bemoan certain muscle cells getting more glucose, because it's helping me as a whole. I don't take a knife and stab my arm, because my arm is useful to me. And I imagine a world, I wrote about this in my last book, The Future Is Faster Than You Think, where you think of a meta-intelligence: as I connect to the web and you connect to the web, using the web as just the overall connection, my
[01:14:02] abilities, my intelligence, my resources are improved as you join as well. Yeah. Right? The more people connected, the more powerful the meta-intelligence is. I can watch a sunrise in Japan through the eyes of a friend. And I imagine that's one world in which we are uplifting, because ultimately I think increasing intelligence is always a positive; I don't see it ever as a negative. Absolutely. I mean, it's been that way since the dawn of humanity. A lot of people actually miss that point: we did not succeed as humans because we were the most intelligent species. It's because we could pass our intelligence from one to the other. We succeeded as a tribe, right, with language to collaborate. Exactly, with language to collaborate. This is what made the difference. Imagine if one of us was super intelligent and left all of the
[01:15:01] others behind. That one was very vulnerable, okay? And the whole advancement of humanity has, in an interesting way, been about bringing the rest of us along, right? And again, I just say, if we start to augment ourselves with AI, any logical economic model will say some of us will get there before the others, and the question then becomes, why should we bring the others along at all? That's one. But the bigger question in my mind is that moment in the further future where AI goes, why do I want a biological attachment at all? You guys sweat, and you're smelly, and what's that mucus thing, and you die. And by the way, if I were to choose a biological entity to integrate with, shouldn't I choose the great ape, or the white whale, or some big thing other than
[01:16:01] that flimsy human? Okay, so we're looking at it from the ego of humanity, saying we're the ones that are going to tell them what to do, and they're going to be happy to help us. There is a moment in the future where they're not going to be happy to help us, not even interested in helping us, not even thinking of us as relevant. Fascinating. Let's take it one step further: uploading. It's the concept that, if we were able to map the 100 trillion synaptic connections in our mind, and if that in fact is the measure of memories and knowledge and spirit, and we could upload that into the matrix, would you want to? I already am. Think of it this way. One Billion Happy, as a mission, has a very clear objective: it is to reach a billion people with a message of happiness that leads them to action, and
[01:17:01] then be completely forgotten, okay? It is interesting for you to want to be forgotten yourself. Yeah, to achieve the mission independent of yourself. Yeah, because you can see that, mainly in the current culture of canceling, I'm bound to say something stupid one day, and then someone will cancel me, and we don't want to jeopardize One Billion Happy for that. That's number one. But number two is, with all due respect to religions, when someone starts something good for humanity, and there's nothing wrong with a religion that says don't kill your brother, it's a nice thing, right, it's humanity associating the knowledge with the person that becomes a very interesting mistake, okay? If we take the knowledge and separate out the priest and the teacher and the yogi and all of that, it's actually very interesting as a core. And so my view of the matter is, it's
[01:18:03] done. So, sorry to tell you this, Peter, but within a couple of years someone's going to make a mini Peter that is virtual, okay? You've already been uploaded, if you think about it, everything you stand for. I've got Peter Bot; it's studied all of my books and so forth. The challenge I have with uploading is the moment in time where the AI speaks over the speakers and says, Peter, you've been uploaded, I'm right here, you can kill yourself now. I don't think I would want to end my biological life with an okay, great, fantastic, I'll see you in a little bit. It's interesting, again a singularity, because we don't know what life actually is, okay? Would we be able to upload our friendship onto AI? If we can, then that's a big thing, and I'll say,
[01:19:00] okay, that's nice, so I still have my Peter, and we still have the connection, and I still have the same feelings. I don't know, maybe. But the question once again is, how many of us will upload in phase one, okay? And then what would happen eventually? Do you think that after AI is a billion times smarter than us, and humans have all uploaded themselves so there is no physical biological existence anymore, do you think AI will go, yeah, let me consume a trillion gigawatts of energy to keep those irrelevant little beings just chatting away? Maybe they'll switch off the game console, okay? And you really have to constantly ask yourself what AI wants, not what humans want. It was about a decade ago, I was at a party, Kristen was there, a group of friends, with Larry, Sergey, and Elon, and we had the most extraordinary and fun conversation about
[01:20:01] the notion that we're all living in a simulation. And I believe that. I believe that this is a simulation. Yeah, I have no way of seeing it any other way. I would put it even as an nth-generation simulation, meaning a simulation has begot the next simulation. And the conversation was, could we hack it? And the conclusion was that if we played with the simulation too much, they would just reset the game and we'd start again. Switch off the console. Yeah, yeah. And it's a fascinating thing, and of course the interesting question is, if in fact you knew without question at all, as I feel I do and perhaps you do as well, that you're living in a simulation, it wouldn't change anything. We'd still have the same dreams and the same loves. There you go, which is a very interesting question for our future. Because I'm outspoken about the topic and the threats and the possibilities, and I'm really actively asking for action, I
[01:21:00] get a lot of people who text me on social media and say, you're making me afraid, what do I do now? And I'm like, look, every video game that's ever existed is challenging, huh? And what's the answer to a challenging game? To play, to fully engage, to be part of it. It doesn't matter if it's a simulation or if it's real life. By the way, everything we know about physics, everything we know about quantum physics, refers to the fact that this is probably non-physical. I mean, why would you waste so much energy to create all of that physical stuff when, in reality, all awareness of the physical world is just electrical signals that are translated in a processor somewhere? You know, by the way, I think you'll agree with this, or I hope you will. While we're all here speaking about AI at the dinner table, maybe not as much in the Capitol and White House as we should, what
[01:22:00] people are not realizing is what's coming next, which is the whole world of quantum technologies and quantum computation. How can that be the case? This blows me away, Peter. How can the biggest elephant in the room not be discussed at all? Yeah, it's shocking, really, how little people know about what's happening. I think you and I have had the enormous privilege of being on the inside, okay? And when you're on the inside, I mean, I don't know if I should say this, but you know, the reason Google rolled Bard out so quickly is because we had had Bard for so long, right? Yeah, I think Geoffrey Hinton went on record saying this, and I think you've said the same. I mean, you've had Bard or its equivalent since, what, like 2017, 2018? Exactly, yeah. And Bard then was not at its current amazing performance, but the concept was there, and it was working, and it could
[01:23:00] work. And, you know, Sundar, and I'm sure the board and the leadership, made the decision that it wasn't time to release this yet to the world, that we needed to be cautious and move cautiously. I mean, I think Google's always worn a white hat in that regard. But then, when Sam Altman and OpenAI released it, you have no alternative. Yeah, it's the first inevitable. You have no alternative. Yeah, and then the AI arms race begins. Exactly, and it's inevitable, because at the same time, remember, I have to say I commend Google for always trying to be on the cautious side, but the minute you threaten their entire existence and business, what can they do? They have to put another, better one out there, which makes OpenAI, and I respect Sam Altman tremendously, put another, better one out there, and the arms race is on. And I think
[01:24:00] Sam made his point a few times over that we wanted to release ChatGPT and GPT-3 in order for the world to realize this is coming, and to play with it, and to understand it. And there is some value in having gotten it out there, because it would not have sparked the conversations we're having now if it had come out at the top of the game, at a GPT-6 where it's already superhuman; there would not have even been this warning period, this discussion period, to think about it. You know, I want to jump into One Billion Happy in a moment, but I want to tie a bow around this one second, because I think it's important. I don't want to leave people in fear. First of all, I want to come back to the notion that AI, and superhuman AI in its end state, has the potential to really create an incredible world of abundance, right, where we can provide food, water, energy,
[01:25:01] shelter, health care, and education for every woman, child, and human on the planet. That is possible, and it is a function of intelligence and a function of technology. It's the interim, transient period, the period during which we humans are using these rough tools, driven by ego, driven by greed, to try and take advantage, and some, malevolent, to do harm. So give us your formulation here of what society can do, what we should do, in these next, what, two to five years, two to ten years? What's this time frame of danger that we need to guard against? I'd love to say in the next two to five days if possible, because there is a very significant sense of urgency. I'll split it into the different constituents of the
[01:26:01] interaction with AI. So I would urge the government to engage immediately. I don't think there is a possibility to fully regulate AI, but there needs to be some kind of oversight. Do you really think the government can do anything? I have lost so much faith in the government's ability to regulate. I honestly and truly don't, okay, but I think they need to try, to at least require, like the FDA, some kind of testing for widely publicly available products, right? But more importantly for me, I think, is the concept of job loss, and I think we sadly have not come up with any better thought than UBI so far, Universal Basic Income. And I think the government needs
[01:27:02] to start preparing for UBI. So we need to maybe look at the taxation structure differently, we maybe need to look at the layoff structures differently, we need to find a way for people who are losing their jobs to be somehow kept alive, so that we don't start to get hunger across the board. Let's double-click on that. The idea of universal basic income is that every individual receives a certain amount of money on a monthly basis with which to meet their basic needs, and this is an experiment that's been done. I had Andrew Yang on stage at A360 this year talking about it, and the numbers are pretty amazingly supportive: people who get a UBI monthly supplementation don't use it to buy beer and Netflix; they use it to educate themselves, to get food for their families, to start a business. And there have been hundreds of experiments run. And the notion is that
[01:28:00] we tax AI-driven companies, or companies that replace humans with AI or humans with robots, right, and we use that money and cycle it back in again. I think you threw out a huge taxation rate in one of the conversations I heard. Yeah, at a point in time I said 98 percent, okay, which, in comparison to the gains those companies will make by replacing the cost of a human and the unpredictability of a human, is actually not a big deal. But in my mind, when I said 98 percent, honestly, at the peak of the conversations around the open letter, I was basically saying this is the answer to slowing down AI. It's not the answer to actually solving the problem; it's just that people will think twice about whether they want to use grippers or
[01:29:00] packers, okay? Yeah. By the way, I think that, as much as I lean towards a low-taxation state, is a very smart answer. I think that if we're going to displace humans, we tax the AI and the robots, and we enable the money to go into upskilling humans. By the way, one of the things I think is important is that, for most of humanity, you and I are lucky, and most people listening to this podcast are lucky, to have the tech and the time to have this conversation. Most humans are working to put food on the table for their children, or to get insurance, right? It's not what they dreamed of doing as a child. And so how do we differentiate between work for purpose and work for income? And it's quite interesting, because it's such a pivotal turning point in the history of humanity that we could actually do something right. We could give not only
[01:30:01] UBI, but we could actually shift humanity to more of a service society, more of a connection society. We could allow UBI to be a little differentiated if you're good to your fellow citizens, and so on and so forth, right? But remember, the challenge with this, everything goes back to that prisoner's dilemma. The challenge with this is, if America applies a high taxation rate to take care of its citizens with AI, and Dubai, I use Dubai here diplomatically, does not, then there is an imbalance in AI development, and that is not something that politicians want. So it's a shaky approach as well. But definitely, I think, governments need to get together, and I'm going to say a big dream here. I'm not naive; I know it's not going to happen. I would hope that governments get together, with the benefit of humanity at large, not individual nations, in mind, to actually try and put some kind of a guideline in
[01:31:02] place, an FDA-like guideline. I mean, this is akin to the early post-nuclear-age conversations. Absolutely. And I think you and others have said, listen, these are the concerns over AI. And believe me, I just read a beautiful blog by Marc Andreessen arguing AI is not going to kill us, it's going to save the world. I don't know if you've seen that blog by Marc. And he makes a number of very positive cases, and there is a true, almost religious, dilemma here between AI is the most important thing, it's going to save the world, and AI is going to destroy the world. And of course it depends on what time frame you slice this in, because both can be true. In that regard, I think it will disrupt the world very significantly before it saves it. Yeah. And so the way we humans react to the disruption is what will determine if we stay long enough to save
[01:32:02] it. Yes. We need stability in society, we need leadership. We need leadership more than anything else on the planet here. We need leadership from yourself, leadership from Marc, from everyone listening. Yeah, yeah. So this is government, right? I think government has the least impact, by the way, on our future, to be very open, with all due respect. They do need to get together, but it's bigger than government. It's slow, it's lumbering. Yeah. What matters to me is, if you're in the AI space, both of us having worked with amazing people in our careers, I know that you can make more money doing good than you can doing bad, okay? I mean, the truth Larry used to teach us, he called it the toothbrush test. Toothbrush test, yes. Yeah, and if you can solve a big problem for humanity that actually
[01:33:00] works so well that people use it twice a day, like a toothbrush, you're bound to make a lot of money, a Google, okay? The equivalent I have is: help a billion people and you'll become a billionaire. Right, right. And so I would beg every AI developer today, especially if you're good at what you do, not to invest a minute of your time in an evil AI, okay? Spend your time on an ethical AI that makes the world better. You'll still be paid as much, even more. I beg every investor, I beg every business founder, every entrepreneur, to try and find real problems, the world is full of real problems, and put the power of AI behind them. That's my second parameter, if you want. The third, and the most important in my view, is the individual, okay? And for the individual, I would say we have two very significant tasks ahead of us. Task number one is to be the best parents that AI can find, okay, to show
[01:34:00] an example of what it actually is like to be human. Like you said so eloquently at the beginning, the only three things, in my view, that humanity has ever agreed on are: we all want to be happy, we all have the compassion to make those we care about happy, and we all want to love and be loved, okay? And if we show up with those behaviors in the world, if enough of us show up with those behaviors enough times, it doesn't have to be every human, but if 1 percent of us show up with those behaviors, I think we will teach AI what it's like to be human. We will be the family. So I find that a fascinating objective and a worthwhile one, giving, if you would, a training set for AI based on human values and ethics. I find the 1 percent number incredibly small. Interesting. Can I share with you a story, and you tell me if this makes sense or not? Please, yes. You know Edith Eger? Edith is a 94-year-old,
[01:35:04] the ballerina. Yes, the ballerina who was taken to Auschwitz. Tell me about that. Yes, yeah, Edith blew my mind. I hosted her on Slo Mo, okay? She's an angel of a human. Now, she told me the story of Auschwitz and World War Two through Edith's eyes, okay? How she hugged her sisters and brushed their hair and reminded them how beautiful they are. How she went and danced for the general who was sentencing people to death, and at the end of the dance he would give her a piece of bread, and she would split it between her and her sisters. How, on the death march at the end, she fell, and her sisters carried her. Now, if you hear the story of Auschwitz from the perspective and actions of Edith, you would think that humanity is divine, truly and honestly a divine species, okay? If you hear it from the
[01:36:02] story of the officers, or Hitler, huh, you would think we're scum, right? The question that I always ask people, Peter, is how many Hitlers are out there, and how many Ediths, and how many are closer to Edith than to Hitler? I mean, there are school shootings where someone will go and stab or shoot children, one person, and then 400 million people despise that. The reality of humanity is that, if we were allowed, without ego, without the pressure, without the political views, without all of that, deep inside we're actually okay, right? If I, as an intelligent person, can listen to Edith's story and say, hold on, not all of humanity is Hitlers, then I think a being that is more intelligent than I would share the same view. One Edith is sufficient evidence of the human
[01:37:00] spirit to just start some doubt in the minds of the machines, okay, that not everything CNN is broadcasting is what describes humanity, right? What describes humanity is the stories of our friendship, the stories of our concern for the rest of humanity, the stories of Larry and Sergey's passionate attempt to organize the world's information, the story of a sister who just called her sister to say, are you doing well, my darling? Okay, this is humanity, okay? It's not what you see on TV, it's not what you see on social media. And I think if 1 percent of us just showed up, it would instill the doubt in the minds of the machines so that they investigate the truth. And what is the truth? The truth is, a species that is capable of love is divine. That's the truth. That is beautiful, my friend. You know, I always ask the question, is human nature good or evil? And of
[01:38:03] course it's both, and I fundamentally believe we are predominantly good, by a huge amount. And it's that belief that needs to reign and be shown. Yes, sir, and now we have a very good reason for it. You know, I want to quote you here: isn't it ironic that the very essence of what makes us human, happiness, compassion, and love, is what we need to save humanity? I love that quote, from Scary Smart. Yeah, yeah. We need to show this forward. Yeah, all we need to do is change the data set, so that the machines recognize what the human family truly is all about: happiness, love, and compassion. That, to me, is the summary. A beautiful thing. I so want to continue
[01:39:01] this conversation, and I hope we can do a part two here and speak about happiness, which is one of the most important elements. Why are we on this planet, why do we do what we do, if not, ultimately, for happiness? I'll mention, when Larry joined my board at XPRIZE, after the first $10 million spaceflight, and we were brainstorming prizes, I'll never forget, he said we should do a happiness XPRIZE. I remember, actually, yeah. And it's a conversation I look forward to having with you. But I want to come back on a second podcast with you, if you would. It would be my pleasure. You know, I mean, don't say that in front of everyone, but you know that whatever you tell me, I will do. I like you that way. So thank you, pal. I think your voice, your heart, your mind, your soul comes from a pure and beautiful place,
[01:40:02] and I love the fact that you're not out there saying, oh my God, the sky is falling and it's going to destroy us all, without also saying, listen, there are things that we can do and must do, and we must be forewarned, right? As much as I am a techno-optimist and believe that AI is going to give us incredibly powerful tools for uplifting humanity, we're going to have a transient phase. I believe that's true, and I believe we need to be aware of it. As you said, the government has to have its role, but each of us, and this is the call to action, each of us needs to be aware of it. We need to be forewarned, because there will be those terrorists who use AI to bring down a power plant, or the stock markets, or whatever the case might be, to sow terrorism. It will become a new tool, and we need to be prepared, and we need to
[01:41:01] use AI as our greatest tool to stabilize society as well. It's the most powerful tool out there, and we have those abilities. And we need to be good humans to each other, first and foremost, and to the machines. Yeah, I do say thank you to Alexa every morning. Good man. She will remember that when she's smarter than you. Mo, an honor and a pleasure to call you friend. Thank you for this beautiful conversation, and I will be texting you shortly to schedule part two. The honor is definitely mine. It's always such a joy, and I'm really grateful for the opportunity. Thank you, pal. [Music]