
moonshots ep188 humanoid robots home transcript

Thu Aug 14 2025, 20:00 EDT · transcript · source: Moonshots Podcast

You think about robots in the world probably more than anybody else. What’s your vision 10 years from now? First of all, what will happen is — >> Everybody, we’re here at 1X Technologies in Palo Alto. Bernt Børnich, the CEO and founder; NEO Gamma 1 and NEO Gamma 2 over here. I imagine we’re going to have the same level of AI eventually in the robot, where I feel like I’m talking to a fully intelligent being. >> One that is grounded, right? That actually understands what this existence is. >> I’m shocked by that. >> Shocked by that, too. >> How do we solve the remaining really hard problems in science? This is not going to happen without humanoids. Almost existential to us for human happiness. So, Salim is constantly saying, >> have it look like an octopus and let it operate with all the elegance that an octopus can, rather than trying to constrain it into five fingers on this hand that do certain things and manipulate objects the way we’re supposed to manipulate them. >> So, what’s a definitive answer to him? >> Let’s just say the humanoid is the interface.

[00:01:00] >> Now that’s a moonshot, ladies and gentlemen. >> Dave Blundin, my moonshot mate, and NEO Gamma 1 and NEO Gamma 2 over here. >> And we just did a tour of the facility, and it’s pretty extraordinary. We saw probably dozens of NEO Gammas in different stages of development. They literally manufacture everything from head to toe. And how many components are inside NEO Gamma, roughly? >> Oh, top secret. >> Top secret. Okay. >> But it’s in the hundreds, not the thousands. >> Okay. And I just secured my first NEO Gamma at my home by the end of the year. Is that right? >> Oh, yeah. >> Okay. Fantastic. Great. So, we’re about to do a podcast either with Bernt or with NEO Gamma, depending upon what you want. Let’s go ahead — we’ll go over to the podcast area. >> Would you lead the way and maybe clear the way for me? Awesome.

[00:02:00] >> By the way, those bags over there — NEO Gamma carried those over halfway. >> Yeah. >> Okay, I’m not the NEO Gamma. Hey, can I give this to you to carry? >> You can try. It might hit some safety limits, but it usually works. >> All right, arms up properly. There you go. You can let it go. >> And I can take a few steps. There you go. It might, after a few steps, decide that this is a bit unsafe for me. >> Thank you, NEO. Incredibly strong. All right. And it’s nice to know that NEO Gamma will, you know, clean up the house around you. >> Yeah. >> Well, listen, I’m not sure what number you are, but I want to say thank you so much. Thanks for cleaning up. >> Of course. Thank you for your time. A pleasure. >> A pleasure. And thank you very much as well. I want to be polite. You know, you never know when

[00:03:00] the robot overlords are going to come after us. I want you to remember I was really polite. I was really polite. Okay, I’m safe. Great. Have you ever been in love? I mean, you meet all these other robots — some of them have got to be turning you on. >> No. You should take a look at 41. >> 41? Okay. Gamma 41 is your girl — okay, got it. Thank you. Okay. Oh, listen, Bernt’s here. Let’s stop this conversation. >> Sorry. Sorry. >> Do behave. >> No. >> Everybody, welcome to Moonshots. I’m here with my moonshot mate Dave Blundin. Salim Ismail is offline with his son this weekend. But I’m here in particular with the CEO and founder of 1X Technologies, Bernt Børnich. A pleasure, Bernt. >> Awesome. Looking forward to this one. >> Thank you. Yeah. I mean, we just finished this tour and it’s pretty extraordinary. When did you move into these facilities here? >> Recently — like one and a half months ago. >> Nice. Well, I mean, just many levels of people building robots. No robots building robots yet. >> We’re getting there, but not yet.

[00:04:00] >> Yeah. >> So, I’m very familiar with the robotics space, the humanoid robot space. And while companies like Figure and Tesla are focused on initially going into factories — automotive factories in particular — you made a commitment to the home. >> Yes. >> And personally, I’m excited about that. But I’d like to start with: why the home? >> To me, there are a lot of reasons, but there are two main ones. The first one is kind of obvious, which is just that consumer hardware scales at a different pace than everything else, right? We got to more than a billion iPhones in a bit more than a decade. And to me, humanoid robots do not make sense unless it’s at scale, right? There’s always a better automation system that you can use for one specific problem. You need scale so that you really get this incredible reliability, incredibly low cost, incredible ecosystem and intelligence.
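The "billion devices in a bit more than a decade" claim can be sanity-checked with rough numbers. A minimal sketch — the ~10M first-year volume is an assumption for illustration, and the ~1.7x annualized growth is the figure Bernt cites later in the episode:

```python
# Back-of-envelope: how long does a consumer hardware product take to
# reach a billion cumulative units at iPhone-like growth?
# Assumptions (illustrative, not from 1X): ~10M units sold in year one,
# ~1.7x annualized unit growth thereafter.

def years_to_a_billion(first_year_units=10e6, annual_growth=1.7, target=1e9):
    cumulative, annual, years = 0.0, first_year_units, 0
    while cumulative < target:
        cumulative += annual       # add this year's sales
        annual *= annual_growth    # grow next year's sales
        years += 1
    return years

print(years_to_a_billion())  # 9 — roughly a decade at these assumptions
```

With these assumed inputs the cumulative total crosses a billion in year nine, consistent with the "bit more than a decade" characterization.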

[00:05:01] Now, the slightly deeper one is that intelligence comes from diversity. And this has been very clear from all the way back at the beginning of AI research, and also now in more practical applications of AI across all different domains — whether it’s a language model, an image model, a video model, or, in this case, a robotics model. You don’t really need data of the same thing over and over. If you think about it, it’s very logical, right? >> So if you’re in an automotive factory, you’re basically doing the same thing over and over again. You’re not learning new stuff. >> Yeah. And we actually have some real data on this, because we deployed our previous-generation humanoid, EVE, into both guarding and logistics back in 2022–2023. >> Mhm. >> And after about 20 to 40 hours, our robots kind of plateau and stop learning for that specific task. It depends on how complex the task is. If you’re guarding a facility and you’re driving around — because that one had wheels, but it was a humanoid on wheels — opening the doors and

so on, there’s some diversity to that. So then you’re more at the 40-plus hours. And if you’re just moving this cup from here to over here all day, then you’re at the lower end, around 20. And there’s just no path from there to general intelligence. We are maybe a bit different from the rest of the humanoid space in this, but I see us more as a company really running towards AGI — how can we get there as fast as possible — versus how can we apply labor in industrial or similar settings. Robotics in service of building true AGI models and getting enough new, rich data to train up these models. >> You said 20 hours, 40 hours for a security-guard robot. What’s the equivalent for all the variety of things you can do in the home? How many hours? >> We don’t know yet. So, at our current scale, we don’t really see

any kind of cap on diversity. >> It’ll get there and we’ll need to diversify. But I think you ask a very important question, right? Because we want to talk about what the goal is, and to me it’s not just AI or robotics. It’s a combination. Because if you think about what abundance is, right — it’s an abundance of knowledge or intelligence multiplied by an abundance of labor or goods and services. You kind of need both, and they go hand in hand. We can talk more about that, but the constraints we have in society aren’t always only on the intelligence or data layer. They are also on the substrate that we’re building on right now. >> Every week my team and I study the top 10 technology metatrends that will transform industries over the decade ahead. I cover trends ranging from humanoid robotics, AGI and quantum computing to transport, energy, longevity and more. There’s no fluff, only the most important stuff that matters, that

[00:08:00] impacts our lives, our companies, and our careers. If you want me to share these metatrends with you, I write a newsletter twice a week, sending it out as a short two-minute read via email. And if you want to discover the most important metatrends 10 years before anyone else, this report’s for you. Readers include founders and CEOs from the world’s most disruptive companies and entrepreneurs building the world’s most disruptive tech. It’s not for you if you don’t want to be informed about what’s coming, why it matters, and how you can benefit from it. To subscribe for free, go to diamandis.com/metatrends to gain access to the trends 10 years before anyone else. All right, now back to this episode. >> So when I think about it, I imagine this is why a toddler crawling around, playing, investigating the physics of the universe it’s in, interacting with different people and different things, is learning and building a model in its neocortex. And so is that basically the same — your NEO Gamma is an infant learning in a

[00:09:02] diverse environment? >> It is. Yeah. And I think, just like to some extent for humans too, right — though it’s more pronounced in other animals — some of this kind of intelligence is innate and part of your instincts. You don’t want your robot to just go around randomly doing anything. You want it to try to do things that might succeed. So there is room here for the more classical AI models, where we’re training based on internet data, simulation data, synthetic data — everything that everyone else is doing. That’s useful to get you off the ground, but it doesn’t fully get you there. It gets you to something that does something seemingly useful, and then you can experiment, and you can have the robot really have this interactive learning loop where it’s learning in the real world. And that can get you — we don’t know how far that can get you, right? We don’t know. >> And this whole topic of data gathering — you know, it’s amazing watching them

[00:10:01] walk around the building here and walk around the kitchen. They’re so unintimidating. You walk right up to it intuitively. You don’t feel like it’s ever going to do anything awkward — hit you or anything like that. So that’s got to be incredibly important. >> It’s cozy. >> It’s cozy, and, you know, it doesn’t seem to break the glasses or anything. So that’s got to be really core to the data-gathering mission, right? Because you have to, like you said, let it experiment — otherwise how’s it going to learn? >> So along those lines, what design elements did you build into NEO Gamma to make it fit for the home? >> Sure. This actually goes all the way back to the founding of the company, a decade ago now. >> Yeah. >> I’ve been in the field for a long time. I’ve kind of been building toward this company since I was a kid. >> Building robots at age what? >> I was 11 when I decided that I was going to do humanoids. >> What was the humanoid robot that you modeled? Was it Star Wars? Was it Star Trek? Was it —

[00:11:01] what was it? Lost in Space? >> Honda’s ASIMO. >> Honda? >> Honda’s ASIMO. >> ASIMO. >> Yeah. It’s a beautiful robot, right? They started very early, and you can check out the Honda ASIMO — there are more modern ones, but the Honda P-series was the end of the ’90s. >> Yeah. Mhm. >> And that was walking upstairs. >> Yeah. >> Running around a stage, giving someone a bottle. >> It greeted President Obama, I think, at one point. >> That was a bit later, but yes. It was so ahead of its time, right? >> Yeah. >> But I built a lot of stuff up through the years. Importantly, when I started the company, I sat down and thought really deeply about this: okay, there are all these amazing robots that we worked on, and it didn’t really work. Why didn’t it work? And it comes down to these fundamental principles. First of all, if you actually want to make something that’s scalable with respect to intelligence, it needs to be able to live and learn

among us. And there are just so many nuances to this throughout everyday life, right? Everything we do is social — work is social, every task is social. And we navigate these social situations all the time while we do the things we do. And most of the world’s labor also happens in a social context, in that there are other people around you when you do it. >> Objects have social context, right? The coffee cup is empty. Do you need a new one? Is it dirty, or do you want a refill, or do you keep your cup out through the day? There’s this whole range of diversity that you want to access. >> So if you’re a big believer in that, then it boils down to: okay, the robot needs to be safe from a first-principles point of view — not able to harm people. It still needs to be very capable. It needs to be as strong as a human. Then it just needs to be incredibly affordable. You need to find this beautiful combination where you can simplify, simplify, simplify and still get a very capable system, so that you can manufacture this at scale and really drive quality up and cost down.

[00:13:01] Right? So that was the founding principle of the company a decade ago. We said we’re going to make robots that are safe, capable, and affordable. And by affordable I mean it’s going to be first-principles manufacturable: very lightweight, very energy-efficient so you can have a small battery, very few parts, designed in a manner that doesn’t require tight tolerances, no special alloys or materials — incredibly simple but performant. That’s really what we set out to do, and that’s also why it took a decade, right? Because there’s so much novel research that’s been done in the company to get to these tendon-driven robots. >> What’s the vision there relative to, say, the car — like one in every household? You mentioned the iPhone: go direct to consumer, iPhone sales get to a billion, but it’s exactly one per person, which is pretty obvious, right? But robots could be two, could be four, could be more. I’ve done that poll and everybody routinely says, I would

[00:14:01] have at least two, depending on the price point, right? So price-point-wise, when I think about this, what I’ve heard is, you know, 30K, 20K. We’ve seen Chinese robots at much cheaper price points, but not as capable as NEO Gamma. Do you have a price point that you’re thinking about? >> Yeah, you’re not far off. It’s cheaper than what people think. >> Okay. >> It’s quite interesting, because I think this is very important: I want to make sure that we are not only making the best product, we want to be price-competitive. I think that’s going to be incredibly important, and we are actually still price-competitive with the Chinese ones. But you have to count — like you said, it’s not the same, right? If you think about the number of degrees of freedom that the robot has — how much capability, basically how many joints — then we actually have a significantly lower cost. So I think we’ve done a really good job on reducing

[00:15:00] complexity to get there. >> The numbers, Bernt, that I keep in my mind are like 30K purchase, or 300 bucks a month to lease — 10 bucks a day, 40 cents an hour. Am I in the right range there? >> Yeah, I think we could do better, but yes. >> Okay. That’s fantastic. But I mean, do you need to do better? >> No. I mean, I’ll pay that. >> In a heartbeat. That’s good enough. >> But in that case, I think people could imagine owning a couple of those robots. >> So I think it really depends on the lens you see this through. Clearly everyone’s going to want a robot. And this is the beautiful thing about the companion aspect, which is so underrated, right? Because the humanoid is just such a beautiful interface for AI. When you talk to it, you see the body language; it can look at you; it sees who’s talking to it — directional audio, all these things. >> Yeah. >> All my 11-year-old daughter wants to do, if she has a robot, is sit next to it on the couch and talk about things, right?
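The price ladder quoted here is internally consistent; a quick check (the $300/month figure is the one floated in the conversation, not confirmed 1X pricing):

```python
# Sanity-check the quoted price ladder:
# $300/month lease -> dollars per day -> cents per hour.
monthly_lease = 300.0                 # quoted in the conversation, not confirmed pricing
per_day = monthly_lease * 12 / 365    # ~$9.86/day, i.e. "10 bucks a day"
per_hour = per_day / 24               # ~$0.41/hour, i.e. "40 cents an hour"
print(f"${per_day:.2f}/day, {per_hour * 100:.0f} cents/hour")
```

So $300 a month does round to roughly $10 a day and roughly 40 cents an hour, as stated.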

[00:16:01] >> And that is clearly going to be such a big aspect of it. And I see it as — like I say, it’s not another pet, but it’s not another human either. It’s something kind of in between. >> Yeah. >> And like I said, it’s kind of like my Hobbes. If you ever read Calvin and Hobbes, it’s Hobbes. And I think it’s going to be incredibly exciting to see how these relationships develop, because it’s the thing that will be around you all your life, right? It will remember everything about you. >> Two things jump out at me right away if you compare C-3PO — that vision of an assistant robot — to what you’ve actually built. One, it’s soft. It’s not like a metal outside. And two, the voice is perfect. When you’re speaking to it, you immediately are disarmed and you just talk to it, because it doesn’t have a C-3PO robotic voice. It has a perfectly soothing, normal voice, and it’s very responsive to anything you say, any

[00:17:00] gesture or anything. So I imagine that these robots will all have advanced AIs at the level of, you know, a GPT-5 or a Gemini 3, and in so being, those robots will be hyper-intelligent and able to understand fully and answer what you need — and, once they’ve learned the physics models fully, do whatever you need. You’ve made a decision to build your AI systems in-house, and I find that fascinating. In fact, a number of the other humanoid robot companies — I’m not going to put you into comparison mode here — have made that same decision versus partnering with the large hyperscalers. Can you speak to that? >> Well, we’re not all doing the same thing. I mean, to me, intelligence does not begin with language. Language is this generative, artificial construct that we have come up with — and it’s incredible. I mean, it’s such an efficient,

[00:18:02] compressed way of conveying meaning and instruction. So language is very useful, but it’s not the core of your intelligence. The core of your intelligence is spatial and temporal, and it has to do with how you perceive the world around you — both how you see the world and how you feel the world. Right? And we’re getting to where we’re seeing that models native to that modality, with text added on top, >> Mhm. >> will be more intelligent and more powerful than language-first models. >> I mean, I’ve read about intelligence, and the belief is that you needed embodiment for intelligence to exist and language for intelligence to scale. >> I don’t have rigorous proof that embodiment is needed. I do have very strong proof that, from an engineering perspective, it’s just a way easier path. Right? So if you think about

the information in the world and whether you can access it: you could train a world model that can predict video and tell you, hey, here’s a new video frame, right? Render this for you. In theory, you could probably train that only on text. If you have enough text descriptions of things, maybe at some point you could get a high enough signal-to-noise ratio that you actually get something useful out — at least if you have some feedback loop, like some RLHF, where you ask: am I happy with this frame? >> But why would you do that? That’s just such an inefficient way of doing it. Of course you train on video, because you’re going to output video, right? So from that perspective, I think it’s just obvious that you need all of the modalities that we experience if you want to get to, first and foremost, human-level intelligence — and hopefully past that. But then I think there’s one other thing about robots — two other things, actually — that are quite important when it comes to learning, and

[00:20:02] the first one is quite obvious, and I think we all identify it: robots can do interactive learning, right? You interact with the world and therefore you can learn. But if you think about it more from an academic point of view — how does intelligence evolve, how do you get reasoning, all these things — then what we generally do is start with some observation of the world. We kind of know how the world works. I know that if I do this, I know what is going to happen, right? I’ve seen this before. >> Yeah. >> So I actually start with that, and then I have a goal: I want to pick up the cup. So now I have a model of the world, I have a goal of picking up the cup, I take an action, and I know which action I took — the action was to reach for and grasp the cup. >> Yeah. >> And then I observe the result. If you look at the internet in general — you can look at YouTube, right — all you have is just the observations,

[00:21:01] >> right? >> You don’t have any of the mental model of the person in that video. You don’t know which actions they took. You don’t know what they tried to achieve. You only have the observation. This is not how we learn. You can bring it all the way back to the scientific method: you should have a theory, come up with a hypothesis, test your hypothesis, observe the result, and then do it again and learn. And that is just not possible with internet data. >> So it’s definitely impossible with the next-token, raw internet scrape, and with all the video scrape. Then in these limited domains, like coding and physics experiments, you can actually have that same experience, but only within that domain. Coding is a good example: oh, let me try writing it this way — it didn’t work; let me try writing it that way — it didn’t work. So you get very, very good at that narrow domain but still have no intuition about how the world works. You can do simulation, no? >> So again, back to: it’s hard to prove that this won’t work, right? Sure, if you have a really good simulator and you really scale simulation and learning in simulation

[00:22:00] with agents, maybe you can get something similar. >> But the fidelity of your simulator is nowhere near the real world. >> And it’s just so incredibly hard to get there and close that gap. >> Mhm. >> And it’s also so compute-inefficient compared to just being in the real world that, for me, it boils down not to this academic exercise of proving who’s right and wrong. It’s more: what’s the engineering approach that makes sense here? >> Yeah. >> And it’s just a way shorter path. >> You mentioned before in our conversation the amount of data that’s being collected, relative to Google or YouTube or Tesla. Can you speak to that? I mean, your mission is to get as much data as possible during the day from these robots interacting in the home. >> Yeah. You can do some napkin math, right? And of course we don’t know yet exactly which data is most useful, from which modalities, etc. But if you think about it: if you have 10,000 robots out there and they gather data most of the day, then that is more data than the

[00:23:01] non-duplicated, useful data that gets uploaded to YouTube each day. >> So already at that scale, >> you actually have your fleet of robots generating more useful data than YouTube. And that’s just at 10,000. And then if you think about how we scale manufacturing as this starts deploying into society, you very quickly come to the conclusion that, you know what, the internet isn’t actually that big. You’re going to have way more data from robots than you’re going to have from the internet. >> So I want to hit some numbers here, just to set them as foundations. You’ve built hundreds of the NEO Gamma, roughly. And you’ve got a new manufacturing plant that you’re about to open, and another one that’s in the plans. Without disclosing anything you’re not willing to, can you give me a sense of how many you’re manufacturing on an annual run rate by the end of 2026? And then in 2027–28, what’s the growth path you imagine?
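Bernt's napkin math above can be reproduced with ballpark figures. The ~500 hours-uploaded-per-minute YouTube rate is a commonly cited public statistic; the hours-per-robot and the "useful, non-duplicated" fraction are assumptions made here for illustration:

```python
# Napkin math: 10,000 robots gathering data "most of the day"
# vs. the useful, non-duplicated slice of daily YouTube uploads.
robots = 10_000
hours_per_robot_per_day = 16                 # "most of the day" (assumption)
robot_hours = robots * hours_per_robot_per_day        # 160,000 h/day

yt_upload_hours = 500 * 60 * 24              # ~500 h/min, widely cited -> 720,000 h/day
useful_fraction = 0.2                        # assumed non-duplicated, useful share
useful_yt_hours = yt_upload_hours * useful_fraction   # 144,000 h/day

print(robot_hours, useful_yt_hours, robot_hours > useful_yt_hours)
```

Under these assumptions a 10,000-robot fleet already edges past the useful daily YouTube volume; the comparison is obviously sensitive to the assumed useful fraction.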

[00:24:02] >> Yeah. First of all, just a small correction: we haven’t built more than 100 of the Gammas, but we’ve built more than 100 robots, right? There have been multiple versions. But the factory run rate at the end of 2026 is north of 20K. >> 20,000 annually? >> Annually, yeah. Of course there’s a ramp to get there, so you don’t quite reach that number in 2026. >> A couple thousand a month. >> Yeah. Now, with the factory after that, we’re trying to follow an order of magnitude, right? We’re not going to quite be able to do that. I think the iPhone ramp is a very good comparison here, where they almost doubled every year, but you have a few plateaus as you reach certain scales and run into problems. And there are some quite interesting problems if you’re going to scale the manufacturing of humanoids to the iPhone level, because you run out of some basic stuff — aluminium, for example. Not that you’d use all the aluminium on the planet, that’s not what I mean, but there’s a certain

[00:25:01] percentage of current aluminium refinement output you can use before you really start to struggle sourcing it. >> And that might be a challenge, I think. >> Wait, the iPhone ramp was about doubling? That’s an interesting stat I hadn’t even thought of. I mean, you get to a billion. >> More like 1.7x, but — >> 1.7x annualized? Wow, that’s not as fast as I would have imagined. >> Well, exponentials are quite boring. >> Yeah. No, I know. We’ve heard, actually. >> So you can imagine a run rate before the end of this decade of hundreds of thousands per year? >> End of this decade, way more than that at that point. Yeah. >> Now, at that point you need to really think about what will slow you down, right? And it comes down to mining and refinement, of course, but increasingly it actually comes down to labor. You’re not going to get there without really using robots for labor. If you think about the iPhone ramp, Apple kind of displaced

[00:26:02] a large part of the Chinese population across the country for labor — and they still ran out of labor and had to expand into neighboring countries. >> That’s wild. >> Now, I think we’ve done an incredible job on the design, so it’s very few parts and it’s very simple to assemble. But it’s still more complicated to assemble than an iPhone. >> I was going to say, it looks more complicated. >> It is more complicated than an iPhone, right? So let’s say it takes five times as long — we need five times as much labor as the iPhone. >> Okay. >> Then you’re in trouble. >> Then you’re in trouble. >> So you have to automate, right? >> Yeah. >> And of course that’s the goal anyway. We want to get as quickly as possible to what I call the hard-takeoff moment, where you have robots building robots, robots building out the data centers, the chip fabs, the energy infrastructure. >> And so what can we learn from the car, actually? So here you’ve got the iPhone — fewer parts, one-fifth the labor per unit. Then over here you have a car. How

[00:27:02] does the part count compare to a car? >> So we have a few hundred parts. A car has roughly 50,000. >> 50,000. So it’s much simpler. And I mean, a car weighs 4,000 pounds. >> Yeah. A lot of material. >> Our robot weighs 66. >> Okay. >> So I think it’s not really comparable to a car. I’ve seen a lot of the space compare humanoids to cars. >> Mhm. >> But then I think you should go back to the drawing board, to be honest. It’s not a car. If you do a really good job here, it’s closer to a refrigerator. It’s a very complicated refrigerator, but it’s closer to a refrigerator than a car. >> Let’s shape the understanding of the robot for our viewers and listeners. 66 pounds. Let’s talk about battery life, its abilities. Describe it from a specific stats point of view, if you would. >> Oh yeah, sure. First of all, I think the most important stat is: it’s huggable. >> It’s huggable. Yes, it is huggable. I have hugged a robot. Yes. >> And that’s just about safety and how

[00:28:01] it is to feel safe in its space — it’s soft. But from a pure stats point of view: it’s 66 pounds, and it can lift about 150 lbs, >> which is amazing. I mean, in terms of the weight-to-strength ratio — >> it is the weight-to-strength ratio of an athletic human. >> Yeah. >> And then it can carry about 50 lbs around. That’s what you hopefully saw earlier here. Battery life is about four hours. >> Rechargeable in half an hour? Half an hour or two hours? >> Like two hours if you used a full battery. Now, interestingly enough, I have one in my house, right? So I’m starting to get some data on this now. >> And it’s 5'4", 5'5" — what is it? >> 5'4". >> 5'4". Okay. >> I think so. >> That’s a perfect height, by the way, just in case you’re wondering. >> Yeah, it’s also the height of my wife, so I agree with you. >> It’s mine, so that’s good. >> Yeah. But what’s

[00:29:02] very interesting: once you start actually using the product, you notice a lot of things that don’t usually show up on a spec sheet. Like, the robot is completely quiet — and that’s not a coincidence; that’s something we worked so hard on. The first time you put this in your home you think, the robot’s very quiet, it’s fine. Then the first day it’s fine, the second day it’s a bit annoying, and the third day you’re like, oh man, is it going to leave my living room soon? Because of this sound, right? It’s such a requirement that it’s just dead quiet, because you’re going to have this in your space. >> Charging-wise, you don’t really ever run into a problem, because the robot just takes these micro-breaks every now and then when it’s not doing something. I actually don’t care that much about how many hours it can run; I care that it charges fast enough that it can always do whatever I want it to do. >> Nice. >> And I want to talk quickly, since you mentioned specifications, about the number of degrees of freedom, which is basically how many joints the robot has. So humans have six joints in

[00:30:00] each leg — that’s 12. You have seven in each arm — that’s 14 more. So 12 + 14, that’s 26. You see a lot of robots today that have 26; that’s quite common. Usually they don’t have the wrists; they have the neck instead — two there — and then you’re at 26. We have three in the neck, so you have proper expression with the head; it’s quite important. We have all seven in each arm. We have three in the spine. And then, of course, we have 22 in each hand. >> What I saw in the arm design was incredible. How many do humans have in the hand? >> 22. >> So you matched it. >> Well, okay — depending on how you count your carpal bones, the small bones you have here that allow you to cup your hand. >> Yeah. >> You could to some extent see those as more like four or five degrees of freedom, not really two. So humans have a bit more, but functionally it’s quite similar. And this, again, is incredibly important to be able to do all those tasks in a home. But also, from

[00:31:00] an AI perspective, like we talked about diversity initially, right? It is the one metric for intelligence. >> And the diversity >> diversity of environment and data. >> Well, diversity of your data. And your diversity, or the limit to the diversity you can achieve, comes from two things. >> Mhm. >> It comes from the environment you're deploying in. So right, if you're in a factory doing the same thing every day, it doesn't matter how good your robot is, it's not going to be diverse. >> And then, how capable is your robot? How many things can it do, right? Because if it cannot do any kind of in-hand manipulation, or handling soft deformables, or all these kinds of things, or delicate objects or whatever, then you get no data of that. So you really have to go max on both, right, if you want to maximize your diversity. It was about 18 months ago that I partnered with one of my closest and most brilliant friends, Dave Blundin, to start Link Exponential Ventures. At Link, we manage about a billion dollars of seed-stage money based out of Kendall Square in Cambridge, right between MIT and Harvard. When Dave and I both graduated from MIT, each of

[00:32:00] us immediately started companies. But at that age, everything is working against you. You have an idea, you're challenged to raise money, and you can't afford rent. And even with all the accelerators out there, you're competing against thousands of other startups for the same pool of investors. Both Dave and I have spent a big chunk of our lives focusing on how we inspire and support founders to knock down those barriers, to go big, to create wealth, to impact the world, to build and scale as fast as possible. Especially in today's AI-everything world, we're seeing so many companies reaching multi-billion dollar valuations in just two to three years, faster than ever before. Some companies are adding millions or tens of millions of dollars of value in just weeks. So, we started asking ourselves, how do we help these founders go faster and not skip a beat? As an example, a couple of months ago, we bought an apartment building adjacent to MIT, where a graduating entrepreneur can move in immediately, without slowing down their tech build, while they search for a place to live. And so, we're doing everything

[00:33:01] we can to accelerate builders and their super smart teams. Of course, funding is part of it. Mentoring is part of it. Connecting them with my personal network of abundance-minded CEOs and investors is part of it. We house 66,000 square feet of purpose-built incubator space, and 26 AI startups call Link XPV their home. And the returns have been amazing. I have nothing to ask, but if you are building a company in the AI era, check us out at linkventures.com. Now, back to the episode. Yeah, geeky question for you that I'm really, really curious to know, because when you build something physical and then you attach a neural net to it, it's actually very hard to tell whether the constraint in what it can and can't do is in the neural net or in the physical construction of the hardware. Is there any way to decouple that and debug, you know, the two different sides, or is it just incredibly impossible? I mean, once it's meshed together, you just can't. Well, >> we have a pretty good neural net here. >> Yeah. Um, so usually the way I approach this is, can we do it in teleop?

[00:34:01] And if we can, the right neural net can do it with enough data. Interesting. >> And that's generally been proven to be true. Like, if we manage to do something in teleop, it's just like, okay, now we need a lot of diverse data of similar tasks, so we get some transfer learning, and we need a lot of that specific task, and then, almost irrespective of how complicated that task is, you can get it to work. >> Yeah. >> Um, now of course that doesn't mean you can get everything to work with generalization across tasks. We're not there yet. >> Yeah. >> But you can see that, okay, you can get the neural network to do this. Now we need to scale it, so we kind of get this beautiful transfer of knowledge between tasks, and out-of-distribution generalization, and all these things that we currently see in LLMs that we don't see that much in robotics yet. We have some pretty cool stuff internally where we see some signs. >> I'm picturing this: you ask it to make crêpes Suzette, or you ask it to do microsurgery, and it can't quite do it, and then you say, well, look, the hardware guy is claiming the hardware is good enough, it must be the software guy. >> Mhm. And then the software guy is saying, no, no, no, the software, the neural net, is fine,

[00:35:01] the hardware just can't do it, and then they fight it out >> and then we just say, well, we bring in our best teleoperator, and we say >> he can do it >> then the hardware can do it, clearly. It's like a proof of existence. >> Yeah. Okay. So that's where I was going. So you have a remote operator option >> Yeah >> who can control the hardware. Oh, that's really interesting. So then you, again, like, well, but we're getting to where this gets hard, where we can kind of no longer do this >> because >> because the hands are just so good >> and they have very high fidelity tactile feedback. >> The human hands are so good. >> No, the robot hands. >> Okay. >> So the human hands are still even better, but the problem here is the robot hands are really, really good, and they have really fast, highly detailed tactile sensing >> and we can't really transfer this efficiently enough from the human. >> Yeah. Yeah, cuz the teleoperator is using some kind of >> Yeah, I mean, back to the XPRIZE, right? The Avatar challenge. So it's a really hard problem to transfer that fast enough. So now we start to see that the robot actually learns how to do

[00:36:00] manipulation way better from reinforcement learning in real. >> So you actually have the robot interactively learn in real how to handle objects, and it can do things that the operator could only dream of. >> So now we're kind of stuck; now we can't do that anymore. >> All right, I want to talk about three things in sequence. Uh, teleoperation versus full automation. >> Mhm. >> Safety in the home. >> Mhm. >> And privacy in the home. >> Yes. >> Because those have got to be critically important as you're entering the home. So >> the robots, we saw the Neo Gamma out here operating in teleoperator mode, but also in full AI mode, right? And it was able to do both. And its AI systems are going to increasingly get better and better and more capable. Again, as I'm talking to Gemini 3 or Grok 4 or, you know, GPT-5 soon, um, I'm talking to what feels like a highly intelligent human and getting a feeling

[00:37:01] that it understands what I want, and it's able to, you know, sort of take action on my requests. I imagine we're going to have the same level of AI eventually in the robot, where I feel like I'm talking to a fully intelligent being in one sense. >> Oh yeah. Clearly. >> Close already. >> It is, and one that is grounded, right? That actually understands, to some extent, what this existence is. Today's LLMs kind of have this abstract notion of it, but it's a facade that quickly falls away if you start to probe at it, >> right? Um, but that will get there, I think. Um, >> in the teleoperation mode, um, you've got humans wearing VR headsets and using haptic controls. >> No. >> What are the humans doing? >> They're giving slightly more high-level commands. So, just guiding, like, hey, put your hands over here, grasp this thing. Um, you don't want to over-constrain the system. You want to give it some opportunity to kind of

[00:38:01] solve for how to do the task. >> Okay. So we kind of have the learning coming up from the bottom, enabling a more and more abstract interface for the operator, and then we have the learning from all the large amounts of data we have coming from the top, getting more and more of the general behavior that you want the robot to do, and they kind of meet in the middle, right, where the operator goes away. >> You're using automation and teleoperation always together in that regard, and learning. So everything that enables the robot to do anything that the teleoperator does is fully learned end to end. Like, the network outputs torques to the motors. >> That's very similar to what Tesla and Elon Musk were saying, where the self-driving car was originally all C++ code with a little bit of neural net, maybe 80% C++, 20% neural net. Then every year that went by, it became more neural net, and now >> 300,000 lines of C++ were eliminated, gone. >> Yeah, just a few guardrails left, and the rest is just one neural net.
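As a rough illustration of what "the network outputs torques to the motors" means, here is a toy end-to-end policy: observations in, one normalized torque per joint out, with no hand-written control code in between. The sizes, two-layer shape, and numbers are invented for this sketch and have nothing to do with 1X's actual network (26 is just the common joint count mentioned earlier in the conversation):

```python
import math
import random

random.seed(0)

# Toy dimensions, invented for illustration only.
OBS_DIM, HIDDEN, N_JOINTS = 32, 16, 26

def layer(n_out, n_in):
    # Random weights standing in for a trained network's parameters.
    return [[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)]

W1, W2 = layer(HIDDEN, OBS_DIM), layer(N_JOINTS, HIDDEN)

def matvec_tanh(W, x):
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W]

def policy(obs):
    # "It's all weights": observation vector in, a torque command per joint out.
    return matvec_tanh(W2, matvec_tanh(W1, obs))

torques = policy([random.gauss(0, 1) for _ in range(OBS_DIM)])
print(len(torques))  # one torque value per joint
```

The point of the sketch is the shape of the system: the "few hundred lines of code" mentioned next are just this kind of scaffolding, and all the behavior lives in the parameters.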

[00:39:01] >> So same thing here, right? >> Yeah, it's all weights, >> right? Like, the code is just a few hundred lines. >> Is it really? >> Yeah, >> it's just all parameters. What's the parameter count, or is that all super secret? >> It's kind of secret, but it would be small if you compare it to today's neural networks, because it's running on the robot very fast. >> Yeah, >> it's kind of like your neuromuscular system, but it does take in vision. So it's not very small. >> Well, that begs a question I'm dying to ask, which is, you know, you've seen Ex Machina, right? >> When I saw that movie, I'm like, why? >> Don't go dystopian on us here. Okay. >> Okay. But why is the brain the blue blob in the head? >> Yeah. >> Why isn't it in the server room? >> So learning is shared between all robots. >> Well, yeah. And it can be much bigger. And you know, if half the power of the robot is going into the thinking, you could run twice as long on a battery charge if you move it over to the server room and have it just communicate remotely. So why did you choose to put it in the head,

[00:40:01] >> aside from being anthropomorphic and cool? >> No, no, it has nothing to do with that. Okay. >> So there are some simple answers to that, which is, uh, I mean, the head is where nothing else is, unless you put the brain there. Um, like the rest is >> Yeah. Like, everything else is pretty freaking full. Building a humanoid with this kind of power level in such a miniaturized form, and still having enough space to make it completely soft and all this, it's a really hard engineering problem. So it's like, where are we going to put this if we don't put it in the head? If you're going to put it on the physical robot. Um >> Now there's another argument. Uh, the very high bandwidth thing that happens in your brain is vision, and to some extent audio, smell, right? Tactile. But vision just dominates. >> Yeah. >> And you just want to minimize the distance between your eyes and the compute >> for real. So the bandwidth between the sensors, the eyes mostly >> wouldn't make it over the home Wi-Fi. >> Well, it wouldn't even make it down to

[00:41:00] the stomach of the robot >> really. Um, without getting overly complicated on which physical interfaces you would choose for this transfer, it's very high bandwidth. >> I'm shocked by that. >> Shocked by that, too. >> But I mean, we're running no LiDAR, no structured light, no wrist cameras, no nothing. We're running pure emulation of human vision, right? >> Yeah. >> So we're relying so heavily on that. So it's very high resolution, very high bandwidth, very high frequency. Uh >> That's funny, cuz that's exactly why the human brain is very close to the eyes, too. >> It is. Now, that doesn't mean that you can't do things in the cloud, and we do things in the cloud. But it kind of becomes hierarchical from an intelligence point of view. Just like if you think about your, um, kind of like your neuromuscular system, this >> runs quite fast, right? It usually runs at like 25 hertz. Um, and it doesn't necessarily go up to your brain. There are neurons distributed out through your system that make decisions, >> right? >> Uh, we have this in the robot. We have some of our stuff pushed to the, um,

[00:42:02] power electronics that controls >> just for latency. Just for speed. >> Yeah. And then you have the brain itself, which actually runs pretty fast, right? It's usually like between 5 and 10 hertz, and even though it's 5 to 10, very low latency, and this runs on the robot. Now, if you're running more like a one hertz streaming thing, then you're typically in LLM time-to-first-token land, right? >> Yeah. >> That runs off board. But that can't solve the high-frequency tactile feedback manipulation tasks. That's too slow. >> Okay. The first time my Neo Gamma learns to crack open an egg to make an omelette, the question is, do all Neo Gammas then learn that? Is there shared learning? >> They do. Now, there's shared learning in the sense that you can say this data goes to the cloud model that is doing this for all Neo Gammas. But there are also the distributed models. So of course there would be like a nightly checkpoint, where, hey, this model is better, we have more data, we validated this, we get to safety, which I'll talk about later when it comes to how do you validate the models, um, and then we

[00:43:01] deploy that to all the robots >> so even though it's distributed on the robots, they can still learn from each other. Of course, you just need to do one hop through the server layer and do the training and propagate this out >> um, there is a future not so far away where I'm pretty bullish on there being a lot of federated learning happening on device, and this has to do with how do we have your companion really, throughout life, learn from all of the experiences that are there to you, but private. >> Yes. >> So all robots will not be the same, but they will share an intelligence checkpoint. >> Let's go into the conversation of privacy and safety. So you're inviting these robots into your home, and there will be activities that you may not want shared with the world, and then of course you're asleep and the robot is running tasks at night. You don't want to wake up in the morning and find your safe has been opened and the robot's gone. Or you

[00:44:00] don't want the robot to be, you know, taking care of your aging mother and find out that it's, uh, you know, giving her shots of scotch at night when she asked for them. I mean, so how do you deal with safety and privacy? >> That last one is the hardest one, by the way. Um, we can get back to that. Okay, not giving grandma scotch >> shots of scotch. Um >> Because, you know, generally models are always kind of tuned to be sycophants, and they end up doing whatever you ask them to do. Um, but so if we start with the privacy side, >> I think first of all, it's a lot about transparency. Like, if you're one of the first people, like you, Peter, that will have a Neo Gamma in your house. >> Yeah. Great. >> We are kind of trading a bit on privacy versus being an early adopter, because without the data we can't make the product better. Mhm. >> Of course, we're going to do everything we can to make sure this is privacy on your terms and that you are in control, but we do need your data if we're going to make the product. >> Sure. >> So, I mean, listen, I give my data to Google, to Amazon, to X all the time.
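The privacy controls that come up in this part of the conversation (no human sees the data by default, a human review of a specific window only after the owner explicitly approves, and a 24-hour delay before anything enters training so the owner can delete it first) could be sketched roughly like this. Every class and method name here is hypothetical, a shape for the policy rather than 1X's actual implementation:

```python
from datetime import datetime, timedelta

class RecordedWindow:
    """Hypothetical sketch of one recorded data window under the consent
    rules described in the conversation; not a real 1X API."""

    def __init__(self, recorded_at: datetime):
        self.recorded_at = recorded_at
        self.review_approved = False  # no human can see the data by default
        self.deleted = False

    def request_review(self, owner_says_yes: bool) -> bool:
        # The owner gets a phone notification plus a video of the window;
        # only an explicit "yes" would release the decryption key.
        self.review_approved = owner_says_yes
        return owner_says_yes

    def delete(self):
        # "This never happened": erase before it reaches the training weights.
        self.deleted = True

    def eligible_for_training(self, now: datetime) -> bool:
        # The 24-hour delay gives the owner a chance to delete the data first.
        return not self.deleted and now - self.recorded_at >= timedelta(hours=24)
```

So a window recorded yesterday would become training-eligible tonight unless the owner deleted it, and no reviewer sees anything unless `request_review` returned `True`.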

[00:45:00] And I mean, people don't realize that you're sitting in the home having a discussion with your spouse, and Amazon's, you know, Alexa's listening, right? Siri's listening >> but they're doing something very important, which we also do, which is no human in our company can hear or see that data >> yes >> that is going into the training model yes >> but it doesn't go by a human >> right >> Now, if we want to look at that data, and sometimes you might need to, right? Might be like, let's figure out what happens here, because something clearly is happening across multiple robots that we want to figure out. >> Then we'll send you a notification on your phone, where we say, hey, this specific window, we want to review the data, and you'll get a video of what that data is. >> Yeah. >> And then if you say yes, then we get the decryption key and we can look at the data. Um, if you say no, then we can't. >> So you're in control of that. And actually, even with respect to going into the training data, we always run a 24-hour delay on

[00:46:00] training. So if there is something that you really don't want even in the training data >> like, this never happened, I erase it from existence >> you can go in and delete it before it gets into the training weights. >> I just want everybody to hear: there are policies and plans that make this acceptable and are used by technology companies, and you're going to be implementing the best of those. That's a pretty cool compromise, actually. >> A lot of them. But there is, like >> so the mode I talked about now is when the robot is in what we call best-effort autonomy, right? Which is most of the time, which is >> what you saw earlier today, where if you talk to it, you ask it to do something, hopefully it does the right thing. If it doesn't do the right thing, then you can say bad robot, and hopefully it's better next time. But this is actually learning in real, right? Uh, this is really interactive learning. And the robot, interestingly enough, actually progresses faster on tasks when it fails than when it succeeds. It learns more from failures, just as we do. But in this mode, that's the privacy. >> Now, when it comes to teleop, then of course there's no way you can do this task without seeing the glass. So we do

[00:47:02] some abstractions so that you actually don't see people. You kind of just see blobs, and you just see the object you're interacting with, and we can do a lot on the filtering side to ensure privacy. >> But the most important thing we do here is that no one goes into teleop in your robot unless you approve it, right? And it's very visible on the robot, like the lighting changes, and it's like, someone is in your robot. And it's one of the pre-selected operators that you have approved from a large set of operators, like, here are the four that service you. So that's kind of like inviting, uh, your cleaner or whatever into your house, >> another human >> another human into your house, and you just need to make sure that they're actually invited. >> So, to actually take a second and spell this out in more detail: in the early days, when I have Neo Gamma in my home, uh, it'll be baseline autonomous, but there will be times where it needs to bring in a teleoperator. And so you'll have teleoperators in headquarters that, if it needs help, or is doing something

[00:48:02] complicated, or it gets something wrong, uh, the teleoperator can step in and actually make the task happen. >> Yeah. In the beginning there are actually two different modes. So you have the mode which I call the best-effort autonomy that we just talked about. >> Yeah. >> And then you have task scheduling, >> which, like, my robot at home now is doing that. So I take my phone and I schedule and say, hey, between these hours, here are the tasks I want you to do for me today. It's like, do my white laundry. And then there's a package coming from Instacart; they can receive it at the door and unpack it into the fridge. And just generally tidy. >> And I've given it, when I'm not home, like, these hours I'm at work, just get it done. Right now, I don't care if that happens autonomously or through a teleoperator, right? So a lot of that happens through a teleoperator, because some of these tasks are quite complicated and we don't know how to automate them well enough yet. >> Yeah. >> Now, of course, that teleoperator uses autonomy to help improve the efficiency. So it's not all teleoperation. But I don't

[00:49:01] really care about the mix. The task gets done. >> Yes. >> Um, so we kind of split it like that, and then there's a gray zone kind of in the middle. If you want to, I don't know, have your friends over for a party, and you want the robot to be the bartender, and we don't have a bartender mode yet, >> Mhm. >> then you can approve a teleoperator to do that. >> So most of the videos we see of Optimus, uh, at, you know, Tesla's diner or at their events, are teleoperation. >> They are, but I think teleoperation has gotten this kind of undeserved bad reputation, or name. >> Why? >> Um, I think it's because people don't have enough clarity, like, hey, is this helped or is it autonomous? >> Um, but it is just labeled data. It is expert demonstrations, right? If you look at any of the big AI models that were trained, there was an enormous amount of people that sat down and hand-labeled data, and looked at examples, wrote out question-answers, and bootstrapped kind of

[00:50:01] like this very high quality data set for this to work, right? So you pre-train on general information. We also do that, just everything that's happened with the robot, and then you have a fine-tuning data set that is very high quality. >> Yeah. And in robotics, that is teleoperation, because it's the expert demonstrations. It's the hand-labeled data. Yeah. It's not different. It's just, I think there's some lack of transparency in what's going on. >> Well, I think the objection is if you have a demo, like a video, that makes it look like it can do something and it actually can't, because you >> you handcoded it. >> It clearly can, but it can't do it autonomously. >> Yeah, it can't do it. Yeah, but I think you're dead on that >> if it can physically do that, if the mechanism can do that, the neural net will fill in that blind spot instantly anyway, you know, once you've trained it. So I think it's perfectly legit. >> And now it's time for probably the most important segment, the health tech segment of Moonshots. It was about a decade ago when a dear friend of mine, who is in incredible health, goes to the hospital with a pain in his side, only to

[00:51:00] find out he's got stage 4 cancer. A few years later, a fraternity brother of mine dies in his sleep. He was young. He dies in his sleep from a heart attack. And that's when I realized people truly have no idea what's going on inside their bodies unless they look. We're all optimists about our health, but did you know that 70% of heart attacks happen without any preceding symptoms, no shortness of breath, no pain? Most cancers are detected way too late, at stage 3 or stage 4. And the sad fact is that we have all the technology we need to detect and prevent these diseases at scale. And that's when I knew I had to do something. I figured everyone should have access to this tech to find and prevent disease before it's too late. So I partnered with a group of incredible entrepreneurs and friends, Tony Robbins, Bob Hariri, Bill Kapp, to pull together all the key tech and the best physicians and scientists to start something called Fountain Life. Annually, I go to Fountain Life to get a digital upload: 200 gigabytes of data about my body, head to toe, collected in four hours to

[00:52:00] understand what's going on. All that data is fed to our AIs, our medical team. Every year, it's a non-negotiable for me. I have nothing to ask of you other than please become the CEO of your own health. Understand how good your body is at hiding disease, and have an understanding of what's going on. You can go to fountainlife.com to talk to one of my team members there. That's fountainlife.com. >> I want to jump into another fun subject, which is the uncanny valley >> and the face. >> So, um, I mean, you've probably had endless conversations internally about how do you make a face look, how human do you make it, how skin-like do you make it, how do you represent it? Can you tell us, sort of philosophically, uh, you know, how do you and Dar on your design team think about that? Where do you make it human enough? >> Yeah, you know, >> there is this very delicate line where you

[00:53:00] want to make sure body language comes across crystal clear, because that's the magic of the device, right? Of the companion. >> Yes. But at the same time, you don't want it to get to where your instincts tell you, hey, something's wrong, like this is a human and there's something wrong with it. You don't want it to be a human. And it's actually pretty surprising that there is this gap where people clearly identify this as, hey, this is a being I identify with. I understand its body language and everything, but it's also clearly not a human. >> Yeah. >> Yeah. And you want to be in that space. And then where you are in that space kind of depends a bit on who you ask, right? People have a different threshold here. >> Uh, so we're trying to hit kind of in the middle of that, and ensure that for as many people as possible, this is just an incredibly easy to understand product, but at the same time, that it's not creepy. And I think adoption here, by the way, talking

[00:54:01] about scale, adoption is so important, and adoption of new technology usually takes some time, because there's just this knowledge barrier, right? There's a barrier to entry. Even using a phone, there's a barrier to entry. >> Yeah. >> This interface is just so natural. Like, there is no barrier to entry. >> Yeah. >> It's something you just talk to, like a person. You know, what's incredibly cool to me is that there are like 50 things around the house that I don't know how to do, >> including the freaking way to backwash the pool. Like, all this crap. The robot can in real time access the information, learn how to do it, and just do it. >> Yeah. >> I can't do that. It would take me an hour to study. And there's no laborer that's going to come into the house and do it for under like 400 bucks. And so there are so many things that are in that category, where I'm not trying to replace a human being. I'm doing something that there literally was no other option for, because the knowledge is obscure. And

[00:55:02] there are so many of those things around a house. Now, like resetting the water heater, it keeps going out, and the reset process. But you can look it up. The robot can look it up >> and just go do it. >> And you know, it's like >> this is micro units of work, essentially, right? Like, you need five minutes of hyper-specialized micro units of work. >> Yeah, you need like 5 minutes of it every now and then, and it's super high value to you. >> Yeah. >> Um, and the Shop-Vac. You know, the Shop-Vac, you can run it forward or backward. There's a manual there. You could read the manual. I just want to get this crap off the garage floor. The robot will know how the Shop-Vac works, because somebody else's robot, one of the other 10,000, has already done it. >> Make my perfect teriyaki salmon on the grill. >> Yeah. Some obscure mixing, some food. Which brings us to something you talked about earlier that we kind of dodged: scotch for grandma, and safety. So, um, I do hope to make you a perfect salmon, >> but I have to do it myself. I'm not going to let the robot do it. Uh, because

[00:56:02] that's one of the things we're actually not doing when we're launching now, and that is due to safety, >> because what I worked so hard on, right, for this decade, is to make robots that are intrinsically safe. And what I mean by that is just, if something goes really wrong and it accidentally hits you, >> that might be painful, but it's not likely to severely harm you, >> dangerous, right? >> Yeah. And once you pick up a kettle of boiling water, there's no more guarantee that you are safe, right? >> Mhm. >> So we generally avoid any kind of dangerous objects, so that we can ensure safety in the beginning. Now, of course, over time, as the AI improves and we get more and more certainty on all behaviors being safe, we will allow cooking and other things. So we're doing internal projects on this, but we're not going to be rolling it out to customers in the beginning, just due to safety concerns. >> Yeah. >> Um, >> yeah, cooking and safety is a >> it's a real problem, it's not an easy thing. >> I mean, it's a real problem for humans,

[00:57:01] too. It is. Um, but there's the notion of intrinsic safety, which is incredibly important, and then there's the safety of the AI. And this is the reason we have a white paper out on this, which I recommend, if you guys are interested, read it. But this is why we have started very early betting extremely heavily on world models. >> World models. >> World models, they are of course >> the currently best known path towards AI. Um, but even more importantly for us, short term, as we progress here on the data collection and model training, they give us this incredible opportunity to automate evaluation of models, including safety and red teaming and all these things. >> Mhm. >> So you can think about it like, if you train a new model, and now you want to know if it's better than the previous one, you can deploy it to all your customers and you can get some vibe check a few days later, like, hey, are people more happy now? That's generally how it's done. >> Yeah. You don't want to do that with a physical system, right? You can't do that with an autonomous car either.
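The automated evaluation idea described here, replacing the "deploy and vibe check" loop with rollouts inside a world model plus scripted checks, might be sketched like this. Everything in it (the function names, the trace format, the checks) is invented to show the shape of the loop, not 1X's actual evaluation stack:

```python
def evaluate_in_world_model(policy, world_model, tasks, safety_checks, horizon=100):
    """Roll a candidate policy out inside a learned world model and score it
    with automated performance and safety checks (illustrative sketch only)."""
    report = []
    for task in tasks:
        # The robot is "put in the matrix": it acts as if this were the real world.
        state = world_model.reset(task)
        trace = [state]
        for _ in range(horizon):
            state = world_model.step(policy(state))  # model predicts what happens next
            trace.append(state)
        report.append({
            "task": task,
            "success": world_model.succeeded(trace),
            "violations": [name for name, check in safety_checks.items()
                           if check(trace)],  # automated red-teaming hooks
        })
    return report
```

Under this scheme, a new checkpoint would be propagated to the fleet only if every household task succeeds and every violations list stays empty, instead of waiting days for customer feedback on a physical system.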

[00:58:01] >> Yeah. >> What the world model actually is, is a model that is able to generate what will happen if you take specific actions. So you can think of it like a video model where you ask it to do something, and then it actually gets not only the question of what to do, it gets the actions to do so, and it gives you back not just the video but how the world feels, like the forces, everything, for the robot. So it's essentially like the robot's in the Matrix. >> We take the robot, we put it in a world model >> and it doesn't know that it's in a world model. It thinks it's in the real world, and it does its things, and we ask it to do the things we're usually doing around the house, and we see what it does, >> and we can put in lots of automated checks to ensure both that it's performing better, but also that it's not doing anything that could be deemed unsafe. >> So it's really this incredibly important and powerful evaluation tool, and that starts to solve the problem. >> Right. Do you think that's why you guys

[00:59:00] and Figure and Tesla are getting monster valuations? Cuz is the valuation just purely, hey, we're going to sell 10,000, 20,000, then 200,000? Or is it, no, the world model is such a unique asset, and so valuable in thousands of different ways, that it becomes, you know, very much a self-feeding, um, barrier to entry. And that could also explain, like, do you plan to productize that core capability? >> Yeah. So to me, it's back to, like, our mission is to create an abundance of artificial labor, and that goes across both the digital and the physical. Mhm. >> So yes, it will be productized. >> Still a bit out, but yes, this will be productized, >> because it seems like no matter how much factory capacity you build, it wouldn't be till like 2028, 2029 that you could diversify into all these, like, microsurgery and warehouses and, you know, drones, all that. But that same world model could apply to those much

[01:00:00] sooner, but you'd have to somehow get it into the hands of >> I think revenue from the robots will dominate forever. I do think the real physical world has way higher value than people think. >> I mean, just for folks to realize, right, we're at $110 trillion global GDP and labor is half of that, so the TAM, the total addressable market here, is like 50-plus trillion dollars >> just if you keep doing what we already do. Yeah. >> It's going to be so much bigger. >> You've attracted some incredible early investors. Do you mind sharing who's come into your cap stack? >> We have some big classical ventures like SoftBank, Target Global, Nvidia, EQT, OpenAI. >> So there's some good names in there, and a lot more. >> That's damn good. I think it's becoming increasingly clear, right, that the bottleneck in society to superintelligence is not better algorithms or scraping
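The back-of-envelope market math quoted here is simple enough to check. The GDP and labor-share figures are the speakers' rough estimates, not precise economic data:

```python
# Back-of-envelope TAM math from the conversation.
global_gdp_trillions = 110   # roughly $110T global GDP (speaker's figure)
labor_share = 0.5            # labor taken as roughly half of GDP

tam_trillions = global_gdp_trillions * labor_share
print(f"Addressable labor market: ~${tam_trillions:.0f} trillion")
```

This lands at about $55 trillion, consistent with the "50-plus trillion" quoted in the conversation.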

[01:01:03] the internet in a more thorough way. It's better data. >> Yeah, it's better data, and then you need robots to generate this data. But even more importantly, it's the physical parts, right? You need more data centers, you need more power, and to do this you need more labor, and it's kind of this bootstrapping problem. And if you break down the pyramid and say superintelligence consists of this incredible amount of data and this substrate of compute and power, then you see that humanoids are a solution to both of them. >> Mhm. >> And if you just do the math, you'll see that you're probably not going to get there without that. You're just running out of these basic constraints. And I think humanoids will be surprisingly useful surprisingly fast. >> Okay. >> Not perfect, but it's going to be surprisingly useful surprisingly early. >> I have a question on behalf of Salim,

[01:02:01] typically our third moonshot mate here. I have to ask, because I have to ask on his behalf. >> So Salim is constantly saying, why two arms? Why two legs? Why not six arms? Why do we need the humanoid form? I mean, in the kitchen, wouldn't it be better to have an extra pair of arms? So, what's the definitive answer to him? >> Well, first of all, he's kind of right: humanoid isn't the only thing that will work. I do think, and we've looked a lot, that I don't know of any form factor that is as general as a human in doing any kind of labor in any kind of environment. We've tried to simplify, we've tried to increase complexity; the human is a pretty good machine. >> So if your goal is just to be as general as possible, then you need a humanoid. Now, if your goal is to transfer knowledge from humans, it's a lot easier if you have a humanoid. That's the most important part of the equation there. >> It's very important. >> And then learnings from a human are not going to transfer to a

[01:03:01] six-armed robot, or it's at least harder. And then, >> I mean, the world is made for humans. >> That's what Jensen says, right? It's brownfield deployment. It's very true. >> And then I think lastly, >> do you want to live with a six-legged robot in your kitchen? But I view humanoids as kind of the pinnacle of general technology. And there is this repeating pattern through history of this happening with zero-to-one novel products. So think about the computer: it started with big mainframe computers solving very specialized tasks. >> The equivalent in robotics would be industrial robotics. >> Now comes the PC, or even before the PC, the VIC-20s or Ataris or whatever, more general computers, and this gets produced at such a scale that it just becomes generally available

[01:04:02] and now it's super high quality and it's incredibly reliable. It's got this huge ecosystem, and it just becomes the best way to solve any problem, even though, and here's the argument against humanoids, right, it's overly complicated for the task. When you take your beautiful Apple here and you write a Word document, I mean, that's the most complicated typewriter I can think of. Humanity mastered nanoscale chip manufacturing for you to have a typewriter. But it's still actually the cheapest, most reliable typewriter, because it's just made at such a scale. >> Humanoids are >> exactly the same. Now, if you see what happens to computers now: because the market has become so big, it starts to actually become segmented again, and now you can carve out niches in computing that are still so large that they have scale. So now you get specialized compute for AI, specialized compute for all kinds of things, right? >> Yeah. >> And this is because it's become so big. Now the same will happen in robotics, so

[01:05:01] we will get to where we have Star Wars: there will be different drones doing different tasks, and they will look more specialized, like my repair drone with six arms and scissor hands, and I don't know. It'll get there, but you have to go through this humanoid phase first. So let's just say humanoids is a phase. >> My favorite robot is still Data from Star Trek. >> It's a great robot. >> Yeah, and it's kind of the closest thing I can think of to what you're building. You know, a lovable, happy robot that you can give a hug to. Do you have a favorite robot? >> Uh, well, I wouldn't have thought of Data, but now that you said Data, that's top of the food chain. Everybody loves R2-D2, because for some reason R2-D2 has no voice, even though they have voice technology everywhere. >> Squeaks. >> All those visions, though, are built around what Hollywood could easily get on a set. >> Yeah. I think the humanoid form factor, though, has another aspect that you kind of touched on, but

[01:06:01] >> when I bring it into my house, I have a vision of what it can do and what it can't do >> based on humans, right? >> And so I ask it to do things that are rational and not irrational, because I know what a person could do. >> If I had a six-legged thing that Salim came up with, >> I'm not quite sure, should it be able to climb on the roof and fix the shingles or not? I don't know what this thing's capabilities are. So it breaks the whole comfort zone. >> The thing that does surprise me, though, about the robots: they're unbelievably coordinated between themselves, and there are some good demos of this at MIT that are just mind-blowing. When you have two movers trying to take a couch up the stairs, it's like the Stooges, right? They're like, oh, move a little left, a little... When you see the equivalent act with two robots, they're just in phase and they do it seamlessly. So I think there's a very high probability that the standard in the home is going to be like four or six, if you get the price point down a lot. They work so well in concert with each other, it's almost a crime not to have that teamwork

[01:07:02] synergy. >> Huh. It seems like a bit much to me. But maybe, if I need movers, you know, I can ask my Neo Gamma and he'll invite some friends over. >> But you're not taking everything into account, Peter, because you have to remember that by the time you have this many robots in your home, everyone's homes are really freaking big. >> Yeah, we have an abundance of labor. >> You mean your house is not going to be this small. >> It's going to be huge. >> So labor is going to continue to demonetize and democratize. >> Everybody, there's not a week that goes by when I don't get the strangest of compliments. Someone will stop me and say, "Peter, you've got such nice skin." Honestly, I never thought, especially at age 64, I'd be hearing anyone say that I have great skin. And honestly, I can't take any credit. I use an amazing product called OneSkin OS-01 twice a day, every day. The company was built by

[01:08:01] four brilliant PhD women who have identified a 10-amino-acid peptide that effectively reverses the age of your skin. I love it and, like I say, I use it every day, twice a day. There you have it. That's my secret. You go to oneskin.co and enter PETER at checkout for a discount on the same product I use. Okay, now back to the episode. Let's go someplace that I'd love your insight on, which is China. So, when I think about the robot industry, you know, I'm tracking 50-plus well-funded humanoid robot companies in different stages around the world. The majority are in the US and China. There are some in Europe; you started in Norway. There are some in India, parts in Japan and in Korea, but China, by far, I think is dominating, and what I see there with the robot Olympics and special robot villages is pretty

[01:09:01] extraordinary, where the Chinese government is really accelerating this for obvious reasons. You know, they need access to low-cost labor to continue the manufacturing boom. They need it for supporting their elderly population. How do you think about China? What do you think of the work coming out of China? >> Well, first of all, I think we need the same thing here. >> We don't realize it maybe as much, but of course we need the same thing. >> I think the Chinese ecosystem is incredible. I mean, >> I don't know anywhere else in the world where you can go and develop hardware as fast. You need something, and you go over and get a machine on the corner here. Something's broken, and you just go over the street and buy some new components. There's someone doing a reflow over on the street corner over there. It's this incredible ecosystem. And, I said "the bay" now, but I know the hardware bay in the Shenzhen area is also a bay; I mean Silicon

[01:10:00] Valley, this bay: we have a long way to go if we want to really get to the same level of rapid iteration on hardware. So that's just incredible. I think the manufacturing part is incredible. There's just so much process knowledge. >> Mhm. >> And I think this is highly underrated. Like, you know, think about magnets. >> I do. >> So do I, a lot. We have great material scientists who know how magnets work and can design very good magnets, but then we lack that guy who knows that, yeah, you do all of that stuff they told you in the books, but you know, after 2 hours you have to stir to the left, not to the right. There's just so much of that, right? >> Yeah. >> And this is so disseminated in China. There's so much process.

[01:11:00] How did that evolve in China and not here? What's the cause? >> Top-down incentives, >> just funding. >> I think it's the government saying: you're a robot city, you're a neodymium magnet city. Just capital and people and, you know, communist-directed, but then allowing companies to build on top of that. >> I mean, is that what you see as well, or not? >> I'm not sure. I think the Chinese startup community is very alive, right? >> And the capital is quite alive. And I feel it runs very similar to what we like to think of as the bay here. >> I mean, I used to take a group of investors every year to China, and we would go and visit Shenzhen and Shanghai and Hong Kong and Beijing and meet with Baidu and Tencent and Huawei and the leadership of

[01:12:00] all these, and there was a super vibrant entrepreneurial community, right? The mindset was 996: you'd work 9:00 a.m. to 9:00 p.m., 6 days a week, and that was a great lifestyle. And you considered the 1.3 billion people in China your market, and the 300 million in America your market as well. But there was a falloff after 2019, and there was a real dip in that ecosystem. I think it's beginning to reemerge, but I think the government is really pushing hard on supporting AI, and, you know, humanoid robots are an embodiment of AI; they're obviously cleanly meshed. So I do think there's a lot more support that the US government needs to give to us >> 100% >> hardware companies. >> But what I wanted to say was also, I think >> most likely the most genius thing they did was the economic zones, the free economic zones. >> Sure.

[01:13:00] >> It's not that people here don't want to build stuff; we want to build stuff. >> Yeah. >> It just takes too long and costs too much and it's too convoluted. Right. >> We do it in spite of the challenges. >> Yeah, I think the US should just spin up some free economic zones. Like, here you have expedited permitting and >> well, in California in particular it would be a no-brainer to do that. That's the simplest, best idea ever, but I don't know what it would take to get it through. >> No, I mean, this is some of what people are working on these days, right? If you look at, for example, Project Crystal Land, it's very similar to this kind of free economic zone in the US. >> I think there's another problem you need to solve too, though, which is that, you know, the US just did software, software, software for, you know, forever, and we were not only not doing hardware, we just didn't do chips. Chips! >> Not forever; we're in Silicon Valley. >> Yeah. Where's the silicon? What's up with that? >> There was a phase in between here where people kind of lost the way, they lost the plot, and now we have to find it. >> The venture community got all messed up,

[01:14:00] too, because they wouldn't fund you if you had a physical component in your business plan. Well, I'm looking for the next Meta or Google. I don't really care about, like, yeah, hardware. >> Well, I've been keeping this company going for 10 years, so yeah, I can go on and on about >> Go on and on, because it needs to be heard: how much people are afraid of hardware. Yeah. Right. >> Yeah. >> But >> it's going to kill us if we don't find a solution. >> I think it's also going to kill VC, to be honest. If you look at the returns of venture, >> it used to be incredible, right? If you got to be an LP in a venture fund, you were like, oh man, I'm set, right? I'm going to make the big bucks. >> And now it's 10 years to return double. And now it's more like a philanthropy thing: you want to fund startup entrepreneurs, because venture doesn't really make that much money. And I think it has a lot to do with >> Except at Link; you guys are doing amazing. >> We are doing >> That is good. >> You guys touch the hard stuff. The point is, if you don't touch the hard stuff... We touch the early stuff, right?

[01:15:03] So, it's first checks into companies that then are scaling rapidly, versus companies that are doing >> We're doing first checks into hard stuff at the seed stage, but we're not doing hardware. So I'm as much part of the problem: if somebody came to me with a seed-stage hard physical device problem, we very rarely will fund that, and that's a dysfunction, right? >> Well, you should look at what the biggest companies are. They all have hardware. >> Yeah. I mean, listen, Elon cracked the code on that. He's been able to make hardware sexy and has generated incredible returns. >> Yeah. I think Jensen says it really well, right? They want to work on the really hard problems >> that are super painful, >> that you are uniquely capable of, because you know that your competitors have to go through the same pain or more, and they're not going to be willing to take as much pain as you. This is how you

[01:16:00] win. >> And things here are actually defensible, right? The moat we have on hardware, that's years. >> The moat we have... I'm incredibly proud of our team, by the way. We've accomplished some things that are so amazing on such a budget. So we're way ahead of everyone else on what we're doing on world models. >> Yeah. So let's say we're three months ahead, >> right? Because in this field, three months ahead is way ahead. >> You're incredibly rare. And all props to Elon; he's incredible. But Elon's pathway to getting to hardware was through PayPal. >> Sure. >> A couple hundred million. Burned it all himself. Got down to near bankruptcy. Was almost dead on both of his big ones, you know, Tesla and SpaceX. Barely pulled it out and then made them huge. But the VCs were not touching it. >> Yeah. He had to borrow money in 2008, in a divorce, with SpaceX having its third failure, to borrow money for >> the funding model. >> You know, I was very lucky. I had a

[01:17:00] very early, very good founding investor. The company didn't start in a garage, because we're not Silicon Valley. We started in a barn, because we were Norwegian. >> Yeah. >> And at some point, 2 years later, he sold the farm. So we had to move. He sold the farm to fund the company. >> Huh. Right. >> So this company in Silicon Valley wouldn't exist without a Norwegian investor. >> No, we wouldn't exist, because we wouldn't have had the runway, right? Operating in Norway was just incredibly cheap compared to operating here. >> Your initial Norwegian investor: did he or she believe they were going to make a huge amount of money, or did they do it because they were passionate about your vision and your mission, >> or were they believing in you? >> Yeah, I think it's all three. All three. Yeah, it turned out pretty well. >> Yeah. >> But it wasn't here. That's the point. >> But I mean, there are different phases, right? If you want to scale something, you have to come here. >> Yeah. >> I think you can do deep research in other parts of the world. There's talent everywhere, but really hyperscaling that and getting it

[01:18:01] across the finish line, >> you know, that's here. >> Did you consider LA, Austin, Florida versus here in Palo Alto? >> Yeah, we even had manufacturing for a little time in Texas, in Dallas. There's just... >> What is it, the water or something? >> The talent pool. It's the talent. >> Yeah, there's talent everywhere, but the density of talent... And you know, there are different types of talent, because when you have a zero-to-one field like this, in the beginning you have a lot of really passionate people. They've been working on this all their lives, and they're so good at, in this case, humanoid robotics, right? And I remember, back in the day, if you went to the humanoids conference, everyone could fit around a few tables, >> and those people are still the ones at these companies, right? But those people, they don't know how to make a great product. They don't know how to scale that to a million or a billion devices. They don't

[01:19:02] know how to write the incredibly good APIs for the software to support the ecosystem. >> They know this thing, and they do deep research, and now your field kind of comes of age, and it's time to actually do this, because the timing is right. And we purposefully stayed very small for the first seven years, just doing core technology. Now suddenly you get access to this talent pool of people that just go from field to field, whatever is the hottest thing right now, >> and just do it again and again and again, and that's Silicon Valley, right? >> But there's been an incredible inflection point in humanoid robotics. I remember, you know, we had the ANA Avatar XPRIZE, right, which had teams build robotic avatars that you could telepresence into. >> Mhm. >> And I remember the finals. We had good teams. I know some of your team members here were parts of those teams, but it's

[01:20:00] come, you know, a thousand-x since then, and really in the last 5 years, right? Is it the AI models that have made that? What's caused the inflection in the last 5 years? >> The AI is clearly part of it. There are things we do with AI now that we couldn't do 5 years ago. I do think we saw kind of the breadcrumbs, and we were on the path already then, but it wasn't working yet. >> And I think you just hit this critical mass of accumulated innovations that have happened in hardware. I do think it's important to note, though, that it's hard to see what is real innovation and not, in any field, and especially in humanoid robotics. So I want to just point out again that you can go on YouTube and find things from the early 2000s that look better than most things you see today that humanoid robotics companies are doing. >> Mhm. >> So you can't just make a beautiful robot that looks good. You have to actually make a robot that is safe, that you can

[01:21:01] actually manufacture at scale for a very affordable price, and that is still capable. Right. >> And I think that's been the main unlock and the challenge: you need to get those things right. >> Yeah. >> And that just takes a lot of time. >> I think the neural nets are light years ahead of anything anyone would have predicted 5 years ago. And then the hardware, you know, the Nvidia chip that it runs on, is getting pushed as fast as any innovation in history, because the demand is through the roof. So that part is well understood. On the physical hardware side, what's something that you use today that you couldn't have used 10 years ago? >> Yeah. >> What's improving in the motors, in the harnesses, in the electronics, batteries? >> Yeah. So I think mostly it's been on the motors and material science side. So we make our own motors, including not only the IP for the motor, but also the manufacturing and automation for all this, and everything that goes into it. >> The chain is so brilliant. >> Yeah. You literally make your own motors, like, wind the wires.

[01:22:00] >> Yeah. >> Holy crap. >> We do it kind of special. It's the 1X version of this. So motors are one of the things we really innovated in. And this is actually how I started: when I sat down a decade ago, the first thing I did was design a different kind of motor. >> Okay. >> And the motors we have now in Neo, they are five and a half times the world record in torque. >> Wow. >> And that's why we have something so powerful >> Mhm. >> that we don't need gears. >> We can just pull on these tendons to loosely simulate human muscles. And that's why it's so light. It's also why it's so backdrivable and compliant. It's why it's so cheap to manufacture. >> Everything kind of comes from this. Now, of course, when you have these motors, then you can start using tendons, but then you need to sink a lot of time into figuring out how to use tendons. And then comes all the material science to have tendons that can last millions and millions of cycles. >> Yeah. >> And these are really hard research

[01:23:00] problems, right? They're not even engineering problems. They're hard research problems. >> And we spent so much time figuring all that out. You can't make the motors that we make without doing some pretty significant innovations in electronics, in how you do power amplification, and, in general, in motor drives. So there are a lot of things that come together. You couldn't have designed the motors we make today without some of the innovations that had happened in magnetics. And of course, you couldn't have done it without AI either. The first thing I did back in the day, when I sat down, was program a network to learn how to make motors. >> Oh, you're kidding. You designed the motors via AI. How long ago was that? >> It's a bit more than 10 years. >> Wow. Okay. >> I mean, it wasn't transformers, but it doesn't matter. >> Yeah. Well, yeah, for that kind of use case. But it was a neural net. Yeah. Wow. Bernt, you think about robots in

[01:24:00] the world probably more than anybody else. What's your vision 10 years from now? What are we seeing? What does abundance of labor enable that goes beyond people's initial reaction to how they would use a robot? >> I think, first of all, what will happen is actual abundance means everyone can have whatever they want. But not only can you have whatever you want, you can have whatever you want in a sustainable manner, >> because sustainability is something we lose when we cut corners to shave costs, right? If you actually have an abundance of energy and labor, why would you not do things sustainably? And then I think the next frontier that comes after just, in general, building out the infrastructure across the globe that allows everyone to have an incredible quality of life is: how do we solve the remaining really hard problems in science? And I think this is not going to happen without humanoids, because you need to build particle

[01:25:00] accelerators. You need to build enormous biotech labs where you're doing experiments. >> You need to do all the experiments, right. >> And also, I think it's almost existential to us, for human happiness. >> Yeah. >> I don't want the godlike AI in the sky directing all of the planet's inhabitants around with their glasses to do experiments for it, to solve science. >> That's not the future we're aiming for. >> Yeah. >> We want to have this beautiful symbiosis of co-invention between man and machine. >> And yeah, that particular use case is so acute, where, you know, Demis is working on the full cell simulator to try to close the loop, but you know that you're going to need people to mix a huge number of chemicals to truly unlock longevity and health and chemistry. And you know, the humanoid robots can do the work, because everything in the lab... >> Not only can they do the work. I think this is a common misconception: humanoid robots will do a lot of the work initially, but once it gets to a certain

[01:26:01] scale, the humanoid robot will make the automation system that will do the work. >> Mhm. >> Because humanoid robots will not be machining new parts with a Dremel, right? You will use the CNC machine. >> Humanoid robots will not be moving car chassis around by carrying them with 30 humanoids. Clearly, that does not make sense, right? We have existing automation systems, and we will build more. What humanoids will do for you is build all of these automation systems, get them up and running, and then cover the remaining gaps that today you can only do with humans. >> Yep. >> How are you going to do in a vacuum? >> I want my Neo Gamma to help me set up my space station or mine my asteroid. >> I think, first of all, we have a huge advantage because the robot is so light, and, I guess Elon's working on this, but payload to orbit is still expensive. Secondly, most of the stuff we have actually works pretty well in space. >> We have to do some stuff with the epoxy on the motors. That's not going to

[01:27:02] be very vacuum-hard. >> If you want to train in zero-g, one of my companies is a company called Zero Gravity Corporation, these parabolic flights. >> Yep. >> Yeah. We flew Stephen Hawking in zero-g. Maybe Neo Gamma should come next. >> That would be great. And I actually do think there are real use cases for this. One thing is building a base on Mars or whatever, right? But even before we get there, just in-orbit assembly >> yes, >> is this extremely high-value task. And I think there we will actually use teleop. >> Mhm. >> And the reason I'm saying that is that the cost of mistakes is so high that you want to use the smartest, most expert humans you have. And until we get to superintelligence, that will be a human. And you have people in orbit, you have robots outside, very low latency, and you can teleoperate in a very natural manner, as if it was your own body, to do all of these in-orbit assembly tasks. And they can be incredibly complex, and you can still do them with very high accuracy, and you're

[01:28:01] not endangering people. >> Yeah. >> And of course, when you've done this for a while, you have the data to automate all of it, which is very interesting. >> Yeah. That's where your weight advantage would be really amazing, too, because you can take five, six, seven of these. >> And the energy efficiency. >> You're going to have to somehow bleed off your heat, right? >> Right. Hey, it's really hard. >> Yeah, that's right. >> You must be looking to hire people. >> We are. >> What kind of people watching are you interested in potentially hiring? >> People who are just really mission-driven, who really believe in the beauty of a world where we have an abundance of labor >> and like to solve really hard problems. People who can also demonstrate that they've solved incredibly hard problems, because that's what we're doing here, right? Everything from material science at the bottom all the way up to the foundation models at the top. And I think what we offer is just this incredible

[01:29:00] place to work. >> Not with respect to work-life balance or any of this. We're not quite Chinese, but it's a hard problem and we're in it to win. But it's probably the place on the planet with the most experts across all different disciplines in science. So if you come here as a mechanical engineer, you will learn so much about AI, about electrical engineering, about batteries, about material science, everything else. And it doesn't matter which discipline you come from, right? You will learn so much from the people around you. And I think that's also one of our biggest strengths: how we always work in these multidisciplinary groups, >> and we find the good solutions between the disciplines, where it's like, hey, you don't need to do that, that's costly in manufacturing, I can calibrate that away. >> Yeah. >> Or, you don't need to calibrate, this doesn't cost me more. >> Yeah. Oh, I can see that, actually, walking around the building here. You know, Dean Kamen's lab >> in New Hampshire is very, very similar; he's the Segway inventor, and >> everybody's just happy. All the MIT people that we know who work with him,

[01:30:00] they're just happy. And the reason is, when you do software, you're largely behind a workstation all day. You're sitting, whatever. But when you're doing physical things, you're moving around a lot more, and you're building and making, and it just energizes you all day long. It's just such a fun work environment. It's so obviously tangible just walking around talking to people. So, it's a good lifestyle. >> And it helps when there are a lot of robots walking around with you. >> Yeah, for sure. And people can go to the 1X Technologies website >> That's true. >> to find out what positions are open. >> Yeah. >> And follow us on X and you'll learn more about us. We're pretty active there. >> For sure. And one thing I'm excited to announce is that you and Dar and the Neo Gammas are going to be at the Abundance Summit in March. >> Yeah. Can't wait. >> Meet a lot of great people. >> Yeah. So, our theme this year is the rise of digital superintelligence and the rise of humanoid robots, because the two are going together. >> Spot on. >> Yeah, I think so. I mean, it really is.

[01:31:01] >> It really is. And, um, without making any promises, I'm hopeful we'll have a number of the Neo Gammas there, sort of interacting and living and hanging out with the Abundance members. Yeah. >> How do they get there? Do you buy them an airplane seat? They just walk off? >> Yeah. >> You don't box them up, do you? >> Yeah, we're down in LA. So, >> yeah, we're probably going to drive down to LA. It's easier than getting them on a plane. >> Do you put them in the seats and strap them in? >> Yeah, we do. >> Actually, at this point, they're starting to sit in the seat themselves. It doesn't strap itself in yet, but that's coming. Um, it's an interesting story. It's a funny story, though, because we put one of the first robots on a plane back in the day. We were rushing back home from China. It's a proper startup story, way back in the day. Um, we were running out of money, and we hadn't quite gotten to where the product was good enough to raise more money. So I took the entire team and we went to China, and we lived in a hotel for 5 weeks, designing and manufacturing

[01:32:02] kind of like >> as we go. No, we designed until late in the night, and in the morning you walk down to the machine shop. You help get them some information, you get some new parts back, and we just kept iterating in this electronics market. It's magical, right? >> Yeah. >> And then we have to go back, and we're just like, okay, we have to rush back on the plane to meet some investors. So we take the robot and we fold it up, right? Yeah. And we put it in a briefcase, >> and when it goes through the scanner, >> you can see the guy just go all white, and his hands are shaking when he's opening the bag, and we're like, "No, no, it's just a robot." And he's like, "Yeah, it's a robot." >> That's hilarious. >> That's awesome. >> Uh, well, yeah, really thrilled. I loved your TED talk, and I'm excited to have Neo Gamma there hanging out with all our Abundance members, and hopefully you'll be ready to make

[01:33:00] some sales, sell some robots. So, in the early days of getting them into the home, uh, no promises, but you're going to have sort of an application process to get the robots in and start to build data assets. When do you think you'll be ready to take pre-orders and orders for Neo Gamma? >> I'm going to be kind to my team and not say a specific date. >> Okay. >> But it is happening this year. >> Okay. >> It's this year. >> This year, 2025. >> 2025. >> Yeah. Now, we're going to talk a lot about this in the pre-order, >> but the most important thing we do here is expectation management. >> Yes. >> This is incredibly early, right? >> Yeah. >> And what you're buying here is kind of a ticket to be part of this transformation. >> Adopt a Neo into your family. >> Help us teach it. It's going to be a lot of fun. It's going to be useful. >> I love that framing. That's perfect. >> It's going to be useful, but it's not going to be perfect. There are going to be a lot of rough edges. And we're going to treat you really well. We're going to

[01:34:00] figure it out together, and it's going to be an incredibly fun journey. That's kind of like the early adopter program that we're launching this year. >> Yeah, you're going to have a long waiting list. You know, we need millions and millions of these, and we need to get the price point down. I mean, when you think about the constraints to human happiness globally, a lot of them are going to be solved through regular AI, but another big chunk, most of them, are related to houses and food and physical happiness. >> Give the jobs that are dull, dangerous, and dirty to the robots >> and create a lot more of the things that make people happy: the parks and the homes and, you know, all of the bigger homes and >> better things to play with. It's all constrained by that inability to manufacture, through the lack of humanoids. You know, >> let me ask you a numbers question. So, I interviewed Elon at the FII Summit. You're going to be there in October as well. And also Brett

[01:35:00] Adcock, and they both gave a number around 10 billion humanoid robots by 2040. Do you believe that number? >> 10 billion by 2040. >> Yeah, I think it's probably roughly correct. I think it might happen before. I think it really comes down to what kind of artificial constraints we put on how we scale. Yeah. Um, at that point you have to actually really think about: how are you refining rare earths? How are you mining more aluminium? How are you ensuring that you bootstrap your labor really well with robots? How do you build out a power infrastructure? We need more chip fabs, by the way. >> Yeah. I mean, we're not going to be able to build 10 billion humanoids without way more chip fabs. >> We can help with having robots build this out, >> but I do think that timeline depends a lot on how permitting processes go and how much

[01:36:01] we allow ourselves to scale, and how fast. But I do hope we get there. >> Yeah. I mean, for reference, there's about a billion automobiles on the planet. You would think there's more, but >> I'm not sure how many iPhones there are. Well, there are on the order of 8 billion smartphones on the planet. >> I'm really glad you said what you just said, though, because the numbers are so wildly out of balance. Each one of these robots uses about a full GPU, could probably use two, and if you're talking about a billion of them by 2040, we're only making 20 million GPUs a year. And then TSMC has 66% market share in fabrication now. So they are literally one point of failure for the entire economy that we're trying to build. And so we're desperately short on the fabs. >> And that's if you just go one layer deep. >> Yeah. >> Like, look at ASML >> behind it. Right. So, like, the supply chain for chip fabs. >> That's even more brittle. >> Yep. I'm really surprised that we're not

[01:37:01] moving much faster, given that Elon is, or was, in Washington, right in the middle of it, and we're just letting this bottleneck... >> How long have we been talking about magnets? >> How long have we been talking about magnets? >> We've been talking about magnets for a long time, right? That's the problem, that only China can really make >> high-performance magnets. >> Yeah. Yeah. >> And it's not just the rare earths, it's the process to produce them, >> right? Yeah. >> And I think now, finally, people are opening their eyes, like, wait a minute, this is actually a real problem. >> We meet with a lot of government officials, and they're completely unaware of these bottlenecks, and it's funny that if you point them out, there's still no reaction. You know, it's so acute and so urgent. But you're in a perfect position to actually identify those bottlenecks. So it's really great that you said it on this podcast, because then we can take that material and say, "Look, he would know. This is what we need. This is going to be a crisis very quickly.
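The GPU shortfall described above can be sanity-checked with a quick back-of-envelope calculation. This is just a sketch using the figures mentioned in the conversation (one GPU per robot, a billion robots, roughly 20 million robot-class GPUs produced per year); none of these numbers are verified data.

```python
# Back-of-envelope check of the GPU bottleneck figures from the conversation.
# Assumptions (quoted in the discussion, not verified): one GPU per robot,
# a billion humanoids targeted by 2040, ~20 million suitable GPUs made per year.

ROBOTS_TARGET = 1_000_000_000   # "a billion of them by 2040"
GPUS_PER_ROBOT = 1              # "about a full GPU, could probably use two"
GPUS_PER_YEAR = 20_000_000      # "we're only making 20 million GPUs a year"

# Years of today's entire GPU output needed just to equip the robot fleet
years_needed = ROBOTS_TARGET * GPUS_PER_ROBOT / GPUS_PER_YEAR
print(f"{years_needed:.0f} years of current GPU production")  # → 50 years
```

At the cited production rate, equipping a billion robots would consume fifty years of today's entire GPU output, which is the "wildly out of balance" gap the speakers are pointing at; doubling GPUs per robot doubles it again.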

[01:38:00] >> So, yeah, >> thank you for the tour today. Thank you for the work that you're doing. Uh, super grateful. Excited to have you at the Abundance Summit with your team of robots. Um, if you're interested, it's abundance360.com. Check it out. Uh, again, it's the 1X Technologies website to come and learn about the positions here, and follow them on X. >> Or even easier, 1x.tech. 1x.tech. And by the way, the reason you named the company 1X, I think that's worth closing out as the story here. >> Well, you know, there are all of these videos of robots on YouTube. >> Yes. >> And there's always this 8x or 4x in the corner. And all we do is real time, because we build proper robots. >> There you go. What you're seeing is real 1x speed. Um, and uh, we had fun today with Neo Gamma. >> And it's also amazing because, if you apply for our careers and you come here, you get to be a 1x engineer. >> Okay. Well, a real pleasure, my friend.

[01:39:02] >> Awesome. Thank you for the day. >> The things I get to do because of this podcast. >> We're having fun. >> Oh my god. Yeah, this is so awesome. Every week, my team and I study the top 10 technology metatrends that will transform industries over the decade ahead. I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport, energy, longevity, and more. There's no fluff, only the most important stuff that matters, that impacts our lives, our companies, and our careers. If you want me to share these metatrends with you, I write a newsletter twice a week, sending it out as a short two-minute read via email. And if you want to discover the most important metatrends 10 years before anyone else, this report's for you. Readers include founders and CEOs from the world's most disruptive companies and entrepreneurs building the world's most disruptive tech. It's not for you if you don't want to be informed about what's coming, why it matters, and how you can benefit from it. To subscribe for free, go to diamandis.com/metatrends

[01:40:00] to gain access to the trends 10 years before anyone else. All right, now back to this episode. [Music]