I am blown away by how far you've come. >> The things that you can do with neural nets now just completely blow my mind. Every year, the whole business looks completely different. >> It's amazing to me how you accumulate data, and the data becomes this incredible barrier to entry, this incredible asset. >> The one thing that's important here is, once one robot learns how to do a task, every robot in the fleet knows it. And humans don't operate like this. >> When do we start seeing robots building robots? >> We will put [music] robots on our BotQ lines this year. Listen, this is going to be the largest economy in the world. It's going to be a super impactful business. It'll lead to ubiquitous goods and services for anybody [music] in an age of abundance. And it's going to be a super fun business, too. It's going to build the sci-fi future we all want. What you're seeing is every major group in the world will get into the space. You have to. You have no choice. >> When are we going to see the first Figure in [music] the customer's home? >> My best guess is, I think... >> Now that's a moonshot, ladies and gentlemen. So Dave and I are in San Jose at Figure
[00:01:01] headquarters. We just did a podcast with our friend Brett Adcock. Extraordinary. >> Check it out. >> Check it out. So we're here. >> Yeah. Figure 1. >> This is the original. >> Still somewhat functional. It ran the first large language model, the first neural net. >> They built it in under a year. Brett was actually screwing these things together himself, and it was all about gathering telemetry data so they could build this. >> Here's Figure 2. Much more beautiful, much more functional, running neural nets across the board, dumping all the C++. Can it do, you know, "live long and prosper"? >> And here we go: Figure 3 is the workhorse right now. We just did a tour. We probably saw a hundred of these walking through the hallways, on test stands, cleaning dishes. >> It was good. They added a flexible toe, too, so it can go down like this. And
[00:02:00] before, it had this clunky foot here. >> And Figure 3 has the palm camera. >> Palm cam. >> Yeah. >> They cut about 30 pounds off the weight and 90% off the manufacturing cost. >> Wow. Crazy. >> Amazing. >> It's the perfect height: right between the two of us. >> Welcome to Moonshots, everybody. I'm here at Figure headquarters with Brett Adcock and DB, too. Brett, it's been about 18 months since we did a podcast on Moonshots together, and I am blown away by how far you've come. >> 18 months in AI time? That's like a decade, dude. >> Welcome to Figure headquarters. What do you think? >> It's extraordinary. I mean, we just went on a tour. You've got 300,000 square feet here, 400,000 square feet under development. There are Figure 3 robots walking down the halls, fully autonomous robots running Helix 2. You just released Helix 2 today.
[00:03:00] >> Today. >> I got the news while I was flying up here. You have these robots doing everything from kitchen tasks to packages to manufacturing of different types. How many robots do you think we saw? >> Seriously? I wasn't counting as we walked: hundreds, maybe not a thousand. >> Hundreds. At least 100 or so. >> Well, there's a lot of partial robots out there, too. It's hard to count. >> Picking up Figure heads. [laughter] How many hands do you think we saw? There were many more hands: the hand line, the head line, the torso line. >> Actually, picking up the head was the most surreal. >> This is where the pelvis is made. >> Pretty amazing. You know, I still remember my first visit with you. Full disclosure: my venture fund invested in two of your earlier rounds, and I'm super proud of the progress you've made. I still remember your Figure 1 putting a Keurig cup in a coffee maker, and that
[00:04:00] was a big deal, because it was done with neural nets and not C++. >> Honestly, I think that was a big inflection point for us. There were a few things we needed to really run down. First, can you build an electric humanoid that's low cost and as capable as a human? That's the hardware side of things. The second thing is, can you figure out a way to not have to code your way out of this problem? How do we use a neural net to learn those human-type representations and the new task? When we were doing the Keurig task, it was basically a bimanual neural net running on the robot, which has now evolved into Helix. It was a smaller task, but a few minutes long: picking up the Keurig cup, opening the coffee maker, putting it in, running it. And it was the first time we saw a true instance of neural nets working on a bimanual humanoid robot. >> Yeah. And that was when we said, "Okay, we have to go all in on neural nets. The whole stack needs to be neural nets to make this work."
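To make "the whole stack is neural nets" concrete, here is a minimal sketch of the kind of end-to-end visuomotor policy being described: camera pixels and joint state in, joint commands out, trained by imitation rather than hand-coded. This is an illustrative PyTorch skeleton built on our own assumptions, not Figure's actual Helix architecture; every module and dimension is invented for the example.

```python
# Illustrative sketch only, not Figure's Helix code. Assumes PyTorch.
# An end-to-end visuomotor policy: images plus proprioception in, joint targets out.
import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    def __init__(self, num_joints: int = 40):
        super().__init__()
        # Tiny conv encoder standing in for a real vision backbone.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Proprioception: joint positions and velocities.
        self.proprio = nn.Sequential(nn.Linear(num_joints * 2, 64), nn.ReLU())
        # Fuse both streams and emit the next joint-position targets.
        self.head = nn.Sequential(
            nn.Linear(64 + 64, 256), nn.ReLU(),
            nn.Linear(256, num_joints),
        )

    def forward(self, image, joint_state):
        z = torch.cat([self.vision(image), self.proprio(joint_state)], dim=-1)
        return self.head(z)  # trained by imitation on demonstrations

policy = VisuomotorPolicy()
action = policy(torch.randn(1, 3, 96, 96), torch.randn(1, 80))  # one control step
```

The point of the Keurig demo was that a single trained network of this general shape, given demonstrations, stands in for what would otherwise be thousands of lines of task-specific C++.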
[00:05:01] >> And that was basically two years ago now. And then you guys saw Helix 2 today. >> Yes. >> Which is basically the best release we've ever had. >> We'll run a clip of Helix 2 while we're describing it, because what we saw was Figure 3 running Helix 2 in full autonomy: going into the dishwasher, picking stuff up, putting it away, not pre-programmed. And I loved the human elements of it, like using its hip to close something and its foot to raise the dishwasher. >> That's the neural net difference, though. You get unexpected behavior, you know, both good and bad, but things you could never code up. >> Never. >> Your career went software company, VTOL company; now this has got to be the first neural net platform. >> Yeah. The things that you can do with neural nets now just completely blow my mind versus code. We could never have done
[00:06:01] a quarter of the stuff that you saw today: the whole body, the manipulation. There's only so far you can really push coded heuristics into a humanoid robot. It's just a dead end. It's just not going to work. >> Yeah. It's amazing to me how you accumulate data, and the data becomes this incredible barrier to entry, this incredible asset. >> If you were writing all this in C++ code, you'd have millions, hundreds of millions of dollars invested. You would not want to mess it up. With the neural net, you can say, "Look, hey guys, retrain it from scratch." It's just a completely different approach. And that's why people are way under-predicting how important this is and how quickly it's going to evolve, because it's a completely different paradigm. >> Well, we've lived through it. A year or two ago, we had several hundred thousand lines of C++ code. >> Several hundred thousand. >> Yeah. Handwritten code. >> Probably 100 bucks a line to write. >> Very expensive. Very hard to test and get out reliably, and also hard to model all the different behaviors that we would need to test. And then, you know, we removed a
[00:07:00] majority of all that in Helix 1, where we still had a lot of lower-body control being run in the control stack in C++. And then today we removed the remaining 109,000 lines of C++. [laughter] >> All neural nets. >> All neural nets today, full body. That took it from being able to do really good tabletop manipulation (like you saw with the Keurig coffee, or the logistics work, all done with neural nets, where we've been showing amazing progress) to getting the whole body to move dynamically through a scene while manipulating and planning, which is a whole other problem. We basically spent the greater part of a year refactoring the Helix architecture to enable this to work. >> You're talking about moving through space like a human now. >> Having control of the full body. >> Eye, hand, foot, leg coordination. Everything. >> Sensor data coming in: cameras, tactile, palm cameras. Basically doing inference on board the robot, fully embedded, and then being
[00:08:01] able to output torques to the motors, and to do that at a few hundred hertz in terms of planning and control, and to do it reliably on very difficult tasks. These are bimanual tasks where it's grabbing and holding things, planning, moving the body, getting things out of the way, making errors and replanning and fixing them, all done with the neural net now, end to end, over a pretty long horizon. For us it's kind of room-scale autonomy: we can now finish the whole room, which is important. And next we're going to graduate to basically the full house. >> That's one of the things that's really obvious when you're walking around looking at what everybody's working on. You visualize a robot company having lots of people working on microcode or actuators or batteries or whatever, but there's just a huge number of people out there at workstations. They must be working on the neural nets, because it's got to be such a dominant part of what makes the thing actually look and feel human. The motions are so smooth. >> And, you know, everybody, when they think
[00:09:01] about the history of robotics, they kind of chart these smooth line charts, but it's not like that. It's a disruptive change: dropping that last 105,000 lines of C++ code and moving to an all self-organizing neural approach. >> A completely different future. >> We make these technology progress steps, and it's been very apparent here: every year, the whole business looks completely different. In large part that's been getting the hardware, the hands, all this stuff, into a good spot, and getting human-like range of motion and speed and torque. And then, since we're all in on neural nets: what is the right data set for pre-training and post-training? Do we have the right training cluster? Do we have the right models? And then deploying those really well on the same humanoid hardware. That's a full loop. We've actually designed Figure 3 to run Helix. If you ask what the guiding principle of Figure 3 was, more than anything else, it was designing for Helix. How do we design this to run Helix on? How do we get
[00:10:01] Helix a body? >> So counterintuitive: everything is built around the neural net. >> We looked at the neural net and said, how do we fit this into a humanoid robot? What are the best sensors? How should it run? What does the operating system look like? Middleware, firmware, embedded software: all of it is encapsulated in this view that we need to go all in on neural nets and do human-like work. >> I'm about to release the 2026 version of my humanoids Metatrend report. It's a deep dive looking at 100 different robots in development right now, with a deeper dive into 10 of them, including Figure. 150 pages; you can check it out on Substack for my paid subscribers. Anyway, super pumped. This is a field that's moving at exponential, hyper-exponential speed. So, in the beginning you had partnered with OpenAI on software, and then you made a departure from OpenAI, and I guess... >> That's not quite accurate. >> Okay, well, you can correct me. >> I mean, I think, you know, I met Sam and the
[00:11:00] OpenAI team, and they were just really interested in getting into robotics. It was in their early master plan to get into basically shipping home robots, and they really wanted to work on a very intimate relationship. They ended up co-leading our Series B along with Microsoft, and we started working on a collaboration agreement to work on next-generation models for humanoids. We were super big then (still are) on how to language-condition the whole stack. An LLM in a lot of ways is just this world model: it really understands, in the weights, what things are and what it should do, and it has a lot of good semantic understanding. We were trying to tap into that for the humanoid: how do we learn from this, at scale, and from some of those representations? And the partnership just didn't work. >> Your team just ran circles around them. >> Yeah. For basically the better part of a year, and
[00:12:00] it just came to a point where it made sense to do all the work ourselves internally. We had a whole team here, a lot of them from some of the best labs in the world, and we were putting out work after work. The Keurig coffee stuff was done by us; all that stuff was done internally. At some point it just didn't make sense to train other folks on how we build AI models internally for embedded systems like a humanoid. >> Did it turn out that the LLM matters at all in the physical world? Could you start with an open-source LLM? Is it a VLA, a vision-language-action model, that you're running? >> I think the LLM is definitely a piece of this. We basically want to take the semantic grounding of a VLM. >> Like the common sense. >> Yeah, like how do we understand the world from it? We have that in Helix today, and it's super critical. But getting to a point where we can understand physics in the robot and have it really be able to plan and reason at fast, dynamic speeds
[00:13:01] was something that nobody in the world had ever really done before. And I think that's the work we've been excelling at and really love: how do we get it to understand physics? >> I think most of our audience probably knows this, but just to rewind the tape: the LLM (GPT-2, GPT-3) was built entirely on text data scraped right off the internet. >> Yeah. >> And then they supplemented that with a ton of other data, also in text form. That creates this machine that has tremendous amounts of common sense. If you ask it, "Hey, do you know how to play soccer?" it says, "Yeah, of course I do." But then you try to install it in an actual physical moving machine, and it has no idea what it's actually doing. >> Yeah. We basically need to touch everything in the world, and we have this really high-dimensional robot with 40-plus degrees of freedom, so just the math around this, the dimensionality of the space, is really high. You have 40 motors that can each spin 360°, so the number of positions the robot can be in is something like 360^40. There are more states of the humanoid than atoms in the universe.
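Taking that arithmetic at face value (40 joints, each treated as having 360 distinct positions, which is the back-of-the-envelope discretization used above), the claim checks out against the commonly cited ~10^80 atoms in the observable universe:

```python
# Back-of-the-envelope check of the "more states than atoms" claim.
# Assumes 40 joints, each discretized to 360 positions, per the conversation.
import math

states = 360 ** 40
print(f"distinct poses: about 10^{math.log10(states):.0f}")  # about 10^102
print(states > 10 ** 80)  # True: exceeds the ~10^80 atoms estimate
```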
[00:14:00] >> That’s a lot. >> So you’re not going to simulate those one by one. >> Yeah. Exactly. So like um so the question is like then then I need to like understand these fine contact dynamics of like I need to grab this water bottle like where do I position my elbow, pelvis, like torso, head like fingertips. How do I plan the you know to grab this is you know and then how do I put pressures on there and how do I understand those representations really well >> um you know from observations now into actions that I’m doing at test time >> and um this is not in like knows none of this knows this is a water bottle. Um it probably knows like I need to grab it from the side. Um, and then but like all this like uh like you know uh like all this implied physics that we need to do here just we have to go train models to go do that. >> It’s actually kind of weird because it thinks it knows how to do it too. You know the the LMS feel like they can do things you know intuitively and then they they completely fail. [clears throat] >> I mean you can we’ve done this you can I’ve done this. You can zero shot the LMS inside a robot. We do it we still do it actively. They just can’t do anything >> just for fun just to watch them fall.
[00:15:00] >> Yeah. I’m like, you know, I’m kind of interested. Like a project I’ve been doing is like, can we just like uh uh the other day I was like, can I zero shot? Like I’m working on this um new AI lab that I founded uh recently called HARK. And we have this new AI model here that is like it’s just completely >> Wait, you founded a new AI? Well, rewind the tape here. What? >> A new AI lab. Um >> I sent you this. Did you? I did. I Yeah. >> Yeah. working on and we have like some new AI models and we actually put one of them into the figure robot >> uh like this month. Um, and I was like, okay, let’s just zero shot. Let’s give the LM, you know, let’s give the the model uh this is like a multimodal model. Let’s give this access to just like basic commands like basic like XY coord like basically can we give it like uh acceleration and XY coordinates for navigation like basically a joystick. Can we give it a digital joystick? >> Yeah. >> And I asked it to like find the exit sign and just like go go to the get out of the building and it just u and unfortunately it was going the right direction and ran into like a clear
[00:16:01] glass wall. >> [laughter] >> um kids. Exactly. So, we like we’ve really stressed this. It’s uh it just doesn’t work. Like you’re missing so much um like world understanding of like what’s really happening? How do we move my body? Like we’re thinking like, you know, it’s pretty simple to grab an object maybe with a stationary robot, but the robot we have for humans are are moving the pelvis and head and torso and hands and arms. Like when you’re reaching out to grab something over table, your pelvis is moving backwards. like it’s like it’s like very um very difficult to command like very high >> robotic physiology you know out of China this year some of the government employees said we’ve got a robot bubble they have 100 I don’t know if you saw that that that article that came out we have 150 plus robot companies in China and I mean there’s a lot going on there >> you know in the US I would say maybe there’s 10 serious players I mean two or three who are extremely serious including figure. Uh there’s but there’s
[00:17:00] a lot of potential humanoid robot companies. I was just at CES, and it was a humanoid robot explosion. And then as many or more hand companies, which is interesting. So I go back to the early 1900s, when there were like 250 car companies and two or three hundred tire companies, and then this massive consolidation occurred: GM and Chrysler and Ford sort of buy and consolidate. What do you think is going to happen with all the robot companies today? >> I think it happens in every industry like this, especially in deep tech. This will all consolidate down to a few groups globally. >> Do you have a guess? Is it a triopoly, is that the right description? Is it more than 10, less than 10? >> Far, far less than 10. Globally. >> It always seems, in the US anyway, to settle down to two, three, or four. But the borders are not obvious. With cars, a car is a car, right? And
[00:18:00] actually, you had cars and trucks, and those were kind of separate for a while. >> Well, you also have different designs: I want the plush interior, I want the sportster. >> And I wonder, are robots going to be differentiated by their vertical application, their personality? >> Exactly. There's so much more variety possible in robotics. >> Yeah. I think everybody's just taking for granted how difficult this is. You have to go out and build pretty novel, very difficult hardware. It needs to be relatively cheap. Then you've got to figure out how to make neural nets work on it, and then make neural nets work at scale. Then you've got to manufacture at scale, and then get these products out reliably, all working every day without any human intervention. We've talked a lot about how we're doing the K-Cup coffee work. I haven't seen a single other humanoid in the world do that, or able to do that today, globally, and it's been two years. >> Yeah. And by the way, a lot of the video we see is actually teleoperation. I wonder if people realize that a lot of the robot
[00:19:00] companies are teleoperated versus fully autonomous. What we saw just walking around here was a four-minute-long fully autonomous operation on Helix 2, right? >> I've built a lot of businesses in my day, and I've never in my life seen so many companies with a human in the back commanding the robot and putting out updates. I've just never seen it. When I first started, some of this stuff was coming out, but now, every week, somebody is just teleoperating a robot and putting out a video. It'd be the equivalent of: I have a self-driving car company, and there's a guy in Tennessee driving the car, and we're marketing it as self-driving, no humans in it, putting out teasers. In a lot of cases now there are companies selling that as a service. So if you want to do this right, you've got to believe in neural nets all the way down the stack, and you've got to build for general-purposeness.
[00:20:00] >> So what are the parameters that are going to define the successful top two, three, four? The neural nets? Manufacturing? >> I would say what's impressive today is not manufacturing. We're pushing hard on manufacturing, but you can probably solve general robotics with 100 robots. What impresses me is a full end-to-end robot that generalizes to an unseen place: you can drop it into an Airbnb and it's able to do long-horizon work with neural nets. Any long-horizon work, in unseen places. >> What do you define as long horizon? Hours? Days? >> I would like to see days of work. Full-time days of work, at the very least. And we're so far from that. You have robots out there doing karate and jumping, and these are pre-programmed, open-loop behaviors. They're not impressive. We've done that stuff here, the open-loop behaviors. It's just that
[00:21:00] any college kid in a dorm room can do it with a robot. So that, plus teleoperation. Teleoperation is not impressive. You could build shitty hardware and still teleoperate it and put out videos. That is not hard. What's hard is to run a full end-to-end neural network in unseen places, to generalize. And then, if you can solve that, the next step is how you get that out at scale. But we are still in the who-can-solve-general-robotics phase of humanoids. It's just not impressive if I can build 100,000 robots right now that need teleoperation or can only do open-loop replay. It's just not cool. If my only job were "Brett, build 100,000 robots right now," we have the capital to do it, and we could do it. But what we really want to solve is: I can give you 10 robots, and they can go into unseen places and do real, useful work. That's what's going to differentiate. >> So iterate that until it's right, and then mass-produce. >> Yeah, though you basically want to be bringing up mass production in parallel, because building high-rate
[00:22:00] manufacturing for humanoids is going to be super hard, and you're going to have to go through a lot of iterative design. That's what we're doing now: bringing up higher-volume manufacturing as we're learning how to build true general-purposeness. My view is, think about the level bosses you have to graduate past. First, very short periods of neural-net control, which we haven't seen a lot of in the world. Today I don't think there's anything over a minute long in the world that's doing neural nets continuously, in humanoids. >> That's amazing. >> Everything's cut; all the films are cut, or teleoperated. It's pretty crazy. >> I'm really glad you're telling us that. >> Watch any video you want: you want to see it uncut, you want to see it done with neural nets, not teleoperated. And then you want to see stuff like we showed you here in person today, running for hours and hours. >> The kung fu videos, whether they're teleoperated or fully autonomous, are actually fascinating and scary when you see them. >> But the technology there is not great. You're basically putting somebody in a mocap suit,
[00:23:00] having some guy do karate chops or walk around, and then you're running that open loop. You're running it blind; you're just hitting a replay button. >> Right. >> And you can do that with a very simple RL neural net; you can basically do mimicry on it. It's super simple. There's open-source code for it, you can do it with basically one GPU on your desktop, and you can do it with any robot, because every robot has some tiny amount of compute. These are single-million-parameter models: very small, not much memory, very simple to execute. What you really want is good closed-loop control, where it's reasoning at around 200 hertz, 200 times a second, and dynamically responding to the scene. And that is literally 100,000, a million times harder than doing open loop. >> What's the human cycle time, in hertz? >> Oh gosh, much lower than that, I would imagine. >> One thing we've seen is that the robot can balance on one leg better than a human. We just have much faster dynamics.
[00:24:01] um I’m just trying to imagine you put out this beautiful uh you know post every week on on X about the progress in the robotics field and what and what’s going on here at Figure and it just constantly you know locomotion was a big uh a big step forward excuse the pun for for Figure just seeing it walk and then run very naturally. Um what else was was significant in 2025 for you? >> I mean we launched Helix in 2025 about this time last year about a year in now. >> I think that was like highly significant like we basically figured out how to run like basically like longer like over long periods of time neural networks on a robot. How do we get the data for it? How do we train models? How do we deploy to test time? How do we get like you do use you watch like package logistics? I think I saw it running there. It’s like running for >> it’s been running for days now. >> And it’s just like uh it’s a neural net all the way down the stack. It’s learning how to grab packages um kind of like you know individualize them uh find the barcode, position it down. It’ll
[00:25:00] even pat the package down so the barcode reader below can see it and scan it. And it’s doing that on very high like uh have a high like accuracy and assume that high speed. So >> high speed though. That’s the part that Yeah. No, it’s visually crazy. >> Yeah. Because a lot of what you see in robotics is it’s as fast as a human would do. Yeah, we like our last like we see air now like last room we did we did like we had one air over 67 hours continuous >> over 67 hours >> over multiple robots >> this thing is is >> crazy it’s doing an operation every second or two so 67 consecutive hours of that is a lot if you had to guess at >> so I would say I would say helix is a big one and then figure 3 >> figure 3 is a huge step change for us in hardware >> if I could what do you see then going in 2026 here we got the you know next 11 and a half months what are you excited about >> we will build like our entire road map around Helix 2. Now >> we will basically now Helix 2 can like go from like doing the logistics use case stationary to walking and moving and basically do like long horizon full body control. >> Uh so that means the and we basically have now integrated all the sensors tactile camera palm into the stack and
[00:26:00] we’re seeing like um improvements overall in the policy layer. So we’re get we’re getting better and faster about like basically like taking data and basically running it on on board the robot now. So I wanted I wanted to ask you like what defines Helix 2 because you’re probably incrementally improving the neural net every day. Yeah. >> So what is the [clears throat] >> a couple big steps? One is we basically have an integrated basically a fully learned um what we call like system zero which is our controller into the robot. So the robot has a full body reinforcement learn controller in it. Okay. >> Uh so basically now we have like we have like literally no code running on that robot. So it can like it can it can basically move the whole body uh itself using a full like uh basically like learn controller inside of uh inside of Helix. We call it S0. >> Has anyone else ever done that before? That’s got to be >> um There are reinforcement learned controllers out there. Like a lot of the karate stuff you see and things like that or that, but nobody’s really integrated that in the whole body for learned manipulation and perception. And nobody’s showed that actually working with like moving around and doing things that we saw today. >> Yeah. >> I actually don’t even know if anybody showed it stationary standing and doing learned policies. Actually, probably not
[00:27:00] So, getting it integrated into a stack that we actually use going forward: I think one of the things we learned at BMW last year (we were there for six months, and we deployed our Figure 2 robots every single day) is that, with the stack we had, about 80% of the things we got right and 20% we got wrong. Meaning, the things we got wrong, we didn't want to scale. It was working; the robot ran every single workday. But we learned, okay, I don't want to ship 100,000 robots on this architecture stack. It's too hard to scale. It'd be too brute-force. So we've worked for almost a year now on the question: what is the ideal architecture where we can go out and accumulate large sets of pre-training data, put it in the robot, have it do the work, and have generalization emerge from that? And that's what you're seeing today. >> Helix 1 still had the C++ code in it. So what defines Helix 2? >> Helix 1 had a lower-body controller that was still written in C++, and
[00:28:01] everything else, the full upper body, was neural nets. So we've now completed the full body. In doing so, we also did some work at our System 1 level, where we integrated all the sensor modalities, from the hands and the rest of the robot, into the stack. For example, we now have tactile sensors in every fingertip on Figure 3, as well as palm cameras, to better understand how we're grasping items when the view is occluded. We put a bunch of stuff out about picking pills out of pill cartridges that are literally occluded from the head camera: your hand is literally in front of the head camera, but we still really want to understand what we're grasping. So now, with Helix 2, we have a full stack, end to end, with neural nets, and we feel confident scaling the pre-training data set into Helix. I'd even go as far as to say we designed Helix 2 for the pre-training data set, and then we designed the robot for Helix 2. So we've designed
[00:29:00] everything around data. >> Yeah. >> And how do we get data at scale? If you're in the neural net game, it's a data play: how high-quality and diverse is your data? >> So it's experience. It's gathered experience in the field, in all kinds of circumstances. And I know everybody knows this already, but it's accumulating, and it never goes away. It's incredible progress. You teach somebody how to scuba dive or how to play piano, and they have that knowledge; they live, then they die, and then you have to teach somebody else. This is completely accumulating. >> The reason I think it'll come down to very few groups is that the one thing that's important here is, once one robot learns how to do a task, every robot in the fleet knows it. And humans don't operate like this. I wish we did. >> I watch my kids learn how to do stuff, and they just don't listen. >> I wish we did.
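The fleet-learning point is structural rather than algorithmic: a skill learned once lands in a shared checkpoint, and every robot that pulls that checkpoint has it. A toy illustration of the publish-and-sync loop; the registry and robot classes are hypothetical stand-ins, not Figure's infrastructure:

```python
# Toy illustration of fleet learning: one robot's new skill reaches every
# robot as a weight update. Both classes are hypothetical stand-ins.

class WeightRegistry:
    def __init__(self):
        self.version, self.weights = 0, None

    def publish(self, weights):
        self.version += 1
        self.weights = weights  # new checkpoint for the whole fleet

class Robot:
    def __init__(self, registry):
        self.registry, self.version, self.policy = registry, 0, None

    def sync(self):
        if self.registry.version > self.version:
            self.policy = self.registry.weights  # the skill arrives "for free"
            self.version = self.registry.version

registry = WeightRegistry()
fleet = [Robot(registry) for _ in range(1000)]
registry.publish("checkpoint-with-dishwasher-skill")  # learned once, by one robot
for robot in fleet:
    robot.sync()  # now every robot in the fleet "knows" the task
```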
[00:30:00] >> So, 2026 predictions. What are your boldest predictions for Figure? What are your goals for this year? >> Yeah, we're spinning up BotQ production enormously right now for Figure 3. >> You said something like a robot every 30 minutes, you expect? >> We're trying to get there in the near term. >> Amazing. >> Which you guys saw; we walked through BotQ today. What did you think of BotQ? >> A lot of humans. I wish everyone could see it. I guess it's all secret; you can't film in there. >> There's a lot of IP there, because you see exposed boards and actuators and stuff. Maybe we can mix some of it in here. >> But since there are a lot of humans: when do we start seeing robots building robots? >> We will put robots on our BotQ lines this year. And then phasing humans out of there will be a combination of getting more robots in and doing more high-volume automation over at BotQ. >> Okay, so that's the first 2026 objective. >> We want to scale up robots at BotQ, for sure. The second thing is we want to scale out robots into the industrial and commercial workforce. We have multiple clients now that we've signed; they're buying or leasing robots from us, and we are
[00:31:00] going to get those out at scale in 2026. We know exactly where we're going geography-wise, what the use cases are going to be, and the deployment schedules. We want those to be Figure 3s. We just retired the Figure 2s at the end of last year, and now we're building out the arsenal of Figure 3s as we scale up manufacturing, to get them out into the world running every day. We like the commercial workforce because it really hardens our ability to run robots every day. What we're here to do is build robots and run them in the world, 24/7. >> Who are your ideal customers? I know a lot of people would love one. >> To be frank, there's so much customer demand. We've talked to 50 to 100 customers or so in the last 6 to 12 months. We really want to be all in with a smaller group of customers: really spend time with them, integrate well into their facilities, and do well. We're still early, right? We don't have thousands of robots at these places yet. We want to get there as fast as we possibly can, but we
[00:32:00] could probably ship an enormous number of robots into the current customer base we have now. We're kind of set for the next two or three years in terms of demand; they're waiting for us to ship at scale. >> Leasing versus sale: which is it? >> We really like the leasing model. Humans are leased, in a sense. >> Huh, never thought of it that way. >> You lease humans, so we lease humanoids today. We're not opposed to sales; I think what really matters is finding the right distribution to get robots out at scale. It'll really help us get good at what we do. It's one thing to show a demo, but when we had robots at a commercial customer last year, at BMW, it taught us a ton about running every day: fleet operations, safety, repair and maintenance. There's a lot that needs to come through on the ecosystem that we need to get right. So I'd say the second
[00:33:02] thing is getting robots out at scale to commercial customers. And the last thing, which is arguably the most important for us, is that we want to solve general robotics. The analogy is, we want to build a human in a bodysuit: something you can just talk to, that has common-sense reasoning, that you can communicate with, that has almost perfect memory of what's really happening and what's going on in your life, that can talk to you and almost be your companion, and then go off and do the things an everyday human would want done. And I would expect it to get up to speed on those tasks as fast as or faster than a human can. >> Are there two different models driving it, then: a VLM for the body, the physics, the embodiment, versus an LLM for conversation and memory? >> We believe this all comes down to one model at the end of the day, one omni model, trained early in pre-training, that fuses all of this together. But,
[00:34:00] yeah, you could think of it like this: we need speech, we need language-conditioned policies, we need to understand physics really well, we need to remember things and recall them easily, and we need some sort of personality on the robot. One thing you're going to see more and more is that we really want to make this robot something you can spend time with. We've been really focused on getting the core building blocks built, but over the next year or two, I think you'll see us go further. I just want a robot in my home I can talk to, that remembers things, that talks to my kids. My kids come home sad from school or something; I want the robot to understand that, to have the EQ, the self-awareness, to see that and talk to them. All of this is something we're spending more time on now internally. >> Is it already one big model? Depending on the task, will it run different parts of the neural net? >> We have one neural net now. There are no libraries of neural nets that we pull down. >> That's interesting. So there's no dishes neural net, or logistics neural net like the one we saw here. Because at scale, if you teach the thing every
[00:35:01] physical motion, a massive number of combinations, the storage is actually dirt cheap, but the processing is very expensive. >> Even better, we've seen positive transfer now with all this data coming in. The robot can generalize better with more information; more knowledge is better. >> It does cross over, like playing piano makes you a slightly better soccer player. But you also don't want to run the whole parameter set for piano playing when you're playing soccer. It's an interesting little hybrid problem there. >> You don't want to nuke it, for sure. That's where we try to build best-in-the-world models here and build a great team that can ultimately deploy robots that are useful. And the diversity is super important: a humanoid robot needs to be able to do everything a human can, the whole distribution curve. We probably do billions or trillions of very unique things in the world. >> One of the things you said on our tour that totally tells me we're
[00:36:00] on the right track is that you're using normal GPUs for training, like everybody, but the inference-time compute is on super fast, dedicated, non-H100, non-GB300 hardware, which has got to be at least a factor of 10 or 100 cheaper and faster. >> Yeah. It's also running fully on board. >> And it's running fully on board, so you can do very fast inference and policy deployment. And it's not sucking down the entire power budget of the robot. >> Yeah. You also have an issue where, well, we've also run models off board the robot, but if we lose communications or have some... >> I wanted to go there. >> If you lose internet, it's hard to do work. >> We've mostly not hit on supply chain, batteries, and comms yet. So on the comms side: do you imagine you're going to be running a 6G network on there besides Wi-Fi? And what's going on in batteries these days? >> From a network perspective, for communication back to
[00:37:02] the robot, we have Wi-Fi on board, and we have 5G with a SIM card, an eSIM, on board, so you can text the robot, and it can have a network outside of a Wi-Fi connection. And we also have Bluetooth on board. So it's almost like a walking phone, you could think of it. Ideally you want a connection at all times, but you also want the robot to be able to perform work without one. So you really want a lot of onboard intelligence, so that if you lose internet, the robot's not bricked. I mean, humans for the most part can do work without their cell phone. [laughter] >> Not teenagers. >> Not teenagers, okay. So, batteries: they've been improving. What's the battery life right now? I love the charging mechanism, by the way. For those who don't know, it's charging basically through its feet. >> Through the feet. No connector. You just stand there charging. >> It's really cool. What kind of battery life are you getting, and what do you expect in two or three years
[00:38:00] to get for battery life? >> Today we run around four to five hours per full charge on the battery, starting from full and going through full depth of discharge. And we can charge wirelessly at about 2 kilowatts through the feet, inductively. We have about a 2-kilowatt-hour battery pack, so it's about an hour or so for a full charge on the robot. So we can do four or five hours on, an hour off. >> That's great. >> Yeah. And I think folks are over-indexing too much on how long the robot can run on a single charge. >> I don't expect that many tasks to take more than a few hours. >> Right. A few hours in, you go take a little break, do other stuff. So there's ample time for opportunistic charging, or maybe you send another robot in.
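The duty-cycle math from the numbers just quoted (a 2 kWh pack, roughly 2 kW of inductive charging, and four to five hours of runtime, implying an average draw under 500 watts) works out to roughly 80% uptime:

```python
# Duty-cycle arithmetic from the figures quoted above.
pack_kwh = 2.0   # battery capacity
charge_kw = 2.0  # inductive charging power through the feet
runtime_h = 4.5  # midpoint of the quoted four to five hours per charge

charge_h = pack_kwh / charge_kw              # about 1.0 hour for a full charge
uptime = runtime_h / (runtime_h + charge_h)  # fraction of time spent working
avg_draw_kw = pack_kwh / runtime_h           # about 0.44 kW average draw
print(f"charge time {charge_h:.1f} h, uptime {uptime:.0%}, draw {avg_draw_kw:.2f} kW")
```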
[00:39:01] We also have this little thin charging mat that we can basically put anywhere in the world. It could be at a conveyor system, or wherever else, or at home in front of the kitchen, and the robot can just charge there while doing work, which is really cool. So you don't have to have any wires or things like that. >> Well, I think one of the greatest value-adds you're providing right now is this: people are over-indexing on all kinds of weird things, because they're physical beings watching the robot do physical things, and they're saying, "Oh my god, can you believe it can sprint now? Oh my god, it can do a backflip now." And you're like: well, it depends whether you programmed that in C, or you teleoperated it, or it actually learned it. >> Yeah, I think most of those are open loop. They're just replay buttons. >> Exactly. And it's just so hard. So when people ask how long it runs on one charge, they're relating it to their cell phone, and it's just not relevant to the inflection we're in. >> Yeah. The summary here is, I need to see real closed-loop control of a robot moving around, touching and moving things like a
[00:40:00] human would. That's where the hardest problems all sit. We've seen this huge wave, a humanoid explosion, like you said, out of China. But we've seen a very steep drop-off in getting to that next point, which is even: show me a minute of the robot doing the Keurig task or something like that, uncut, closed loop, real time. You just haven't seen that. I think you will, and then there are a lot more levels to go from there. It took us two years to go from a few minutes of tabletop manipulation with neural nets to a point where we can do kitchen work, room-scale autonomy, and that was two years of working seven days a week; we're here a lot of nights. So that gives you a little sense that you're not going to do it in six months. There was a lot of hardware, low-level software, firmware, embedded systems, sensors, and then
[00:41:01] the neural nets, and then the data. All of that came together to build this. We couldn't do the same work today on a robot we could go buy off the shelf. >> You made the choice to vertically integrate. But on supply chain: how much of your supply chain ties back to China? >> I think by summer we'll have almost none of our supply chain in China anymore. >> And do you buy into the US-versus-China AI and robot competition? How do you think about that? >> I spend a decent amount of time in China. I love it; China is great. You're watching TV here in the US and it's this massive conflict and battle and everything, and then you go to China and everybody's just trying to help and win, trying to work and collaborate. It feels like a startup incubator. >> Like a fun one-way competition. >> It just feels like everybody's on team humanity, out to win, and it's so
[00:42:00] great. Then you come back here and you're poisoned with all this stuff online, the articles and television, and it's just not like that when you're boots on the ground doing this. It's, let's go as one and win. And I just love that spirit of progressing this technology as a giant lever arm for humanity: to bring abundance for everybody, and to make the sci-fi future we all want to live in. >> Oh my god, yes. We want to speedrun Star Trek, is what we say. >> Exactly. >> I see Figure on the Moon, Figure in orbit, Figure on the ocean floor. >> 100%. >> So, you make your own actuators and motors here, and part of that is because you want the exponential growth effect, but part of it is also that the supply chain just doesn't exist to give you the parts. I mean, you and I were just talking about the improvements made between Figure 2 and Figure 3, because you have all the ability to iterate in terms of speed and cost. The numbers that you
[00:43:02] shared on cost: it was like a 90% reduction. >> Yeah, we reduced cost like crazy on Figure 3. >> It's crazy. >> Listen, we vertically integrate because we have to. It would be great if we could go off and buy motors and plop them into the robot. It doesn't work like that. It would be great if we could go buy hands and just screw them onto the end of the arm. It just literally doesn't work. When you go through the engineering work to understand how we do comms and power and sensors and failure cases and thermals and low-level firmware and embedded software, something breaks on cost, or something breaks on reliability in that equation, and you're left hoping the vendor fixes it, or you die. It just doesn't work; the technology readiness of these things is really low. We would have loved to go out and buy all of this stuff in the early days. We tried, and we basically failed at all of it. So it was, okay, we need to go design it ourselves. And now we manufacture; we do all final assembly and everything here. We do that in some cases
[00:44:01] because nobody else knows how to do it well. We do it a little bit for IP; we really want to control that here and understand what people have access to. And we also want to get good at making a lot of robots. What we need to get good at long term is probably a few things: getting data at scale that can run neural nets, doing Helix really well, making a lot of robots, and then getting those out into the world at scale. A pretty simple equation, at the end of the day. >> It just feels like the journey of getting Figure up and running must have been so much harder than it would have been in China. But then, once you have everything built in-house (all the actuators, training the neural net, everything), you have a massive advantage versus anything going on in China, because if you'd been locked into a supply chain, it only has certain models. >> Even if we used an existing supply chain for all this stuff, the robot wouldn't be able to do what you saw today. It just can't do it. If you go out and buy a humanoid robot off the shelf today, we can't get it to do this. We've bought robots off the shelf; we've looked at
[00:45:01] them. You just can't get them to do this work. They don't have the right sensors. They don't have the compute. They don't have the thermals. They don't have the right hardware, the hands, the head. All of ours are built around a neural net stack. >> Well, that's the new thing, too: the neural net is incredibly integrated with the specific hardware. >> If you watch folks that are trying to buy these robots off the shelf, say from China, they end up retrofitting them themselves with these giant backpacks. They'll have power there, compute, thermals, a wire hanging out that they hook into the back of the robot, probably its own local battery. They have to take the robot and overclock it. It's just the wrong way of doing this. It's like doing rockets by buying a rocket and strapping stage two on the side, or something like that. It just doesn't really work at scale. It works in the early days for hobby-grade demonstrations and things like this, but if you really want to do robotics at scale, you're going to have to design it yourself. >> Looking at the companies coming out of China, Unitree, EngineAI,
[00:46:02] and so forth: which are the ones you're most interested in or excited by, as friendly competition, if you would? >> I think one thing that's great about China is, as you mentioned earlier, there's this explosion of really great talent and robots coming out the door. >> And a great entrepreneurial work ethic there, right? >> It's awesome. It's great, and I think it's just good for humanity; this needs to happen. The thing we've not seen is any closed-loop AI control from these systems at all. A huge lack thereof. I mean, the industry there is, here are the robots, we'll sell them, and they're doing a ton of basically open-loop, local... >> They're hand controllers. >> Yeah. But doing that is almost orthogonal work to designing the system the right way for full autonomy. If you think about who Figure really competes with, though, our main competition is certainly China as a whole.
>> For manufacturing — I mean, for human labor. >> I think for humanoids, we really don't see anybody else besides China as a real competitive threat today. >> Fascinating. Do you hear the rumors about Apple getting into the business? They cut their car project, and the rumors are that they're heading toward humanoids. Have you heard that? >> We've heard this. I've been in conversations with every major tech company in the world over the last 12 months. >> And then Nvidia and Google and even Sam — everybody's making noises about it. >> Meta, Amazon. Yeah. >> Listen, this is going to be the largest economy in the world. Human labor is roughly a little under half of global GDP. >> $50 trillion. >> Yeah. This is the next great place to be. It's going to be a super impactful business. It'll lead to ubiquitous goods and services for everybody in an age of abundance. And it's going to be a super fun business, too — it's going to build the sci-fi future we all want. It's going to
[00:48:00] feel like 2080 up in here. So what you're seeing is that every major group in the world will get in the space. You have no choice — you have to. >> Major group being Apple, Microsoft, Google... >> Every major player that wants to do this. The thing is, it's going to be hard: we're doing rocket-type difficulty in design here. If, you know, Meta were building rockets, you'd say that'd be crazy. >> Yeah. >> And I would think the humanoid is probably up there with rocket design. It's certainly harder from an engineering perspective than when I built Archer — building electric aircraft — and that was hard. >> That was very — >> These are 6,000-pound aircraft: 12 motors, six independent battery systems. We built our own control stack and embedded systems, and did all the structural design ourselves, things like this. So I think this is probably up there with some of the hardest hardware on the planet, and you've just got to be all in. >> Let me ask you about that, because we were talking backstage at Abundance 360 last year, and you had a basic tech stack
[00:49:00] that had six layers of competency. You could probably rattle them off the top of your head, actually. >> Yeah — this is for our turf, for robotics prior to neural nets, I guess. So it applied to Archer and to Figure. Sure. >> But what were they again? >> I mean, Archer was basically a flying aircraft — I build electric vertical-takeoff-and-landing aircraft, right? It's basically a flying robot, is what it meant. It has battery systems on board, and it has electric motors — an electric motor is basically just a stator, rotor, and gearbox, pretty simple. We have a bit more sensing in our actuators than that; there are encoders and things in there. Then you have control software: how do we control this thing and make it move around? In the case of both Archer and Figure, it's a very overactuated system. Archer has about 24 degrees of freedom — the propellers tilt, there's pitch on the blades, and there are flaps on both the tail and the wing. And Figure has over 40 or
[00:50:00] so on the system. Then you have embedded software on board, and sensors — how do you get the compute, the sensors, and the embedded software all talking to each other? And then you have structures. >> Okay. >> So those are the core ingredients of a robot, or of anything physically moving through the world. >> Yeah. So then my question is: traditionally, the employee base would be experts in one, two, three, four, five, and six, and they'd be really, really good. Then you come in and overlay this with Helix, and you've got this massive neural network. Is that a seventh competency, or is it something that permeates the others? Or did you take all of your microcontroller experts and start training them on neural networks? >> The next layer at Archer is: how are you going to plan? You do it through a pilot — my aircraft, Midnight, is a piloted four-passenger aircraft. So a pilot is doing the planning, the higher-level behaviors in the stack; below that, it's all the lower-level control and code. And here at Figure it's been changing over time, but now it's entirely neural nets, with Helix 2. >> So the question is:
[00:51:00] what is the highest-level behavior telling the rest of the stack what to go do? >> Yeah. >> Where does that come from? It can come from a human with a joystick. It can come from an open-loop behavior, which we see — like we talked about before. Or it can come from a neural net that's doing the planning and reasoning. So in the kitchen demonstration you guys saw today, which we released — what's telling the robot what to go do next, to know to pull the rack out of the dishwasher and grab the cups — the water cups, not the coffee cups? That's a neural net doing the planning. >> In the case of my aircraft at Archer, it's a pilot determining when to take off, when to hover, when to transition into full flight, and then how to descend — a different type of neural net. >> It's a human biological neural net. Yeah.
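To make the hierarchy concrete, here is a minimal sketch of the three sources of top-level commands Brett lists — a teleop joystick, a scripted open-loop replay, and a learned planner — all feeding the same lower-level control stack. Every name here is hypothetical; Figure's actual Helix interfaces are not public.

```python
# Sketch: three interchangeable sources of the highest-level command.
# All class and method names are hypothetical illustrations, not Figure's API.
from abc import ABC, abstractmethod
from typing import Callable, Iterable

class BehaviorSource(ABC):
    @abstractmethod
    def next_command(self, observation: dict) -> dict:
        """Return a high-level command, e.g. a target end-effector pose."""

class TeleopJoystick(BehaviorSource):
    """A human closes the loop: commands come from an operator's inputs."""
    def __init__(self, read_input: Callable[[], dict]):
        self.read_input = read_input
    def next_command(self, observation):
        return self.read_input()

class OpenLoopScript(BehaviorSource):
    """Replays a pre-recorded trajectory and ignores what the robot sees."""
    def __init__(self, trajectory: Iterable[dict]):
        self.trajectory = iter(trajectory)
    def next_command(self, observation):
        return next(self.trajectory)  # `observation` is never consulted

class NeuralPlanner(BehaviorSource):
    """A learned policy: commands depend on live observations (closed loop)."""
    def __init__(self, policy: Callable[[dict], dict]):
        self.policy = policy
    def next_command(self, observation):
        return self.policy(observation)

def control_loop(source: BehaviorSource, robot) -> None:
    """The rest of the stack is identical regardless of what sits on top."""
    while True:
        obs = robot.sense()              # cameras, joint encoders, tactile
        cmd = source.next_command(obs)   # the highest-level behavior
        robot.execute(cmd)               # low-level control runs the same
```

The open-loop weakness discussed earlier is visible in `OpenLoopScript.next_command`: the observation is never consulted, so the robot cannot react when the world deviates from the recording.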
[00:52:00] This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale codebases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80% or more of the development work autonomously while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering-velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding copilot of choice to bring an AI-native SDLC into their org. Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today. [music] >> Let's talk about application layers. We're seeing your movement into the home, besides the industrial base, and healthcare is going to be a big part of this — elder care, helping people stay healthy at home.
[00:53:01] >> By the way, you just came through Fountain Life. How was the experience for you? Thanks for referring me. >> Yeah. >> It was great. I went down to a clinic a couple of weeks ago. >> Which one? >> Orlando. >> Yeah, headquarters. >> I didn't know what to expect. I've done full-body MRIs and CT scans and blood work before, but this was basically a full stack. I mean, you know this, but it's a full stack. >> Everything measurable about you. >> Yeah, exactly. >> 200 gigabytes of data. >> Exactly. I spent about five hours there, left, and got the download last week. It was unbelievable. What was great about it was that I got a comprehensive understanding of my body and what's happening, but also somebody there reporting it out, who talked me through how to understand it and what to do next — >> And a plan. >> — and basically built a plan from there. It was great. I actually purchased it for my parents as well; I think it's just a great gift. >> Yeah. Dave, we need to get you there too. Why Orlando and why not somewhere else?
[00:54:01] >> I was on the East Coast, so I popped down to Orlando — it was just easy for me. >> So yeah, we've got New York, Orlando, Naples, Dallas — Houston's opening — Miami, and LA. Anyway, back to the conversation here. I can imagine this is going to up the value of health in the home a lot, right? One of my visions of the future is that you're constantly being monitored for your blood biochemistries — your protein levels, your vitamin levels, and so forth — and that's being uploaded to the Figure in the kitchen, cooking meals ideally suited to what you need in that moment. And then there's the whole elder-care side. How do you think about that? >> Yeah. I grew up on a farm in the Midwest, and my parents got into independent and assisted living about 15 years ago. So I kind of grew up around senior care a little bit in my life. >> You got into that business? >> Yeah — my parents own and operate senior housing facilities in the Midwest.
[00:55:00] >> So wait, they're still in Illinois? >> Yeah, still in the Midwest. >> Wikipedia says your hometown has 2,000 people in it. >> I grew up in Moweaqua, Illinois, man. I think it was about 1,800 people when I grew up. Middle of nowhere — no traffic lights, no fast food. [laughter] It's a dry town. It was just a whole different world. >> Oh man. >> Yeah. >> Do they have parades for you when you go back home? >> Man — [laughter] parades going through the street. Can you imagine that? >> Yeah. So you understand the value of a fully autonomous humanoid robot. >> Yeah. I'm really passionate about figuring out how to ship robots into senior care and letting people age in place at home. It's hard to get people to move into assisted and independent living facilities. >> Well, how does that work? You're sold out three years into the future — you can't make them fast enough to keep up with demand. And then you've got
[00:56:01] BMW, you’ve got a bunch of industrial use cases, but then you’ve got this inh home and you’ve got >> really like I’ll like uh give you my I’ll I’ll like you know level with you on how I think about things. >> We’ve been spending the last like three and a half we’re about three and a half years old. >> Mhm. >> Trying to figure out how like what the right recipe is in the first instance of like what a general purpose like architecture would look like for humanoids. We believe we found it internally and we understand what that is >> and we believe we know how to make robots now and put them out and we’re going to run them really hard this year. Uh we’re going to run them for >> You showed us uh what do you call it? The grid. >> Yeah, the grid. >> Uh can you can you describe what we saw in >> the grid’s like my favorite place here. It’s like uh it’s one of we have like four buildings on campus. It’s one of our buildings here and we have the facility outfitted uh that we’re going to expand like hundreds of robots into two that will run 24/7. And um it has like a little mission comp command post like that’s like second story like kind of like a 007 uh like situation room and
[00:57:02] you can see every robot there and it’s going to be doing both home and commercial work force. We’re spinning up like right now the facility just got open like this week. You guys saw it’s like squeaky clean and we’ll start shipping figure 3es into it like this month. >> So model homes, model factories, model operations. >> Well, so within within mission control you think of like watching the robots but the robots also have their own vision which transmits back. So it’s more like, you know, in the combat movies where back at the home base, they’re watching the invasion or whatever, you’re seeing through the eyes of the soldiers, >> you got all that data coming back into mission control, too. So if if the robot, you know, there’s 200 how many in there at any given time? A couple hundred. >> 250 300 >> 250 300 robots building a house or doing all that video and telemetry comes back into mission control as they do it. >> Do you believe that AGI requires embodiment? There’s a lot of conversation that’s been put forward on that note. >> I think my my definition like I think like um so I get I’m getting a chance right now to spend a lot of time on both like the
[00:58:00] physical AI and also the digital AI, at Hark. >> Mhm. >> So both a bit. And when I talk to AI today, or use it, I just feel like it's so dumb. >> Mhm. >> It feels like you're starting a new chat and basically asking it for knowledge retrieval — it's like an advanced Google search engine. [clears throat] What I want is to build the future. We want to build Jarvis, or we want to build Jess. >> I want this thing so bad. >> Yeah. [laughter] I want it to talk to me. I want it to reason. I want it to have perfect memory. I want it to be able to touch the world both digitally and physically. I want it to be general-purpose — to do things for me, to reason through things. We have Hark now designing CAD from scratch. You ask it to go build a CAD model — I asked it to build a monster truck for my son in CAD — and it goes out, finds a CAD package, installs it, opens it up, learns how to build CAD and the parameters it needs to look at for building monster trucks, and it goes off
[00:59:00] and does it. We can do that in under an hour now, fully end to end. >> Clean sheet? >> Clean sheet, from a single prompt, and it's using tools and computers like a human can. And we're going to give it all the same tools Figure uses — for CAD, for FEA, all this different stuff — and it's going to learn all of them. >> Was that the inspiration for Hark? The fact that there are a lot of LLMs out there doing a lot of things, but none of them are really connected to CAD, and you have so much experience from your — >> My inspiration for Hark is that I feel like all the big frontier labs are chasing this very abstract version of reasoning. >> Well, specifically, Anthropic wants to dominate coding and code self-improvement, and then OpenAI wants to dominate — >> I want to dominate a sci-fi AI future. >> Jarvis. Everyone knows Jarvis. >> Jarvis. I want the smartest person in the world with everybody. >> Yeah. >> Because of the von Neumann idea — these things going out into the solar system and then ultimately out into the galaxy, making themselves
[01:00:00] out of raw material. >> Nobody's doing this. Everybody's copying the other frontier lab, which is copying their frontier lab. Nobody's building true multimodal systems that can really reason and understand and have persistent memory. >> Yeah. >> Systems that can go out and touch the world and do things. My version of AGI is: it can do what humans can do — and humans are not just sitting there giving me Google-search answers, which is what we have now. It's terrible. In one aspect it's great, because this new alien technology dropped on the planet in 2022 and we're trying to figure out what to do with it. But in the other aspect, there's so much the models can already do — there's such an overhang in product capabilities — and we're understanding that better now at Hark, not Figure. I think we're abstractly getting to a place where we're building synthetic humans at scale. These humans can work digitally on the computer and use tools, they can physically be there, and they'll be able to reason with you, talk, have memory, understand you, and go off and do anything a human can.
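What Brett describes Hark doing — finding, installing, and operating a CAD package from a single prompt — is an agentic tool-use loop. A minimal sketch of the generic pattern follows; the model call and the tool set are hypothetical placeholders, since Hark's internals are not public.

```python
# Sketch of an agentic tool-use loop of the kind described for Hark.
# `call_llm` is a hypothetical placeholder for a frontier-model API call.
import subprocess

def run_shell(command: str) -> str:
    """Let the agent install and launch software, e.g. a CAD package."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

TOOLS = {"run_shell": run_shell}

def call_llm(messages: list[dict]) -> dict:
    """Placeholder: returns either a tool request
    {'tool': ..., 'args': ...} or {'done': True, 'answer': ...}."""
    raise NotImplementedError("wire up your model provider here")

def agent(task: str, max_steps: int = 50) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_llm(messages)
        if action.get("done"):
            return action["answer"]
        # Execute the requested tool and feed the observation back to the model.
        output = TOOLS[action["tool"]](**action["args"])
        messages.append({"role": "tool", "content": output})
    return "step budget exhausted"

# e.g. agent("Design a monster truck in CAD, starting from a blank machine")
```

The capability-overhang point is that nothing in this loop is a new model: it is harnessing — tools, feedback, memory — wrapped around an existing one.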
[01:01:00] >> Have you been tracking Clawdbot — now Moltbot? >> Yeah, I've been tracking Clawdbot. It's really cool. >> Yeah, they renamed it to Moltbot. I think it shows you how complacent a lot of the frontier labs have been. >> Yeah. >> You have such incredible capabilities that, with a very simple harness, very simple markdown files, and very [clears throat] simple tools — on the back of Opus or whatever you're going to use — can do magical things for the world. >> And we've had that for a long time now. It isn't like they went out and built a new AI model for this. They basically just put some harnessing — some MCP and APIs — around it, and it can basically go out and be your executive assistant. It's really awesome. >> And there's a huge opportunity here to give that to every person in the world and make it easy. >> Yeah. >> We're doing some model development now at Hark that I think is truly state-of-the-art, and I'm excited about that. And we're also doing some of that now in the physical world, at Figure. So I'm seeing this
[01:02:01] digital-versus-physical thing on both sides, and I'm just so excited about this future, even the next 12 to 18 months. The next 12 to 18 months, I think, will be the largest AI transformation we've ever seen. >> Yeah. >> And getting back to your point about what we do with healthcare and robots: we're going to make a ton of robots. We're spinning up resources right now, both at the BotQ you're seeing now and at a future BotQ, to be able to make millions of robots. >> How long before these robots are your physician, your surgeon — able to actually support all the complexity of a medical procedure? >> From a hardware perspective, in 2026 we'll be able to do — >> What surgeons can do. >> Yeah. Given where we're at with our roadmap at Figure, I see no reason we can't do it. >> That's pretty fast. >> Yeah, that's pretty fast. I feel pretty confident that by the end of this year you'll have a hardware system where, with
[01:03:01] a teleoperator or something like that, you could basically do real surgery — it depends on what type, but most things. >> And then the AI system just layers on top of that. >> Yeah. Then you've got to get the brain to work really well at these things, and it has to work at the highest level of performance. >> Let me ask you — federated learning gives you an incredible amount of — >> I think we're very close on this work. We've already shown that if we can get the right data, and the hardware can do it — the simple hack is: if you can teleoperate the robot through a task, we can learn it. >> Yeah. That's a really important point people need to understand: if you can teleoperate the robot — the mechanical systems, the motors, the fidelity — it can be done. >> Yeah. And then we're dumping teleop. But teleop has a couple of good things; it's a really good testing tool. >> It proves out — >> It proves out the hardware. And if you can't teleoperate it, you're not going to be able to learn it — meaning if there are restrictions in the range of motion or payloads, if you pick up something heavy and the robot can't do it during teleoperation, it's not going to be able to do it in a learned policy.
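The "if you can teleoperate it, you can learn it" heuristic is the premise of imitation learning: teleoperation produces paired observations and operator actions, and a policy is trained to reproduce them. A minimal behavior-cloning sketch, with hypothetical names and dimensions — this is the generic technique, not Figure's actual training stack:

```python
# Minimal behavior-cloning sketch: learn a policy from teleoperation logs.
# Generic illustration only; dimensions and architecture are assumptions.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 512, 40  # e.g. encoded camera features -> ~40 joint targets

class Policy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, ACT_DIM),
        )

    def forward(self, obs):
        return self.net(obs)

def behavior_cloning(demos, epochs=10):
    """demos: list of (observation, operator_action) tensor pairs
    recorded while a human teleoperated the robot through a task."""
    policy = Policy()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
    for _ in range(epochs):
        for obs, act in demos:
            loss = nn.functional.mse_loss(policy(obs), act)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy
```

Brett's hardware constraint falls out directly: the policy can only output actions that appeared in the demonstrations, so a motion the hardware could not execute under teleoperation can never enter the training set.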
[01:04:01] >> So I think that if you can teleoperate it, you can learn it, from a hardware perspective. We'll be there in terms of the more dexterous things we talked about here. And what we've already shown is that if we can get the right data, we can get the hardware to do basically anything it's capable of. >> And then you can add infrared, ultraviolet — all kinds of additional sensors into the system. >> Yeah, for sure. We have that now with tactile sensing, and the palm camera is a good example. We've been seeing a boost in performance from the palm cameras — a lot of cool things, like reaching into a cabinet. As soon as I heard about it on the tour, it was like — duh, totally makes sense. >> Yeah. How many times a day are you reaching blind, putting your phone down somewhere to get the camera to look at it? I'm sure we would have evolved an eye right here if it were physically possible. >> Here's an interesting question for you. You've got the cameras in the head — again, mirroring a human — and in the hands. Why aren't there cameras rear-facing, or 360°-facing? >> There are.
[01:05:00] >> I just bought an amazing drone, the Antigravity drone. Have you seen it? It's the VR headset — 360 above, 360 below, backwards, forwards — and it's extraordinary. So what do you think? >> We do. We have it on the robot. They all have backward-facing cameras. If you go over and look behind them, they have cameras in there. >> Okay, I'm seeing it rotate here. So, one of our Moonshot Mates, Salim Ismail — you might know him, one of the co-founders with Ray at Singularity University — asks: why in the world are there only two hands? Why don't we see robots with four hands or six hands? >> So, to put that to bed — >> Yeah, we get asked this a lot: why not superhuman, and all these different things. My summary is: our goal is to be able to do what humans can, and then you want to do it in the cheapest and lightest possible way.
[01:06:00] The lighter the better for safety, and the cheapest is obviously very important; both affect manufacturability and scale. When you start building things that are better than human in a lot of ways — if it can run a three-minute mile, or do a backflip, or it has a bunch of arms — it's going to make the robot really heavy, really costly, and really hard to manufacture. >> And then the question is — when I look at the logistics use case, I don't think you could actually move any faster with four or six arms. >> The line is relatively shallow, maybe a meter or so in depth. You've got to get a package roughly into the center of the conveyor system so the scanner below it can scan it and put a label on it. In that case, we basically have another three to five x of speed in the actuators that we can run; the software just doesn't know how to do it yet. So we can run three to five times faster than what you saw today. >> Wow. >> Because we built the whole body to run.
[01:07:00] Yeah. We can run the robots — we think of it in terms of radians per second; traditionally people look at RPMs, but we look at radians per second here. We have another three to five times headroom in the actuators you're seeing now.
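For readers used to RPM, the conversion from the radians-per-second figures Brett quotes is fixed: one revolution is 2π radians, so RPM = rad/s × 60 / (2π) ≈ rad/s × 9.55. A one-liner to sanity-check any quoted actuator speed (the 10 rad/s figure below is an illustrative example, not a Figure spec):

```python
import math

def rad_per_sec_to_rpm(omega: float) -> float:
    """Convert angular velocity from radians/second to revolutions/minute."""
    return omega * 60.0 / (2.0 * math.pi)

# e.g. a joint spinning at 10 rad/s is turning at ~95.5 RPM, and a
# 3-5x headroom claim means the same joint could run at 30-50 rad/s.
print(rad_per_sec_to_rpm(10.0))  # 95.49...
```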
[01:08:01] >> I would love to see it, real quick. >> The thing is, the cost of a mistake: when you're unloading the dishwasher at the current speed, the cost of a mistake is relatively low. Start running three to five times faster and that thing glitches — that plate is moving fast. >> I just don't know if it's really needed. You'd get a really expensive robot, it would be less safe, and it'd be harder to manufacture. Over time you're going to get the robots down to $10,000 or $20,000, so you'd have a $10,000-to-$20,000 robot next to a really expensive one — call it $50,000. >> And cost is really a function of manufacturing volume. >> So you really want to build it like the car. >> Well, that's why you go after the industrial use case first. >> The home is huge in the end, but if you're running three to five times faster than what we saw today in the home and you kick the cat or something, that's not great. In the industrial use case, everything is kind of taped off. >> I remember I was interviewing you for my next book, which comes out in April — here it is, We Are As Gods. We've talked about this, but I'm super excited about it, and of course you and Figure are prominent in the book, because this is godlike — I mean, it's extraordinary; we're giving life to new systems. I was interviewing you about how many robots and at what price point, and I want to double down on that, because the numbers are pretty staggering and they make sense. If you're actually getting the price down to $20,000 a robot — I hadn't heard $10,000, but $20,000 — you're leasing a robot for about 300 bucks a month: $10 a day, about 40 cents an hour. And then you ask the question: if it's really 10 bucks a day, how many would you have? >> Yep. >> You end up with a lot of robots.
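The lease arithmetic is easy to verify under naive assumptions — a straight, interest-free payback, which is my simplification rather than a quoted financing structure:

```python
# Sanity-checking the quoted lease numbers under a naive, interest-free model.
robot_price = 20_000            # dollars, the quoted target price
monthly_lease = 300             # dollars/month, the quoted lease rate

payback_months = robot_price / monthly_lease   # ~66.7 months (~5.5 years)
per_day = monthly_lease / 30                   # ~$10.00/day
per_hour = per_day / 24                        # ~$0.42/hour for a 24/7 worker

# And the fleet-capital figure discussed next: a billion robots at $20,000
fleet_capital = 1_000_000_000 * robot_price    # $20 trillion, as quoted
print(payback_months, per_day, round(per_hour, 2), fleet_capital)
```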
[01:09:00] >> So, what's your estimate of the number of robots on planet Earth in 2035, 2040? Where do you think that's going? >> I mean, I think it's relatively straightforward to think that every human should have a humanoid to do all their work, and then we should have maybe on the order of 5 to 7, maybe 10, billion in the commercial workforce. >> Mhm. >> So if all goes well, I think you could basically build tens of billions of humanoids on the planet. >> Okay. >> I mean, you're basically building a replica of a human that's really cheap and works 24/7. >> Yeah. >> And I hope we will be at a point, in 24 months, where the robots build all the robots. >> Well, that's where I wanted to ask about scale, because you said you're going to ramp up to millions a year, and one per person on the planet is 8 billion — so millions per year really isn't that much. >> Yeah. >> So then you're like: okay, the self-improvement loop is going to be incredible. >> Yeah. It's funny — we talked about this — but you also need tons of working capital. If you put a billion robots on the planet, even at, let's call it, $20,000 apiece, you're talking $20 trillion of
[01:10:00] working capital. >> I mean, that's not that many. There's a billion cars on the planet right now, if not more. >> But it took 80 years to accumulate those cars — some of them are 30, 40 years old. Imagine trying to build them in five years. >> There are a billion cars on the planet, but we make a billion or more cell phones a year. And I think this is more cell-phone-like, where it's personal. We even go back and forth on: if your robot breaks, do you want a brand-new refurbished robot, or the old robot you used to have, because you've known it, you understand it, and it's got a personality? I think it's going to be with you; it's going to know everything about you. >> Why wouldn't you just have a personality transfer? >> You could, but I think there's some inner workings to it — it's got all the scratches on it that you know. It's your thing, and it's got a little bit of a feeling. But sure, I think that'll be fine. >> Let me ask the geeky finance question, though, just before we lose the topic. If you have an all-neural-network-based system, it can learn at an incredible rate, and the technology is advancing remarkably. You look 24 months
[01:11:01] into the future, and the demand is on the order of billions, not millions — like you said, to build that out in one iteration. You used the cell phone as an analogy, but Apple had 15 years to profitably ramp production to a billion units a year. >> Yeah. >> The demand is there to do it in one year, but you would need a trillion dollars — some insane amount of capital. >> That's no longer an insane amount of capital. >> So what do you do? Do you leave the world starved, waiting for the robot for five years, or do you raise the trillion dollars? >> Look at credit-card receivables or car leasing — these are trillion-dollar financing markets. So I think the financing market is there for this. What do you do? One: you've got to solve the neural-net game. You have to be able to scale with neural nets, you have to solve pre-training, and you have to solve generalization — you have to solve for a general-purpose robot. That's table stakes; you have to solve this. That's why we're so obsessed with trying to solve it here at Figure — if you don't solve that, none of this matters.
[01:12:01] [clears throat] >> The second step is that you have to have robots in the loop, building other robots. Those two things have to be solved, and you have to design the robot so that, hopefully, it can design itself at the end of the day. So there's a bunch of stuff we're putting in place — manufacturing execution software, the lines, all the design of it — so that at scale we can have humanoids building other humanoids and handing them off the line. And these adoption curves keep shortening and shortening. >> Mhm. >> I do think that if we could solve a general-purpose humanoid robot today — one that could do everything you wanted — we could ship a billion of them today. >> Say again? >> I think we'd ship a billion today. >> Yeah, totally agree. >> So it basically comes down to: can you get the neural nets to work at scale? Can you get the models good enough to generalize — a real general-purpose robot, like a human in a suit? And then can you get robots in the loop building other robots? >> Well, the other thing that's really compelling is that the neural net is
[01:13:00] the only IP you need to protect. As long as you have the federated learning coming back to the mothership and all the training happening centrally — it's like the Star Trek Genesis project, right? You've got a little capsule that holds the germ of the whole thing. You could literally ship a box to Kenya: here's the Figure box. It opens up, and it starts making a Figure manufacturing plant out of thin air in the middle of Kenya. And if there's capital there to bring the resources to it, that's how you get infinite scale. >> We talk about how the innermost loop is energy and AI — intelligence — >> and, you know, local mining for the materials or whatever, but it's completely self-contained. The key is that you just unlocked capital that wanted to build something productive — >> while all of the IP is still flowing back — >> 100x-ing the GDP — >> to train the neural net centrally. >> Yeah, 100x the GDP of that jurisdiction. There's latent capital all over the world.
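The loop described here — robots in the field stream experience back, training happens centrally, and only weights leave the mothership — can be sketched in a few lines. This is the generic fleet-learning pattern (often loosely called federated learning, though in this telling the data, not just gradients, flows back); every function name is a hypothetical placeholder:

```python
# Sketch of a centralized fleet-learning loop: field data in, new weights out.
# Generic pattern only; all methods are hypothetical placeholders.

def fleet_learning_cycle(fleet, central_model):
    # 1. Every robot uploads the episodes it logged since the last cycle.
    episodes = []
    for robot in fleet:
        episodes.extend(robot.upload_episodes())  # observations, actions, outcomes

    # 2. Training happens only at the mothership; the dataset and the
    #    resulting weights are the IP that never leaves central control.
    central_model.train_on(episodes)

    # 3. Only new weights are broadcast back, so one robot's experience
    #    becomes every robot's skill on the next deployment.
    weights = central_model.export_weights()
    for robot in fleet:
        robot.install_weights(weights)
```

Step 2 is the sense in which the neural net is "the only IP you need to protect": the robots hold copies of the weights, but the data flywheel and the training recipe stay central.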
[01:14:01] >> So, we talk about how there's a lot of fear out there about losing jobs to AI and robots, and the reality is the conversation has shifted to: no, this is going to create massive abundance and universal high income. And that happens if, rather than the company hiring a robot to replace me, I hire a robot to go out and do my work for me — >> Mhm. >> — and in fact it earns triple my salary because it's working three shifts, and it's doing that for me, and then it earns enough to get a second robot working for me. >> Yeah. >> So the question becomes: where is that capital captured? Is it inside the hyperscalers, or inside the individual? That's going to be the interesting conversation coming up. How do you think about that, Brett? >> I mean, we're going to sell robots at scale. You're going to be able to deploy as many robots as you want, to do whatever you want. >> Yeah. >> Just do whatever you want — no instruction manual. What do you want it to do? It'll learn it. It'll research the internet, it'll use digital tools if it needs to, it'll talk to you, it'll reason. The future's going to be — >> Safety and privacy. Let's talk about
[01:15:00] safety in the home and privacy in the home. There were lawsuits over the last few years, with Google and Amazon, about devices listening to you in your bedroom and so forth. How do you address safety and privacy — or is it just too early? >> These are really hard questions to answer in one go, because there are a bunch of different safety implications here. Safety is probably the number-one thing to tackle to get robots into the home at scale. >> Yeah. >> There's a semantic understanding of safety: if there's a lit candle and I knock it over by accident, or there's a boiling pot of water and I hit it — just understanding how to be safe in an environment where humans are. Then there's intrinsic safety: can the robot be around humans and animals and pets and be safe? Those have to be solved; we could talk at length about how we're going to solve them. And then you have the whole privacy and cybersecurity side of this,
[01:16:01] which needs to be handled with good intention — how do we solve those problems? We are working on all of those now. They are very difficult things to get right. I do see a path where we can build intrinsically, really safe robots around people and pets. >> Yeah. >> We have a plan for how we're going to do that. They could be safer than humans by a large margin, just as autonomous cars are safer than humans at the end of the day. >> Yeah. >> These have superhuman perception. We can see basically all around us at all times. We're always on, always computing what to do. So, assuming nobody's deliberately being malicious toward the robots, I think we should be extremely safe across everything we're doing. And then on privacy: these are going to be in your home, so being upfront about what data we're collecting, where that data is going, and how we're keeping that
[01:17:00] data private and encrypted — all of this is super important. We have an entire in-house cybersecurity team, on both the product side and the corporate side, working through how we think about this at scale. They're great — they come from the big companies that have been doing this for a long time — and we think about it on the corporate side as well as the product side, on the robot. >> Your facility here, which is your prototype manufacturing facility — 50,000 robots a year, you imagine? >> That facility can support about four lines, and each line can do about 12,000 units a year. So, a little under 50,000. >> That's the next step up, do you think? >> I mean, we're building thousands of robots right now — that's the big push we're doing, right? You saw it today: the Figure 3 units coming off the lines. Then we want to go to tens of thousands, then hundreds of thousands, and millions. I think we need to take those steps as a company to go do
[01:18:00] that. This facility will top out at a little under 50,000 units a year at full capacity. So think about it long term — it'll probably look low-volume when we look back in five or ten years. >> Do you think you might franchise out the neural net and the circuitry around it? Because all these other people are saying: I'm building a robot that cleans industrial pipes, I'm building a robot that does X — all these different form factors. >> No. I think it's super unsafe. If these robots are out there around humans and we don't own the hardware, we don't know what they're doing with our neural net inside. >> Interesting. >> Yeah. It's similar to Archer, when we were building Archer out: it's a safety-critical system, and licensing it out to other folks is very problematic. I think here it's the same thing. We have a fiduciary duty to our civilization to build really safe humanoid robots at scale, and giving this AI system — or even the hardware — to anybody who wants it is not something we will entertain. >> So then, when do you
[01:19:01] branch out into other form factors — things that work underwater, things like that? >> I think in the future, everything that moves will be a robot — >> Mhm. >> — besides humans. And within that, I think humanoids will dominate the plurality of all robots; they'll be by far the biggest percentage. The other robots will be niche and expensive — like super-duty trucks you have out mining — made for specific areas, maybe underwater, as you said. >> Or heart surgery or brain surgery — these very finely tuned, robot-controlling-a-robot systems. >> I think you're left with very expensive equipment that's very siloed. You really want to build a general-purpose machine that can learn across a variety of different tasks and get that transfer learning. That's extremely important here, and it needs a very high variety
[01:20:00] of rich data. That is only going to help the robot system get smarter and better. >> Yeah. >> So my view is that it'll be humanoid robots upon humanoid robots everywhere on the planet, and there will be other robots, but they'll be niche businesses. >> When I was pulling up here, I posted the video you released on Helix Studio today and asked the community for questions, and they just blew up with a whole bunch of amazing ones. So, one of the questions: do you have a blooper reel — and can folks see it? And what's the weirdest task someone on your team has tried to teach it to do that absolutely did not work? That's from Ben Casper. >> Ben Casper, nice. The weirdest task that did not work... >> Jogging was interesting. >> Okay, jogging was fun. Jogging was cool because we really had a steerable jogger, and a lot of this work
[01:21:02] in running has been, again, open loop, but we had a steerable RL controller we could use. Another one — and I actually have a gift for you that kind of goes with this: two Figure deadmau5 hats. >> What does that mean? >> We performed at Red Rocks late last year, at a deadmau5 concert, and had robots on stage. So we generally don't venture out into weird stuff — there you go. Nice. And we've actually had deadmau5 at the last two holiday parties at Figure, which is fun. We're generally pretty focused on designing something really useful, but we've had some pockets of time to do fun stuff like this. Having robots on stage with deadmau5 at Red Rocks was just fantastic. >> I flew in for it. It was unbelievable. >> And you had them on stage. >> We had several Figure 2s on stage, just jamming — all synced to the music as they danced, which was really cool. >> deadmau5 was on stage
[01:22:03] last year at the Abundance Summit, but Figure wasn't with you, so we need to get you back there with Figure in the loop. >> Totally. Yeah, for sure. >> So — when are we going to see the first Figure in a customer's home? That's the next question. >> Yeah. We want to ship robots when they're really ready. I don't want to ship slop. >> Best guess — earliest, latest window? >> I think last year I said that in 2025 — no, in '26 — we launch a robot to do end-to-end home work, alpha testing, like in my home: full-scale, long-horizon work. >> Mopping, cleaning... >> We've done pockets of work really well — dishes, laundry, all of this — and you're seeing some of that tie together now. But I want it to work across days and weeks, and I want to be able to drop it into a home it's never seen and
[01:23:00] make that really work well. And I want to be able to talk to it, have it understand me, remember things, and have it shown around — walk it through a room and show it, almost like a visitor you have at your house for a week, so it understands what to go do. >> '27, '28, '29 — your best guess? >> I'll tell you what: we're working till midnight every night to solve this problem. We are here every weekend, every night, trying to figure out how to solve it. This is the question — we want to solve general robotics; that's where we want to head. I think by end of year we'll be able to put a robot into an unseen home and do fairly long-horizon work, and then you want to measure how many human interventions you have: is it once an hour, once a day, once a week, once a month? I think we'll do that, and that would be a huge accomplishment for us — we'd be on the path to solving general robotics. And then next year you'd be on a path where you could ship them into users' homes and start making sure they work well. So anybody who
[01:24:00] tells you, hey, we're going to ship them — or teleoperate them — in the home at scale in a year... you've got to ship in small quantities, make them work well, work out the problems, and then ship again. You have to have an iterative design roadmap, which we have here. We need to learn. It's got to work well in one home, then in 10 homes, then 100, then a thousand, 10,000, 100,000, 10 million. So I think it's going to be a super — >> Exponential growth curve. >> Exponential growth curve. >> Is there anything to worry about there in terms of time to market? Because, like I said, on the industrial side you're sold out for years to come anyway. Is competition going to come in and grab the market before — >> [clears throat] The work we showed today, and the work we showed two years ago, has never been done, in our minds, by any other human or company in history. >> Yeah. >> So if that's the marker — whenever somebody can do the Keurig test, a couple of minutes of uncut film where I can watch it do the task closed loop, not just standing there —
[01:25:00] >> Um that’s you’re two years away from where we’re at. >> So I think we’ll see like we’re trying to push and continue to pull ahead but I think hopefully by next year >> by next year >> we can basically really show like real general purpose inside of the robot. Maybe even as soon as this year like um I mean listen it could happen in a couple months. We are we are we we have the right stack now. We are we are um we are building data sets at at scale like so quickly. We are spending so much time and money on this internally. We just launched our new B200 cluster within like Nvidia helped Jensen helped that went live um like uh like this year. >> How many how many GPUs in your >> we we just we are going live. We have we have 3,000 B200s that like that are going live and we have another set of uh much larger GPUs that we plan to put out uh here >> and training or >> we just use it for pre-training pre-in do it here physically here. >> Uh we no we do not use it physically. >> A lot of power. >> Yeah. A lot of power. >> So Jay Crate asks a question to the science fiction geeks amongst us. So
[01:26:00] what’s beyond the three Azimov’s laws for you? >> Have you thought about that? Have you thought about sort of fundamental laws to program into your robots? >> I think you really want to put these rules down into tune the kind of nonviol memory uh on board the robot >> at the at the chip level sub level. >> Yeah. Um >> I mean you must have thought about that. >> We’ve been thinking about this quite a lot like um >> and you know it’s in in one hand we still want to solve like general purposeness. Mhm. >> On the other hand, we don’t we also want to figure out like once once we’re like close there, how do we also get all the supporting things ready to go? And this is one of those. It’s like it’s like safety needs to be there, like privacy needs to be there, fleet operations, like the reliability of the robot, the like um maintenance plan for like how we’re going to service this and everything in the business model, all of it in financing, all of it need to be packaged, ready to go. So, we’re working through all these now. Um I don’t know, it’s funny. It’s like it’s like you know Azimov got a lot a lot of things right and I feel like a lot of the three like
[01:27:02] you know like these foundational rules for how do we treat hum is like um you know we we have our own spin on this that we I won’t like publicly tell today but like that but like you know the goal is like to do good work and and uh >> is this something everyone learns internally in corporate training and memorizes and all that? Uh it’s something that we want to put we we put and are going to continue to put on all the robots. >> So you have a newborn >> uh you have a newborn child. >> Oh yeah. >> So the question here from KK says when would you trust figure to hold your newborn? >> That’s that’s an interesting >> So the new the figure 3 is soft. Uh, it looks like it’s designed for the home, but it’s all still about >> hard. I think this is the same. I like this question a lot because at Archer, >> I always say like until I put my me and my kids and family on the board, it’s not safe enough to fly anybody. >> Yeah. >> And um like I wouldn’t do that today at Archer and uh I hope soon I can do that.
[01:28:00] Um I figure here I think it’s the same question is like when I feel safe enough to have a robot in my home. >> Uh >> well, you’ve had in your home but >> but like you know I’ve been there. We we’ve had folks there and you know um we monitor it. >> Um I think we’re like truly safe >> and um we’re not there now and I think that’s a great bar for us to hit. It’s like a when I can put a robot in my home fully autonomously end to end around all my kids. I think that’s a point where I would trust it. Um, I think that’s a point I would say like this is ready for everybody and it’s a good it’s a good like heruristic for us to really try to hit and that’s our goal here is to be able to put it like you know like free reign in my home to go do and now we like you know we’re there with it. We babysit it and like we watch it and it works good. I’ I’ve showed videos of the robot being kids like with the robot like there but like you know I think we’re doing it in a safe way. Um the robots have been totally safe. Uh which is great, but like one is we need to build like a system safety architecture that’s really really fall tolerant and redundant in real time and we we’ve done that and we’re doing better job of that in the future. And two is you have you
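A fault-tolerant, redundant real-time safety layer usually means an independent supervisor that can halt the robot no matter what the learned policy is doing. A toy sketch of that pattern, with all limits and interfaces as assumptions — Figure's actual safety architecture is not public:

```python
# Toy safety-supervisor sketch: an independent check that runs every control
# tick and can override the policy. Limits and interfaces are hypothetical.
import time

MAX_JOINT_SPEED = 3.0      # rad/s, assumed limit when near people
MAX_CONTACT_FORCE = 50.0   # newtons, assumed limit
WATCHDOG_TIMEOUT = 0.05    # seconds without a policy heartbeat -> stop

def safety_supervisor(robot):
    last_heartbeat = time.monotonic()
    while True:
        state = robot.read_sensors()           # redundant sensing path
        if state.heartbeat:
            last_heartbeat = time.monotonic()
        unsafe = (
            max(abs(v) for v in state.joint_speeds) > MAX_JOINT_SPEED
            or state.contact_force > MAX_CONTACT_FORCE
            or time.monotonic() - last_heartbeat > WATCHDOG_TIMEOUT
        )
        if unsafe:
            robot.soft_stop()   # damped halt, independent of the neural net
        time.sleep(0.001)       # ~1 kHz supervision loop
```

The redundancy is the point: this loop runs on its own sensing and compute path, so a fault anywhere in the policy stack cannot disable the stop.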
[01:29:00] And two: you just have to build a safety track record for this. >> Mhm. >> There's nothing better than actually proving the thing is safe. >> Well, it's a nice barrier to entry, too, if you take the Apple road: it's got to be a great out-of-the-box experience. That means not stepping on the cat, and certainly not dropping the baby. And then the cybersecurity side of it, too: not transmitting everything back and having it posted on the internet. If you get that reputation — and of all the companies I've met, you're perfectly positioned to get it — and don't make a mistake along the way, then everybody says: I'm going to choose a Figure robot, the same way people feel about the Apple brand and security. >> Yeah. >> So — >> I hope people walk away from this knowing that general-purpose robots are coming. It feels very close, and then there are a lot of other things around it that you have to get right to build this at scale. >> Your main message you want to get across to everybody watching? >> [clears throat] The main message we feel every day, for people who are excited about AI and robotics, is that this is going to happen really soon. >> Yeah. >> And it's happening. >> I don't think people have a clue how fast
[01:30:00] this transition is going to be. >> Just go to our YouTube and watch our videos from the last two years, side by side. The change every single year is dramatic. You saw it in person, and our robots have now been at customer sites and in places we can't show more of. It's hard to feel, because you don't see it every day, but at some point you're going to walk outside — probably in San Francisco first — and you'll see more humanoids than humans. >> Yeah. >> And I think that'll be an amazing day. >> Right now in Santa Monica — by the way, we just did a podcast earlier this morning with Cathie Wood, who sends her best. >> Oh, cool. >> Huge fan of yours. >> She invested in me at both Archer and Figure. She's great. >> Yeah, and she feels the same way — very proud to be an investor in Figure. I was telling her: when I'm out with my kids in Santa Monica, we count the number of Waymos we see — we'll see like 10 Waymos — and then the Coco robots, the little ground robots, the Starship bots and such. They're all over the place.
[01:31:00] >> Crazy. >> And it's interesting, right? Because the first time you see it, you're pulling out your phone and taking a photo — it's really cool — and then you take it for granted, and then it's in your way. >> Yeah. >> Right. >> My wife and I, anytime we go out — we had date night last weekend and took a Waymo downtown, and it was just so unbelievable. As an engineer working on these hard projects, I feel the amount of engineering work they had to do to put that together safely. >> Well, you control the music and the lights and the environment. Take a New York City cab and you get in the back of this smoky hell; then you get into a Waymo, and with the app you turn it into your little paradise. It's Jobs-level product thinking, right? I mean, Larry Page saw the technology win the DARPA Grand Challenge back in 2005, committed to it, and brought in the team. >> I mean, I think it's been 16, 17
[01:32:01] years. >> Yeah, and they just stuck with it, you know — Astro Teller at X basically built it out, and now Waymo is an amazing product. >> They've been undeterred for 16, 17 years — don't worry about it, we're just going to make it — and they did, and it's unbelievable. >> Yeah. Amazing. >> Very inspirational. >> Kudos to them. >> Can I ask you my geeky sci-fi-meets-geopolitics question du jour? I just got back from Davos on Friday — today's Tuesday, so nine time zones away. The big topic at Davos, of course, is Greenland, and all the Europeans are saying Greenland could never possibly be mined — that it's impossible to extract minerals from this frozen, cold tundra. >> Yeah. >> And we have some family mining operations in Minnesota, where it's not nearly as cold, but still pretty damn cold. >> You don't have mile-thick ice sheets. >> We do not have mile-thick ice sheets. But if you're talking about a billion, and then 8 billion, robots, and the materials are
[01:33:01] the only constraint, and you have robots that can operate there — >> We can mine asteroids, buddy. [laughter] >> Seriously — you think we're going to be doing asteroids before Greenland? >> No, we'll do Greenland first. >> But you think Greenland is viable? I'm not talking about 20 years from now; I'm talking about building a billion robots, say, six years from today. >> It's a $50 trillion marketplace. >> Yeah — that demand, that drive. Don't you think you'd find a way to get through the ice, given, say, a million robots working on it? >> I'd hope so, yeah. I think we'd find maybe better physics, but definitely better engineering solutions for this. And then we'd be able to put an unlimited amount of human capacity at it — >> Yeah. >> — through humanoids. >> That's what I'm thinking, too. Because the machinery I see is massively automated, but it's still driven by people, still operated by people. It doesn't need to be. >> Yeah. It's crazy this works, right? The humanoid, just like the neural nets —
[01:34:00] it just works. >> The thing is, when you make it work on unloading the dishwasher, people don't realize how close that is to working on every other task. >> The dishwasher, folding laundry — these things we're already doing are so hard. They're such hard tasks: you have compliant materials that are all changing dynamically with you, nothing is in the same place — very different from a conveyor system or manufacturing — and the robots can already do it today. >> Yeah. >> We can do it, and now it's a matter of doing it better: higher reliability, across the more diverse distribution of what humans do every day. But that's a data play. >> The thing is, if you achieved that goal by hacking together 100,000 lines of C++ and teleoperating it, it would look the same, but it would be nowhere near conquering every other problem. If you did it purely — nothing but a neural net, purely trained — that means you're within a millimeter of every task you could possibly define. >> You feel like the millimeter here is just data. The only difference between doing the logistics work and
[01:35:01] learning towel folding, or dishes, or whatever we end up showing in manufacturing — literally, it's just data. >> Yeah. >> Data goes into the neural net, and now I can do this work, because the robot hardware doesn't need any updates — I just load new neural net weights on board. I think we're just bound by data now. It's not a trivial thing to create the right pre-training set for this at scale, but we have a bet that I think will work, and we've been deploying it at scale for the last three or four months. So stay tuned — we're working through it, and I hope this will lead to a lot of positive transfer emerging from a robot that's able to generalize to a lot of things. >> Yeah.
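"The hardware doesn't need any updates — just new weights" implies that shipping a new skill to the fleet looks like a verified file swap rather than a redesign. A minimal over-the-air sketch, with hypothetical paths and interfaces:

```python
# Sketch: deploying a new skill as a weight file on unchanged hardware.
# Paths, checksum source, and the runtime interface are all hypothetical.
import hashlib
import shutil
from pathlib import Path

def deploy_policy(new_weights: Path, expected_sha256: str, runtime) -> bool:
    # Verify the download before it ever touches the control stack.
    digest = hashlib.sha256(new_weights.read_bytes()).hexdigest()
    if digest != expected_sha256:
        return False  # corrupted or tampered weights are never loaded

    # Keep the old weights so a bad policy can be rolled back instantly.
    active = Path("/opt/robot/policy/active.weights")
    shutil.copy(active, active.with_suffix(".backup"))
    shutil.copy(new_weights, active)

    # Hot-swap in the inference runtime; motors, sensors, and low-level
    # controllers are untouched.
    runtime.reload(active)
    return True
```

Everything a "task update" touches here is on the software side — which is the sense in which the remaining gap to new tasks is data, not hardware.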
One last thing before we wrap up. Can we pull the camera in close, and maybe you give us a tour of Figure 3? >> Yeah, let's do it. >> Thanks for the close-up and intimate [01:36:00] tour. So, Figure 1. >> Figure 1. So, one cool thing about Figure 1 is we designed most of the system in-house. We didn't care about looks; we cared about unblocking the AI and control scheme, something the team could use from a software perspective. We designed and walked this robot in under one year from when I incorporated the company. We think it's probably one of the fastest times in history. >> That's a lot of parts. Did you draw this by hand? >> David, basically our design lead, designed this. Yeah. Not the prettiest robot, but it had what we needed, which is a functional robot we could get up off the ground and start using for all the AI policy deployment. We did the Keurig cup demo with this robot. >> And I moved the hand. >> You can definitely move it. Yeah. >> You know this is going to be a collector's item someday. You break it, you bought it. [laughter] Is it heavy? >> Yeah, it's maybe 130, 140 lbs. Not that different from the others. >> Yeah, not bad. All aluminum? >> It's all aluminum, all CNC. We CNC'd aluminum for most of
[01:37:00] the structures. >> Mhm. >> And then what else should we know about this before we move to Figure 2? >> We wanted to optimize for speed, so we didn't really care about the wiring, some of the electronics, a lot of the design. It was mostly just: get a functional humanoid robot out that we could do development on. So we did that; we built a few of them. We ran our first neural network on this robot, which I think was phenomenal. We did so much development with it really quickly, and we integrated all of that into Figure 2. >> You got the cost down from two to three by 90%, I know. What was the cost drop from here to there, probably another 90%? >> About the same, to be frank. Yeah. A lot of it was machined parts, and we moved to tooled parts on the three. >> So you've got two cameras here. >> Two cameras here. We have a back camera. Yep. We also have cameras right here in the torso pointing down, so we see where the feet are in case
[01:38:00] you have a box occluding the view. >> Come take a look at the camera at the back of the robot here. One second. The camera pointing down, it's right there in the pelvis. See? >> So, back here, what's going on? There are camera ports here. >> Yep. We basically have a backward-facing camera. We have different ports for debugging, if we want to hook up a cable to it. And we can also turn the robot on and off from here. >> Amazing. >> Yeah. And then we basically moved all the wires internal to this robot. All the structure is an exoskeleton, so the exterior takes all the loads, almost like my aircraft at Archer: the skin, the outside housing, took all the loads. We do the same thing here. >> So the outer shell took all the loads. >> We have our second-generation actuators and our third-generation hands on this robot. We have more cameras on board, about double or triple the amount of compute, and about double the battery capacity on board. >> And the degree of beauty went up. >> Yeah, significantly. David did a good job making this much more presentable. >> So it's venting heat out the
[01:39:00] armpits? >> Yeah, it actually sucks the air in here and pushes it out through the torso and the body. >> Okay, what's going on in the back of the legs? >> We basically have these different paddings on the knees and some parts of the arms so that if you got your finger stuck here, maybe it would hurt, but it wouldn't cut it off, you know what I mean? Similar to what you see on a car door today. >> Sure. >> And here's the workhorse. This is our Figure 3. A couple things: we made the robot much skinnier and lower mass, but kept all the speeds and torques the same. So it's just as powerful and just as fast, but skinnier and takes up less space. This is about 135 lbs. >> 135. >> And this one is a little over 150 lbs. >> How much weight can it carry? >> About 20 kilos. >> 20 kilos. >> Yeah, and it's a completely different hand. The hands have a glove, tactile sensors, compliant material for better grasping, and also a camera. Most of the robot is soft-wrapped. You can see it up
[01:40:01] here; there's a squishiness to the chest and different parts of the robot. We have very few pinch points on the robot now. What else? We reduced the cost massively. We have a better thermal system and compute system; we also increased compute on this robot from the last generation. And we have new feet that have a toe. You might think of the toe as minor. >> Yeah. >> No, it's a major part of it. It's a passive toe on the foot. You might think of it as just helping the robot walk better, but it's not only that: when we get down low, like this, we're on our toe box. It really helps get the range of motion. Without that, you might need more joints. >> Another thing: talk about the face, because this is a big question. Do you show facial features or not? And which way did you go, toward Westworld or toward robot? >> I mean, it comes across as beautiful, right? It had to be beautiful. It comes
[01:41:00] across as sleek, but it could have a little negative, dystopian feel with a black face. So we have three screens on the robot; this one is powered off. We have a main screen and two screens on the side, and then obviously a bunch of cameras and sensors in the head. On the screens we can basically do anything. You could watch a Netflix movie. [laughter] >> Or the other way: "Look into my eyes." >> Yeah, definitely. Whatever you want. Kids get bored? Throw something up there. [laughter] >> So the brain is right in here, which makes a ton of sense to me. >> Yeah. >> And it's where the ancient Romans thought it was. >> You basically need a lot of onboard computation, and there is nowhere else to put it right now. >> Yeah, exactly. And it's easier to vent the heat from here, too. And then you just put all the sensors up here. That totally makes sense. >> I guess I could put a latex face over the head if I wanted. >> Yeah, you can basically put a silicone face on it and put hair on it, and we're good to go. [laughter] We also have other outfits. This is one of our logistics outfits. Same robot, basically, but we're able to outfit it with different types of soft goods. And we have another robot here that we've also put to work
[01:42:00] that’s wearing a jacket. This is like cut resistance. So, they all have different um different traits. Uh some of these gloves are also better for grafting different materials uh that might be say dusty or maybe it’s a piece of sheet metal or it’s slick. Do you think it would operate in zero G? You just need a better training set. And >> I think so. Yeah. I think we would really love to run in space. >> You’re going to populate the >> I’ve got my zero G airplane. We could we should we should take it upside. >> Yeah, let’s get these things on there. >> Oh, that’d be a great test. >> Well, look, we’re going to build data centers in space very soon. Someone needs to assemble them. Like zero G is the the operating. >> And then we’ll get other planets, too. It’ll be super important. >> Yes. Yes. >> And then we’ll disassemble the moon and the asteroid belt and we’ll use it for materials. [laughter] 100%. >> Oh, Alex, I love you said that. >> Let’s do it. >> If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate. Every week, my moonshot mates and I spend a lot of energy and time to really deliver you the news that matters. If you’re a subscriber, thank you. If you’re not a subscriber yet, please consider
[01:43:00] subscribing so you get the news as [music] it comes out. I also want to invite you to join me on my weekly newsletter called Metatrends. [music] I have a research team; you may not know this, but we spend the entire week looking at the metatrends that are impacting your family, your company, your industry, your nation, and I put this into a two-minute read every week. If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. That's diamandis.com/metatrends. Thank you again for joining us today. It's a blast for us to put this together every week.