I still remember the first time I met a humanoid robot and attempted to have a conversation with it. The excitement was real. There’s something undeniably fascinating about a machine that looks like us, moves like us, talks like us. As someone who works with robots daily, I understand that initial spark of wonder. (I also know very well that most of these interactions eventually end in pure disappointment, but I must write about that another time.)
But now, watching billions of dollars flood into humanoid robotics, I find myself increasingly uncomfortable.
Let me state my conclusion up front: I’m not against robots helping people. I’ve dedicated my career to researching how to build machines that can make human life better. What troubles me is that we seem to be rushing headlong into producing thousands of human-like robots without asking the fundamental question: is this what people actually need?
The investment in humanoid robots has reached fever pitch. Figure AI’s valuation jumped from $2.6 billion to $39.5 billion in a single funding round. China has committed more than $10 billion in government funding, and Goldman Sachs projects the global humanoid market could reach $38 billion by 2035. Tesla plans to produce 5,000 Optimus robots in 2025 and 50,000 in 2026. Agility Robotics has built a factory capable of making 10,000 Digit robots annually. Figure AI believes there’s a path to 100,000 robots by 2029. The costs are dropping too. Chinese manufacturer Unitree shocked the market by launching its R1 humanoid at just $5,900, while most commercial models range between $30,000 and $250,000. Morgan Stanley estimates that by 2050, we could see over 1 billion humanoid robots as part of a $5 trillion market. These aren’t prototypes in labs anymore. These are production plans. Real factories. Real deployment timelines.
Here’s what I keep wondering: when did we all collectively decide we needed humanoid robots? I must have missed that conversation. Or perhaps, and this is what worries me, there never was one. This feels familiar. I don’t remember anyone asking for ChatGPT either. Now we’re told to accept hallucinations, biases, and misinformation as the price of progress. We’re pushed to believe our jobs are threatened, our futures uncertain, and that lowering our standards is just part of adapting to new technology.
Maya Cakmak at the University of Washington surveyed people about humanoid robots in homes and found that most prefer special-purpose robots over humanoids, seeing them as safer, more private, and more comfortable. People want a toolbox of smaller, specialized machines such as a Roomba for cleaning, a medication dispenser for pills, a stairlift for stairs. When shown images of humanoids performing household tasks, participants described them as “creepy” or “unsettling”, with several mentioning the uncanny valley effect, particularly pointing to the black face masks common on this generation of humanoids. One participant described the masks as creating an “eerie sensation, the idea that something might be watching you”. The concerns were practical too: humanoids were described as “bulky” and “unnecessary”, while specialized robots were seen as “less intrusive” and “more discreet”. When a nine-year-old was asked about getting a home humanoid, he replied: “But we don’t have an extra bed.”
And this isn’t new insight. Research has shown for years that highly human-like robots trigger negative emotional responses because they blur the boundaries between humans and machines, threatening our sense of human distinctiveness and identity.
To be honest, some design choices baffle me. Why do home robots need legs if they’re going to operate on flat floors? Wheels are more reliable, efficient, and cost-effective. Why make them so extremely anthropomorphic if the goal is utility? And I genuinely don’t understand why some humanoid robots have breasts. What function does that serve? One panelist in Maya Cakmak’s study with motor limitations said it perfectly: “Trying to make assistive robots with humanoids would be like trying to make autonomous cars by putting humanoids in the driver’s seat and asking them to drive like a human”. It was obvious that the better path to autonomous vehicles was to modify vehicles for autonomy, not replicate human drivers. So why are investors convinced that replicating humans is the right solution for homes and workplaces?
The anthropomorphism trap
In my earlier research on ecocentric machine design, I argued for moving away from anthropocentric approaches in intelligent machines. Anthropomorphism is natural; when faced with technologies that look like us, talk like us, act like us, we instinctively respond to them as if they were like us. That’s human nature.
But here’s the thing: they are nowhere like us. And more importantly, they don’t need to be.
We need machines to help people, to make their lives better. The question we should be asking is: are we really listening to what people need? Are we building to make their lives better, or are we building what’s technologically impressive and then convincing people they need it?
Even if we decide humanoids are what we want, the practical challenges are enormous. Battery life requires careful management: Agility’s Digit, for instance, runs for 90 minutes with a 60-minute reserve, then charges for 9 minutes. Industrial customers expect 99.99 percent reliability, and any downtime can cost tens of thousands of dollars per minute. Safety standards for dynamically balancing legged robots are still being developed, and the traditional approach of cutting power isn’t safe for a humanoid; it will simply fall over. As Melonee Wise from Agility Robotics notes: “I don’t think anyone has found an application for humanoids that would require several thousand robots per facility”. The business case for massive deployments simply isn’t there yet.
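To put those battery figures in perspective, here is a back-of-envelope sketch using the numbers above. It assumes, purely for illustration, that the robot alternates full 90-minute work stints with 9-minute top-up charges; real charging schedules and reserve policies will differ.

```python
# Back-of-envelope duty-cycle estimate from the figures quoted above.
# Assumption (not from Agility): the robot simply alternates a full
# 90-minute work stint with a 9-minute fast charge, ignoring the
# 60-minute reserve and any task-dependent power draw.

run_minutes = 90      # continuous runtime per stint
charge_minutes = 9    # fast top-up charge between stints

# Fraction of wall-clock time the robot is actually working.
duty_cycle = run_minutes / (run_minutes + charge_minutes)
print(f"Duty cycle: {duty_cycle:.1%}")

# Productive minutes in a standard 8-hour shift under this schedule.
shift_minutes = 8 * 60
productive = shift_minutes * duty_cycle
print(f"Productive minutes per 8-hour shift: {productive:.0f}")
```

Even under this optimistic schedule, roughly one minute in eleven is spent charging, which matters once a customer is pricing downtime in the tens of thousands of dollars per minute.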
I can imagine use cases where humanoid robots have genuine potential to help. In environments truly designed for human bodies where retrofitting is impossible or prohibitively expensive, perhaps the humanoid form makes sense. For certain assistance tasks where the psychological comfort of human-like presence matters to the person being helped. But for most applications? Simpler designs with basic capabilities could accomplish a great deal more, faster, and with less risk. People will likely accept modest changes to their homes to expand what these robots can do, just as Roomba owners move furniture. Our homes have transformed around new technologies—cars, appliances, televisions—so why not for robots, if they prove valuable?
It’s not easy for me to ask these questions. I’m criticizing something I’ve wanted for a very long time. From my first Sonic Sma toy robot, to watching Honda’s ASIMO lead the Detroit Symphony Orchestra, I have always been impressed, amazed, and looked forward to even better performances of humanoid robots. Now I watch XPeng showcase IRON walking so elegantly that I as a human cannot, and I’m just puzzled. Flabbergasted, really. Was this what I wanted growing up? Was this why I chose to study and specialize in robotics? Were these my dreams? I don’t think so. And even if they were, do I need to stick with them?
I tell my daughter that it’s fine to change your mind. I think it’s fine for all of us to be impressed with what’s happening while also asking hard questions about the reasoning and impact of this hype. Being excited about technology and being critical of how it’s being developed aren’t mutually exclusive. They’re both necessary. And if this no longer makes sense, it is fine to change your minds too, my dear roboticist colleagues.
I hope these companies are doing deeper research into what people actually want. I hope they’re listening to people with disabilities, to older adults, to the communities who might actually benefit from robotic assistance. I hope they’re asking not just “can we build this?” but “should we build this?” and “is this the best way to build it?” And I hope we have honest conversations about this technology before we’re surrounded by it and told to adapt. Because we’ve seen this pattern before. Build first, deploy widely, deal with consequences later. Maybe, just this once, we could do it differently.
