Earlier this month, Elon Musk announced a new humanoid robotic servant. He says Tesla will create a prototype in a year.
He said it would be friendly, navigate through a world of humans, and eliminate dangerous, repetitive, and boring tasks. He said it could "go to the store and get me the following groceries."
Either Musk does not intend his announcement to be taken seriously, or he is simply ignorant about the challenges of building intelligent robots. Probably the latter.
Now, before Musk fans start jumping all over me, let me acknowledge that Elon Musk has successfully built two high-tech businesses, Tesla and SpaceX, and that he knows much more about raising money than I ever will.
But those products are quite different from intelligent robots. Cars have been with us for over a century, and modern rockets for over sixty years. These are well-understood technologies and businesses. While innovative, electric cars and reusable rockets are only incremental improvements.
Building intelligent robots is a completely different problem. To date, the only ones that have actually been commercially successful are robotic home vacuum cleaners. The leading contender, iRobot's Roomba, has sold millions.
But even the Roomba is only a simple electric mechanism that's about as intelligent as a bacterium. A robotic helper that can do even very basic tasks requires a lot more smarts than that. And it's nowhere on the horizon.
From his announcement, Musk appears to have in mind something in the direction of "Klara" in Kazuo Ishiguro's book, Klara and the Sun. I reviewed that speculative robot back in March. Such a robot would require more intelligence than anything out there today.
Many people don't realize what the biggest difficulty in creating intelligent robots is. It's not the physical form of the robot (although making a reliable, functional mechanism is hard enough), but the intelligence.
For proof, look no further than the history of self-driving cars, which are themselves intelligent robots.
The basic navigation problem has been "solved" for decades. Back at Carnegie Mellon University in 1990, I watched a NavLab van rolling down a trail in Schenley Park behind my graduate student office. Since then, every few years we've heard announcements that self-driving cars are just around the corner.
But between demos and actual products lies a huge chasm. Today, after billions of dollars and decades of investment, not only do self-driving cars still not exist, but experts admit that they are at least a decade away.
And no, you don't get to nit-pick what "self-driving car" means. It still means that I can get in, tell it to go to a destination that I would normally drive to, and then I can go to sleep. That's what's known in the trade as "Level 5" autonomy. Anything less than that is not the life-changing promise that attracts all this investment and press.
If your car can operate only on a few roads, then it's no better than one of those people-mover trains at airports. Those are driverless, too.
Musk himself has been a persistent promoter of the AI software in Teslas, and his claims have fallen spectacularly flat. In 2019 he claimed that a fleet of a million robotaxis would be on the road in 2020.
Machine learning expert Kai-Fu Lee and robotics professor Rodney Brooks both immediately tweeted that if that happened, they would, between them, gladly eat all of those robotaxis. Needless to say, they never had to eat one, because the robotaxis don't exist.
Musk claims that Tesla can deliver this humanoid robot because it already has AI software in its cars. Really?
First of all, that claim assumes that autonomous car software is a suitable foundation for building a humanoid helper. A home or work helper needs to do a lot more than navigate from point A to point B, so that's a huge assumption.
But even if we swallow that assumption, Musk's claim is not reassuring. Tesla's autonomy features, according to Tesla, "require active driver supervision and do not make the vehicle autonomous."
So, we're supposed to believe that Tesla's software, which is unable to make its cars autonomous today, will tomorrow make a humanoid robot autonomous. That's called "reducing a problem to a bigger problem."