2024 in Robotics: What’s Next?

Javad Amirian


As 2024 comes to a close, it’s evident that this has been a landmark year for robotics, particularly in humanoids and robotic foundational models. While many roboticists anticipate a ChatGPT moment for robotics in 2025, I see the coming year as a pivotal period for the expansion of B2B robots. The first generation of humanoids will likely gear up and start working in industrial settings, notably in automotive manufacturing. I imagine this integration and pilot phase will pave the way for broader consumer adoption in the following years.

Here are my 2 cents on the current state of robotics and what’s ahead for 2025.

Humanoids on the Rise

2024 was a massive year for humanoid robots and the trend shows no signs of slowing down next year. Humanoid robotics startups are currently promoting an ambitious vision of general-purpose robots capable of performing tasks on par with humans. While significant questions remain about the feasibility of this vision and how other robotic forms will adapt to keep pace, the progress achieved so far is undeniably impressive. Let’s take a look at some of the key players and their latest developments.

In February this year, the startup Figure secured a major investment from OpenAI and Nvidia, positioning itself as the main competitor to Elon Musk’s Tesla. In August, only 10 months after releasing Figure 01, the company unveiled Figure 02, a major leap in terms of dexterity, processing power, and speed. Its recent demo at a BMW factory showcased impressive capabilities, and the company appears confident about starting deployment in 2025.

In another significant development, Boston Dynamics introduced the fully electric Atlas in April. While the hydraulic Atlas was a groundbreaking pioneer in the field, it had long been evident that a hydraulic design would not be practical for deployment. Now, with plans to deploy the electric Atlas in collaboration with Hyundai early next year, Boston Dynamics seems well positioned to capitalize on its first-mover advantage and solidify its presence in the market.

And then, of course, there’s the Tesla Optimus. While it currently seems, in my opinion, to be trailing in third place in this race, you never know what Elon Musk has up his sleeve. What we do know is that Tesla plans to begin deploying Optimus in its factories on a limited scale in 2025, with wider production anticipated by 2026. Let’s hope this is not just another “Elon time” promise. Either way, it’s remarkable to think that just three years ago, this robot was merely a performer in a bodysuit!

My favorite, though, is Digit, the bipedal robot from Agility Robotics, with its amazing “backward” knee design resembling ostrich legs. This year, they opened “RoboFab” in Oregon, the world’s first factory dedicated to building humanoid robots, to scale up Digit production to 10,000 units per year. Any guesses on who their first customer is? Schaeffler AG, the German automotive-supplier giant.

Digit Robot from Agility Robotics

On the other side of the planet, Unitree Robotics is advancing just as rapidly as its American counterparts, proudly waving the pioneer flag in Asia. They introduced the G1 and H1, showcasing demos that rival those of Boston Dynamics — but at a fraction of the cost. Imagine robots capable of performing backflips for under $20k. Unitree isn’t alone in China, though. Zhiyuan Robotics recently unveiled a mass production line churning out nearly 1,000 Agibot robots. Quite intimidating!

Quadrupeds

Quadrupeds are also on the rise. Boston Dynamics’ Spot, the pioneer in this field, has already been deployed in around 2,000 units worldwide and is delivering value in inspection and security tasks. Another key player is ANYbotics’ ANYmal, which has gained traction in Europe, with several companies using it for high-precision inspection and mapping. However, Unitree Robotics may be the one to watch. At less than half the price of Spot or ANYmal, Unitree’s quadrupeds boast impressive specifications, including a 40 kg payload, 5-hour battery life, and a top speed of 6 m/s. Their latest B2-W (wheeled) demo highlights remarkable flexibility and reliability in locomotion, further cementing their potential to disrupt the market.

Foundational Models

Delving deeper into the technology, foundational models are emerging as a transformative trend in robotics. This year, we saw a lot of research on leveraging LLMs and multimodal models in robotics, and with the development of Vision-Language-Action (VLA) models, we are moving closer to a future where robots can seamlessly perceive, reason, and act in unison.

On this front, there is a clear research trend toward adopting “diffusion models” in robotics. After their success in image and video generation (DALL-E, Sora, etc.), they are now being applied to robot control: Diffusion Policies generate robot action sequences by iteratively denoising them, which helps handle the high-dimensional space of robot actions. One of many examples is the ALOHA project from Google DeepMind, which advances imitation learning for end-to-end control of robotic arms. Central to this line of work is Chelsea Finn, who has now spun off the startup Physical Intelligence (PI) from her work at Google and Stanford to commercialize the research. They raised $400M to “create a single generalist brain for robots”. This is likely just the beginning, as the bullish trend is expected to continue into 2025, with more players joining the game to bring robots to our homes. But we shouldn’t underestimate the challenges of deploying these models in real-world scenarios. As we have seen in the past, the gap between simulation and reality can be vast.
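To make the denoising idea concrete, here is a minimal toy sketch of a diffusion-style loop over an action trajectory. Everything here is a hypothetical simplification, not the actual Diffusion Policy implementation: `toy_noise_pred` stands in for a trained noise-prediction network conditioned on camera observations, and the linear update replaces a proper DDPM noise schedule.

```python
import numpy as np

def denoise_actions(noise_pred, horizon=16, action_dim=7, steps=10, seed=0):
    """Toy diffusion-style denoising loop over an action sequence.

    noise_pred is a stand-in for a learned network that, given the current
    noisy action sequence and the timestep, predicts the noise to remove.
    Returns an array of shape (horizon, action_dim): one action per step.
    """
    rng = np.random.default_rng(seed)
    # start the trajectory as pure Gaussian noise
    actions = rng.standard_normal((horizon, action_dim))
    for t in range(steps, 0, -1):
        eps = noise_pred(actions, t)
        # toy linear schedule: peel off a fraction of the predicted noise
        actions = actions - eps / steps
    return actions

# placeholder "model" (not trained): treats the whole signal as noise,
# so each update simply shrinks the trajectory toward zero
toy_noise_pred = lambda a, t: a

plan = denoise_actions(toy_noise_pred)
print(plan.shape)  # (16, 7)
```

The key property the sketch illustrates is that the policy outputs a whole multi-step trajectory at once rather than a single action, which is part of why diffusion handles high-dimensional, multimodal action distributions well.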

On the open-source front, some exciting projects are emerging. Among them, “LeRobot” from Hugging Face 🤗 stands out. It aims to provide tools for “data sharing, visualization, and training” of robotic models. Not to be forgotten is the central role of Nvidia, which provides much of the infrastructure for training these models.

Data is the Oil, Simulators are the Engine

It’s important to emphasize that data is the lifeblood of robotics, the oil that keeps the engine running. Unlike in language or pure vision tasks, data in robotics is neither as abundant nor as inexpensive to generate. This year, for the first time, we saw a mega-scale robotics dataset, Open X-Embodiment, released thanks to the collaboration of 30+ research labs. Yet many machine learning practitioners believe this oil is not being used in the most efficient way. On the for-profit side, we see companies like Tesla and Unitree pushing hard to generate data through human teleoperation, and I think there is a lot of room for more startups to build their business on generating data for robots.

Simulators, on the other hand, are the indispensable engines driving this progress. Let’s be honest: we would not have reinforcement learning in place without simulators. And pardon me, I don’t count imitation learning as RL. Simulators are as valuable as data, if not more so. While I still find Gazebo effective for basic projects, Nvidia has been positioning its Omniverse platform and Isaac Sim as the gold standard for robotics simulation for a few years now. Meanwhile, the recently introduced Genesis simulator, which incorporates physics-aware features and offers generative 3D models and 4D behaviors, has generated significant interest. As volumetric generative models keep evolving, I expect this to be a game-changer in robotics simulation, significantly closing the gap between simulation and reality in 2025.

Briefly on Autonomous Cars

This year brought a mix of progress and challenges for self-driving cars. On one hand, Tesla unveiled its Cybercab prototype in October — the first personal vehicle without a steering wheel — marking a bold step toward the vision of fully autonomous vehicles. On the other hand, there were significant setbacks, such as General Motors (GM) deciding to shut down its robotaxi operation “Cruise” in December, despite investing over $10 billion into the project. This decision has raised questions about whether the timing is right for widespread deployment of autonomous cars.

Cruise’s earlier reports indicated that self-driving cars significantly improve safety compared to human drivers (imagine 92% fewer collisions in which the autonomous vehicle was the primary contributor). Yet both regulation and public acceptance remain major hurdles. It feels like the industry is entering the “trough of disillusionment” in its hype cycle, a period of skepticism that may take a few years to overcome before reaching the “plateau of productivity.” Still, there is a silver lining: society’s growing demand for electric vehicles, which are often paired with autonomous technology, could accelerate progress and bring us closer to the dream of self-driving cars.

And the Home Robots?

The year 2024 began with some setbacks in the home robotics space, most notably the European Union blocking Amazon’s acquisition of iRobot. This was a missed opportunity to see Amazon elevate consumer robotics to new heights. Amazon has also not delivered much on its promise to bring Astro into homes, three years after its announcement, and the discontinuation of the Astro for Business program in July was another blow to its ambitions in this space.

At CES 2024, Samsung and LG showcased their own home robot projects, hinting at potential progress in the field. However, this area still awaits a true breakthrough. Looking ahead to 2025, I anticipate the emergence of more pet-sized home robots in this category. These robots will likely be powered by LLMs and cloud-based multimodal models, fueled by the explosion of generative AI models. While they won’t reach the level of “Ameca” in embodiment, they could still be a notable improvement over the simple human-robot interactions that amazed us just three years ago.

Still, I remain skeptical until someone delivers a truly valuable home robot priced under $1,000. Until then, I’d rather invest in upgrading my vacuum cleaner robot to a more intelligent one that doesn’t constantly get stuck under the couch and require me to free it.
