Google DeepMind unveils AI for robotics with Gemini Robotics 1.5

published on 26 September 2025

Google DeepMind has introduced significant advancements in robotics with the release of its upgraded AI models, Gemini Robotics 1.5 and Gemini Robotics-ER 1.5. These models empower robots to handle more complex tasks and even draw on web-based resources to solve problems in real time, marking a leap forward in robotics capabilities.

During a press briefing, Carolina Parada, Google DeepMind’s head of robotics, explained how these new models work together to enable robots to "think multiple steps ahead" when operating in the physical world. This advancement allows robots to move beyond single, isolated tasks and engage in multi-step activities requiring higher levels of reasoning.

Smarter Robots, Complex Tasks

The Gemini Robotics-ER 1.5 model equips robots with the ability to understand their surroundings and use digital tools such as Google Search to gather information. Its reasoning is then translated into step-by-step, natural-language instructions for Gemini Robotics 1.5, a vision-language-action model that uses its enhanced vision and language processing to carry out the physical task.
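Conceptually, this is a planner-executor loop: one model reasons and plans in natural language, while the other grounds each instruction in motor actions. The Python sketch below illustrates only that division of labor; the function names and plan format are hypothetical stand-ins, since DeepMind has not published the internal interface between the two models.

```python
# Minimal sketch of the planner/executor split described above.
# All function bodies are illustrative stand-ins, not DeepMind APIs.

def er_plan_next_step(goal: str, history: list[str]) -> str:
    """Stand-in for Gemini Robotics-ER 1.5: observe the scene, reason
    (possibly calling tools like web search), and emit the next step
    as a natural-language instruction."""
    plan = ["locate the laundry basket", "pick up a red shirt",
            "place it in the colors pile", "DONE"]
    return plan[len(history)] if len(history) < len(plan) else "DONE"

def vla_execute(instruction: str) -> str:
    """Stand-in for Gemini Robotics 1.5: map an instruction plus camera
    input to motor commands, then report the outcome."""
    return f"completed: {instruction}"

def run_task(goal: str, max_steps: int = 20) -> None:
    """Alternate high-level reasoning and physical execution until done."""
    history: list[str] = []
    for _ in range(max_steps):
        step = er_plan_next_step(goal, history)  # plan one step ahead
        if step == "DONE":
            break
        history.append(vla_execute(step))        # act on that step
    print(history)

run_task("sort the laundry by color")
```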

For example, robots powered by these models can sort laundry by color, pack a suitcase based on weather conditions in specific cities, or manage waste disposal by performing searches tailored to local recycling regulations. This represents a significant improvement over previous models that could only execute one instruction at a time.

"The models up to now were able to do really well at doing one instruction at a time in a way that is very general", Parada said. "With this update, we’re now moving from one instruction to actually genuine understanding and problem-solving for physical tasks."

Inter-Robot Skill Sharing

Another breakthrough introduced with Gemini Robotics 1.5 is knowledge transfer between different robots: a task learned by one robot can be replicated by another, even if their designs differ significantly. For example, tasks learned on the ALOHA 2 robot, which has two mechanical arms, can be performed seamlessly by both the bi-arm Franka robot and Apptronik’s humanoid robot Apollo.

"This enables two things for us: one is to control very different robots - including a humanoid - with a single model", said Kanishka Rao, a Google DeepMind software engineer. "And secondly, skills that are learned on one robot can now be transferred to another robot."

Expanding Accessibility

As part of the rollout, Google DeepMind has made Gemini Robotics-ER 1.5 available to developers through the Gemini API in Google AI Studio. Meanwhile, access to Gemini Robotics 1.5 remains limited to select partners.
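As a rough sketch of what developer access looks like, the snippet below calls the model through the google-genai Python SDK. The preview model ID string and the prompt are assumptions based on Google's announced naming and may change.

```python
# Sketch of querying Gemini Robotics-ER 1.5 through the Gemini API.
# Assumes the google-genai SDK (pip install google-genai) and an API
# key from Google AI Studio; the preview model ID may differ over time.
from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")

scene = Image.open("workbench.jpg")  # a camera frame of the robot's scene
response = client.models.generate_content(
    model="gemini-robotics-er-1.5-preview",  # assumed preview model ID
    contents=[scene, "List the objects on the bench and a pick order."],
)
print(response.text)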

With these advancements, Google DeepMind is pushing the boundaries of what robots can achieve, blending cutting-edge AI with practical applications to address real-world challenges. The updates signal a significant step toward an era where robots can work independently and collaboratively to perform intricate tasks with minimal human intervention.
