
A notable development was unveiled at the 2026 Beijing Yizhuang Robot Half Marathon, where Alibaba’s Gaode officially introduced what it describes as the world’s first fully autonomous embodied robot, Gaode Tutu. The four-legged robot navigates complex environments independently, helping visually impaired users avoid obstacles and move through crowds, and in doing so bridges the gap between laboratory testing and real-world deployment.
Traditionally, robots have struggled to connect physical space with human commands. Ask a conventional robot to take you to the nearest park for some relaxation and it would likely be stumped: it does not know where the park is, how to get there, or what “relax” means. Even given a route, it would struggle to adapt to unexpected obstacles such as road construction. This limitation is a major obstacle in the evolution of embodied intelligence toward artificial general intelligence (AGI).
Now, with the release of the ABot technology stack and the Gaode Tutu robot, this challenge is being addressed. Unlike conventional robotic dogs, Gaode Tutu first comprehends your intention—recognizing when you need a break—then searches its memory for the nearest park, breaks down the task into manageable steps, and navigates to the destination. If it encounters an obstacle along the way, it can adjust its path in real-time, even avoiding a group of people without you noticing.
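The article does not publish ABot’s internals, but the pipeline it describes (interpret the intent, recall a destination from memory, decompose the task, and replan around obstacles) can be illustrated with a minimal sketch. All class and method names here are hypothetical, invented for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class TaskPlanner:
    """Toy pipeline: intent -> memory recall -> decomposition -> replanning."""
    memory: dict                        # place type -> known location
    log: list = field(default_factory=list)

    def handle(self, intent: str) -> list:
        # 1. Interpret the intent (here, a naive keyword match).
        goal = self.memory.get("park") if "relax" in intent else None
        if goal is None:
            return ["ask user for clarification"]
        # 2. Decompose the goal into executable steps.
        steps = [f"navigate to {goal}", "guide user along path", "arrive and confirm"]
        # 3. Execute, adjusting the route in real time when blocked.
        for step in steps:
            if step.startswith("navigate") and self.blocked(goal):
                self.log.append("obstacle detected, replanning route")
                step = f"navigate to {goal} via detour"
            self.log.append(step)
        return self.log

    def blocked(self, goal: str) -> bool:
        # Stand-in for perception: pretend this route has construction.
        return goal == "nearest park"
```

Running `TaskPlanner(memory={"park": "nearest park"}).handle("I need to relax")` yields a log in which the navigation step is rerouted before the remaining steps proceed, mirroring the detour behavior described above.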
The ABot-Claw, the central system in this technology, signifies a pivotal advancement in embodied intelligence. It transforms the robot from a mere tool that responds to commands into an intelligent agent capable of understanding intentions, planning paths, executing tasks, and self-correcting. This marks a significant shift from the era of individual trial-and-error to a new phase of system-level intelligence.
Moreover, ABot’s model layer has posted strong results. The ABot-M0 operation model topped several authoritative benchmarks, including an 80.5% task success rate on the Libero-Plus benchmark, significantly ahead of prevailing industry results.
The ABot-Claw system redefines the foundation of robotic memory. By utilizing core technologies like Map as Memory, centralized dynamic scheduling, and a hierarchical fault-tolerance mechanism, it eliminates the “one robot, one map” issue. Map as Memory allows the robot to create a persistent spatial memory system that supports multimodal perception data, facilitating seamless navigation and task execution.
This innovative approach ensures that robots can share knowledge across devices, allowing new robots to inherit information from previous experiences. For example, if one robot identifies a water cooler on the third floor, another robot dispatched to the same building would instantly know where to find it. This fundamentally shifts knowledge retention from being device-specific to being part of a shared world memory.
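Gaode has not released the Map as Memory implementation, but the core idea (one persistent spatial memory that every robot reads from and writes to) can be sketched in a few lines. The class and method names below are assumptions for illustration, not the actual API:

```python
class SharedMapMemory:
    """One persistent spatial memory shared by every robot in a fleet."""
    def __init__(self):
        # (building, floor) -> {landmark label: coordinates}
        self._landmarks = {}

    def record(self, building, floor, label, coords):
        self._landmarks.setdefault((building, floor), {})[label] = coords

    def recall(self, building, floor, label):
        return self._landmarks.get((building, floor), {}).get(label)


class Robot:
    """Each robot holds a reference to the shared memory, not its own map."""
    def __init__(self, name, memory):
        self.name, self.memory = name, memory

    def observe(self, building, floor, label, coords):
        self.memory.record(building, floor, label, coords)

    def locate(self, building, floor, label):
        return self.memory.recall(building, floor, label)
```

With this structure, a robot that records a water cooler on the third floor makes that landmark instantly available to any other robot sharing the same memory, which is the shift from device-specific maps to a shared world memory.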
The ABot-Claw also facilitates continuous task execution despite unexpected challenges. Its centralized scheduling and cloud-edge collaboration ensure that even if one component fails, others can seamlessly take over. This architecture allows for rapid responses to obstacles, ensuring robots can adapt quickly without relying on cloud processing.
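The failover behavior described above can be sketched as a simple cloud-first, edge-fallback pattern. This is a minimal illustration of the general technique, assuming nothing about ABot-Claw’s actual scheduling code; the function names are hypothetical:

```python
def plan_route(goal, cloud_planner, edge_planner):
    """Prefer the richer cloud planner; fall back to on-device planning."""
    try:
        return cloud_planner(goal)
    except ConnectionError:
        # Cloud unreachable: the edge component takes over so the
        # task continues without waiting on remote processing.
        return edge_planner(goal)


def cloud_down(goal):
    # Stand-in for a failed remote call.
    raise ConnectionError("cloud planner unreachable")


def edge_fallback(goal):
    # Stand-in for a coarser but always-available local planner.
    return ["local plan", f"go to {goal}"]
```

Calling `plan_route("lobby", cloud_down, edge_fallback)` completes using the local planner, illustrating how one component’s failure need not stall the task.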
Additionally, the Claw system introduces a reflective self-correction mechanism that allows robots to assess their performance and adjust their strategies. For instance, if a robot is instructed to fetch a drink and discovers an empty shelf, it can generate feedback and replan to find an alternative source, demonstrating a level of adaptability not previously seen in traditional robots.
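The drink-fetching scenario amounts to an act-reflect-replan loop, which can be sketched as follows. This is an illustrative reduction of the idea, not Gaode’s implementation; the function and data names are invented:

```python
def fetch_with_reflection(item, sources):
    """Act, then reflect on failure and replan against another source."""
    feedback = []
    for shelf, stock in sources.items():
        if item in stock:                       # act: attempt the fetch
            return f"fetched {item} from {shelf}", feedback
        # reflect: record why the attempt failed, then try elsewhere
        feedback.append(f"{shelf} had no {item}; replanning")
    return f"could not find {item}", feedback
```

Given an empty pantry shelf and a stocked fridge, the loop records the failed attempt as feedback and succeeds on the alternative source, mirroring the self-correction behavior described above.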
As robots become increasingly integrated into human environments, understanding and adhering to social norms is crucial. The ABot-Claw incorporates reinforcement learning techniques to enable robots to autonomously learn behaviors suitable for social interactions, such as yielding to pedestrians and navigating crowded spaces. This capability is reinforced by the introduction of the SocialNav model, designed to enhance robots’ navigation skills in densely populated areas.
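One common way to instill such norms in a reinforcement-learning setting is reward shaping: the agent is rewarded for progress toward its goal and penalized for intruding on pedestrians’ personal space. The function below is a generic sketch of that idea, with assumed parameter names and weights; it is not the SocialNav model’s actual reward:

```python
def social_reward(progress, min_pedestrian_dist, comfort_radius=1.0):
    """Trade goal progress against personal-space violations."""
    reward = progress                    # positive for moving toward the goal
    if min_pedestrian_dist < comfort_radius:
        # Penalize intrusions into the comfort zone, scaled by how deep
        # the intrusion is, so yielding to pedestrians scores better.
        reward -= 2.0 * (comfort_radius - min_pedestrian_dist)
    return reward
```

Under this shaping, a policy that detours slightly around a crowd can outscore one that cuts straight through it, which is how yielding behavior emerges from training rather than hand-coded rules.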
In summary, the ABot-Claw system is not merely a tool to enhance robotic functionality but a foundational architecture steering embodied intelligence toward AGI. It enables a transition from isolated robot deployments to a universal intelligence framework where experiences and knowledge can be shared and reused across various robot types and scenarios, ultimately leading to smarter, more capable robots that can integrate into everyday human life.
Gaode’s vision extends beyond just the Gaode Tutu robot. With a commitment to open-sourcing their technology, they aim to create a rich ecosystem for embodied intelligence, reducing costs and encouraging collaboration across the industry. This strategic move positions Gaode as a leader in the development of shared intelligent frameworks that will drive the future of robotics.
Original article by NenPower. If reposted, please credit the source: https://nenpower.com/blog/gaode-unveils-abot-claw-at-beijing-half-marathon-pioneering-autonomous-embodied-intelligence/
