Launch of SpatiXBot Product Suite Enhances Autonomous Navigation for Robots

At the recently concluded 2026 Beijing Yizhuang Humanoid Robot Half Marathon, nearly 40% of the participating teams ran autonomously, without remote control. Behind that result is the integration of Beidou spatial intelligence with embodied intelligence, which gives robots the dynamic, centimeter-level positioning and perception needed for precise navigation and stable movement.

On April 22, the spatial intelligence technology company Qianxun Positioning officially launched the “Embodied Spatial Brain (SpatiXBot)” product suite. This suite provides a unified spatial foundation for robots, accelerating their large-scale deployment in scenarios such as inspection, security, and emergency rescue.

The “Embodied Spatial Brain” draws on Qianxun Positioning’s full stack of spatial intelligence technology. It combines general large models with self-developed specialized models to give robots autonomous navigation, environmental perception, and group collaboration capabilities, moving them from remote control to autonomy and from individual operation to collective intelligence.

In the marathon, running autonomously means a robot must navigate open outdoor environments on its own, handling positioning, path planning, posture control, and real-time decision-making. This requires dynamic centimeter-level positioning and perception. The “Eye of Space-Time” multi-source fusion terminal within the “Embodied Spatial Brain” provides precise, seamless positioning across indoor and outdoor settings. Outdoors, high-precision GNSS modules deliver centimeter-level position fixes; indoors, dual-camera V-SLAM (Visual Simultaneous Localization and Mapping), backed by edge computing, corrects accumulated drift in real time, so the robot maintains centimeter-level accuracy and a stable trajectory even in complex scenes.
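The article does not describe the terminal’s internals, but the indoor/outdoor handover it sketches can be illustrated with a toy fusion loop. The class names and the simple offset-based drift correction below are illustrative assumptions, not Qianxun’s actual algorithm:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pose:
    x: float  # meters, east
    y: float  # meters, north

class FusionLocalizer:
    """Toy indoor/outdoor localization switch: prefer a GNSS fix when one
    is available (outdoors); otherwise fall back to V-SLAM odometry
    (indoors), using the last GNSS fix to cancel accumulated drift."""

    def __init__(self) -> None:
        self.offset = Pose(0.0, 0.0)  # correction applied to V-SLAM poses

    def update(self, slam_pose: Pose, gnss_fix: Optional[Pose]) -> Pose:
        if gnss_fix is not None:
            # Outdoors: trust GNSS and re-anchor the V-SLAM frame so that
            # subsequent indoor poses start from a drift-free estimate.
            self.offset = Pose(gnss_fix.x - slam_pose.x,
                               gnss_fix.y - slam_pose.y)
            return gnss_fix
        # Indoors: dead-reckon with V-SLAM plus the last known correction.
        return Pose(slam_pose.x + self.offset.x,
                    slam_pose.y + self.offset.y)
```

A production system would fuse the two sources probabilistically (e.g. with a Kalman filter) rather than switching hard, but the re-anchoring step captures why periodic GNSS fixes keep V-SLAM’s cumulative error bounded.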

Additionally, the “Embodied Spatial Brain” offers high-precision maps and path planning, supporting autonomous navigation for many types of robots and enabling continuous, stable operation in scenarios such as park inspections, urban security, and complex-terrain surveys. Beyond positioning itself accurately and moving stably, however, a useful robot must also be able to understand its surroundings and make judgments. To that end, the “Embodied Spatial Brain” includes a “Training and Inference Integration” platform that links model training with inference deployment, closing the loop between data and algorithms.
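Map-based path planning of the kind described above is typically done on a discretized map. As a minimal, generic stand-in (not Qianxun’s planner), a breadth-first search over an occupancy grid already yields a shortest obstacle-avoiding route:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first shortest path on an occupancy grid
    (0 = free cell, 1 = blocked). Returns a list of (row, col)
    cells from start to goal, or None if no route exists."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}       # visited set + back-pointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []          # walk the back-pointers to recover the route
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable
```

Real planners add heuristics (A*), kinematic constraints, and continuous re-planning as perception updates the map, but the grid search shows the core idea.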

In practical applications, large models provide zero-shot recognition and verification, while smaller models, through continuous training, improve inference efficiency and recognition accuracy in specific scenarios. In drone inspections over water bodies, for instance, the “Training and Inference Integration” platform can use a multimodal general large model to identify floating debris such as trash from the project’s outset. As inspection data accumulates, specialized small vision models are retrained and iterated to reduce errors and improve inference speed and accuracy, while the large model calibrates and supplements the small model’s results, creating a continuously evolving data cycle. With sufficient specialized training data, this small-large model collaboration can ultimately raise target-recognition accuracy above 95%. The mechanism allows robots to be deployed rapidly across scenarios including police patrols, power grid inspections, park management, and emergency rescue operations.
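The large/small model collaboration described above can be sketched as an escalation loop: the cheap small model handles routine frames, and uncertain cases go to the zero-shot large model, whose verdicts double as labels for the small model’s next retraining round. The function and threshold below are illustrative assumptions about such a loop, not the platform’s actual API:

```python
def inspect(images, small_model, large_model, confidence_threshold=0.8):
    """Toy large/small model collaboration loop.

    small_model(img) -> (label, confidence): fast specialized model.
    large_model(img) -> label: slower zero-shot generalist, used both to
    verify low-confidence cases and to generate new training labels.
    """
    results, new_training_data = [], []
    for img in images:
        label, confidence = small_model(img)
        if confidence < confidence_threshold:
            # Escalate to the large model and keep its verdict as a
            # label for the small model's next retraining iteration.
            label = large_model(img)
            new_training_data.append((img, label))
        results.append(label)
    return results, new_training_data
```

Over successive retraining rounds the small model absorbs the large model’s knowledge for the scenario at hand, so ever fewer frames need the expensive escalation path.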

Qianxun Positioning CEO Chen Jinpei stated that the “Embodied Spatial Brain” serves as the neural center for helping robots understand their environment, make decisions, and execute tasks. It provides a spatial perception suite that integrates Beidou and visual sensor capabilities for original equipment manufacturers, offering multi-source fusion spatial awareness. “We will also integrate these embodied intelligent devices, such as robots, robotic dogs, drones, and unmanned vehicles, into specific scenarios, providing training and inference capabilities along with spatial-based skills to accelerate the large-scale implementation of robots in the real world,” he added.

Original article by NenPower. If reposted, please credit the source: https://nenpower.com/blog/launch-of-spatixbot-product-suite-enhances-autonomous-navigation-for-robots/
