
On the evening of May 8, Figure AI unveiled a new demo featuring two Helix-02 robots that efficiently tidied up a bedroom in under two minutes. The robots performed a series of tasks, including opening and closing doors, hanging clothes, organizing headphones, taking out the trash, and pushing chairs, culminating in neatly making the bed. During the bed-making process, the two robots nodded at each other as if to confirm their actions before they jointly applied force to smooth out the blanket. The entire operation was smooth and coordinated, resembling human teamwork. By observing each other’s movements, the robots were able to anticipate their next steps without the need for a central controller or explicit communication. They successfully tackled the complexities of handling flexible objects.
Figure AI announced that this achievement marks the first instance globally of multi-robot cooperation in locomanipulation using a single neural network. Each robot operates independently, utilizing its own model while observing the actions of its partner to autonomously determine its own movements. Although they appeared to be collaborating seamlessly, each robot was making independent judgments.
Figure AI is currently the highest-valued humanoid robotics company in the world. While it emphasizes practical applications, its approach differs notably from that of many robotics firms, including Physical Intelligence, which favor wheeled bases with dual arms for efficiency. What stands out in the demonstration is the robots' ability to walk on two legs while cooperating to complete tasks. Each demonstration, whether tidying a laboratory table or making a bed, leaves a strong impression through its realistic portrayal of human-like movement and collaboration.
However, while these demonstrations are captivating, some critics argue that the apparent cooperation might rely on carefully designed initial states and scene arrangements. What seems like a seamless partnership may simply be predictable behavior resulting from short-sequence task training. The true test of whether the robots can dynamically understand each other’s intentions and work together over the long term remains to be seen.
Recently, Figure AI has been very active: the BotQ factory is producing one Figure 03 humanoid robot every hour, bringing total deliveries to over 350 units. Figure 04 is in the later design stages, with the founder calling this the company's "iPhone 1 moment." Figure AI is also exploring home applications: in the U.S., the robots may be offered on a subscription basis at $400 to $600 per month, occupying a small footprint while operating autonomously around the clock. As the world's highest-valued humanoid robotics company, Figure AI warrants close observation. Collaboration is undoubtedly a core capability for the future of humanoid robots, and whether focusing on it now is a misstep or an essential pathway remains to be seen.
01 Helix-02 Bedroom Tidying Demonstration
The latest Helix-02 demonstration showcased the multi-task collaboration abilities of two humanoid robots in a shared home environment. The robots, equipped with the Helix-02 system, worked together to reset the entire bedroom in less than two minutes, performing tasks such as opening doors, hanging clothes, organizing headphones, closing books, cleaning up trash, pushing chairs, and most notably, making the bed together. The Helix-02 is Figure AI’s most advanced model, previously showcased for its ability to “kick the dishwasher door open.” The company claims that the Helix model relies on over one million hours of simulated and real-world data training, aiming to create a universal robot platform adaptable for both home and commercial environments.
During the demonstration, Figure AI highlighted several core capabilities: integrated full-body control that extends from upper body movements to full-body actions (locomotion + manipulation + balance), enabling gait, balance, walking, and operational tasks to work in tandem. The robots can handle flexible objects and engage in complex dynamic interactions, effectively managing items like bed sheets that lack fixed geometry and whose states constantly change. The new scene generalization capability allows the robots to adapt to new environments without relying on task-specific controllers or expert strategies, learning instead through experience.
A key aspect of this demonstration was the handling of flexible objects. In laboratory environments, robots typically deal with rigid, geometrically known items, such as cubes, spheres, or simple tools. However, bed sheets, blankets, and fabrics have no fixed shape and can fold, slide, or obscure visibility. While humans rely on years of tactile and visual experience to handle fabric, Helix-02 must continuously estimate the state of the fabric through visual input and action prediction, generating subsequent movements—this demands a high level of real-time perception and feedback control.
Another highlight is the multi-robot collaboration for locomanipulation driven by a single neural network. All actions are autonomously generated by the robots without human intervention or central scheduling. Each robot observes the environment through its own camera, inferring the intentions of its partner and making continuous decisions at a high frequency. This is akin to two people folding a blanket together, relying solely on visual and movement cues to collaborate without explicit communication or shared plans. Working in the same space means that each action impacts the tasks being handled by the other robot. The robots cannot merely follow preset movements; they must interpret each other’s trajectories, gaze, hand positions, and overall postures as dynamic inputs to infer their partner’s goals in real time. This is particularly challenging for flexible sheets without fixed shapes or grab points, as even tiny errors in intention prediction can rapidly amplify with changes in the state of the fabric.
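The decentralized pattern described above, where each robot runs its own policy and conditions only on what it can see of its partner, can be illustrated with a toy simulation. This is purely a hypothetical sketch (the dynamics, gains, and function names are invented, not Figure AI's system): two independent "policies" each pull a blanket edge toward a target while matching the partner's visible progress, with no messages and no central controller.

```python
# Toy model of decentralized coordination: two agents, two independent
# policies, no communication channel. Each agent observes only its own
# position and the partner's visible position. All numbers are illustrative.

def policy(own_pos: float, partner_pos: float, target: float) -> float:
    """Step toward the target, while also matching the partner's progress
    so the shared blanket edge stays level (a stand-in for inferring the
    partner's intent from observation alone)."""
    toward_target = 0.5 * (target - own_pos)
    match_partner = 0.3 * (partner_pos - own_pos)
    return toward_target + match_partner

def simulate(steps: int = 50) -> tuple[float, float]:
    a, b = 0.0, 1.0   # the two robots start at different corners
    target = 5.0      # desired position of the blanket edge
    for _ in range(steps):
        # Each robot decides independently from its own observations.
        da = policy(a, b, target)
        db = policy(b, a, target)
        a, b = a + da, b + db
    return a, b

a, b = simulate()
print(a, b)  # both agents converge to the target, and to each other
```

Even this linear toy shows the key property the article describes: alignment emerges from each agent reacting to the other's observed state, not from a shared plan. The real problem is vastly harder because the "state" being matched is a deforming sheet seen through cameras.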
The video also showcased the robots’ rhythmic sense and high-dimensional decision-making capabilities. While moving, grasping, and maintaining balance, they performed delicate operations such as hanging headphones on a rack or using their foot to step on a trash can’s pedal to open the lid. Throughout the entire process, there were no “script switches”; all actions were continuously inferred and decided by the neural network within milliseconds.
02 Robot Collaboration: Unique Technology or Misguided Focus?
The Helix model is not the first to demonstrate multi-robot collaboration. Back in February 2025, Figure AI showcased the initial dual-machine collaboration demonstration of the Helix model: two humanoid robots equipped with the Helix system completed tasks in the same scene, collaborating through visual perception and natural language instructions. In that demonstration, the robots handed objects to one another, with the receiving robot placing each item in the nearer storage spot; once a robot had tidied its own area, it would wait for its partner to pass items before resuming work. At the time, this demonstration was widely regarded as a significant breakthrough in the tech community.
On one hand, unlike previous robots that mainly showcased “single-unit pick and place” tasks, Helix’s collaboration demonstration involved genuine interaction and division of labor between multiple robots. The robots adjusted their movements in real-time based on spatial positions, task prompts, and visual inputs—a rarity in robot control demonstrations at that time. On the other hand, there were more cautious and critical voices. Some technicians pointed out that such demonstrations might depend on carefully designed initial states and scene setups, making the robots seem as if they were “collaborating” when they were likely just responding to predictable environmental changes through short-sequence task training rather than achieving real-time deep collaboration. Moreover, while the two robots can perform surface-level cooperative actions, their interactions remain relatively basic, and there is still a noticeable gap to true dynamic inference of each other’s intentions and continuous collaboration in open environments.
That line of technical evolution carries through to the Helix-02 demonstration. From receiving handed-off items to now nodding to communicate, the two robots can tackle more complex tasks, potentially moving toward genuinely vision-driven, autonomously inferred, continuous interaction in the future. Why is Figure AI so focused on multi-robot collaboration, given that such tasks are technically demanding, engineering-intensive, and offer limited short-term returns? Most robots deployed in industry today cooperate only loosely and indirectly, or complete tasks under the guidance of a central controller. But for Figure AI and other companies aiming to have robots perform real human tasks, coordinated efforts like the bed-making shown in the video are unavoidable.
03 Figure AI Approaches Mass Production, Ready for Real-World Testing
Although the Helix-02 demonstration is still a distance from fully replacing everyday household chores, each of Figure AI’s showcases is strikingly humanoid—this not only boosts public interest but may also be a key reason for attracting investor attention. However, whether the technology can transition from the lab to real-world applications will ultimately be tested by market conditions and scalability. The next few quarters will be crucial for validating whether its technology and business model can be successfully implemented.
Recently, Figure AI has made significant strides, the most notable being substantial increases in production line capacity. At its BotQ manufacturing facility in California, Figure has successfully ramped up the production of humanoid robots from one unit per day to one unit per hour—achieving nearly a 24-fold increase in capacity in less than 120 days. This marks a critical step from “prototype manufacturing” to “industrial mass production,” laying the groundwork for large-scale commercial deployment. Since its launch in 2025, BotQ has been positioned as a high-capacity manufacturing plant, with an annual production goal of around 12,000 units (approximately one unit per hour) while preparing for future expansions to a scale of 100,000 units.
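The capacity figures above can be sanity-checked with simple arithmetic (all inputs come from the article; the calculation assumes round-the-clock operation, which is my assumption, not a stated fact):

```python
# Back-of-the-envelope check of the reported BotQ capacity figures.
units_per_day_before = 1     # one robot per day (reported starting point)
units_per_hour_after = 1     # one robot per hour (reported current rate)

# Going from 1/day to 1/hour is a 24x speedup if the line runs 24 hours a day.
speedup = units_per_hour_after * 24 / units_per_day_before

# One unit per hour, nonstop, yields 24 * 365 = 8,760 units per year.
annual_at_one_per_hour = 24 * 365

print(speedup, annual_at_one_per_hour)
```

Note that the stated annual goal of ~12,000 units works out to about 1.4 units per hour of continuous operation, so the "approximately one unit per hour" figure likely describes line cycle time during operating shifts rather than a 24/7 average.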
Behind this capacity increase is a mature automated production line structure, customized manufacturing execution software covering over 150 interconnected workstations, strict supply chain quality control, and a comprehensive quality system with over 80 final validation tests. The battery production line boasts a first-pass yield rate of 99.3%, with actuator components exceeding 9,000 units produced and over 50 process inspection points ensuring stable quality. For Figure, this “hourly mass production” signifies not just a leap in capacity data but also serves as a driver for real-world application feedback cycles. Each robot produced will return operational data to enhance the perception, robustness, and long-term performance of Helix AI—this is critical for verifying the system’s reliability in real household and commercial settings.
CEO Brett Adcock has compared the upcoming Figure 04 to the “iPhone 1 moment” in the humanoid robotics field. Each generation of robots—from Figure 01 to Figure 03—has seen continuous iterations in hand, foot, tactile sensing, and charging design, while Figure 04 will achieve a breakthrough in design, manufacturing usability, and cost control. In a recent interview, Adcock elaborated on the design philosophy behind Figure AI robots: hot-swappable robots can independently carry out household organizing and logistics tasks, automatically docking with wireless charging stations for continuous operation. The facility employs a “hot-swap” strategy: when a robot’s battery falls below 10–15%, another robot can take over to ensure task continuity.
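The hot-swap strategy described above reduces to a simple handover rule. The sketch below is illustrative only: the 10–15% threshold band comes from the article, but the class names, fields, and swap condition are invented for the example, not Figure AI's implementation.

```python
# Minimal sketch of a battery hot-swap rule: when the active robot's charge
# drops into the handover band, a better-charged standby takes over the task
# so work continues while the depleted robot docks to recharge.

from dataclasses import dataclass

SWAP_THRESHOLD = 0.15  # hand off before reaching the floor of the 10-15% band


@dataclass
class Robot:
    name: str
    battery: float  # state of charge, 0.0-1.0


def maybe_swap(active: Robot, standby: Robot) -> tuple[Robot, Robot]:
    """Return (active, standby) after a possible handover."""
    if active.battery < SWAP_THRESHOLD and standby.battery > active.battery:
        return standby, active  # the depleted robot becomes the standby
    return active, standby


worker = Robot("unit-a", battery=0.12)
spare = Robot("unit-b", battery=0.95)
worker, spare = maybe_swap(worker, spare)
print(worker.name)  # the charged robot is now on task
```

The design point is that task continuity is a fleet property, not a single-robot property: no individual robot needs an all-day battery as long as handover is cheap and charging is automatic.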
Another key focus of Figure's research and development is the "Never Fall" protocol. With the help of the Vulcan reinforcement learning project, the robot can maintain balance even if a single knee joint fails, limping to a repair area on its own. Facility testing covers aging, durability, and repetitive actions such as squats and burpees, ensuring the robots can be easily maintained through a range of hardware and software failures. Figure emphasizes modular design, with removable fabric covers and replaceable footwear that let non-technical users operate and maintain the robots easily. Figure plans to offer home robots under a leasing model similar to car rentals, at roughly $400 to $600 per month; each robot occupies just a 2×2-foot footprint, charges automatically, and requires no manual intervention. The robots rely on onboard reasoning and data anonymization to protect privacy while continuously improving the Helix model.
As of the latest public reports, Figure AI's valuation stands at approximately $39 billion, set by a Series C funding round completed in September 2025 that raised over $1 billion. Figure employs roughly 500 staff, primarily in engineering and AI; the team covers hardware, batteries, embedded software, PCB manufacturing, industrial design, system integration, and testing, enabling closed-loop management from research and development through mass production. Adcock envisions a future in which humanoid robots outnumber humans in offices, using the pressure of unstructured environments to drive progress toward general intelligence. He believes the seventh generation of humanoid robots, combining tactile sensing with mass-produced hardware, will be a crucial cornerstone for achieving AGI.
Original article by NenPower, If reposted, please credit the source: https://nenpower.com/blog/figure-ai-unveils-groundbreaking-demo-showcasing-robot-collaboration-in-household-tasks/
