1X Launches World Model Allowing NEO Robot to Learn From Video

1X has introduced a world model that enables its NEO humanoid robot to learn new tasks by observing videos. The approach aims to reduce reliance on manual programming and task-specific data collection.

By RB Team

1X has launched a new world model designed to let its NEO humanoid robot learn tasks by watching videos rather than relying solely on direct demonstrations or hand-coded behaviors. The model builds internal representations of physical environments, objects, and actions, enabling NEO to infer how tasks should be performed in real-world settings.

The system combines video understanding with embodied simulation, allowing the robot to translate visual observations into actionable policies. By training on large-scale video data, the world model supports task generalization across different environments and object configurations. This reduces the need for extensive robot-specific data collection, which has been a major bottleneck in scaling humanoid capabilities.
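1X has not published implementation details, but the core idea of a world model — learning a predictive model of dynamics from passively observed transitions, then using it to evaluate actions — can be illustrated with a deliberately simple toy. The sketch below is hypothetical and not 1X's method: it fits a linear dynamics model from observed (observation, action, next observation) triples, standing in for learning from video, and then uses the learned model to predict outcomes without touching the real system.

```python
import numpy as np

# Toy sketch of world-model learning (illustrative only; all names and
# dynamics are invented, not 1X's implementation).

rng = np.random.default_rng(0)

# Ground-truth dynamics, unknown to the learner: next = A @ obs + B @ action
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.95]])
B_true = np.array([[0.5],
                   [0.2]])

def env_step(obs, action):
    """The real (hidden) environment dynamics."""
    return A_true @ obs + B_true @ action

# Collect passive transitions, analogous to extracting state/action
# pairs from video of random behavior.
X, Y = [], []
obs = rng.normal(size=2)
for _ in range(500):
    action = rng.normal(size=1)
    nxt = env_step(obs, action)
    X.append(np.concatenate([obs, action]))
    Y.append(nxt)
    obs = nxt
X, Y = np.array(X), np.array(Y)

# Fit the world model by least squares: W maps [obs, action] -> next obs.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def world_model_predict(obs, action):
    """Predict the next observation using the learned model."""
    return np.concatenate([obs, action]) @ W

# The learned model can now score candidate actions in imagination,
# e.g. pick the action predicted to move the state toward a goal,
# without collecting new robot-specific data.
```

A real video-trained world model operates on learned latent representations of frames and uses far richer function approximators, but the data flow is the same: predict consequences from observation, then derive actions from those predictions.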

The release highlights a broader shift toward data-efficient learning in robotics. As companies seek to move humanoids beyond narrow demonstrations, world models trained on passive data sources like video are emerging as foundational infrastructure for scalable, adaptable robot behavior.
