X-Humanoid Launches Embodied Tien Kung 3.0 as Open, Practical Humanoid Platform

Beijing-based X-Humanoid unveils Embodied Tien Kung 3.0, a full-size humanoid robot built on the Wise KaiWu platform, emphasizing openness, interoperability, and real-world industrial deployment.

By Rachel Whitman

The Beijing Innovation Center of Humanoid Robotics, known as X-Humanoid, has unveiled Embodied Tien Kung 3.0 – a next-generation general-purpose humanoid platform designed to balance openness with practical deployment. The launch signals a shift in China’s humanoid robotics strategy from demonstration projects toward scalable industrial integration.

Built on X-Humanoid’s proprietary Wise KaiWu embodied AI platform, the full-size robot introduces upgrades across balance control, motion coordination, and autonomous decision-making. The company says Tien Kung 3.0 is the first humanoid of its size to combine high-dynamic whole-body motion control with integrated tactile interaction, positioning it for more demanding real-world tasks.

An Open Architecture Aimed At Accelerating Adoption

A central theme of the Tien Kung 3.0 release is interoperability. The humanoid robotics sector continues to face fragmentation – with closed hardware stacks and incompatible software frameworks slowing commercial rollouts. X-Humanoid is attempting to address those bottlenecks directly.

On the hardware side, the robot includes multiple expansion interfaces that allow developers to integrate different end-effectors and tools without redesigning the base system. The architecture is intended to simplify adaptation across manufacturing, commercial services, and specialized industrial scenarios.

Software openness is equally emphasized. The Wise KaiWu ecosystem provides documentation, toolchains, and a low-code environment designed to reduce development complexity. Compatibility with widely used middleware and communication protocols such as ROS2, MQTT, and TCP/IP allows research institutions and integrators to customize applications without reengineering foundational components.
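To illustrate what protocol-level compatibility can look like in practice, here is a minimal sketch of packaging a robot status update as an MQTT-style topic plus JSON payload. The topic scheme and field names are hypothetical examples for illustration, not the Wise KaiWu platform’s actual schema:

```python
import json

def make_status_message(robot_id: str, joint_positions: list[float]) -> tuple[str, str]:
    """Build an MQTT-style topic and JSON payload for a robot status update.

    The topic layout ("robots/<id>/status") and payload fields are
    illustrative assumptions, not a documented X-Humanoid interface.
    """
    topic = f"robots/{robot_id}/status"
    payload = json.dumps({"id": robot_id, "joints": joint_positions})
    return topic, payload

topic, payload = make_status_message("tiengong-3", [0.0, 1.57, -0.5])
```

Because the message is plain JSON over a standard transport, any ROS2 bridge, MQTT broker, or raw TCP consumer can parse it without knowledge of the robot’s internals, which is the point of building on widely used protocols.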

X-Humanoid has also open-sourced several core technologies tied to the platform, including elements of its motion control framework, world model, embodied vision-language models, cross-ontology vision-language-action (VLA) systems, training pipelines, datasets, and simulation libraries. The strategy aims to cultivate a broader developer ecosystem capable of iterating and deploying humanoid applications more quickly.

From High-Torque Hardware To Multi-Robot Intelligence

Beyond openness, the company is positioning Tien Kung 3.0 as a practical industrial machine rather than a research prototype. The robot integrates high-torque joints capable of supporting heavy-load tasks while maintaining balance on uneven terrain. Its multi-degree-of-freedom coordination allows for complex actions such as kneeling, bending, obstacle clearing, and precise manipulation in confined spaces.

Millimeter-level calibration accuracy, enabled through coordinated joint control, is intended to meet industrial precision requirements. The physical platform is paired with the Wise KaiWu AI stack, which establishes a continuous perception-decision-execution loop.

At the cognitive level, world models and vision-language systems interpret scenes and break down complex tasks into executable steps. Real-time navigation and VLA-based control manage obstacle avoidance and fine motor actions. A multi-agent framework enables centralized scheduling and collaboration among multiple robots, signaling a move from single-unit operation to coordinated fleet deployment.

Taken together, Embodied Tien Kung 3.0 reflects a broader ambition: transforming humanoid robotics from experimental showcases into interoperable, production-ready systems capable of functioning in commercial and industrial environments at scale.


Skild AI Acquires Zebra Robotics Unit to Build Unified Warehouse Automation Layer

Skild AI has acquired Zebra Technologies’ robotics automation business, aiming to unify fragmented warehouse systems under a single AI-driven control layer.

By Laura Bennett | Edited by Kseniia Klichova
Skild AI is combining its general-purpose robotics model with Zebra’s orchestration platform to coordinate diverse robot fleets across warehouse operations. Photo: Skild AI

Skild AI has acquired the robotics automation business of Zebra Technologies, a move that signals a shift toward unified control systems for warehouse robotics rather than isolated deployments.

The deal includes Zebra’s Symmetry Fulfillment platform, a system designed to coordinate fleets of robots and human workers in logistics environments. By combining this orchestration layer with Skild AI’s general-purpose robotics model, the company is aiming to address one of the most persistent challenges in automation: fragmentation across hardware, software, and tasks.

The acquisition positions Skild AI to move beyond model development into full-stack deployment, where AI systems not only control individual robots but manage entire warehouse operations.

From Task-Specific Automation to Generalized Control

Warehouse robotics has traditionally been built around specialized systems, with different robots programmed for picking, transport, or inspection. These systems often operate independently, requiring significant integration effort and limiting flexibility.

Skild AI’s approach centers on what it calls an “omnibodied” model, designed to operate across different robot types without being tailored to a specific form factor. In principle, this allows the same AI system to control humanoid robots, mobile platforms, and robotic arms without retraining for each configuration.

The addition of Zebra’s orchestration software extends this capability from individual robots to coordinated fleets. The Symmetry platform enables real-time task allocation, workflow management, and human-robot interaction, providing the infrastructure needed to deploy heterogeneous systems in live environments.

Together, the two technologies suggest a shift from programming robots individually to managing automation as a unified system.

Orchestrating Mixed Fleets at Scale

The combined platform is intended to support a wide range of robotic systems within a single warehouse. This includes autonomous mobile robots for material transport, robotic arms for packing, and potentially humanoid systems for more complex manipulation tasks.

Such an approach reflects the operational reality of modern logistics, where no single robot type can handle all tasks efficiently. Instead, performance depends on coordination between different systems and their integration with human workers.

By embedding AI at the orchestration level, Skild AI is attempting to create a layer that can dynamically assign tasks, optimize workflows, and adapt to changing conditions without requiring extensive reprogramming.
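As a toy sketch of what allocation at that layer could involve, consider a greedy matcher that pairs incoming tasks with idle robots by capability. The robot names, capability labels, and matching rule are all hypothetical, chosen for illustration rather than drawn from Skild AI’s or Zebra’s actual logic:

```python
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    capabilities: set[str]
    busy: bool = False

def assign_tasks(tasks: list[tuple[str, str]], fleet: list[Robot]) -> dict[str, str]:
    """Greedily assign each task to the first idle robot with the needed capability."""
    plan: dict[str, str] = {}
    for task, needed in tasks:
        for robot in fleet:
            if not robot.busy and needed in robot.capabilities:
                plan[task] = robot.name
                robot.busy = True
                break
    return plan

fleet = [
    Robot("amr-1", {"transport"}),       # autonomous mobile robot
    Robot("arm-1", {"pick", "pack"}),    # fixed robotic arm
    Robot("h1", {"pick", "transport"}),  # humanoid
]
tasks = [("move-pallet", "transport"), ("pack-order", "pack"), ("pick-item", "pick")]
plan = assign_tasks(tasks, fleet)
```

A production orchestrator would of course weigh travel time, battery state, and task priority rather than taking the first match, but the structure, a shared task queue dispatched across a heterogeneous fleet, is the same.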

This model also creates a feedback loop: data collected from deployments can be used to improve the underlying AI system, potentially increasing performance across all environments where it is deployed.

A Push Toward End-to-End Automation

The acquisition highlights a broader industry trend toward end-to-end automation platforms. Rather than selling individual robots or software components, companies are increasingly positioning themselves as providers of complete operational systems.

This shift is driven in part by the limitations of current approaches. Many warehouses still require significant manual configuration to integrate different automation tools, and retrofitting facilities to accommodate specific robots can be costly and disruptive.

Skild AI’s strategy suggests an alternative path, where existing warehouses are adapted through software and orchestration rather than physical redesign. By combining a general-purpose AI model with a proven coordination platform, the company aims to reduce the complexity of deploying automation at scale.

The approach also aligns with efforts by companies such as Nvidia to build infrastructure for physical AI, where simulation, data, and control systems are integrated into cohesive platforms.

The success of this strategy will depend on whether a single AI layer can reliably manage diverse robotic systems in complex, real-world environments. While the concept of “any robot, any task” remains ambitious, the integration of orchestration and intelligence represents a step toward more flexible and scalable automation.

As logistics operators seek to increase efficiency without overhauling existing infrastructure, the ability to coordinate mixed fleets of robots may become a defining feature of next-generation warehouse systems.


Humanoid Robot Chasing Wild Boars in Warsaw Highlights Real-World Deployment Shift

A viral humanoid robot chasing wild boars in Warsaw has drawn attention to the rapid global spread of Chinese robotics hardware.

By Daniel Krauss | Edited by Kseniia Klichova
A humanoid robot based on Unitree hardware chases wild boars in Warsaw, illustrating the growing real-world presence of globally sourced robotics systems. Photo: Edward Warchocki / Facebook

A humanoid robot chasing wild boars through a parking lot in Warsaw is not an obvious signal of industry change. But the viral footage, widely shared across social media, offers a glimpse into a deeper shift in the global robotics landscape.

The robot, known locally as “Edward”, is built on hardware from Unitree Robotics and adapted by a Polish team at MERA Robotics. While the scene itself borders on spectacle, the underlying model – combining Chinese manufacturing with local software customization – is becoming an increasingly common pathway for deploying humanoid systems outside their country of origin.

From Viral Moment to Deployment Model

Edward’s popularity stems from its unexpected public appearances, including the now widely circulated incident in which it pursued wild boars in an urban setting. But beyond the novelty, the robot represents a practical approach to deploying humanoid technology.

Rather than developing systems entirely in-house, MERA Robotics has integrated Chinese-built hardware with its own operating software, tailoring the platform for local use cases. This hybrid model allows smaller companies to bypass the high costs and long timelines associated with building complete humanoid systems from scratch.

According to MERA co-founder Radoslaw Grzelaczyk, this approach reflects a broader trend. After studying robotics commercialization efforts in China, his team concluded that Chinese manufacturers offer a combination of availability, performance, and pricing that is difficult to match elsewhere.

The result is a growing ecosystem in which hardware is sourced globally, while software and applications are developed locally.

China’s Cost Advantage Extends Abroad

The Warsaw example highlights a structural advantage that Chinese robotics companies have begun to establish. Firms such as Unitree are scaling production and reducing costs at a pace that is enabling international adoption, even in markets traditionally dominated by Western technology providers.

Grzelaczyk estimates that China may be up to two years ahead of other regions in humanoid robotics development, particularly in terms of commercialization. This lead is not only technological but also economic, as lower-cost systems make experimentation and deployment more accessible.

This dynamic is already shaping global partnerships. European firms are increasingly importing humanoid robots and adapting them for regional markets, rather than attempting to compete directly on hardware manufacturing.

MERA Robotics, for example, plans to import around 100 humanoid units in the near term, using them as a foundation for locally developed applications.

Early Use Cases Remain Unclear

Despite growing visibility, the practical role of humanoid robots in everyday environments remains uncertain. Edward’s viral moment illustrates both the potential and the ambiguity of current deployments.

On one hand, the robot demonstrates mobility, autonomy, and the ability to operate in unstructured outdoor environments. On the other, the task itself – chasing animals in a parking lot – underscores how far the technology still is from clearly defined, scalable applications.

This gap between capability and use case is a recurring theme in the humanoid robotics sector. While hardware performance continues to improve, identifying consistent, economically viable roles for these systems remains an open challenge.

At the same time, public demonstrations and viral content are playing an increasing role in shaping perception and interest. Visibility, even in unconventional scenarios, may help accelerate experimentation and adoption.

The Warsaw incident may be remembered less for the robot’s actions and more for what it represents: a globalizing robotics industry where hardware, software, and applications are increasingly decoupled.

As Chinese manufacturers expand their reach and local developers build on top of their platforms, humanoid robots are beginning to move from controlled demonstrations into everyday environments – even if their purpose is still evolving.


Boston Dynamics Integrates Google Gemini into Spot for Industrial Inspection

Boston Dynamics has integrated Google’s Gemini robotics model into its Spot platform, enhancing reasoning and inspection capabilities in industrial environments.

By Rachel Whitman | Edited by Kseniia Klichova
Boston Dynamics’ Spot robot now uses Google Gemini-powered AI to analyze industrial environments, improving inspection accuracy and enabling higher-level reasoning. Photo: Boston Dynamics

Boston Dynamics has integrated a new generation of AI models from Google into its industrial inspection platform, marking a step toward more autonomous and context-aware robotics in real-world environments.

The update brings Google’s Gemini and Gemini Robotics-ER 1.6 models into Boston Dynamics’ Orbit AIVI-Learning system, which powers inspection workflows for robots such as Spot. The integration reflects a broader shift in robotics toward combining physical systems with advanced reasoning models capable of interpreting complex environments and making decisions in real time.

The rollout is already live for existing AIVI-Learning customers, with the company positioning the upgrade as a foundational improvement in how robots understand and monitor industrial sites.

From Detection to Interpretation

Industrial inspection has traditionally relied on rule-based systems that identify predefined objects or anomalies. The integration of Gemini introduces a different approach, where robots can analyze scenes more holistically and reason about what they observe.

Using the updated system, Spot can perform tasks such as reading gauges, assessing fluid levels, counting materials, and identifying safety hazards like spills or debris. These capabilities extend beyond simple detection, requiring the robot to interpret visual signals and determine their operational significance.
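In the simplest case, the gauge-reading step reduces to detecting the needle’s angle and mapping it linearly onto the dial’s value range. The sketch below shows that mapping; the specific angles, range, and units are hypothetical, and real dials may be non-linear and need per-model calibration:

```python
def gauge_value(needle_deg: float, min_deg: float, max_deg: float,
                min_val: float, max_val: float) -> float:
    """Linearly map a detected needle angle onto the gauge's value range.

    Assumes a linear dial; the angle itself would come from an upstream
    vision model that locates the needle in the camera image.
    """
    frac = (needle_deg - min_deg) / (max_deg - min_deg)
    return min_val + frac * (max_val - min_val)

# A hypothetical dial spanning -45 deg..225 deg for 0..10 bar,
# with the needle detected at 90 deg: halfway, i.e. 5.0 bar.
pressure = gauge_value(90.0, -45.0, 225.0, 0.0, 10.0)
```

The hard part in deployment is the perception upstream of this arithmetic: finding the needle and dial marks reliably under glare, occlusion, and varying viewpoints, which is where the reasoning models come in.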

This shift is particularly important in environments where conditions are dynamic and difficult to model in advance. Rather than relying on static rules, the system can adapt to new scenarios, enabling broader deployment across facilities with varying layouts and equipment.

The addition of “transparent reasoning” features also allows operators to review how the system arrives at its conclusions, offering greater visibility into AI-driven decisions – a requirement that is becoming increasingly important in industrial settings.

Continuous Learning in Live Environments

A defining feature of the updated platform is its ability to improve over time through continuous data collection and model updates. The system operates as a cloud-connected service, allowing performance improvements to be deployed without interrupting operations.

This “zero-downtime” update model reflects a shift toward treating robotics systems as evolving software platforms rather than static hardware installations. As new data is collected from deployed robots, the models can be refined to better understand specific environments and use cases.

The approach, however, also introduces new considerations around data sharing. Customers using AIVI-Learning are required to share operational data with Boston Dynamics to enable ongoing model training, highlighting the growing role of data as a core component of robotics performance.

Toward Site-Wide Intelligence

Boston Dynamics frames the integration as a move toward “site-wide intelligence”, where robots contribute to a unified understanding of industrial operations. By combining visual inspection data with higher-level reasoning, the system aims to provide insights across safety, maintenance, and logistics.

This aligns with a broader industry trend toward physical AI systems that integrate perception, reasoning, and action. Companies such as Nvidia have emphasized similar approaches, focusing on the convergence of simulation, AI models, and robotics hardware.

In practical terms, the upgraded system enables Spot to handle more complex inspection workflows, from monitoring equipment health to tracking material movement. The ability to interpret gauges and other analog instruments is particularly relevant in industries where digital integration remains incomplete.

The integration of Gemini into Boston Dynamics’ inspection platform highlights how quickly robotics is evolving from task-specific automation to more generalized, intelligent systems. By embedding reasoning capabilities directly into deployed robots, companies are beginning to close the gap between perception and decision-making.

The remaining challenge lies in scaling these systems across diverse environments while maintaining reliability and trust. As robots take on more responsibility in industrial settings, their ability to explain and justify decisions may become as important as their technical performance.


Google Advances Embodied AI with Gemini Robotics-ER Model

Google has introduced a new AI model that improves how robots understand, plan, and act in real-world environments, marking progress in embodied reasoning.

By Daniel Krauss | Edited by Kseniia Klichova
Google’s Gemini Robotics ER model enables robots to interpret environments, plan actions, and complete tasks with improved spatial awareness and reasoning. Photo: Google

Google has introduced a new AI model designed to improve how robots understand and operate in real-world environments, targeting one of the most persistent limitations in robotics: the ability to reason beyond predefined instructions.

The model, Gemini Robotics-ER 1.6, focuses on what researchers describe as embodied reasoning – the capacity for machines to interpret visual inputs, plan sequences of actions, and determine when a task has been successfully completed. The update reflects a broader shift in robotics from systems that execute commands to those that can make context-aware decisions in dynamic settings.

The model is being made available to developers through Google’s AI tooling ecosystem, positioning it as part of a growing effort to standardize software layers for physical AI.

Moving from Perception to Reasoning

Robotics systems have historically relied on separate modules for perception, planning, and control, often requiring extensive engineering to connect them. Gemini Robotics-ER 1.6 attempts to unify these functions, allowing robots to process visual information and translate it directly into action.

The model improves spatial reasoning, enabling robots to identify objects, understand their relationships, and break tasks into smaller steps. It can also track objects across multiple viewpoints, combining inputs from different cameras to build a more complete understanding of an environment.

This multi-view capability is particularly relevant in real-world settings, where occlusion, clutter, and changing conditions can limit the effectiveness of single-camera systems. By integrating multiple perspectives, robots can maintain situational awareness even when parts of a scene are temporarily hidden.

Another key advancement is success detection. The model allows robots to evaluate whether a task has been completed correctly, reducing reliance on external validation or rigid programming. This is a critical requirement for autonomous operation, particularly in environments where tasks may need to be repeated or adjusted in real time.

Interpreting the Physical World

One of the more practical capabilities introduced in the model is the ability to read instruments such as gauges, meters, and digital displays. This function is particularly relevant for industrial and inspection applications, where robots must interpret physical indicators rather than purely digital data.

In collaboration with Boston Dynamics, the system has been applied to robots like Spot, which are used for facility monitoring. The model can analyze visual inputs, identify key components such as needles or numerical readouts, and calculate values with a high degree of accuracy.

Reported improvements in instrument reading performance suggest a significant step forward. Accuracy has increased from earlier levels of around 23% to over 90% in some scenarios, indicating that robots are becoming more capable of handling tasks that require precise interpretation of real-world signals.

The model also incorporates safety-aware reasoning, allowing robots to identify potential hazards and avoid unsafe interactions. This reflects an increasing emphasis on aligning robotic behavior with physical constraints, particularly as systems move into environments shared with humans.

Building a Software Layer for Physical AI

The release of Gemini Robotics-ER 1.6 highlights a broader trend toward treating robotics as a software problem as much as a hardware one. As companies race to develop humanoid and autonomous systems, the ability to generalize across tasks and environments is becoming a key differentiator.

Efforts by companies such as Nvidia and others have focused on simulation and training infrastructure, while Google’s approach emphasizes reasoning and decision-making at runtime. Together, these developments point toward a layered architecture for physical AI, where perception, reasoning, and control are increasingly integrated.

The remaining challenge is translating these capabilities into reliable real-world performance at scale. While models like Gemini Robotics-ER 1.6 demonstrate significant progress in controlled evaluations, deployment in complex environments will require further advances in robustness, data integration, and system design.

Google’s latest model suggests that robotics is entering a phase where intelligence is defined less by isolated capabilities and more by the ability to connect perception, reasoning, and action. As embodied AI systems become more capable of interpreting and responding to the physical world, the boundary between digital intelligence and physical execution continues to narrow.

The extent to which this translates into widespread adoption will depend on how quickly these systems can move from experimental demonstrations to dependable tools in industry and beyond.


Unitree Brings $4,000 Humanoid Robot to Global Buyers via AliExpress

Unitree is bringing its lowest-cost humanoid robot to global markets via AliExpress, signaling a shift toward early consumer adoption of robotics.

By Laura Bennett | Edited by Kseniia Klichova
Unitree’s R1 humanoid robot, designed for dynamic movement and lower-cost production, marks a step toward broader global access to humanoid machines. Photo: Unitree

Chinese robotics firm Unitree Robotics is preparing to launch its most affordable humanoid robot globally, a move that could test whether the category is beginning to transition from industrial experimentation to early consumer markets.

The company plans to debut its R1 humanoid robot through AliExpress, targeting customers in North America, Europe, Japan, and Singapore. With a starting price of around $4,000 in China, the R1 is among the lowest-cost humanoid robots introduced to date, positioning it closer to consumer electronics than traditional industrial machinery.

The rollout comes as Unitree accelerates production and expands internationally, following a year in which it shipped more than 5,500 humanoid robots – far exceeding most global competitors.

Lower Prices Meet Global Distribution

The R1 reflects a broader push to reduce the cost of humanoid robotics while expanding access through global distribution platforms. By launching on AliExpress, Unitree is bypassing traditional enterprise sales channels and testing direct-to-market demand.

The robot stands just over 1.2 meters tall and is designed for dynamic movement, including running, recovering from falls, and performing coordinated motions. Marketed as “sport-ready”, it highlights Unitree’s focus on mobility and mechanical performance rather than immediate utility in structured work environments.

The pricing strategy marks a significant departure from earlier humanoid systems, which have typically been priced in the tens of thousands of dollars or higher. Even companies such as Tesla have suggested that future humanoid robots could cost around $20,000, placing Unitree’s offering well below that threshold.

The question is not only whether such pricing is sustainable, but whether it will translate into meaningful adoption beyond research labs and demonstration use cases.

Scaling Production Ahead of Demand

Unitree’s global expansion is closely tied to its manufacturing scale. The company has set a target of shipping between 10,000 and 20,000 robots in 2026, building on its current position as one of the highest-volume producers of humanoid systems.

According to industry estimates, competitors such as Figure AI and Agility Robotics have shipped only a few hundred units each, underscoring the gap between Chinese and U.S. production capacity.

Market research firm TrendForce expects Unitree to account for a substantial share of global humanoid output in the near term, reflecting both aggressive scaling and a focus on cost reduction.

At the same time, the company is preparing for a potential IPO in Shanghai, aiming to raise capital to expand manufacturing and research. The R1’s international debut may therefore serve a dual purpose: generating revenue while demonstrating global demand to investors.

From Demonstration to Early Adoption

The launch also highlights a shift in how humanoid robots are being positioned. Rather than targeting a single industrial application, the R1 appears designed as a general-purpose platform that can showcase capabilities and attract a broader user base.

Unitree has previously gained visibility through high-profile demonstrations, including coordinated performances by its robots on national television. The move into global e-commerce suggests a transition from spectacle to early commercialization, even if practical use cases remain limited.

For now, most humanoid robots are still used in research, education, and controlled environments. The introduction of a lower-cost model does not immediately resolve challenges around autonomy, reliability, or real-world utility.

However, it may begin to reshape expectations. If consumers and small businesses can access humanoid robots at a fraction of previous costs, the market could shift from a handful of experimental deployments to a larger base of exploratory use.

Unitree’s R1 launch represents one of the clearest attempts to test that transition. By combining lower pricing with global distribution, the company is effectively probing whether humanoid robotics can move beyond early adopters and into a broader commercial category.

The outcome will depend less on technical capability alone and more on whether users find meaningful ways to integrate these systems into everyday environments. For an industry still searching for its first large-scale application, that question remains open.
