China’s Dancing Humanoid Robots Highlight Progress and Limits
China’s humanoid robots performed complex dance and martial arts routines at the Spring Festival Gala, showcasing advances in balance and coordination while raising questions about real-world readiness.
Humanoid robots performing martial arts, backflips, and synchronized dance routines captivated audiences during China’s Spring Festival Gala, the country’s most-watched annual television broadcast. The display, featuring machines developed by multiple domestic robotics companies, provided one of the clearest public demonstrations yet of how embodied AI is advancing beyond basic locomotion.
The performance was technically sophisticated. Robots executed coordinated movements with precise timing, maintained balance during jumps and spins, and operated in synchronized groups without falling. For viewers, the display represented a striking departure from earlier humanoid demonstrations, which often focused on simple walking or limited choreographed sequences.
But beyond spectacle, the performance also reflected China’s broader strategy to position humanoid robotics as a pillar of its future industrial and technological leadership.
A Public Showcase of Industrial Strategy
Public humanoid robot performances have become a recurring feature of China’s technology narrative. Experts say these demonstrations serve not only as engineering milestones but also as visible indicators of national technological progress.
Kyle Chan, a technology analyst at the Brookings Institution, noted that humanoid robots are particularly effective symbols because their capabilities are easily understood by general audiences. Unlike abstract AI systems, humanoids provide a tangible representation of technological advancement.
The scale and coordination of the gala performance offered a specific technical signal: the ability to operate multiple robots simultaneously with stable motion and consistent mechanical behavior. This requires reliable actuator control, precise synchronization, and repeatable performance across units – essential capabilities for future industrial deployment.
China’s investment in robotics reflects its broader industrial strategy. Robotics and AI have been identified as priority sectors in national development plans, aimed at transforming manufacturing from labor-intensive production toward automation-driven productivity.
Demonstration Capability vs. Industrial Readiness
Despite the impressive performance, experts caution that stage demonstrations do not necessarily reflect readiness for real-world deployment.
Georg Stieler, managing director of robotics consultancy Stieler Technology and Marketing, emphasized that choreographed routines rely heavily on pre-programmed sequences and repetition. Robots performing on stage are trained extensively to execute specific motions under controlled conditions.
This differs significantly from industrial environments, where robots must respond dynamically to unpredictable conditions, handle variable objects, and recover from unexpected disruptions.
Dance routines primarily test locomotion and balance, areas where humanoid robotics has made substantial progress. However, manipulation – the ability to handle objects with precision – remains a more complex and slower-moving challenge.
Reliability is another critical factor. Industrial robots must operate continuously with minimal failure rates, a standard that humanoid systems have yet to meet.
A Signal of Intensifying Global Competition
China’s humanoid robot progress is occurring within a rapidly evolving global competitive landscape. The country has established a large robotics industry, supported by extensive manufacturing infrastructure and government investment.
According to industry estimates, China’s humanoid robot shipments are expected to increase significantly in the coming years. Technology leaders, including Elon Musk of Tesla, have acknowledged Chinese robotics companies as major competitors in the emerging embodied AI sector.
The rise of humanoid robotics reflects a broader shift in AI development. The first wave of AI focused on software systems, while the next phase emphasizes physical systems capable of interacting with the real world.
For China, humanoid robotics aligns with its long-term industrial transformation goals. As Marina Zhang, a professor at the University of Technology Sydney, noted, robotics could play a central role in shifting manufacturing toward higher-value, automated production.
Progress with Clear Limits
The Spring Festival Gala performance highlighted both the rapid progress and remaining challenges of humanoid robotics.
Advances in balance, coordination, and actuator control have enabled robots to perform complex physical routines that were previously impossible. These capabilities represent foundational steps toward real-world deployment.
At the same time, real-world applications require additional capabilities, including robust manipulation, environmental perception, and long-term reliability.
The gap between demonstration and deployment remains a defining challenge for the industry. But public showcases like this serve an important function: they reveal how quickly the underlying technology is advancing.
Humanoid robots may not yet be ready to replace human workers at scale. However, their growing physical capability suggests that embodied AI is moving steadily toward practical industrial and commercial roles.
As robotics companies continue to improve both hardware and AI systems, performances that once seemed futuristic are becoming technical milestones in a rapidly evolving industrial landscape.
China Establishes First National Standards for Humanoid Robots
China has introduced its first national standard system for humanoid robotics, aiming to unify technical specifications and accelerate commercial deployment across industries.
By Laura Bennett | Edited by Kseniia Klichova
Officials and industry experts gather in Beijing to unveil China’s first national standard system for humanoid robotics, aimed at accelerating commercialization and ensuring safety alignment.
China has formally introduced its first national standard system for humanoid robotics, marking a coordinated effort to structure one of the country’s fastest-growing technology sectors.
The framework was unveiled at the Humanoid Robots and Embodied Intelligence Standardization meeting in Beijing. It establishes unified technical guidelines intended to streamline development, reduce fragmentation, and accelerate the transition from pilot projects to commercial deployment.
The move signals that policymakers view humanoid robotics not as an experimental field, but as an emerging industrial category requiring formal governance.
Six Pillars for Industrial Alignment
The standard system is organized around six core pillars: foundational and common standards, neuromorphic and intelligent computing, limbs and key components, full-system integration, application scenarios, and safety and ethics.
Together, these categories define technical specifications, interface protocols, and evaluation benchmarks. Committee experts involved in the initiative said the goal is to reduce coordination friction between suppliers, lower production costs, and shorten iteration cycles across the value chain.
By clarifying interfaces and performance metrics, the framework is designed to enable interoperability between hardware platforms, software systems, and embodied AI models. It also embeds safety and ethical considerations into early-stage development, reflecting regulatory awareness as robots move into workplaces and homes.
From Prototypes to Scaled Deployment
According to China’s Ministry of Industry and Information Technology, 2024 marked the country’s first year of humanoid robot mass production. More than 140 domestic companies released over 330 models, with deployments expanding into manufacturing, household services, healthcare, and elderly care.
Until now, much of that growth has occurred in a relatively fragmented environment, with companies developing proprietary architectures and evaluation criteria. National standards are expected to impose structure on a rapidly expanding ecosystem.
The framework could also serve a strategic function. As Chinese firms compete globally in embodied AI and humanoid robotics, standardized technical benchmarks may strengthen export readiness and ecosystem coordination.
While many humanoid deployments remain in early stages, the introduction of national standards suggests the industry is entering a new phase, where commercialization and regulatory alignment advance in parallel.
University of Southampton Develops Adaptive Robot Fin for Underwater Stability
Researchers at the University of Southampton have developed a flexible robotic fin with embedded electronic skin that automatically adapts to changing water currents, improving underwater robot stability and efficiency.
By Daniel Krauss | Edited by Kseniia Klichova
The adaptive robotic fin developed at the University of Southampton integrates electronic skin and hydraulic actuation to automatically counteract flow disturbances in underwater environments. Photo: University of Southampton
Autonomous underwater vehicles are built to withstand unpredictable ocean conditions, but their rigid fins often require significant energy to counteract sudden currents and turbulence. Researchers at the University of Southampton are proposing a different approach: fins that sense water flow and adjust their shape in real time.
The team has developed a flexible robotic fin embedded with electronic skin capable of detecting subtle changes in water movement. The system automatically modifies the fin’s stiffness and curvature to stabilize underwater robots while reducing energy consumption.
The research, published in npj under the title “Harnessing proprioception in aquatic soft wings enables hybrid passive-active disturbance rejection,” reflects a broader push toward soft robotics and adaptive control in marine environments.
Inspired by Biological Sensing
The design draws from biological proprioception mechanisms observed in birds and fish. Birds detect airflow changes through sensory feedback in their feathers, while fish rely on lateral line systems and fin rays to perceive water disturbances.
To replicate similar sensing capabilities, the Southampton engineers embedded flexible liquid metal wiring inside a silicone fin. When water flow deforms the fin, the integrated electronic skin registers changes in electrical resistance. These signals are transmitted to a hydraulic system inside the robot’s body, which adjusts internal pressure through connected hoses to alter the fin’s shape.
Rather than relying solely on active propulsion corrections, the system combines passive flexibility with active hydraulic adjustment.
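The hybrid passive-active scheme described above can be sketched as a simple control loop: the passive half is the fin's own compliance and needs no command, while the active half maps sensed resistance changes to a bounded hydraulic pressure adjustment. The function names, baseline resistance, and gain below are illustrative assumptions, not the Southampton team's actual interface.

```python
# Hypothetical sketch of the sensing-to-actuation loop; names and
# constants are illustrative, not the researchers' actual API.

BASELINE_OHMS = 12.0      # resistance of the undeformed liquid-metal trace
GAIN = 0.8                # proportional gain mapping strain to pressure (kPa)
P_MIN, P_MAX = 0.0, 50.0  # safe hydraulic pressure envelope (kPa)

def estimate_strain(resistance_ohms: float) -> float:
    """Deformation bends the fin, stretching the liquid-metal wiring and
    raising its resistance; use the relative change as a strain proxy."""
    return (resistance_ohms - BASELINE_OHMS) / BASELINE_OHMS

def pressure_command(resistance_ohms: float, current_kpa: float) -> float:
    """Active half of the hybrid scheme: counteract sensed deformation
    with a bounded proportional pressure adjustment."""
    strain = estimate_strain(resistance_ohms)
    target = current_kpa - GAIN * strain * 100.0  # oppose the disturbance
    return max(P_MIN, min(P_MAX, target))
```

In this toy version a larger sensed deformation produces a larger corrective pressure change, clamped to a safe envelope; the real system would additionally account for the fin's passive dynamics when choosing the gain.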
Reducing Energy Use in Turbulent Waters
Rigid AUVs typically expend substantial energy to maintain orientation when struck by waves or shifting currents. According to the researchers, the adaptive fin significantly improves disturbance rejection.
In controlled tests, the fin reduced unwanted buoyancy effects caused by sudden water flow by 87 percent compared with a similar vehicle using rigid fins. The robot demonstrated improved self-stabilization and maneuverability while consuming less energy to maintain position.
The findings suggest potential advantages for underwater inspection, environmental monitoring, and defense applications where energy efficiency and stability are critical.
Technical Constraints Remain
Despite promising results, integration challenges remain. Scaling the flexible system to larger vehicles and embedding it into rigid hull designs could complicate deployment. Long-term durability of the electronic skin and hydraulic components in harsh marine environments also requires further validation.
The researchers note that more robust actuators and structural refinements may help address these constraints.
The project illustrates how bio-inspired sensing and soft robotics are reshaping underwater vehicle design. As offshore energy, marine research, and subsea infrastructure monitoring expand, adaptive control systems such as this may become increasingly relevant to improving endurance and operational stability in dynamic ocean conditions.
MWC 2026 Marks Shift From AI Apps to AI Native Hardware
Mobile World Congress 2026 highlighted a decisive shift as AI moved beyond apps and into physical devices, from humanoid robots and AI glasses to smartphones with mechanical motion systems.
By Rachel Whitman | Edited by Kseniia Klichova
Humanoid robots, AI glasses and AI-integrated smartphones on display at MWC 2026 reflect a broader industry shift toward AI-native hardware design. Photo: MWC
Mobile World Congress 2026 underscored a structural change in the AI industry: artificial intelligence is no longer confined to apps running on smartphones. It is beginning to reshape the hardware itself.
Across the exhibition floor in Barcelona, companies presented humanoid robots controlled entirely by voice, AI glasses positioned as daily computing devices, and smartphones equipped with mechanical camera systems that physically move. The theme was consistent: large AI models are evolving from software layers into defining elements of device architecture.
Smartphone Makers Enter Robotics
Several Chinese smartphone manufacturers used MWC to demonstrate ambitions beyond handsets.
Honor unveiled its first humanoid robot during its global launch event, showcasing AI-driven motion control and multimodal interaction. The demonstration included acrobatic movements and coordinated choreography, signaling technical progress in embodied control systems.
Xiaomi, which introduced its CyberOne humanoid in 2022, did not display a robot on the show floor but reported new milestones. According to the company, its humanoid systems have begun operating in automotive factories, performing tasks such as self-tapping nut installation and material transport. Chairman Lei Jun said large-scale deployment in Xiaomi’s factories could occur within five years.
The move into robotics comes as smartphone growth slows. IDC estimates that China’s smartphone shipments reached roughly 284 million units in 2025, a slight year-on-year decline. For manufacturers with in-house chips, operating systems, and AI models, robotics represents an adjacent growth market built on overlapping technologies.
Lu Weibing, president of Xiaomi’s mobile division, has argued that investments in proprietary silicon, operating systems, and foundational AI are interconnected and transferable to robotics platforms.
Other technology firms are also advancing embodied systems. At MWC, iFlytek demonstrated a humanoid guide robot powered by upgraded multimodal voice interaction, eliminating the need for handheld remote controls. China Mobile presented an unmanned restaurant concept in which embodied robots collaborated on ordering, food preparation, and delivery.
These deployments suggest that large models are increasingly acting as real-time control interfaces rather than conversational add-ons.
AI Glasses and the Search for Monetization
While AI apps saw a surge in daily active users during China’s Spring Festival promotions, retention and revenue models remain uncertain. Several internet companies are now shifting attention toward AI hardware.
Alibaba’s Qwen brand introduced its first AI glasses at MWC, embedding large language models into wearable devices capable of translation, transcription, photography, and object recognition. The devices are positioned for both consumer and professional use.
IDC forecasts that global smart glasses shipments will exceed 23 million units by 2026, including nearly 5 million units in China. Compared with heavily subsidized AI apps, glasses offer a direct hardware revenue stream and clearer monetization path.
iFlytek also debuted lightweight AI glasses weighing approximately 40 grams, emphasizing multimodal recording and translation capabilities.
Redefining the Smartphone Form
AI integration is also altering the smartphone itself.
ZTE showcased AI-powered devices that embed assistants directly into the system layer, enabling cross-application control via natural language. Rather than functioning as standalone apps, these AI agents are integrated into core operating system workflows.
Honor introduced a more experimental concept: a “Robot Phone” featuring a motorized multi-axis gimbal paired with a 200-megapixel sensor. The device can physically rotate and track users during video calls, combining AI vision with mechanical motion.
The common thread across categories is the shift from AI-enabled hardware to AI-defined hardware. Large models are beginning to influence device structure, interaction methods, and mechanical design.
MWC 2026 did not present a single dominant form factor. Instead, it revealed a competitive search for the most natural interface between AI systems and the physical world. Whether that interface proves to be humanoid robots, wearable glasses, or reengineered smartphones remains unsettled. What is clear is that AI is no longer just inside devices. It is beginning to shape what those devices become.
Georgia Tech Researchers Develop Robot Pollinator for Indoor Farms
Researchers at Georgia Tech have developed a robot pollinator that uses computer vision and 3D modeling to automate flower pollination in indoor farms.
By Laura Bennett | Edited by Kseniia Klichova
A prototype robot pollinator developed at Georgia Tech uses computer vision to determine flower orientation before performing targeted pollination. Photo: Georgia Tech Research Institute
Researchers at Georgia Tech have developed a robotic system designed to automate pollination inside indoor farms, addressing one of the most labor-intensive challenges in vertical agriculture.
The prototype, created by engineers at the Georgia Tech Research Institute (GTRI) and the George W. Woodruff School of Mechanical Engineering, uses computer vision and robotic manipulation to pollinate flowering plants without human intervention.
As indoor farming expands in urban environments, automating pollination has become a critical bottleneck in scaling production.
Pollination without Bees
Indoor farms offer several advantages over traditional agriculture, including year-round production, reduced water use, and minimal pesticide reliance. However, enclosed growing environments prevent natural pollinators such as bees from accessing crops.
For many flowering plants grown indoors – including strawberries and tomatoes – farmers must manually transfer pollen using brushes or vibrating tools. The process is repetitive and time-consuming, limiting scalability.
The Georgia Tech team’s robot is designed to pollinate plants that contain both male and female reproductive structures within the same flower. These plants require pollen transfer within a single bloom rather than cross-pollination between separate flowers.
By automating this step, researchers aim to reduce labor demands and increase consistency in crop yields.
Teaching a Robot to Understand Flower Orientation
One of the central technical challenges was enabling the robot to recognize the “pose” of each flower – its orientation, symmetry, and position relative to the stem.
Accurate pose detection is critical because pollen must be delivered precisely to the reproductive structures at the front of the flower. Even small alignment errors can reduce pollination effectiveness.
To solve this, the team developed a computer vision pipeline that reconstructs flowers in 3D from multiple camera images. The 3D model is then converted into depth-enhanced 2D representations that can be processed by object detection algorithms.
The researchers used a real-time object detection system known as YOLO (You Only Look Once) to classify flower features in a single processing pass. By converting 3D data into structured 2D inputs, they leveraged the abundance of training resources available for 2D computer vision systems.
The approach enabled the robot to estimate flower orientation with sufficient precision to approach and manipulate the stem correctly.
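The conversion step at the heart of this pipeline can be sketched as follows: a metric depth map from the 3D reconstruction is normalized and stacked into the three-channel layout a 2D detector such as YOLO expects. The encoding below is an assumption for demonstration, not the exact representation used by the GTRI team.

```python
# Illustrative sketch: project 3D depth data into a depth-enhanced 2D
# image that a standard 2D object detector can consume.
import numpy as np

def depth_to_detector_input(depth_m: np.ndarray,
                            near: float = 0.1,
                            far: float = 1.0) -> np.ndarray:
    """Normalize a metric depth map into an 8-bit single channel and
    stack it into the H x W x 3 layout 2D detectors expect."""
    clipped = np.clip(depth_m, near, far)
    norm = (clipped - near) / (far - near)   # 0.0 (near) .. 1.0 (far)
    channel = (norm * 255).astype(np.uint8)
    return np.stack([channel] * 3, axis=-1)
```

A detector trained on such images can then localize flower features in a single forward pass, and the resulting detections are mapped back onto the 3D model to recover the flower's pose.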
From Detection to Physical Interaction
Once the robot identifies the flower’s pose, it grips the stem and applies controlled vibration to dislodge and distribute pollen within the bloom.
Unlike simple mechanical vibration tools, the system integrates perception, positioning, and actuation into a single workflow. This coordination is essential in dense vertical farming environments where flowers vary in size, spacing, and orientation.
The prototype was built in Georgia Tech’s Safe Robotics Lab and remains in testing.
Adding Microscopic Feedback
Beyond basic pollination, the system includes an inspection capability that allows it to evaluate pollination success. The robot can perform close-up imaging of flower structures to assess whether pollen has been effectively transferred.
This feedback loop is a notable feature, as most manual pollination methods offer no immediate verification of success.
The research team has documented its technical approach in a paper accepted to the 2025 International Conference on Robotics and Automation (ICRA).
Automation Expands in Controlled Agriculture
Indoor farming is often promoted as a solution to urban food supply challenges and climate variability. However, high labor costs and operational complexity have slowed widespread adoption.
Automating tasks such as pollination could help reduce those barriers. Robotics in agriculture has traditionally focused on harvesting and monitoring, but pollination represents a more delicate and technically demanding process.
The Georgia Tech prototype demonstrates how advances in AI perception and robotic control can be applied to biological systems.
While the system remains in early development, it illustrates how robotics may increasingly support food production in controlled environments – where precision, repeatability, and data-driven feedback are essential for scaling output.
Revobots Launches All-Weather Autonomous Patrol Robot for Outdoor Security
Revobots has introduced TASKBOT SCOUT XT, an all-weather autonomous patrol robot designed for outdoor enforcement and campus monitoring under a Robots-as-a-Service model.
By Daniel Krauss | Edited by Kseniia Klichova
Revobots’ TASKBOT SCOUT XT is designed for outdoor patrol, featuring an all-wheel-drive chassis and weather-resistant enclosure. Photo: Campus Innovation
Revobots has introduced an all-weather version of its autonomous patrol robot, expanding its security robotics platform beyond indoor facilities and into outdoor environments.
The new system, called TASKBOT SCOUT XT, is engineered for exterior enforcement and monitoring tasks across campuses, parking lots, and mixed-use spaces. The Phoenix-based company says the robot is designed to address one of the longstanding limitations of autonomous patrol systems: reliable operation in unpredictable weather and uneven terrain.
The launch reflects growing demand for robotics solutions that can supplement security staffing in environments where labor shortages and operational costs continue to rise.
Hardware Upgrades for Outdoor Deployment
SCOUT XT builds on Revobots’ indoor patrol platform but incorporates significant hardware modifications to withstand environmental exposure.
The robot features an IP65-rated enclosure designed to protect against dust and water ingress. Its extended-wheelbase, all-wheel-drive chassis is intended to provide stability across uneven pavement, gravel, and surface transitions.
Outdoor-calibrated vision systems allow the robot to operate in variable lighting conditions, including bright daylight and low-light evening environments. Longer-range perception capabilities are designed to accommodate open spaces with fewer visual landmarks than indoor corridors.
All-terrain wheels further support navigation across cracked pavement, curb transitions, and mixed surfaces common in parking facilities and campus grounds.
Autonomous Operation with Human Oversight
SCOUT XT operates on Revobots’ existing backend infrastructure, including its Robots-as-a-Service subscription model and REVO Pilot human-in-the-loop oversight system.
By default, the robot navigates autonomously, using onboard AI to conduct patrol routes and monitor designated areas. When conditions exceed predefined thresholds – such as ambiguous detections or unusual environmental scenarios – the system can escalate to human supervisors for intervention.
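The escalation logic described above follows a common human-in-the-loop pattern, sketched below. The threshold value and event labels are illustrative assumptions, not details of Revobots' REVO Pilot system.

```python
# Minimal sketch of a human-in-the-loop escalation policy; thresholds
# and labels are hypothetical, not Revobots' actual implementation.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.6   # below this, a detection counts as "ambiguous"

@dataclass
class Detection:
    label: str
    confidence: float

def next_action(det: Detection, known_labels: set) -> str:
    """Decide whether the robot continues autonomously or escalates
    to a remote human supervisor."""
    if det.label not in known_labels:
        return "escalate"             # unusual scenario: unmodeled object
    if det.confidence < CONFIDENCE_FLOOR:
        return "escalate"             # ambiguous detection
    return "continue_patrol"
```

The design choice is that the robot handles the routine cases itself and routes only the exceptional ones to a person, which is what makes the accountability story workable in security deployments.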
This hybrid autonomy model is increasingly common in commercial robotics deployments, particularly in security applications where accountability and reliability are critical.
Campus Deployment Highlights Practical Use Case
Revobots said SCOUT XT recently completed pilot testing at Xavier University in Cincinnati. During the trial, the robot supported automated license plate recognition enforcement across multiple campus parking areas.
The deployment was designed to expand monitoring coverage without increasing staffing levels, a key consideration for educational institutions and other organizations managing large facilities.
Integration with existing campus infrastructure was supported through collaboration with Campus Innovation and its C-Park platform.
The university pilot demonstrates how outdoor patrol robots can supplement traditional security operations, particularly in structured environments such as campuses, business parks, and residential communities.
Expanding the Scope of Security Robotics
Autonomous security robots have typically been deployed indoors, where environmental variables are more predictable. Extending patrol capabilities outdoors introduces challenges including weather exposure, uneven terrain, and dynamic lighting.
By adapting its existing platform rather than building an entirely new system, Revobots is pursuing incremental expansion of its task-adaptive robotics model.
The broader security robotics market is evolving toward service-based deployment models, where customers subscribe to robotics coverage rather than purchase hardware outright. This approach lowers upfront costs and allows providers to maintain centralized oversight and software updates.
As robotics companies seek commercially viable applications, outdoor patrol represents a practical step toward broader real-world autonomy.
While fully autonomous security operations remain a long-term ambition, platforms like SCOUT XT illustrate how robotics companies are addressing specific operational gaps – expanding coverage, improving consistency, and reducing reliance on human patrol staffing in large, open environments.