Monthly Archives: March 2026
NASA Valkyrie Humanoid Robot Returns to US after Decade of Research
NASA’s Valkyrie humanoid robot is returning to the United States after ten years of research at the University of Edinburgh, marking the next phase in the development of robots designed for future planetary missions.
A humanoid robot originally developed for future Mars missions is returning to the United States after spending a decade in Scotland supporting robotics research.
The robot, known as Valkyrie, has been based at the University of Edinburgh since 2016 as part of a collaboration with NASA. The platform helped researchers advance technologies related to mobility, perception, and human-robot interaction before being transferred back to NASA’s Johnson Space Center in Texas for further development.
Standing roughly 1.8 meters tall and weighing about 125 kilograms, Valkyrie remains one of only a handful of humanoid research robots built for extreme environments such as planetary exploration.
Its return marks a new phase in the project, as NASA continues evaluating how humanoid robots could assist astronauts on future missions to the Moon and Mars.
A Research Platform for Space Robotics
Valkyrie was originally developed as part of NASA’s effort to build robots capable of preparing infrastructure in hazardous environments before human crews arrive.
The robot’s human-like structure allows it to operate in environments designed for people, using industrial tools, ladders, and access points built for human workers. That design principle has long been central to humanoid robotics research: rather than redesigning environments for machines, robots are built to function within existing human systems.
The platform incorporates a range of sensors and Series Elastic Actuators, a type of joint mechanism designed to enable safer physical interaction between robots and humans while maintaining precise control.
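The idea behind Series Elastic Actuators can be illustrated with a short calculation: a spring of known stiffness sits between the motor and the joint, so measuring the spring's deflection gives a direct estimate of output torque, which is what makes force control around humans practical. The sketch below is illustrative only; the stiffness and angle values are assumptions, not Valkyrie's actual specifications.

```python
def sea_output_torque(motor_angle_rad: float, joint_angle_rad: float,
                      spring_stiffness_nm_per_rad: float) -> float:
    """Estimate joint torque from spring deflection (Hooke's law).

    In a Series Elastic Actuator, a compliant element sits between
    the motor and the load; its measured deflection yields a direct
    torque estimate, enabling compliant, safer physical interaction.
    """
    deflection = motor_angle_rad - joint_angle_rad
    return spring_stiffness_nm_per_rad * deflection

# Illustrative values: 0.01 rad of deflection across a 500 Nm/rad spring
torque = sea_output_torque(1.01, 1.00, 500.0)  # ~5 Nm of output torque
```

Because the torque signal comes from a mechanical measurement rather than motor current alone, the joint can be commanded to yield when it meets unexpected resistance, such as contact with a person.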
When the robot arrived in Edinburgh nearly a decade ago, its capabilities were relatively limited. It could walk on flat surfaces and perform basic manipulation tasks such as grasping objects.
Over time, researchers enhanced the system using artificial intelligence and machine learning techniques that improved balance, perception, and decision-making. The work focused on helping the robot interpret sensor data, maintain stability on uneven terrain, and translate visual information into coordinated physical actions.
A Decade of Progress in Humanoid Robotics
At the time Valkyrie was deployed in Edinburgh, humanoid robots were still largely confined to research laboratories. Commercial systems had yet to emerge, and only a small number of prototypes existed worldwide.
The robot provided researchers with a rare experimental platform for studying how humanoids move, maintain balance, and interact with people in real-world environments.
According to researchers involved in the program, the project helped train a generation of roboticists while contributing to the broader development of humanoid robotics.
Vladimir Ivan, who worked on the Valkyrie project as a student and now serves as chief technical officer of robotics start-up Touchlab, described hosting the robot as a unique opportunity during a period when advanced humanoid systems were largely inaccessible to academic labs.
The presence of the NASA robot also helped strengthen the University of Edinburgh’s role as a global hub for robotics research, supporting collaborations between academia, industry, and emerging robotics companies.
Influencing the Next Generation of Robots
Beyond its role in academic research, Valkyrie has also influenced newer humanoid platforms.
Elements of the robot’s architecture have informed the development of modern systems, including humanoid designs emerging from robotics companies that are beginning to transition the technology from laboratories into industrial environments.
One example is Apptronik’s Apollo robot, which draws on research lineage connected to earlier NASA humanoid programs.
Meanwhile, research in Edinburgh has continued with newer platforms such as the Talos humanoid robot, which scientists use to study advanced locomotion, manipulation, and collaborative interaction between humans and machines.
This research includes work on dyadic human-robot interaction, where robots and people cooperate directly to complete tasks. Such approaches could eventually support applications ranging from assisted living technologies to rehabilitation systems.
What Valkyrie’s Return Signals
The return of Valkyrie to NASA comes at a moment when humanoid robotics is moving rapidly from academic experimentation toward commercial deployment.
Companies across the United States, Europe, and Asia are developing humanoid machines for factories, logistics centers, and service environments. At the same time, space agencies are continuing to explore how similar technologies could support exploration missions.
For NASA, robots like Valkyrie remain part of a long-term strategy to extend human capabilities in hazardous environments. Humanoid machines could eventually assemble infrastructure, maintain equipment, or conduct preliminary exploration before astronauts arrive.
The past decade of research in Edinburgh helped advance the fundamental technologies behind that vision.
Now, as Valkyrie returns to the United States, the robot is expected to play a role in the next stage of development as space agencies and robotics companies alike push toward machines capable of operating far beyond Earth.
BMW Begins Humanoid Robot Pilot at Leipzig Factory
BMW has launched its first humanoid robot pilot project in Germany at its Leipzig plant, testing how AI-driven machines can support workers in battery assembly and logistics tasks on the factory floor.
BMW is introducing humanoid robots into production at its Leipzig plant in Germany, marking the company’s first such deployment in Europe as automakers explore how artificial intelligence can move beyond software and into physical industrial work.
The pilot project centers on a robot called AEON, developed by Hexagon’s robotics division, which is being tested to assist workers with logistics and repetitive tasks inside the factory. BMW says the initiative forms part of a broader strategy to integrate what it describes as physical AI directly into manufacturing operations.
For an industry already heavily automated with fixed industrial robots, humanoid machines represent a different approach: mobile systems designed to operate alongside people and adapt to multiple tasks rather than performing a single programmed action.
From Industrial Automation to Physical AI
Industrial robots have long been a cornerstone of automotive manufacturing, performing highly precise tasks such as welding, bonding, and component handling. These machines are typically fixed in place and optimized for specific operations.
Humanoid robots are being explored as a complementary layer of automation. Their human-like form allows them to navigate factory environments originally designed for people and perform tasks that require mobility or flexible manipulation.
AEON stands about 1.65 meters tall and weighs roughly 60 kilograms. Instead of walking, it moves through the factory on wheeled legs, allowing it to travel at speeds of up to 2.5 meters per second while carrying materials and avoiding obstacles.
At the Leipzig plant, the robot is initially being tested in high-voltage battery assembly and component manufacturing, where it will handle repetitive tasks and transport materials along production lines.
BMW executives say the goal is not to replace workers but to support them by taking over physically demanding or repetitive jobs.
“Digitalisation makes our production more competitive both in Europe and globally,” said Milan Nedeljković, BMW’s board member responsible for production. “The combination of engineering expertise and artificial intelligence opens up entirely new opportunities in manufacturing.”
Building the Infrastructure for AI-Driven Factories
The Leipzig pilot is supported by a new internal competence center dedicated to physical AI in production. The group brings together robotics and AI specialists tasked with evaluating new technologies and integrating them into BMW’s global manufacturing network.
Behind the robots is a broader digital architecture the company has been developing for several years. BMW has restructured its factory IT systems into a unified data platform that connects production information across different systems and plants.
This infrastructure enables AI agents to analyze data, make decisions, and control machines across the production environment. When combined with robotic hardware and autonomous transport systems, the result is what BMW describes as physical AI: intelligent systems capable of perceiving, deciding, and acting directly on the factory floor.
Digital twins, AI-assisted quality inspection systems, and autonomous logistics robots are already part of that ecosystem. Humanoid robots represent the next step, bringing adaptable automation to tasks that previously required human mobility.
Lessons from BMW’s Earlier Robot Experiments
The Leipzig deployment follows earlier humanoid robot testing at BMW’s Spartanburg plant in South Carolina.
In that pilot project, conducted with robotics company Figure AI, a humanoid robot called Figure 02 assisted in vehicle body production. Over a ten-month trial, the system helped assemble more than 30,000 BMW X3 vehicles by retrieving and positioning sheet metal parts for welding.
According to BMW, the robot handled more than 90,000 components during the pilot and logged approximately 1,250 operating hours. The project also provided insights into safety systems and factory connectivity, including the need for improved wireless coverage and new safety barriers around collaborative work zones.
Engineers found that once the robot learned motion sequences in a test environment, it could reliably repeat them on the production line with millimeter-level precision.
The lessons from Spartanburg are now shaping how BMW approaches humanoid robotics in Europe.
Testing How Humanoid Robots Fit into Production
The Leipzig pilot is designed not only to evaluate the robot itself but also to determine how humanoid systems can integrate into existing manufacturing workflows.
Rather than assigning a single task permanently, BMW engineers are experimenting with different roles for AEON across battery module production and component manufacturing.
Factory workers are involved in the process, helping determine which tasks are suitable for humanoid robots and how workstations may need to change to accommodate them.
The rollout is proceeding gradually. After initial testing and laboratory validation, AEON began operating inside the Leipzig plant in late 2025. Further trials are scheduled throughout 2026 as BMW evaluates whether the technology can move toward broader deployment.
What This Signals for Automotive Manufacturing
Automakers have long been among the largest users of industrial robotics, but humanoid robots introduce a new category of automation that could reshape how factories are designed.
Because these robots can navigate environments built for human workers, they could allow manufacturers to automate tasks without redesigning entire production lines.
That flexibility may become increasingly important as electric vehicles, battery systems, and customized production introduce more variability into manufacturing processes.
BMW’s experiments also highlight a broader shift across the robotics industry toward physical AI, where machine learning systems control real-world machines rather than operating purely in software.
Whether humanoid robots will become common on factory floors remains uncertain. But pilot projects like those in Spartanburg and Leipzig suggest manufacturers are beginning to test how these systems might work alongside traditional industrial automation.
ABB and NVIDIA Partner to Bring Physical AI Simulation to Factory Robots
ABB Robotics is integrating NVIDIA Omniverse simulation technology into its RobotStudio platform to train factory robots using synthetic data and digital twins, aiming to close the long-standing gap between virtual training and real-world deployment.
Industrial robotics has long relied on simulation to design production lines and program machines, but translating those virtual models into reliable real-world performance has remained a persistent challenge.
ABB Robotics and NVIDIA say they are taking a significant step toward closing that gap. The companies announced a collaboration that integrates NVIDIA’s Omniverse simulation libraries into ABB’s RobotStudio platform, enabling manufacturers to train AI-driven robots in physically accurate digital environments before deploying them on factory floors.
The partnership reflects a growing push across the robotics industry to apply artificial intelligence not only to perception and decision-making but also to the way robots are designed, trained, and validated before they enter production.
Closing the Simulation to Reality Gap
Simulation has been a standard tool in manufacturing for decades, allowing engineers to model production lines and test automation systems without disrupting real operations. But even sophisticated digital twins have struggled to reproduce the complexities of real-world factory environments.
Differences in lighting, materials, object behavior, and mechanical tolerances often produce what engineers call the “sim-to-real gap”, where robots trained in simulation behave differently once deployed.
ABB and NVIDIA say combining RobotStudio’s robot programming environment with NVIDIA’s physically accurate Omniverse simulation tools could significantly reduce that gap. Developers will be able to generate synthetic training data and train AI models in detailed digital twins of factories, then transfer those models directly to physical robots.
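One widely used technique for generating robust synthetic training data is domain randomization: varying simulation parameters such as lighting, friction, and object mass across training episodes so a learned policy does not overfit to one idealized digital twin. The sketch below is a minimal illustration of the idea; the parameter names and ranges are assumptions, not ABB's or NVIDIA's actual values.

```python
import random

def randomized_episode_config(seed=None):
    """Sample one simulation episode's physical parameters.

    Randomizing these per episode exposes a learned policy to the
    variation it will meet on a real factory floor, narrowing the
    sim-to-real gap.
    """
    rng = random.Random(seed)
    return {
        "light_intensity_lux": rng.uniform(200, 1500),   # dim to bright shop floor
        "friction_coefficient": rng.uniform(0.3, 0.9),   # worn vs. grippy surfaces
        "object_mass_kg": rng.uniform(0.45, 0.55),       # manufacturing tolerance
        "camera_noise_std": rng.uniform(0.0, 0.02),      # sensor noise level
    }

# Generate a batch of varied episode configurations for training
configs = [randomized_episode_config(seed=i) for i in range(1000)]
```

Each configuration would drive one simulated episode; a model trained across the full batch learns behavior that tolerates the spread of conditions rather than a single calibrated scene.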
ABB said the combined system could achieve simulation accuracy of up to 99 percent when transitioning from virtual training to real-world operation.
A key element is ABB’s virtual controller technology, which runs the same firmware used in physical robots. That allows developers to test programs in simulation under conditions that closely mirror real machines. When paired with the company’s Absolute Accuracy calibration technology, which reduces positioning errors to roughly half a millimeter, the platform is intended to support high-precision industrial tasks.
Digital Training for Complex Manufacturing
The companies say improved simulation accuracy could significantly accelerate the way factories deploy new automation systems.
Using digital twins and synthetic data, engineers can design and test production lines virtually before installing physical equipment. ABB estimates this approach could cut commissioning time by up to 80 percent while reducing development costs by as much as 40 percent by eliminating physical prototypes.
For industries with fast product cycles, such as consumer electronics, the ability to simulate entire production processes before deployment could also shorten time to market.
Foxconn, the world’s largest contract electronics manufacturer, is already piloting the technology for assembly processes that require extremely precise manipulation of small components. The company is using virtual training environments to simulate multiple product variants and production scenarios before robots are deployed on real assembly lines.
Dr. Zhe Shi, chief digital officer at Foxconn, said the approach allows manufacturers to run engineering and production planning in parallel rather than sequentially, accelerating factory ramp-up for new devices.
A New Layer of Physical AI for Industry
The collaboration also highlights how large robotics vendors are beginning to integrate AI infrastructure directly into industrial automation platforms.
ABB is exploring the use of NVIDIA’s Jetson edge computing modules within its Omnicore robot controller, enabling real-time AI inference directly on industrial robots. That could allow robots to adapt to changing environments, interpret visual data, and modify tasks dynamically without relying on centralized computing.
The companies say the integrated platform will be released to ABB’s roughly 60,000 RobotStudio users in the second half of 2026 as a new offering called RobotStudio HyperReality.
Beyond large manufacturers, the technology is also being positioned for smaller factories. WORKR, a U.S.-based robotics workforce company, plans to demonstrate how robots trained using synthetic data and Omniverse simulation can be deployed without traditional programming, allowing operators to teach new tasks quickly.
The demonstration, scheduled at NVIDIA’s GTC conference, reflects a broader trend in robotics toward simplifying deployment for manufacturers that lack specialized automation engineers.
What This Signals for the Robotics Industry
The collaboration underscores a growing convergence between industrial robotics and the AI infrastructure ecosystem.
Historically, factory automation focused primarily on mechanical precision and deterministic programming. But as robots take on more flexible tasks, manufacturers increasingly need systems capable of learning from data and adapting to complex environments.
Physically accurate simulation environments may become a critical step in that process. Training robots in large-scale digital environments allows developers to generate enormous datasets and refine AI models before robots interact with real machinery or workers.
For companies like ABB, the approach could reshape how industrial automation systems are designed, shifting much of the engineering process into virtual environments.
If widely adopted, the combination of AI training, digital twins, and robotics simulation could make factory automation faster to deploy and easier to scale, potentially accelerating the spread of intelligent robotics across global manufacturing.
Samsung SDI Introduces Solid State Batteries Designed for Humanoid Robots
Samsung SDI plans to showcase a pouch-style all-solid-state battery designed for humanoid robots at InterBattery 2026 in Seoul, signaling a push to adapt next-generation battery technology for physical AI systems.
The rapid emergence of humanoid robots and other autonomous machines is forcing a new question across the robotics industry: how to power mobile systems that must operate safely for long periods while carrying increasingly complex computing hardware.
At InterBattery 2026 in Seoul, Samsung SDI plans to publicly present a pouch-style all-solid-state battery designed specifically for robotics applications, marking the company’s first demonstration of solid-state technology aimed at physical AI systems rather than electric vehicles.
The announcement reflects a broader shift in the battery sector as manufacturers begin adapting next-generation energy storage technologies for emerging robotics markets, where reliability, safety, and weight constraints differ significantly from those of passenger vehicles.
Expanding Solid State Batteries Beyond Electric Vehicles
Samsung SDI has spent years developing prismatic all-solid-state batteries primarily for electric vehicles, an area where automakers have long sought higher energy density and improved safety compared with conventional lithium-ion cells.
The company now intends to extend the technology into new form factors suited to robotics and other mobility platforms. For the robotics sector, Samsung SDI is introducing a pouch-style design, which the company says reduces overall weight while maintaining stable power output and improved safety characteristics.
Solid-state batteries replace the liquid electrolyte used in conventional lithium-ion batteries with a solid material, reducing the risk of leakage or thermal runaway while potentially enabling higher energy density. Those properties are especially relevant for humanoid robots, which must balance energy storage with strict limits on weight and thermal management.
Unlike industrial robots connected to fixed power sources, mobile robots depend on compact onboard batteries to support both actuation systems and increasingly demanding artificial intelligence workloads.
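The weight-versus-runtime trade-off can be made concrete with back-of-the-envelope arithmetic: runtime is roughly onboard energy (pack mass times energy density) divided by average power draw. The figures below are illustrative assumptions for a generic humanoid, not Samsung SDI specifications or measured robot power budgets.

```python
def runtime_hours(pack_mass_kg, energy_density_wh_per_kg, avg_power_w):
    """Rough runtime estimate: stored energy (Wh) / average draw (W)."""
    return pack_mass_kg * energy_density_wh_per_kg / avg_power_w

# Assumed: a 5 kg pack powering ~500 W of actuation, sensing, and compute.
li_ion = runtime_hours(5, 250, 500)        # conventional lithium-ion density
solid_state = runtime_hours(5, 400, 500)   # assumed higher solid-state density
print(li_ion, solid_state)  # 2.5 vs 4.0 hours from the same pack mass
```

The same arithmetic can be run the other way: holding runtime constant, a higher-density cell lets designers shave pack mass, which in a walking robot also reduces the power needed for locomotion.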
Why Energy Storage Is Becoming a Robotics Bottleneck
Battery design has quietly become one of the central constraints in humanoid robotics development. Many emerging robots rely on electric motors, advanced sensors, and onboard AI processors that collectively require large amounts of power.
This challenge is particularly visible in humanoid platforms being developed by companies across Asia, the United States, and Europe. Maintaining balance, locomotion, and real-time perception requires continuous computation, while real-world deployments demand operating times that extend well beyond short demonstration cycles.
Energy density therefore plays a direct role in whether robots can move from controlled demonstrations to practical industrial or service applications.
Solid-state batteries offer one potential pathway to addressing that constraint. Higher energy density could allow robots to run longer between charging cycles, while improved thermal stability may simplify safety requirements for systems operating near people.
Samsung SDI said its pouch-format solid-state battery is intended not only for humanoid robots but also for aviation platforms and wearable technologies, suggesting that robotics may become part of a broader category of emerging mobility devices.
What This Signals for the Physical AI Industry
The move underscores how battery manufacturers are beginning to recognize robotics as a distinct energy market alongside electric vehicles and consumer electronics.
Humanoid robotics has attracted growing investment over the past two years, with companies racing to develop machines capable of performing tasks in warehouses, factories, and logistics operations. These systems require compact power systems capable of sustaining both mechanical motion and advanced AI workloads.
Battery suppliers that can adapt their technology to these requirements may find new demand emerging alongside the growth of physical AI systems.
For Samsung SDI, the InterBattery demonstration appears to represent an early step in exploring that opportunity. While the company is targeting mass production of its prismatic solid-state batteries for electric vehicles in the second half of next year, the robotics-focused pouch cell suggests a parallel effort to diversify how the technology is applied.
If humanoid robots reach large-scale deployment in the coming decade, the energy systems that power them could become as strategically important as the AI models that control them.
Italian Startup Mirai Robotics Raises $4.2M to Build Autonomous Robot Ships
Italian startup Mirai Robotics has raised $4.2 million in pre-seed funding to develop autonomous maritime systems, aiming to deploy fleets of robot ships for security, monitoring, and commercial operations.
An Italian robotics startup is betting that the next frontier for autonomy will not be on roads but at sea.
Mirai Robotics, a Puglia-based company developing autonomous maritime systems, has raised $4.2 million in pre-seed funding to expand its fleet of robotic vessels and supporting AI software. The round was led by Primo Ventures, Techshop, and 40Jemz Ventures, with participation from a group of international angel investors.
Founded in 2025 by CEO Luciano Belviso alongside CTO Luca Mascaro and entrepreneur Davide Dattoli, the company is building autonomous ships designed to operate continuously in complex maritime environments.
Autonomy Moves Into the Maritime Domain
Mirai has already developed two autonomous vessels alongside a software stack that includes perception systems, navigation tools, and remote supervision capabilities. The platform can operate as a fully autonomous vessel or be integrated into existing ships as an autonomy layer.
Belviso argues that the maritime sector is overdue for technological transformation.
“The ocean plays a huge role in our global economy, but it’s also a hugely vulnerable domain,” he said. “A fully human-centric model is struggling to sustain continuous, safe and scalable operations.”
Autonomous ships could enable persistent monitoring of coastlines, offshore infrastructure, and subsea communication cables. These systems are particularly attractive for defense and security use cases, where continuous surveillance is difficult to maintain with human crews.
Safety, Labor Shortages and Efficiency
The case for autonomous shipping rests on several structural pressures affecting the maritime sector.
Human error remains the leading cause of maritime accidents. According to Allianz insurance data cited by industry analysts, roughly three-quarters of incidents at sea involve human mistakes. Removing crew from certain operations could significantly reduce these risks.
Labor shortages are another driver. The global shipping industry faces a deficit of roughly 90,000 seafarers, according to the BIMCO/ICS Seafarer Workforce Survey. Autonomous vessels could help operators maintain operations despite shrinking maritime workforces.
Efficiency and sustainability are also increasingly important. With new emissions reduction requirements from the International Maritime Organization, shipping companies are under pressure to optimize fuel consumption and reduce greenhouse gas output. AI-powered route planning and automated vessel control could improve operational efficiency while lowering costs.
A Growing Market for Autonomous Shipping
Mirai enters a market that is still emerging but expanding rapidly. The global autonomous shipping sector was valued at approximately $7.8 billion in 2025 and could exceed $24 billion by 2034, according to estimates from Polaris Market Research.
Several large players are already experimenting with autonomous vessels. Norway’s Yara Birkeland is widely considered the world’s first autonomous container ship, while Japan and South Korea are running national-scale trials of AI-enabled maritime navigation systems.
Mirai’s founders believe startups can compete effectively in the sector by focusing on software and robotics capabilities rather than traditional shipbuilding.
Large marine engineering firms have deep expertise in vessel design and heavy industry, Belviso noted, but often lack experience in AI-driven autonomy systems.
Building a European Hub for Marine Robotics
Located in southern Italy near key Mediterranean shipping routes, Mirai aims to position itself as a European center for maritime autonomy research and development.
The company is already in early discussions with potential customers and government programs as it prepares to scale its technology.
Investors see the sector as approaching a turning point. Primo Ventures partner Gianluca Dettori described the maritime domain as “a huge economy still operating with models designed decades ago,” adding that autonomous systems could become the foundational infrastructure enabling safer and more efficient ocean operations.
If that shift materializes, fleets of robotic ships may soon join autonomous vehicles and aerial drones as the next major frontier for physical AI.
Surgeon in London Removes Prostate via Robot 1,500 Miles Away in Gibraltar
A surgeon in London remotely controlled a robotic system to remove a prostate from a patient in Gibraltar, demonstrating how teleoperated robotics could expand access to specialized surgical care.
A surgeon in London has successfully removed a patient’s prostate using a robotic system located roughly 1,500 miles away in Gibraltar, highlighting the growing potential of long-distance robotic surgery, reports The Guardian.
The procedure involved a 62-year-old patient, Paul Buxton, who underwent a robotic prostatectomy at St Bernard’s Hospital in Gibraltar while the operation was conducted remotely from London’s Harley Street district. The surgery was performed by Prof. Prokar Dasgupta, a leading urologist and head of the robotic centre of excellence at The London Clinic.
Using a specialized surgical console, Dasgupta controlled the Toumai Robotic System developed by Shanghai-based MicroPort’s MedBot. The robot, equipped with four articulated arms and a high-definition 3D camera, executed the delicate procedure inside the operating theatre while transmitting real-time visual feedback to the surgeon.
A Milestone in Remote Robotic Surgery
The operation relied on a high-speed fibre optic connection linking London and Gibraltar, supported by a backup 5G network to ensure continuity in case of connectivity issues. According to the medical team, the system maintained a delay of only 60 milliseconds between the surgeon’s commands and the robot’s movements.
“We operated on an NHS patient in Gibraltar from the London Clinic 2,400km away using a robot with a 3D HD camera with four arms,” Dasgupta said. “The robot is completely controlled from a console using high-speed lines with a time delay of only 0.06 seconds.”
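The quoted 60-millisecond figure is consistent with simple propagation arithmetic: light in optical fibre travels at roughly two-thirds its vacuum speed, so pure round-trip propagation over 2,400 km accounts for less than half the delay, leaving budget for video encoding, routing, and control processing. A rough check, with the fibre speed as an approximate assumption:

```python
FIBER_SPEED_KM_PER_S = 200_000  # roughly 2/3 of the speed of light in vacuum

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in fibre, ignoring routing and encoding."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

rtt = round_trip_ms(2400)
print(rtt)  # 24.0 ms of pure propagation, leaving ~36 ms for other overheads
```

The round trip matters because the loop is closed end to end: the surgeon's command travels one way and the video feedback confirming the robot's motion travels back.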
Medical staff were present in the operating room in Gibraltar throughout the procedure, prepared to intervene if the connection failed or complications arose. The surgery was completed successfully, and Buxton reported feeling “fantastic” within days of the operation.
For the patient, who has lived in Gibraltar for four decades, the alternative would likely have involved travelling to the United Kingdom for treatment and spending weeks waiting for surgery.
“If I hadn’t gone for the telesurgery in Gibraltar, I would have had to fly to London and go on the NHS waiting list,” Buxton said, adding that taking part in the procedure felt like being involved in “medical history.”
Rapid Growth of the Toumai Surgical Robot
The Toumai system used in the operation is part of a rapidly expanding global surgical robotics platform. According to its developer, Toumai has surpassed 200 commercial orders worldwide across nearly 50 countries and regions, with close to 130 systems already installed in hospitals.
Adoption has accelerated quickly, doubling from just over 100 orders in late 2025 to more than 200 within a few months. Growth has been particularly strong in emerging healthcare markets such as India and Brazil, while hospitals in developed markets including Spain and Australia are also expanding deployments.
The platform has supported thousands of procedures across urology, thoracic surgery, general surgery, gynecology, and head and neck operations. Nearly 800 remote robotic surgeries have already been performed in more than 20 countries, all reportedly completed successfully.
Expanding Access to Specialist Care
Remote robotic surgery has long been viewed as a way to connect expert surgeons with patients in regions where specialized care is difficult to access. Instead of transporting patients long distances, the technology allows experienced doctors to perform procedures remotely while local medical teams provide on-site support.
Dasgupta said the potential humanitarian impact could be significant, particularly for patients in remote locations or smaller healthcare systems that lack highly specialized surgeons. “I think it is very, very exciting,” he said. “The humanitarian benefit is going to be significant.”
The success of the Gibraltar procedure comes as hospitals and technology providers worldwide continue experimenting with telesurgery, aided by improvements in robotics, fibre-optic networks, and low-latency communication systems. Dasgupta is expected to repeat the remote procedure soon while broadcasting it to thousands of surgeons attending a major European urology conference.
While remote surgery still requires rigorous safeguards and reliable connectivity, the milestone operation suggests that robotics may increasingly enable expert medical care to reach patients regardless of distance.
MWC 2026: HONOR Wins More than 70 Awards for AI Devices and Robotics
HONOR captured more than 70 media awards at Mobile World Congress 2026, highlighting its push into AI-powered devices, robotics concepts, and next-generation foldables.
At Mobile World Congress 2026 in Barcelona, Chinese technology company HONOR received more than 70 media and industry awards for its latest devices and AI initiatives, underscoring the company’s growing focus on artificial intelligence, robotics-inspired hardware, and next-generation foldable smartphones.
The recognition highlighted HONOR’s broader strategy around “Augmented Human Intelligence,” a vision aimed at integrating AI capabilities deeply into consumer electronics.
The company showcased several new products and concepts at the event, including the Magic V6 foldable smartphone, the MagicPad 4 tablet, the MagicBook Pro 14 laptop, and experimental AI-driven hardware concepts such as its “Robot Phone.”
Robotics Concepts and Embodied AI
One of the most discussed demonstrations at the event was HONOR’s Robot Phone, a concept device designed to showcase how AI-powered hardware might physically interact with users in future devices.
The prototype combines robotic-style movement with AI-powered imaging and sensing capabilities. According to reports from major media outlets including Bloomberg and Reuters, the device represents an early experiment in embodied AI, where computing systems combine perception, reasoning, and physical interaction.
A video demonstration released during the event showed how the device can visually recognize objects and move to capture photos automatically. Coverage from Bloomberg, Reuters, and CNBC highlighted the concept as part of a wider shift among technology companies exploring physical AI devices.
Technology publications also discussed the device’s design approach. Engadget described it as resembling a compact personal robot integrated into a smartphone form factor, while GadgetMatch suggested it offered a preview of devices that behave more like intelligent assistants than traditional electronics.
Magic V6 Highlights Foldable Innovation
Alongside its experimental concepts, HONOR also received strong recognition for its flagship Magic V6 foldable smartphone, which reviewers widely described as one of the most advanced foldable devices introduced at the event.
Technology outlets including TechRadar, Android Authority, and Trusted Reviews praised the device for its thin design, durability improvements, and advanced battery technology.
The smartphone incorporates a silicon-carbon battery, a newer battery chemistry that improves energy density while allowing thinner device designs. According to reports from Stuff and GSMArena, the device also runs on Qualcomm’s Snapdragon 8 Elite Gen 5 processor.
HONOR’s battery innovation also received the Best Disruptive Device Innovation award at the Global Mobile Awards (GLOMO), presented by the GSMA during the event’s annual ceremony, according to the official awards announcement.
Expanding AI Device Ecosystem
Beyond smartphones, HONOR used the Barcelona event to expand its broader AI device ecosystem.
The MagicPad 4 tablet and MagicBook Pro 14 laptop were presented as productivity-focused devices designed to integrate tightly with AI services and cross-device workflows. Reviewers highlighted the tablet’s lightweight design and performance improvements, while the laptop emphasized AI-assisted computing features.
Several outlets, including TechRadar and TechAdvisor, placed HONOR devices on their “Best of MWC 2026” lists, reflecting strong reception from technology journalists covering the show.
As smartphone makers increasingly compete on AI capabilities, HONOR’s presentation at MWC suggests the company is positioning itself not just as a smartphone manufacturer but as a broader AI hardware platform provider, experimenting with robotics-inspired designs and intelligent device ecosystems.
Texas Instruments and NVIDIA Partner to Accelerate Physical AI and Robotics
Texas Instruments and NVIDIA are expanding their collaboration to accelerate robots and other physical AI systems by combining advanced sensing, power electronics, and AI computing platforms.
Texas Instruments and NVIDIA have expanded their collaboration to accelerate the development of robots and other machines powered by physical AI. The initiative brings together Texas Instruments’ sensing and power technologies with NVIDIA’s AI computing platforms to support the next generation of autonomous systems operating in the physical world.
The partnership reflects a broader industry shift as artificial intelligence moves beyond data centers and digital applications into machines capable of sensing, reasoning, and acting in real environments. From industrial robots and autonomous vehicles to intelligent infrastructure, physical AI systems depend on tight integration between sensors, control electronics, and high-performance AI processors.
Bridging Sensing Hardware With AI Computing
Physical AI systems rely on a continuous loop of perception, decision-making, and action. Texas Instruments supplies many of the semiconductor components that allow machines to capture real-world signals, manage energy, and control motion with high precision.
Through the expanded collaboration, TI’s sensing, power management, and real-time control technologies will work alongside NVIDIA’s AI computing platforms used in robotics and autonomy.
“Our collaboration with NVIDIA will help engineers accelerate the development of autonomous machines by combining TI’s expertise in sensing and power with NVIDIA’s AI computing platforms,” said Amichai Ron, senior vice president of embedded processing at Texas Instruments.
By linking sensing hardware with high-performance computing, the companies aim to simplify the architecture required to build intelligent robots and autonomous machines that must operate safely in unpredictable environments.
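The perception, decision, and action loop described above can be sketched as a minimal simulated control cycle. Everything in the sketch below (the `sense`, `decide`, and `act` functions, the proportional gain, the noise model) is a generic illustration of the pattern, not TI or NVIDIA API code:

```python
import random

def sense(true_position: float) -> float:
    """Simulated sensor reading: the true state plus small measurement noise."""
    return true_position + random.uniform(-0.01, 0.01)

def decide(reading: float, target: float, gain: float = 0.5) -> float:
    """Proportional controller: command is proportional to the estimated error."""
    return gain * (target - reading)

def act(position: float, command: float) -> float:
    """Apply the actuation command to the (simulated) physical state."""
    return position + command

def run_loop(start: float, target: float, steps: int = 50) -> float:
    """Run the perception -> decision -> action cycle a fixed number of times."""
    position = start
    for _ in range(steps):
        reading = sense(position)
        command = decide(reading, target)
        position = act(position, command)
    return position
```

With `run_loop(start=0.0, target=1.0)`, the error shrinks geometrically each cycle until it is bounded by the sensor noise; real physical-AI stacks replace each of these three stubs with dedicated hardware and software layers.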
Building Infrastructure for the Physical AI Era
The collaboration also underscores the growing importance of technology infrastructure designed specifically for machines interacting with the real world. Physical AI systems require hardware and software capable of interpreting sensor data, navigating complex environments, and executing precise mechanical actions in real time.
NVIDIA has been investing heavily in platforms that support robotics and autonomy, positioning physical AI as a major growth area across industries.
“Physical AI will enable a new generation of intelligent machines that can perceive, reason and act in the real world,” said Jensen Huang, founder and CEO of NVIDIA.
Together, NVIDIA’s computing platforms and Texas Instruments’ sensing and control technologies form a foundational stack for companies developing robots, autonomous vehicles, and industrial automation systems. As robotics moves into more dynamic real-world environments, such integrated ecosystems are expected to play a central role in scaling physical AI deployment across industries.
China Establishes First National Standards for Humanoid Robots
China has introduced its first national standard system for humanoid robotics, aiming to unify technical specifications and accelerate commercial deployment across industries.
China has formally introduced its first national standard system for humanoid robotics, marking a coordinated effort to structure one of the country’s fastest-growing technology sectors.
The framework was unveiled at the Humanoid Robots and Embodied Intelligence Standardization meeting in Beijing. It establishes unified technical guidelines intended to streamline development, reduce fragmentation, and accelerate the transition from pilot projects to commercial deployment.
The move signals that policymakers view humanoid robotics not as an experimental field, but as an emerging industrial category requiring formal governance.
Six Pillars for Industrial Alignment
The standard system is organized around six core pillars: foundational and common standards, neuromorphic and intelligent computing, limbs and key components, full-system integration, application scenarios, and safety and ethics.
Together, these categories define technical specifications, interface protocols, and evaluation benchmarks. Committee experts involved in the initiative said the goal is to reduce coordination friction between suppliers, lower production costs, and shorten iteration cycles across the value chain.
By clarifying interfaces and performance metrics, the framework is designed to enable interoperability between hardware platforms, software systems, and embodied AI models. It also embeds safety and ethical considerations into early-stage development, reflecting regulatory awareness as robots move into workplaces and homes.
From Prototypes to Scaled Deployment
According to China’s Ministry of Industry and Information Technology, 2024 marked the country’s first year of humanoid robot mass production. More than 140 domestic companies released over 330 models, with deployments expanding into manufacturing, household services, healthcare, and elderly care.
Until now, much of that growth has occurred in a relatively fragmented environment, with companies developing proprietary architectures and evaluation criteria. National standards are expected to impose structure on a rapidly expanding ecosystem.
The framework could also serve a strategic function. As Chinese firms compete globally in embodied AI and humanoid robotics, standardized technical benchmarks may strengthen export readiness and ecosystem coordination.
While many humanoid deployments remain in early stages, the introduction of national standards suggests the industry is entering a new phase, where commercialization and regulatory alignment advance in parallel.
University of Southampton Develops Adaptive Robot Fin for Underwater Stability
Researchers at the University of Southampton have developed a flexible robotic fin with embedded electronic skin that automatically adapts to changing water currents, improving underwater robot stability and efficiency.
Autonomous underwater vehicles (AUVs) are built to withstand unpredictable ocean conditions, but their rigid fins often require significant energy to counteract sudden currents and turbulence. Researchers at the University of Southampton are proposing a different approach: fins that sense water flow and adjust their shape in real time.
The team has developed a flexible robotic fin embedded with electronic skin capable of detecting subtle changes in water movement. The system automatically modifies the fin’s stiffness and curvature to stabilize underwater robots while reducing energy consumption.
The research, published in the npj journal series under the title “Harnessing proprioception in aquatic soft wings enables hybrid passive-active disturbance rejection,” reflects a broader push toward soft robotics and adaptive control in marine environments.
Inspired by Biological Sensing
The design draws from biological proprioception mechanisms observed in birds and fish. Birds detect airflow changes through sensory feedback in their feathers, while fish rely on lateral line systems and fin rays to perceive water disturbances.
To replicate similar sensing capabilities, the Southampton engineers embedded flexible liquid metal wiring inside a silicone fin. When water flow deforms the fin, the integrated electronic skin registers changes in electrical resistance. These signals are transmitted to a hydraulic system inside the robot’s body, which adjusts internal pressure through connected hoses to alter the fin’s shape.
Rather than relying solely on active propulsion corrections, the system combines passive flexibility with active hydraulic adjustment.
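The paper’s control details are not reproduced in this article, but the sensing-to-actuation mapping it describes, resistance change in the e-skin to estimated deformation to a hydraulic pressure setpoint, can be illustrated with a simple sketch. The function names, the strain-gauge-style linear model, and the normalized pressure units are all hypothetical:

```python
def deformation_from_resistance(r_measured: float, r_rest: float) -> float:
    """Estimate fin deformation from the fractional change in the e-skin's
    electrical resistance (strain-gauge-style linear model; illustrative)."""
    return (r_measured - r_rest) / r_rest

def pressure_command(deformation: float, gain: float = 2.0,
                     p_min: float = 0.0, p_max: float = 1.0) -> float:
    """Map estimated deformation to a hydraulic pressure setpoint, clamped
    to the actuator's safe operating range (normalized units; 0.5 = neutral)."""
    p = 0.5 + gain * deformation
    return max(p_min, min(p_max, p))
```

For example, a fin whose resistance rises from 100 to 105 ohms yields an estimated deformation of 0.05 and a pressure setpoint of 0.6; the clamp keeps extreme readings from commanding pressures outside the actuator’s range.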
Reducing Energy Use in Turbulent Waters
Rigid AUVs typically expend substantial energy to maintain orientation when struck by waves or shifting currents. According to the researchers, the adaptive fin significantly improves disturbance rejection.
In controlled tests, the fin reduced unwanted buoyancy effects caused by sudden water flow by 87 percent compared with a similar vehicle using rigid fins. The robot demonstrated improved self-stabilization and maneuverability while consuming less energy to maintain position.
The findings suggest potential advantages for underwater inspection, environmental monitoring, and defense applications where energy efficiency and stability are critical.
Technical Constraints Remain
Despite promising results, integration challenges remain. Scaling the flexible system to larger vehicles and embedding it into rigid hull designs could complicate deployment. Long-term durability of the electronic skin and hydraulic components in harsh marine environments also requires further validation.
The researchers note that more robust actuators and structural refinements may help address these constraints.
The project illustrates how bio-inspired sensing and soft robotics are reshaping underwater vehicle design. As offshore energy, marine research, and subsea infrastructure monitoring expand, adaptive control systems such as this may become increasingly relevant to improving endurance and operational stability in dynamic ocean conditions.
MWC 2026 Marks Shift From AI Apps to AI Native Hardware
Mobile World Congress 2026 highlighted a decisive shift as AI moved beyond apps and into physical devices, from humanoid robots and AI glasses to smartphones with mechanical motion systems.
Mobile World Congress 2026 underscored a structural change in the AI industry: artificial intelligence is no longer confined to apps running on smartphones. It is beginning to reshape the hardware itself.
Across the exhibition floor in Barcelona, companies presented humanoid robots controlled entirely by voice, AI glasses positioned as daily computing devices, and smartphones equipped with mechanical camera systems that physically move. The theme was consistent: large AI models are evolving from software layers into defining elements of device architecture.
Smartphone Makers Enter Robotics
Several Chinese smartphone manufacturers used MWC to demonstrate ambitions beyond handsets.
Honor unveiled its first humanoid robot during its global launch event, showcasing AI-driven motion control and multimodal interaction. The demonstration included acrobatic movements and coordinated choreography, signaling technical progress in embodied control systems.
Xiaomi, which introduced its CyberOne humanoid in 2022, did not display a robot on the show floor but reported new milestones. According to the company, its humanoid systems have begun operating in automotive factories, performing tasks such as self-tapping nut installation and material transport. Chairman Lei Jun said large-scale deployment in Xiaomi’s factories could occur within five years.
The move into robotics comes as smartphone growth slows. IDC estimates that China’s smartphone shipments reached roughly 284 million units in 2025, a slight year-on-year decline. For manufacturers with in-house chips, operating systems, and AI models, robotics represents an adjacent growth market built on overlapping technologies.
Lu Weibing, president of Xiaomi’s mobile division, has argued that investments in proprietary silicon, operating systems, and foundational AI are interconnected and transferable to robotics platforms.
Other technology firms are also advancing embodied systems. At MWC, iFlytek demonstrated a humanoid guide robot powered by upgraded multimodal voice interaction, eliminating the need for handheld remote controls. China Mobile presented an unmanned restaurant concept in which embodied robots collaborated on ordering, food preparation, and delivery.
These deployments suggest that large models are increasingly acting as real-time control interfaces rather than conversational add-ons.
AI Glasses and the Search for Monetization
While AI apps saw a surge in daily active users during China’s Spring Festival promotions, retention and revenue models remain uncertain. Several internet companies are now shifting attention toward AI hardware.
Alibaba’s Qwen brand introduced its first AI glasses at MWC, embedding large language models into wearable devices capable of translation, transcription, photography, and object recognition. The devices are positioned for both consumer and professional use.
IDC forecasts that global smart glasses shipments will exceed 23 million units by 2026, including nearly 5 million units in China. Compared with heavily subsidized AI apps, glasses offer a direct hardware revenue stream and clearer monetization path.
iFlytek also debuted lightweight AI glasses weighing approximately 40 grams, emphasizing multimodal recording and translation capabilities.
Redefining the Smartphone Form
AI integration is also altering the smartphone itself.
ZTE showcased AI-powered devices that embed assistants directly into the system layer, enabling cross-application control via natural language. Rather than functioning as standalone apps, these AI agents are integrated into core operating system workflows.
Honor introduced a more experimental concept: a “Robot Phone” featuring a motorized multi-axis gimbal paired with a 200-megapixel sensor. The device can physically rotate and track users during video calls, combining AI vision with mechanical motion.
The common thread across categories is the shift from AI-enabled hardware to AI-defined hardware. Large models are beginning to influence device structure, interaction methods, and mechanical design.
MWC 2026 did not present a single dominant form factor. Instead, it revealed a competitive search for the most natural interface between AI systems and the physical world. Whether that interface proves to be humanoid robots, wearable glasses, or reengineered smartphones remains unsettled. What is clear is that AI is no longer just inside devices. It is beginning to shape what those devices become.
Georgia Tech Researchers Develop Robot Pollinator for Indoor Farms
Researchers at Georgia Tech have developed a robot pollinator that uses computer vision and 3D modeling to automate flower pollination in indoor farms.
Researchers at Georgia Tech have developed a robotic system designed to automate pollination inside indoor farms, addressing one of the most labor-intensive challenges in vertical agriculture.
The prototype, created by engineers at the Georgia Tech Research Institute (GTRI) and the George W. Woodruff School of Mechanical Engineering, uses computer vision and robotic manipulation to pollinate flowering plants without human intervention.
As indoor farming expands in urban environments, automating pollination has become a critical bottleneck in scaling production.
Pollination without Bees
Indoor farms offer several advantages over traditional agriculture, including year-round production, reduced water use, and minimal pesticide reliance. However, enclosed growing environments prevent natural pollinators such as bees from accessing crops.
For many flowering plants grown indoors – including strawberries and tomatoes – farmers must manually transfer pollen using brushes or vibrating tools. The process is repetitive and time-consuming, limiting scalability.
The Georgia Tech team’s robot is designed to pollinate plants that contain both male and female reproductive structures within the same flower. These plants require pollen transfer within a single bloom rather than cross-pollination between separate flowers.
By automating this step, researchers aim to reduce labor demands and increase consistency in crop yields.
Teaching a Robot to Understand Flower Orientation
One of the central technical challenges was enabling the robot to recognize the “pose” of each flower – its orientation, symmetry, and position relative to the stem.
Accurate pose detection is critical because pollen must be delivered precisely to the reproductive structures at the front of the flower. Even small alignment errors can reduce pollination effectiveness.
To solve this, the team developed a computer vision pipeline that reconstructs flowers in 3D from multiple camera images. The 3D model is then converted into depth-enhanced 2D representations that can be processed by object detection algorithms.
The researchers used a real-time object detection system known as YOLO (You Only Look Once) to classify flower features in a single processing pass. By converting 3D data into structured 2D inputs, they leveraged the abundance of training resources available for 2D computer vision systems.
The approach enabled the robot to estimate flower orientation with sufficient precision to approach and manipulate the stem correctly.
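The exact representation used by the Georgia Tech pipeline is not spelled out here, but a common way to build a depth-enhanced 2D input from a 3D reconstruction is pinhole-camera projection of the point cloud into a depth image, which a 2D detector such as YOLO can then consume. The following is a minimal sketch of that projection step (camera intrinsics and image size are illustrative):

```python
import numpy as np

def depth_image_from_points(points: np.ndarray, fx: float, fy: float,
                            cx: float, cy: float, h: int, w: int) -> np.ndarray:
    """Project 3D points (N, 3) in camera coordinates into an (h, w) depth
    image via the pinhole model, keeping the nearest depth at each pixel."""
    depth = np.full((h, w), np.inf)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    valid = z > 0                                    # points in front of camera
    u = np.round(fx * x[valid] / z[valid] + cx).astype(int)
    v = np.round(fy * y[valid] / z[valid] + cy).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[inside], v[inside], z[valid][inside]):
        depth[vi, ui] = min(depth[vi, ui], zi)       # keep nearest surface
    depth[np.isinf(depth)] = 0.0                     # no measurement -> 0
    return depth
```

A point at (0, 0, 2) in camera coordinates with focal lengths of 100 and principal point (32, 32) lands at pixel (32, 32) with depth 2.0; stacking such depth channels with RGB gives the kind of structured 2D input a standard object detector can be trained on.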
From Detection to Physical Interaction
Once the robot identifies the flower’s pose, it grips the stem and applies controlled vibration to dislodge and distribute pollen within the bloom.
Unlike simple mechanical vibration tools, the system integrates perception, positioning, and actuation into a single workflow. This coordination is essential in dense vertical farming environments where flowers vary in size, spacing, and orientation.
The prototype was built in Georgia Tech’s Safe Robotics Lab and remains in testing.
Adding Microscopic Feedback
Beyond basic pollination, the system includes an inspection capability that allows it to evaluate pollination success. The robot can perform close-up imaging of flower structures to assess whether pollen has been effectively transferred.
This feedback loop is a notable feature, as most manual pollination methods offer no immediate verification of success.
The research team has documented its technical approach in a paper accepted to the 2025 International Conference on Robotics and Automation (ICRA).
Automation Expands in Controlled Agriculture
Indoor farming is often promoted as a solution to urban food supply challenges and climate variability. However, high labor costs and operational complexity have slowed widespread adoption.
Automating tasks such as pollination could help reduce those barriers. Robotics in agriculture has traditionally focused on harvesting and monitoring, but pollination represents a more delicate and technically demanding process.
The Georgia Tech prototype demonstrates how advances in AI perception and robotic control can be applied to biological systems.
While the system remains in early development, it illustrates how robotics may increasingly support food production in controlled environments – where precision, repeatability, and data-driven feedback are essential for scaling output.
Revobots Launches All-Weather Autonomous Patrol Robot for Outdoor Security
Revobots has introduced TASKBOT SCOUT XT, an all-weather autonomous patrol robot designed for outdoor enforcement and campus monitoring under a Robots-as-a-Service model.
Revobots has introduced an all-weather version of its autonomous patrol robot, expanding its security robotics platform beyond indoor facilities and into outdoor environments.
The new system, called TASKBOT SCOUT XT, is engineered for exterior enforcement and monitoring tasks across campuses, parking lots, and mixed-use spaces. The Phoenix-based company says the robot is designed to address one of the longstanding limitations of autonomous patrol systems: reliable operation in unpredictable weather and uneven terrain.
The launch reflects growing demand for robotics solutions that can supplement security staffing in environments where labor shortages and operational costs continue to rise.
Hardware Upgrades for Outdoor Deployment
SCOUT XT builds on Revobots’ indoor patrol platform but incorporates significant hardware modifications to withstand environmental exposure.
The robot features an IP65-rated enclosure designed to protect against dust and water ingress. Its extended-wheelbase, all-wheel-drive chassis is intended to provide stability across uneven pavement, gravel, and surface transitions.
Outdoor-calibrated vision systems allow the robot to operate in variable lighting conditions, including bright daylight and low-light evening environments. Longer-range perception capabilities are designed to accommodate open spaces with fewer visual landmarks than indoor corridors.
All-terrain wheels further support navigation across cracked pavement, curb transitions, and mixed surfaces common in parking facilities and campus grounds.
Autonomous Operation with Human Oversight
SCOUT XT operates on Revobots’ existing backend infrastructure, including its Robots-as-a-Service subscription model and REVO Pilot human-in-the-loop oversight system.
By default, the robot navigates autonomously, using onboard AI to conduct patrol routes and monitor designated areas. When conditions exceed predefined thresholds – such as ambiguous detections or unusual environmental scenarios – the system can escalate to human supervisors for intervention.
This hybrid autonomy model is increasingly common in commercial robotics deployments, particularly in security applications where accountability and reliability are critical.
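Revobots has not published the internals of its REVO Pilot escalation logic; as a generic illustration of the threshold pattern described above, the rule can be reduced to a confidence floor plus a set of categories that always route to a human. All names and values here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

# Categories that always go to a human supervisor, regardless of confidence.
ALWAYS_ESCALATE = frozenset({"person_down", "intrusion"})

def needs_escalation(det: Detection, confidence_floor: float = 0.6) -> bool:
    """Escalate when the detection is ambiguous (below the confidence floor)
    or belongs to a high-stakes category."""
    return det.confidence < confidence_floor or det.label in ALWAYS_ESCALATE
```

Under this sketch, a high-confidence vehicle detection is handled autonomously, while a low-confidence reading or any high-stakes category is handed to a supervisor, the accountability property the article attributes to hybrid autonomy.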
Campus Deployment Highlights Practical Use Case
Revobots said SCOUT XT recently completed pilot testing at Xavier University in Cincinnati. During the trial, the robot supported automated license plate recognition enforcement across multiple campus parking areas.
The deployment was designed to expand monitoring coverage without increasing staffing levels, a key consideration for educational institutions and other organizations managing large facilities.
Integration with existing campus infrastructure was supported through collaboration with Campus Innovation and its C-Park platform.
The university pilot demonstrates how outdoor patrol robots can supplement traditional security operations, particularly in structured environments such as campuses, business parks, and residential communities.
Expanding the Scope of Security Robotics
Autonomous security robots have typically been deployed indoors, where environmental variables are more predictable. Extending patrol capabilities outdoors introduces challenges including weather exposure, uneven terrain, and dynamic lighting.
By adapting its existing platform rather than building an entirely new system, Revobots is pursuing incremental expansion of its task-adaptive robotics model.
The broader security robotics market is evolving toward service-based deployment models, where customers subscribe to robotics coverage rather than purchase hardware outright. This approach lowers upfront costs and allows providers to maintain centralized oversight and software updates.
As robotics companies seek commercially viable applications, outdoor patrol represents a practical step toward broader real-world autonomy.
While fully autonomous security operations remain a long-term ambition, platforms like SCOUT XT illustrate how robotics companies are addressing specific operational gaps – expanding coverage, improving consistency, and reducing reliance on human patrol staffing in large, open environments.
TCL Unveils Tbot Concept to Turn Kids’ Smartwatch Into a Home AI Robot
At MWC 2026, TCL introduced Tbot, a concept desktop robot designed to pair with children’s smartwatches, extending AI support from outdoor tracking to home routines.
At Mobile World Congress 2026 in Barcelona, TCL presented a concept device that blends wearable technology with home robotics. Called Tbot, the desktop robot is designed to pair with TCL’s children’s smartwatches, acting as both a charging dock and an AI-powered companion.
The concept reflects a broader shift in consumer robotics toward focused, task-specific devices rather than fully autonomous humanoid machines. Instead of building a standalone home robot, TCL is extending the functionality of an existing wearable into a stationary, home-based form.
For now, the Tbot remains a concept with no announced release date or pricing.
Thrilled to introduce TCL Tbot, the world’s first AI desktop companion concept device designed to extend the kids watch experience.
Paired with the TCL Kids Watch MT48, TCL Tbot is designed to continue providing reassuring companionship and assistance even when the watch is…
— TCL Mobile (@TCLMobileGlobal) March 2, 2026
Extending the Smartwatch Experience Indoors
Children’s smartwatches have become popular for location tracking, communication, and safety monitoring. However, their functionality typically pauses when the watch is removed for charging.
TCL’s idea is to bridge that gap. The Tbot features a magnetic dock that holds and charges the smartwatch when it is not being worn. During that time, the desktop robot takes over certain AI-driven features.
According to TCL, Tbot can handle morning alarms, homework timers, and bedtime routines. The device is positioned as a supportive assistant rather than a surveillance tool, offering reminders and guidance tailored to children.
By maintaining continuity between outdoor and indoor use, TCL aims to create a unified digital experience across environments.
AI Companion Designed for Routine and Learning
Beyond basic alarms and timers, Tbot is designed to act as a conversational learning companion. Children can ask questions and explore topics, while the system provides age-appropriate responses.
At night, the robot can transition into a sleep-support role, offering calming stories or audio to help children wind down. Parents can configure notifications and receive updates, maintaining oversight without constant direct interaction.
TCL emphasizes that AI features would operate with parental permission and regulatory compliance in mind, reflecting increasing scrutiny around children’s data privacy.
Consumer Robotics Moves Toward Targeted Use Cases
The Tbot concept illustrates a growing trend in consumer robotics: devices focused on narrow, clearly defined roles rather than broad household autonomy.
Rather than competing with smart speakers or building full humanoid assistants, TCL is exploring how robotics can complement wearables. The Tbot’s design integrates charging infrastructure with AI interaction, creating a hybrid between dock, assistant, and companion device.
This approach aligns with a wider industry movement where robotics capabilities are embedded into familiar consumer products instead of introduced as entirely new categories.
Concept Stage Highlights Industry Direction
TCL has not confirmed whether Tbot will enter mass production. The device was presented at MWC as a demonstration of the company’s direction in AI-enabled family technology.
Concept products at major trade shows often serve as signals rather than immediate commercial offerings. In this case, TCL is indicating interest in expanding beyond smartphones and wearables into interactive home robotics.
As AI becomes more integrated into everyday devices, companies are experimenting with ways to connect physical hardware with digital services in more seamless ways.
If Tbot reaches the market, it could represent an early example of robotics moving into family-focused, screen-light applications – where the machine’s role is subtle, supportive, and embedded within existing ecosystems.
For now, Tbot remains a prototype. But it underscores how robotics is increasingly intersecting with consumer electronics, particularly in categories centered on education, safety, and home routines.
AGIBOT Showcases X2 at MWC 2026 as Humanoid Robots Move Toward Commercial Scale
AGIBOT presented its X2 humanoid robot at MWC 2026, highlighting shipment leadership and a shift toward scenario-based, commercial-scale deployment.
At Mobile World Congress 2026 in Barcelona, humanoid robots were no longer treated as experimental curiosities. Instead, attention shifted toward companies capable of demonstrating commercial traction. Among them was AGIBOT, which used the event to showcase its X2 humanoid robot and emphasize its scale ambitions.
According to industry research firms including IDC and Omdia, AGIBOT ranked first globally in humanoid robot shipments in 2025. The company has also reported revenue exceeding RMB 1 billion, placing it among a small group of robotics firms claiming measurable commercial performance rather than pilot-stage experimentation.
The X2’s presence at MWC reflected that transition from demonstration to deployment.
X2: Motion and Function in Balance
The AGIBOT X2 is part of the company’s X Series, designed to combine advanced motion capability with interactive intelligence. The robot features 25 degrees of freedom and can reach walking speeds of up to 2 meters per second.
On the show floor, the X2 demonstrated stable locomotion and responsive movement, placing it between heavy industrial humanoids and lighter entertainment-focused machines.
Rather than emphasizing raw technical specifications alone, AGIBOT framed the X2 as a platform suited for defined operational scenarios. The company has segmented its portfolio into multiple product lines:
- The A Series for presentation and reception roles
- The G Series for factory and precision industrial environments
- The X Series for advanced motion and intelligent interaction
- The D Series quadrupeds for inspection and patrol applications
This structured segmentation signals a shift away from generic humanoid branding toward scenario-driven deployment strategies.
From Prototypes to Revenue Metrics
The humanoid robotics industry has seen dozens of prototypes unveiled over the past two years. However, few companies have disclosed shipment volumes or revenue figures.
AGIBOT’s public positioning around shipment rankings and revenue milestones suggests that scale, rather than spectacle, is becoming a primary differentiator. As more robotics firms move beyond the lab, investors and enterprise customers are increasingly focused on production capacity and deployment readiness.
Industry data indicates that humanoid robotics remains a small but rapidly expanding segment within the broader robotics market. Commercial viability will depend on reliability, support infrastructure, and cost-effective scaling – not just technical performance.
Robot-as-a-Service Expands Overseas
AGIBOT is also pursuing international expansion through regional partnerships built around a Robot-as-a-Service model. Instead of centralizing global leasing operations, the company works with local partners to manage deployments and service contracts, particularly in Europe.
This approach aligns with a broader industry trend toward service-based robotics adoption. Many customers prefer flexible operational models over large upfront hardware investments, especially as humanoid robots remain a developing technology category.
Localized partnerships can also address regulatory requirements and after-sales support – both critical factors for scaling robotics across different markets.
MWC Signals a Shift in Industry Tone
Mobile World Congress has historically served as a barometer for technology maturity. In 2026, humanoid robotics appeared to enter a new phase where commercial scale and operational readiness overshadowed novelty.
AGIBOT’s X2 demonstration reflected that shift. The company’s messaging centered less on futuristic potential and more on deployment metrics and structured product segmentation.
As the humanoid robotics field matures, the competitive landscape may increasingly be defined by production capacity, service ecosystems, and real-world performance – rather than prototype announcements alone.
For companies seeking to lead the next stage of embodied AI, the challenge is no longer proving that humanoid robots can walk. It is proving that they can work – consistently and at scale.