Surgeon in London Removes Prostate via Robot 1,500 Miles Away in Gibraltar

A surgeon in London remotely controlled a robotic system to remove a prostate from a patient in Gibraltar, demonstrating how teleoperated robotics could expand access to specialized surgical care.

By Laura Bennett
A robotic surgical system performs a prostate removal in Gibraltar while the surgeon operates remotely from London, demonstrating how robotics and high-speed networks enable long-distance medical procedures. Photo: MicroPort / LinkedIn

A surgeon in London has successfully removed a patient’s prostate using a robotic system located roughly 1,500 miles away in Gibraltar, highlighting the growing potential of long-distance robotic surgery, reports The Guardian.

The procedure involved a 62-year-old patient, Paul Buxton, who underwent a robotic prostatectomy at St Bernard’s Hospital in Gibraltar while the operation was conducted remotely from London’s Harley Street district. The surgery was performed by Prof. Prokar Dasgupta, a leading urologist and head of the robotic centre of excellence at The London Clinic.

Using a specialized surgical console, Dasgupta controlled the Toumai robotic system, developed by Shanghai-based MicroPort MedBot. The robot, equipped with four articulated arms and a high-definition 3D camera, executed the delicate procedure inside the operating theatre while transmitting real-time visual feedback to the surgeon.

A Milestone in Remote Robotic Surgery

The operation relied on a high-speed fibre optic connection linking London and Gibraltar, supported by a backup 5G network to ensure continuity in case of connectivity issues. According to the medical team, the system maintained a delay of only 60 milliseconds between the surgeon’s commands and the robot’s movements.

“We operated on an NHS patient in Gibraltar from the London Clinic 2,400km away using a robot with a 3D HD camera with four arms,” Dasgupta said. “The robot is completely controlled from a console using high-speed lines with a time delay of only 0.06 seconds.”
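The quoted 0.06-second delay is plausible for the distance involved. As a rough back-of-envelope illustration (not from the source), assuming light in optical fiber propagates at about two-thirds of its vacuum speed, raw propagation accounts for less than half of that budget:

```python
# Back-of-envelope check of the quoted 0.06 s control delay over the
# ~2,400 km London-Gibraltar link. Assumes light in optical fiber
# travels at roughly 200,000 km/s (about 2/3 the vacuum speed of light).
C_FIBER_KM_S = 200_000   # approximate signal speed in fiber, km/s
DISTANCE_KM = 2_400

one_way_s = DISTANCE_KM / C_FIBER_KM_S     # command to robot
round_trip_s = 2 * one_way_s               # command out, video feedback back

# Whatever remains of the 60 ms covers video encoding, switching,
# and the robot's own control processing.
processing_budget_s = 0.06 - round_trip_s

print(f"propagation round trip: {round_trip_s * 1000:.0f} ms")
print(f"budget for processing:  {processing_budget_s * 1000:.0f} ms")
```

On these assumptions the round trip consumes roughly 24 ms, leaving about 36 ms for everything else in the loop.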

Medical staff were present in the operating room in Gibraltar throughout the procedure, prepared to intervene if the connection failed or complications arose. The surgery was completed successfully, and Buxton reported feeling “fantastic” within days of the operation.

For the patient, who has lived in Gibraltar for four decades, the alternative would likely have involved travelling to the United Kingdom for treatment and spending weeks waiting for surgery.

“If I hadn’t gone for the telesurgery in Gibraltar, I would have had to fly to London and go on the NHS waiting list,” Buxton said, adding that taking part in the procedure felt like being involved in “medical history.”

Rapid Growth of the Toumai Surgical Robot

The Toumai system used in the operation is part of a rapidly expanding global surgical robotics platform. According to its developer, Toumai has surpassed 200 commercial orders worldwide across nearly 50 countries and regions, with close to 130 systems already installed in hospitals.

Adoption has accelerated, with orders doubling from just over 100 in late 2025 to more than 200 within a few months. Growth has been particularly strong in emerging healthcare markets such as India and Brazil, while hospitals in developed markets including Spain and Australia are also expanding deployments.

The platform has supported thousands of procedures across urology, thoracic surgery, general surgery, gynecology, and head and neck operations. Nearly 800 remote robotic surgeries have already been performed in more than 20 countries, all reportedly completed successfully.

Expanding Access to Specialist Care

Remote robotic surgery has long been viewed as a way to connect expert surgeons with patients in regions where specialized care is difficult to access. Instead of transporting patients long distances, the technology allows experienced doctors to perform procedures remotely while local medical teams provide on-site support.

Dasgupta said the potential humanitarian impact could be significant, particularly for patients in remote locations or smaller healthcare systems that lack highly specialized surgeons. “I think it is very, very exciting,” he said. “The humanitarian benefit is going to be significant.”

The success of the Gibraltar procedure comes as hospitals and technology providers worldwide continue experimenting with telesurgery, aided by improvements in robotics, fiber-optic networks, and low-latency communication systems. Dasgupta is expected to repeat the remote procedure soon while broadcasting it to thousands of surgeons attending a major European urology conference.

While remote surgery still requires rigorous safeguards and reliable connectivity, the milestone operation suggests that robotics may increasingly enable expert medical care to reach patients regardless of distance.

MWC 2026: HONOR Wins More than 70 Awards for AI Devices and Robotics

HONOR captured more than 70 media awards at Mobile World Congress 2026, highlighting its push into AI-powered devices, robotics concepts, and next-generation foldables.

By Daniel Krauss
HONOR showcases its latest AI-powered devices and robotics concepts at Mobile World Congress 2026, where the company received more than 70 industry awards for innovation. Photo: HONOR

At Mobile World Congress 2026 in Barcelona, Chinese technology company HONOR received more than 70 media and industry awards for its latest devices and AI initiatives, underscoring the company’s growing focus on artificial intelligence, robotics-inspired hardware, and next-generation foldable smartphones.

The recognition highlighted HONOR’s broader strategy around “Augmented Human Intelligence,” a vision aimed at integrating AI capabilities deeply into consumer electronics.

The company showcased several new products and concepts at the event, including the Magic V6 foldable smartphone, the MagicPad 4 tablet, the MagicBook Pro 14 laptop, and experimental AI-driven hardware concepts such as its “Robot Phone.”

Robotics Concepts and Embodied AI

One of the most discussed demonstrations at the event was HONOR’s Robot Phone, a concept device designed to showcase how AI-powered hardware might physically interact with users in future devices.

The prototype combines robotic-style movement with AI-powered imaging and sensing capabilities. According to reports from major media outlets including Bloomberg and Reuters, the device represents an early experiment in embodied AI, where computing systems combine perception, reasoning, and physical interaction.

A video demonstration released during the event showed how the device can visually recognize objects and move to capture photos automatically. Coverage from Bloomberg, Reuters, and CNBC highlighted the concept as part of a wider shift among technology companies exploring physical AI devices.

Technology publications also discussed the device’s design approach. Engadget described it as resembling a compact personal robot integrated into a smartphone form factor, while GadgetMatch suggested it offered a preview of devices that behave more like intelligent assistants than traditional electronics.

Magic V6 Highlights Foldable Innovation

Alongside its experimental concepts, HONOR also received strong recognition for its flagship Magic V6 foldable smartphone, which reviewers widely described as one of the most advanced foldable devices introduced at the event.

Technology outlets including TechRadar, Android Authority, and Trusted Reviews praised the device for its thin design, durability improvements, and advanced battery technology.

The smartphone incorporates a silicon-carbon battery, a newer battery chemistry that improves energy density while allowing thinner device designs. According to reports from Stuff and GSMArena, the device also runs on Qualcomm’s Snapdragon 8 Elite Gen 5 processor.

HONOR’s battery innovation also received the Best Disruptive Device Innovation award at the Global Mobile Awards, presented during MWC. The winners were announced by the GSMA during the event’s annual ceremony, according to the official MWC GLOMO awards announcement.

Expanding AI Device Ecosystem

Beyond smartphones, HONOR used the Barcelona event to expand its broader AI device ecosystem.

The MagicPad 4 tablet and MagicBook Pro 14 laptop were presented as productivity-focused devices designed to integrate tightly with AI services and cross-device workflows. Reviewers highlighted the tablet’s lightweight design and performance improvements, while the laptop was positioned around AI-assisted computing features.

Several outlets, including TechRadar and TechAdvisor, placed HONOR devices on their “Best of MWC 2026” lists, reflecting strong reception from technology journalists covering the show.

As smartphone makers increasingly compete on AI capabilities, HONOR’s presentation at MWC suggests the company is positioning itself not just as a smartphone manufacturer but as a broader AI hardware platform provider, experimenting with robotics-inspired designs and intelligent device ecosystems.


Texas Instruments and NVIDIA Partner to Accelerate Physical AI and Robotics

Texas Instruments and NVIDIA are expanding their collaboration to accelerate robots and other physical AI systems by combining advanced sensing, power electronics, and AI computing platforms.

By Daniel Krauss
Texas Instruments and NVIDIA are combining sensing, power electronics, and AI computing platforms to accelerate the development of robots and other physical AI systems. Photo: Texas Instruments

Texas Instruments and NVIDIA have expanded their collaboration to accelerate the development of robots and other machines powered by physical AI. The initiative brings together Texas Instruments’ sensing and power technologies with NVIDIA’s AI computing platforms to support the next generation of autonomous systems operating in the physical world.

The partnership reflects a broader industry shift as artificial intelligence moves beyond data centers and digital applications into machines capable of sensing, reasoning, and acting in real environments. From industrial robots and autonomous vehicles to intelligent infrastructure, physical AI systems depend on tight integration between sensors, control electronics, and high-performance AI processors.

Bridging Sensing Hardware With AI Computing

Physical AI systems rely on a continuous loop of perception, decision-making, and action. Texas Instruments supplies many of the semiconductor components that allow machines to capture real-world signals, manage energy, and control motion with high precision.

Through the expanded collaboration, TI’s sensing, power management, and real-time control technologies will work alongside NVIDIA’s AI computing platforms used in robotics and autonomy.

“Our collaboration with NVIDIA will help engineers accelerate the development of autonomous machines by combining TI’s expertise in sensing and power with NVIDIA’s AI computing platforms,” said Amichai Ron, senior vice president of embedded processing at Texas Instruments.

By linking sensing hardware with high-performance computing, the companies aim to simplify the architecture required to build intelligent robots and autonomous machines that must operate safely in unpredictable environments.

Building Infrastructure for the Physical AI Era

The collaboration also underscores the growing importance of technology infrastructure designed specifically for machines interacting with the real world. Physical AI systems require hardware and software capable of interpreting sensor data, navigating complex environments, and executing precise mechanical actions in real time.

NVIDIA has been investing heavily in platforms that support robotics and autonomy, positioning physical AI as a major growth area across industries.

“Physical AI will enable a new generation of intelligent machines that can perceive, reason and act in the real world,” said Jensen Huang, founder and CEO of NVIDIA.

Together, NVIDIA’s computing platforms and Texas Instruments’ sensing and control technologies form a foundational stack for companies developing robots, autonomous vehicles, and industrial automation systems. As robotics moves into more dynamic real-world environments, such integrated ecosystems are expected to play a central role in scaling physical AI deployment across industries.

China Establishes First National Standards for Humanoid Robots

China has introduced its first national standard system for humanoid robotics, aiming to unify technical specifications and accelerate commercial deployment across industries.

By Laura Bennett | Edited by Kseniia Klichova
Officials and industry experts gather in Beijing to unveil China’s first national standard system for humanoid robotics, aimed at accelerating commercialization and ensuring safety alignment.

China has formally introduced its first national standard system for humanoid robotics, marking a coordinated effort to structure one of the country’s fastest-growing technology sectors.

The framework was unveiled at the Humanoid Robots and Embodied Intelligence Standardization meeting in Beijing. It establishes unified technical guidelines intended to streamline development, reduce fragmentation, and accelerate the transition from pilot projects to commercial deployment.

The move signals that policymakers view humanoid robotics not as an experimental field, but as an emerging industrial category requiring formal governance.

Six Pillars for Industrial Alignment

The standard system is organized around six core pillars: foundational and common standards, neuromorphic and intelligent computing, limbs and key components, full-system integration, application scenarios, and safety and ethics.

Together, these categories define technical specifications, interface protocols, and evaluation benchmarks. Committee experts involved in the initiative said the goal is to reduce coordination friction between suppliers, lower production costs, and shorten iteration cycles across the value chain.

By clarifying interfaces and performance metrics, the framework is designed to enable interoperability between hardware platforms, software systems, and embodied AI models. It also embeds safety and ethical considerations into early-stage development, reflecting regulatory awareness as robots move into workplaces and homes.

From Prototypes to Scaled Deployment

According to China’s Ministry of Industry and Information Technology, 2024 marked the country’s first year of humanoid robot mass production. More than 140 domestic companies released over 330 models, with deployments expanding into manufacturing, household services, healthcare, and elderly care.

Until now, much of that growth has occurred in a relatively fragmented environment, with companies developing proprietary architectures and evaluation criteria. National standards are expected to impose structure on a rapidly expanding ecosystem.

The framework could also serve a strategic function. As Chinese firms compete globally in embodied AI and humanoid robotics, standardized technical benchmarks may strengthen export readiness and ecosystem coordination.

While many humanoid deployments remain in early stages, the introduction of national standards suggests the industry is entering a new phase, where commercialization and regulatory alignment advance in parallel.


University of Southampton Develops Adaptive Robot Fin for Underwater Stability

Researchers at the University of Southampton have developed a flexible robotic fin with embedded electronic skin that automatically adapts to changing water currents, improving underwater robot stability and efficiency.

By Daniel Krauss | Edited by Kseniia Klichova
The adaptive robotic fin developed at the University of Southampton integrates electronic skin and hydraulic actuation to automatically counteract flow disturbances in underwater environments. Photo: University of Southampton

Autonomous underwater vehicles are built to withstand unpredictable ocean conditions, but their rigid fins often require significant energy to counteract sudden currents and turbulence. Researchers at the University of Southampton are proposing a different approach: fins that sense water flow and adjust their shape in real time.

The team has developed a flexible robotic fin embedded with electronic skin capable of detecting subtle changes in water movement. The system automatically modifies the fin’s stiffness and curvature to stabilize underwater robots while reducing energy consumption.

The research, published in Nature’s npj journal series under the title “Harnessing proprioception in aquatic soft wings enables hybrid passive-active disturbance rejection,” reflects a broader push toward soft robotics and adaptive control in marine environments.

Inspired by Biological Sensing

The design draws from biological proprioception mechanisms observed in birds and fish. Birds detect airflow changes through sensory feedback in their feathers, while fish rely on lateral line systems and fin rays to perceive water disturbances.

To replicate similar sensing capabilities, the Southampton engineers embedded flexible liquid metal wiring inside a silicone fin. When water flow deforms the fin, the integrated electronic skin registers changes in electrical resistance. These signals are transmitted to a hydraulic system inside the robot’s body, which adjusts internal pressure through connected hoses to alter the fin’s shape.

Rather than relying solely on active propulsion corrections, the system combines passive flexibility with active hydraulic adjustment.
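The active half of that loop can be sketched in a few lines. This is our simplified illustration, not the Southampton team’s code: a strain-induced resistance change in the electronic skin is mapped to a bounded, opposing hydraulic pressure adjustment, while the fin’s own compliance passively absorbs part of the disturbance.

```python
# Minimal sketch (hypothetical names and gains, not the authors' code):
# flow bends the fin, the embedded liquid-metal traces change electrical
# resistance, and a proportional controller commands a counteracting
# hydraulic pressure correction, clipped to the actuator's range.
def pressure_correction(r_measured, r_rest, gain=0.5, limit=1.0):
    """Map a resistance deviation (ohms) to a bounded pressure
    adjustment (arbitrary units). A positive deviation means the fin
    is bent by the flow, so the correction pushes the other way."""
    strain_signal = r_measured - r_rest
    correction = -gain * strain_signal
    return max(-limit, min(limit, correction))  # actuator saturation

# Example: flow bends the fin, raising resistance by 0.5 ohm.
print(pressure_correction(10.5, 10.0))  # -0.25: counter-bend the fin
```

A real controller would add filtering and integral terms, but the division of labor is the point: passive flexibility handles fast, small disturbances, and the hydraulic loop handles sustained ones.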

Reducing Energy Use in Turbulent Waters

Rigid AUVs typically expend substantial energy to maintain orientation when struck by waves or shifting currents. According to the researchers, the adaptive fin significantly improves disturbance rejection.

In controlled tests, the fin reduced unwanted buoyancy effects caused by sudden water flow by 87 percent compared with a similar vehicle using rigid fins. The robot demonstrated improved self-stabilization and maneuverability while consuming less energy to maintain position.

The findings suggest potential advantages for underwater inspection, environmental monitoring, and defense applications where energy efficiency and stability are critical.

Technical Constraints Remain

Despite promising results, integration challenges remain. Scaling the flexible system to larger vehicles and embedding it into rigid hull designs could complicate deployment. Long-term durability of the electronic skin and hydraulic components in harsh marine environments also requires further validation.

The researchers note that more robust actuators and structural refinements may help address these constraints.

The project illustrates how bio-inspired sensing and soft robotics are reshaping underwater vehicle design. As offshore energy, marine research, and subsea infrastructure monitoring expand, adaptive control systems such as this may become increasingly relevant to improving endurance and operational stability in dynamic ocean conditions.


MWC 2026 Marks Shift From AI Apps to AI Native Hardware

Mobile World Congress 2026 highlighted a decisive shift as AI moved beyond apps and into physical devices, from humanoid robots and AI glasses to smartphones with mechanical motion systems.

By Rachel Whitman | Edited by Kseniia Klichova
Humanoid robots, AI glasses and AI-integrated smartphones on display at MWC 2026 reflect a broader industry shift toward AI-native hardware design. Photo: MWC

Mobile World Congress 2026 underscored a structural change in the AI industry: artificial intelligence is no longer confined to apps running on smartphones. It is beginning to reshape the hardware itself.

Across the exhibition floor in Barcelona, companies presented humanoid robots controlled entirely by voice, AI glasses positioned as daily computing devices, and smartphones equipped with mechanical camera systems that physically move. The theme was consistent: large AI models are evolving from software layers into defining elements of device architecture.

Smartphone Makers Enter Robotics

Several Chinese smartphone manufacturers used MWC to demonstrate ambitions beyond handsets.

Honor unveiled its first humanoid robot during its global launch event, showcasing AI-driven motion control and multimodal interaction. The demonstration included acrobatic movements and coordinated choreography, signaling technical progress in embodied control systems.

Xiaomi, which introduced its CyberOne humanoid in 2022, did not display a robot on the show floor but reported new milestones. According to the company, its humanoid systems have begun operating in automotive factories, performing tasks such as self-tapping nut installation and material transport. Chairman Lei Jun said large-scale deployment in Xiaomi’s factories could occur within five years.

The move into robotics comes as smartphone growth slows. IDC estimates that China’s smartphone shipments reached roughly 284 million units in 2025, a slight year-on-year decline. For manufacturers with in-house chips, operating systems, and AI models, robotics represents an adjacent growth market built on overlapping technologies.

Lu Weibing, president of Xiaomi’s mobile division, has argued that investments in proprietary silicon, operating systems, and foundational AI are interconnected and transferable to robotics platforms.

Other technology firms are also advancing embodied systems. At MWC, iFlytek demonstrated a humanoid guide robot powered by upgraded multimodal voice interaction, eliminating the need for handheld remote controls. China Mobile presented an unmanned restaurant concept in which embodied robots collaborated on ordering, food preparation, and delivery.

These deployments suggest that large models are increasingly acting as real-time control interfaces rather than conversational add-ons.

AI Glasses and the Search for Monetization

While AI apps saw a surge in daily active users during China’s Spring Festival promotions, retention and revenue models remain uncertain. Several internet companies are now shifting attention toward AI hardware.

Alibaba’s Qwen brand introduced its first AI glasses at MWC, embedding large language models into wearable devices capable of translation, transcription, photography, and object recognition. The devices are positioned for both consumer and professional use.

IDC forecasts that global smart glasses shipments will exceed 23 million units by 2026, including nearly 5 million units in China. Compared with heavily subsidized AI apps, glasses offer a direct hardware revenue stream and clearer monetization path.

iFlytek also debuted lightweight AI glasses weighing approximately 40 grams, emphasizing multimodal recording and translation capabilities.

Redefining the Smartphone Form

AI integration is also altering the smartphone itself.

ZTE showcased AI-powered devices that embed assistants directly into the system layer, enabling cross-application control via natural language. Rather than functioning as standalone apps, these AI agents are integrated into core operating system workflows.

Honor introduced a more experimental concept: a “Robot Phone” featuring a motorized multi-axis gimbal paired with a 200-megapixel sensor. The device can physically rotate and track users during video calls, combining AI vision with mechanical motion.

The common thread across categories is the shift from AI-enabled hardware to AI-defined hardware. Large models are beginning to influence device structure, interaction methods, and mechanical design.

MWC 2026 did not present a single dominant form factor. Instead, it revealed a competitive search for the most natural interface between AI systems and the physical world. Whether that interface proves to be humanoid robots, wearable glasses, or reengineered smartphones remains unsettled. What is clear is that AI is no longer just inside devices. It is beginning to shape what those devices become.


Georgia Tech Researchers Develop Robot Pollinator for Indoor Farms

Researchers at Georgia Tech have developed a robot pollinator that uses computer vision and 3D modeling to automate flower pollination in indoor farms.

By Laura Bennett | Edited by Kseniia Klichova
A prototype robot pollinator developed at Georgia Tech uses computer vision to determine flower orientation before performing targeted pollination. Photo: Georgia Tech Research Institute

Researchers at Georgia Tech have developed a robotic system designed to automate pollination inside indoor farms, addressing one of the most labor-intensive challenges in vertical agriculture.

The prototype, created by engineers at the Georgia Tech Research Institute (GTRI) and the George W. Woodruff School of Mechanical Engineering, uses computer vision and robotic manipulation to pollinate flowering plants without human intervention.

As indoor farming expands in urban environments, automating pollination has become a critical bottleneck in scaling production.

Pollination without Bees

Indoor farms offer several advantages over traditional agriculture, including year-round production, reduced water use, and minimal pesticide reliance. However, enclosed growing environments prevent natural pollinators such as bees from accessing crops.

For many flowering plants grown indoors – including strawberries and tomatoes – farmers must manually transfer pollen using brushes or vibrating tools. The process is repetitive and time-consuming, limiting scalability.

The Georgia Tech team’s robot is designed to pollinate plants that contain both male and female reproductive structures within the same flower. These plants require pollen transfer within a single bloom rather than cross-pollination between separate flowers.

By automating this step, researchers aim to reduce labor demands and increase consistency in crop yields.

Teaching a Robot to Understand Flower Orientation

One of the central technical challenges was enabling the robot to recognize the “pose” of each flower – its orientation, symmetry, and position relative to the stem.

Accurate pose detection is critical because pollen must be delivered precisely to the reproductive structures at the front of the flower. Even small alignment errors can reduce pollination effectiveness.

To solve this, the team developed a computer vision pipeline that reconstructs flowers in 3D from multiple camera images. The 3D model is then converted into depth-enhanced 2D representations that can be processed by object detection algorithms.

The researchers used a real-time object detection system known as YOLO (You Only Look Once) to classify flower features in a single processing pass. By converting 3D data into structured 2D inputs, they leveraged the abundance of training resources available for 2D computer vision systems.

The approach enabled the robot to estimate flower orientation with sufficient precision to approach and manipulate the stem correctly.
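The final geometric step can be illustrated with a toy example. This is our simplification, not Georgia Tech’s pipeline (which uses YOLO detections over depth-enhanced 2D images): once multi-view reconstruction yields 3D keypoints, the robot’s approach direction can be estimated as the unit vector from the stem attachment point to the centroid of the bloom.

```python
# Hypothetical sketch of orientation estimation from reconstructed
# 3D keypoints: the flower "faces" along the vector from its stem
# base to the centroid of its petal points.
import math

def flower_approach_vector(stem_base, petal_points):
    # Centroid of the reconstructed petal keypoints (meters).
    cx = sum(p[0] for p in petal_points) / len(petal_points)
    cy = sum(p[1] for p in petal_points) / len(petal_points)
    cz = sum(p[2] for p in petal_points) / len(petal_points)
    # Direction from stem base to bloom, normalized to unit length.
    dx, dy, dz = cx - stem_base[0], cy - stem_base[1], cz - stem_base[2]
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / norm, dy / norm, dz / norm)

# A bloom centered 10 cm above its stem base faces straight up,
# i.e. approximately (0, 0, 1).
print(flower_approach_vector((0, 0, 0),
                             [(0.02, 0, 0.1), (-0.02, 0, 0.1),
                              (0, 0.02, 0.1), (0, -0.02, 0.1)]))
```

The resulting vector tells the manipulator from which side to approach the stem so the vibrating end effector reaches the reproductive structures head-on.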

From Detection to Physical Interaction

Once the robot identifies the flower’s pose, it grips the stem and applies controlled vibration to dislodge and distribute pollen within the bloom.

Unlike simple mechanical vibration tools, the system integrates perception, positioning, and actuation into a single workflow. This coordination is essential in dense vertical farming environments where flowers vary in size, spacing, and orientation.

The prototype was built in Georgia Tech’s Safe Robotics Lab and remains in testing.

Adding Microscopic Feedback

Beyond basic pollination, the system includes an inspection capability that allows it to evaluate pollination success. The robot can perform close-up imaging of flower structures to assess whether pollen has been effectively transferred.

This feedback loop is a notable feature, as most manual pollination methods offer no immediate verification of success.

The research team has documented its technical approach in a paper accepted to the 2025 International Conference on Robotics and Automation (ICRA).

Automation Expands in Controlled Agriculture

Indoor farming is often promoted as a solution to urban food supply challenges and climate variability. However, high labor costs and operational complexity have slowed widespread adoption.

Automating tasks such as pollination could help reduce those barriers. Robotics in agriculture has traditionally focused on harvesting and monitoring, but pollination represents a more delicate and technically demanding process.

The Georgia Tech prototype demonstrates how advances in AI perception and robotic control can be applied to biological systems.

While the system remains in early development, it illustrates how robotics may increasingly support food production in controlled environments – where precision, repeatability, and data-driven feedback are essential for scaling output.


Revobots Launches All-Weather Autonomous Patrol Robot for Outdoor Security

Revobots has introduced TASKBOT SCOUT XT, an all-weather autonomous patrol robot designed for outdoor enforcement and campus monitoring under a Robots-as-a-Service model.

By Daniel Krauss | Edited by Kseniia Klichova
Revobots’ TASKBOT SCOUT XT is designed for outdoor patrol, featuring an all-wheel-drive chassis and weather-resistant enclosure. Photo: Campus Innovation

Revobots has introduced an all-weather version of its autonomous patrol robot, expanding its security robotics platform beyond indoor facilities and into outdoor environments.

The new system, called TASKBOT SCOUT XT, is engineered for exterior enforcement and monitoring tasks across campuses, parking lots, and mixed-use spaces. The Phoenix-based company says the robot is designed to address one of the longstanding limitations of autonomous patrol systems: reliable operation in unpredictable weather and uneven terrain.

The launch reflects growing demand for robotics solutions that can supplement security staffing in environments where labor shortages and operational costs continue to rise.

Hardware Upgrades for Outdoor Deployment

SCOUT XT builds on Revobots’ indoor patrol platform but incorporates significant hardware modifications to withstand environmental exposure.

The robot features an IP65-rated enclosure designed to protect against dust and water ingress. Its extended-wheelbase, all-wheel-drive chassis is intended to provide stability across uneven pavement, gravel, and surface transitions.

Outdoor-calibrated vision systems allow the robot to operate in variable lighting conditions, including bright daylight and low-light evening environments. Longer-range perception capabilities are designed to accommodate open spaces with fewer visual landmarks than indoor corridors.

All-terrain wheels further support navigation across cracked pavement, curb transitions, and mixed surfaces common in parking facilities and campus grounds.

Autonomous Operation with Human Oversight

SCOUT XT operates on Revobots’ existing backend infrastructure, including its Robots-as-a-Service subscription model and REVO Pilot human-in-the-loop oversight system.

By default, the robot navigates autonomously, using onboard AI to conduct patrol routes and monitor designated areas. When conditions exceed predefined thresholds – such as ambiguous detections or unusual environmental scenarios – the system can escalate to human supervisors for intervention.
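The escalation logic described above can be sketched as a simple confidence gate. This is an illustrative sketch only — the function names and threshold are assumptions, not Revobots' actual REVO Pilot API:

```python
# Hypothetical sketch of threshold-based escalation in a
# human-in-the-loop patrol system. Names and the 0.80 threshold
# are illustrative assumptions, not Revobots' implementation.

CONFIDENCE_THRESHOLD = 0.80  # below this, a detection is "ambiguous"

def handle_detection(label: str, confidence: float) -> str:
    """Decide whether the robot acts autonomously or escalates."""
    if confidence >= CONFIDENCE_THRESHOLD:
        # Routine case: log the event and keep patrolling.
        return f"autonomous: log '{label}' and continue patrol"
    # Ambiguous case: hand off to a remote human supervisor.
    return f"escalate: '{label}' ({confidence:.2f}) sent for human review"

print(handle_detection("parked_vehicle", 0.95))
print(handle_detection("unknown_object", 0.42))
```

In practice such thresholds would be tuned per deployment, balancing supervisor workload against the cost of a missed event.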

This hybrid autonomy model is increasingly common in commercial robotics deployments, particularly in security applications where accountability and reliability are critical.

Campus Deployment Highlights Practical Use Case

Revobots said SCOUT XT recently completed pilot testing at Xavier University in Cincinnati. During the trial, the robot supported automated license plate recognition enforcement across multiple campus parking areas.
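The enforcement step in an automated license-plate-recognition workflow reduces to normalizing a recognized plate and checking it against a permit list. The sketch below is a generic illustration under that assumption — the function and data names are hypothetical, not part of the C-Park platform:

```python
# Illustrative ALPR enforcement check: normalize a recognized plate
# string and flag it if it is not on the permit list. All names here
# are assumptions for illustration, not the C-Park API.

def check_plate(plate: str, permits: set[str]) -> dict:
    """Return the normalized plate and whether it is a violation."""
    normalized = plate.strip().upper().replace(" ", "")
    return {
        "plate": normalized,
        "violation": normalized not in permits,
    }

permits = {"ABC1234", "XYZ9876"}
print(check_plate("abc 1234", permits))  # permitted, no violation
print(check_plate("QRS5555", permits))   # flagged for enforcement
```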

The deployment was designed to expand monitoring coverage without increasing staffing levels, a key consideration for educational institutions and other organizations managing large facilities.

Integration with existing campus infrastructure was supported through collaboration with Campus Innovation and its C-Park platform.

The university pilot demonstrates how outdoor patrol robots can supplement traditional security operations, particularly in structured environments such as campuses, business parks, and residential communities.

Expanding the Scope of Security Robotics

Autonomous security robots have typically been deployed indoors, where environmental variables are more predictable. Extending patrol capabilities outdoors introduces challenges including weather exposure, uneven terrain, and dynamic lighting.

By adapting its existing platform rather than building an entirely new system, Revobots is pursuing incremental expansion of its task-adaptive robotics model.

The broader security robotics market is evolving toward service-based deployment models, where customers subscribe to robotics coverage rather than purchase hardware outright. This approach lowers upfront costs and allows providers to maintain centralized oversight and software updates.

As robotics companies seek commercially viable applications, outdoor patrol represents a practical step toward broader real-world autonomy.

While fully autonomous security operations remain a long-term ambition, platforms like SCOUT XT illustrate how robotics companies are addressing specific operational gaps – expanding coverage, improving consistency, and reducing reliance on human patrol staffing in large, open environments.

Automation, News, Robots & Robotics

TCL Unveils Tbot Concept to Turn Kids’ Smartwatch Into a Home AI Robot

At MWC 2026, TCL introduced Tbot, a concept desktop robot designed to pair with children’s smartwatches, extending AI support from outdoor tracking to home routines.

By Rachel Whitman | Edited by Kseniia Klichova
TCL Unveils Tbot Concept to Turn Kids’ Smartwatch Into a Home AI Robot
TCL’s Tbot concept acts as a magnetic charging dock and AI companion for children’s smartwatches, extending functionality into the home. Photo: TCL

At Mobile World Congress 2026 in Barcelona, TCL presented a concept device that blends wearable technology with home robotics. Called Tbot, the desktop robot is designed to pair with TCL’s children’s smartwatches, acting as both a charging dock and an AI-powered companion.

The concept reflects a broader shift in consumer robotics toward focused, task-specific devices rather than fully autonomous humanoid machines. Instead of building a standalone home robot, TCL is extending the functionality of an existing wearable into a stationary, home-based form.

For now, the Tbot remains a concept with no announced release date or pricing.

Extending the Smartwatch Experience Indoors

Children’s smartwatches have become popular for location tracking, communication, and safety monitoring. However, their functionality typically pauses when the watch is removed for charging.

TCL’s idea is to bridge that gap. The Tbot features a magnetic dock that holds and charges the smartwatch when it is not being worn. During that time, the desktop robot takes over certain AI-driven features.

According to TCL, Tbot can handle morning alarms, homework timers, and bedtime routines. The device is positioned as a supportive assistant rather than a surveillance tool, offering reminders and guidance tailored to children.

By maintaining continuity between outdoor and indoor use, TCL aims to create a unified digital experience across environments.

AI Companion Designed for Routine and Learning

Beyond basic alarms and timers, Tbot is designed to act as a conversational learning companion. Children can ask questions and explore topics, while the system provides age-appropriate responses.

At night, the robot can transition into a sleep-support role, offering calming stories or audio to help children wind down. Parents can configure notifications and receive updates, maintaining oversight without constant direct interaction.

TCL emphasizes that AI features would operate with parental permission and regulatory compliance in mind, reflecting increasing scrutiny around children’s data privacy.

Consumer Robotics Moves Toward Targeted Use Cases

The Tbot concept illustrates a growing trend in consumer robotics: devices focused on narrow, clearly defined roles rather than broad household autonomy.

Rather than competing with smart speakers or building full humanoid assistants, TCL is exploring how robotics can complement wearables. The Tbot’s design integrates charging infrastructure with AI interaction, creating a hybrid between dock, assistant, and companion device.

This approach aligns with a wider industry movement where robotics capabilities are embedded into familiar consumer products instead of introduced as entirely new categories.

Concept Stage Highlights Industry Direction

TCL has not confirmed whether Tbot will enter mass production. The device was presented at MWC as a demonstration of the company’s direction in AI-enabled family technology.

Concept products at major trade shows often serve as signals rather than immediate commercial offerings. In this case, TCL is indicating interest in expanding beyond smartphones and wearables into interactive home robotics.

As AI becomes more integrated into everyday devices, companies are experimenting with ways to connect physical hardware with digital services more seamlessly.

If Tbot reaches the market, it could represent an early example of robotics moving into family-focused, screen-light applications – where the machine’s role is subtle, supportive, and embedded within existing ecosystems.

For now, Tbot remains a prototype. But it underscores how robotics is increasingly intersecting with consumer electronics, particularly in categories centered on education, safety, and home routines.

Artificial Intelligence (AI), News, Robots & Robotics, Science & Tech

AGIBOT Showcases X2 at MWC 2026 as Humanoid Robots Move Toward Commercial Scale

AGIBOT presented its X2 humanoid robot at MWC 2026, highlighting shipment leadership and a shift toward scenario-based, commercial-scale deployment.

By Daniel Krauss | Edited by Kseniia Klichova
AGIBOT Showcases X2 at MWC 2026 as Humanoid Robots Move Toward Commercial Scale
AGIBOT’s X2 humanoid robot demonstrated agile movement and interactive capabilities at MWC 2026 in Barcelona. Photo: AGIBOT

At Mobile World Congress 2026 in Barcelona, humanoid robots were no longer treated as experimental curiosities. Instead, attention shifted toward companies capable of demonstrating commercial traction. Among them was AGIBOT, which used the event to showcase its X2 humanoid robot and emphasize its scale ambitions.

According to industry research firms including IDC and Omdia, AGIBOT ranked first globally in humanoid robot shipments in 2025. The company has also reported revenue exceeding RMB 1 billion, placing it among a small group of robotics firms claiming measurable commercial performance rather than pilot-stage experimentation.

The X2’s presence at MWC reflected that transition from demonstration to deployment.

X2: Motion and Function in Balance

The AGIBOT X2 is part of the company’s X Series, designed to combine advanced motion capability with interactive intelligence. The robot features 25 degrees of freedom and can reach walking speeds of up to 2 meters per second.

On the show floor, the X2 demonstrated stable locomotion and responsive movement, placing the platform between heavy industrial humanoids and lighter entertainment-focused machines.

Rather than emphasizing raw technical specifications alone, AGIBOT framed the X2 as a platform suited for defined operational scenarios. The company has segmented its portfolio into multiple product lines:

  • The A Series for presentation and reception roles
  • The G Series for factory and precision industrial environments
  • The X Series for advanced motion and intelligent interaction
  • The D Series quadrupeds for inspection and patrol applications

This structured segmentation signals a shift away from generic humanoid branding toward scenario-driven deployment strategies.

From Prototypes to Revenue Metrics

The humanoid robotics industry has seen dozens of prototypes unveiled over the past two years. However, few companies have disclosed shipment volumes or revenue figures.

AGIBOT’s public positioning around shipment rankings and revenue milestones suggests that scale, rather than spectacle, is becoming a primary differentiator. As more robotics firms move beyond lab environments, investors and enterprise customers are increasingly focused on production capacity and deployment readiness.

Industry data indicates that humanoid robotics remains a small but rapidly expanding segment within the broader robotics market. Commercial viability will depend on reliability, support infrastructure, and cost-effective scaling – not just technical performance.

Robot-as-a-Service Expands Overseas

AGIBOT is also pursuing international expansion through regional partnerships built around a Robot-as-a-Service model. Instead of centralizing global leasing operations, the company works with local partners to manage deployments and service contracts, particularly in Europe.

This approach aligns with a broader industry trend toward service-based robotics adoption. Many customers prefer flexible operational models over large upfront hardware investments, especially as humanoid robots remain a developing technology category.

Localized partnerships can also address regulatory requirements and after-sales support – both critical factors for scaling robotics across different markets.

MWC Signals a Shift in Industry Tone

Mobile World Congress has historically served as a barometer for technology maturity. In 2026, humanoid robotics appeared to enter a new phase where commercial scale and operational readiness overshadowed novelty.

AGIBOT’s X2 demonstration reflected that shift. The company’s messaging centered less on futuristic potential and more on deployment metrics and structured product segmentation.

As the humanoid robotics field matures, the competitive landscape may increasingly be defined by production capacity, service ecosystems, and real-world performance – rather than prototype announcements alone.

For companies seeking to lead the next stage of embodied AI, the challenge is no longer proving that humanoid robots can walk. It is proving that they can work – consistently and at scale.

News, Robots & Robotics, Science & Tech

Qualcomm Bets on Robotics as Core Revenue Driver by 2028

Qualcomm CEO Cristiano Amon says robotics will become a major revenue stream by 2028, as the company positions its Dragonwing chip at the center of the physical AI market.

By Laura Bennett | Edited by Kseniia Klichova
Qualcomm Bets on Robotics as Core Revenue Driver by 2028
Qualcomm’s Dragonwing processor is designed to power next-generation robots, from industrial automation systems to humanoid platforms. Photo: Qualcomm

Qualcomm expects robotics to become a major revenue driver within the next two years, signaling a strategic shift for the semiconductor company as it seeks growth beyond smartphones.

Speaking at Mobile World Congress in Barcelona, CEO Cristiano Amon said robotics would scale commercially by 2027 and evolve into a significant business segment by 2028. The comments accompanied Qualcomm’s broader push into “physical AI” – artificial intelligence systems designed to operate in real-world environments.

To support that ambition, Qualcomm introduced its Dragonwing processor earlier this year, a chip built specifically for robotics platforms.

From Snapdragon to Dragonwing

Qualcomm’s strategy mirrors its earlier success in mobile computing. Just as Snapdragon became a widely adopted processor family for smartphones, the company hopes Dragonwing can serve as a common computing platform across robotics manufacturers.

Dragonwing is designed to power a range of machines, from industrial automation systems and logistics robots to emerging humanoid platforms. By focusing on edge AI processing – where computation occurs directly on the robot rather than in the cloud – Qualcomm aims to enable real-time perception and decision-making.

Amon framed robotics as the next frontier for semiconductor growth, arguing that advances in physical AI have made robots more capable and commercially viable.

“Robotics will start to get scale within the next two years,” he said during an interview, describing the opportunity as larger than many investors currently assume.

Physical AI Expands the Robotics Market

The renewed optimism around robotics is closely tied to breakthroughs in AI models capable of interpreting vision, language, and motion in physical environments.

Unlike traditional industrial robots that follow fixed programming, newer systems incorporate machine learning models that adapt to changing conditions. These capabilities increase the range of tasks robots can perform, from warehouse automation to advanced humanoid applications.

Industry forecasts underscore the potential scale. Analysts estimate that general-purpose robotics could grow into a market worth hundreds of billions of dollars within the next two decades. Humanoid robots, still largely in prototype stages, are projected by some financial institutions to represent a multi-trillion-dollar opportunity by mid-century.

Currently, the global robotics sector is valued at roughly $67 billion and growing at double-digit annual rates, according to recent market data.

Competitive Pressures Intensify

Qualcomm is not alone in targeting robotics as a growth engine. Nvidia has positioned its own computing platforms as foundational infrastructure for AI-driven machines, while companies such as Tesla and several Chinese robotics startups are advancing humanoid robot development.

Qualcomm’s differentiation strategy centers on scalable silicon and edge processing efficiency. By embedding AI acceleration directly into robotics hardware, the company aims to reduce latency and dependency on cloud connectivity.

However, scaling robotics remains complex. Development costs for advanced AI models are high, supply chains for specialized components can be constrained, and commercialization timelines are often longer than anticipated.

Following Amon’s comments, Qualcomm shares dipped slightly during trading, reflecting broader market volatility rather than a fundamental shift in investor outlook.

Diversifying Beyond Smartphones

For Qualcomm, robotics represents part of a broader effort to diversify revenue streams beyond mobile handsets. The company has already expanded into automotive chips and IoT devices, areas that share similar requirements for embedded AI processing.

Robotics combines elements of all three: mobility, connectivity, and autonomous computation.

If Dragonwing gains traction across multiple robotics manufacturers, Qualcomm could position itself as a key supplier in the emerging physical AI ecosystem.

Whether robotics achieves the scale projected by Amon by 2028 remains to be seen. But the semiconductor industry’s growing focus on embodied AI suggests that the competition to supply the brains of next-generation robots is accelerating.

BMW Deploys Humanoid Robot Workers in Leipzig Battery Plant Pilot

BMW has introduced the AEON humanoid robot at its Leipzig plant to automate battery assembly tasks as part of a broader push into physical AI-powered manufacturing.

By Daniel Krauss | Edited by Kseniia Klichova

BMW Group has begun testing humanoid robots in its Leipzig manufacturing plant, marking a new phase in the company’s effort to integrate artificial intelligence and robotics into automotive production. The pilot program introduces the AEON humanoid robot, developed with Hexagon’s robotics division, to assist with complex assembly tasks in battery manufacturing.

The deployment represents the first European test of BMW’s broader “physical AI” strategy, which combines advanced artificial intelligence with real-world robotics systems capable of operating on factory floors.

The company is evaluating whether humanoid robots can automate physically demanding and repetitive production processes as electric vehicle manufacturing becomes increasingly complex.

Humanoid Robots Enter Battery Production

The AEON robot is being tested for tasks related to high-voltage battery assembly and component manufacturing. In these roles, the system uses modular gripping tools, scanning sensors, and mobile locomotion to handle materials and support assembly workflows.

Battery production involves heavy components and precise placement operations, making it a potential candidate for robotic assistance. BMW is particularly interested in determining whether humanoid robots can operate across multiple production stages rather than performing a single specialized task.

According to BMW production executives, the goal of the Leipzig pilot is to evaluate the robot’s ability to perform multifunctional tasks across various parts of the manufacturing process, including energy module assembly and exterior component production.

The project follows earlier experiments at BMW’s Spartanburg facility in the United States, where humanoid robots were tested in live factory environments.

Simulation and AI Accelerate Robot Training

The AEON system was developed using a simulation-first approach. Much of the robot’s training occurred in virtual environments before deployment in the real world.

Using NVIDIA’s robotics simulation platforms, engineers trained the robot to perform navigation, locomotion, and manipulation tasks within digital factory models. This approach significantly reduces the time required to develop new robotic capabilities.

Once trained in simulation, the robot’s learned behaviors can be transferred to physical hardware. This process allows engineers to refine motion control and task planning before exposing the robot to real manufacturing conditions.
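A common way to make simulation-trained behaviors survive the transfer to hardware is domain randomization: each virtual training episode samples slightly different physics, so the learned policy cannot overfit to one exact environment. The sketch below illustrates that general idea only — it is not BMW's or NVIDIA's actual pipeline, and the parameter names and ranges are invented for illustration:

```python
# Generic sketch of domain randomization for sim-to-real training.
# Parameter names and ranges are illustrative assumptions, not any
# vendor's configuration.
import random

def make_randomized_env() -> dict:
    """Sample slightly different physics for each training episode."""
    return {
        "friction": random.uniform(0.6, 1.0),     # floor surface variation
        "payload_kg": random.uniform(0.0, 5.0),   # carried component mass
        "sensor_noise": random.uniform(0.0, 0.05) # perception imperfection
    }

def train(episodes: int) -> list[dict]:
    """Run the (omitted) policy update across many randomized envs."""
    history = []
    for _ in range(episodes):
        env = make_randomized_env()
        # ... roll out the policy in `env` and update it (omitted) ...
        history.append(env)
    return history

runs = train(100)
print(f"trained across {len(runs)} randomized environments")
```

Because the policy never sees the same environment twice, it tends to generalize better when it finally runs on the physical robot.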

The system runs on NVIDIA Jetson edge computing hardware, which processes sensor data and supports real-time decision-making on the production floor.

Multimodal Sensors Enable Industrial Awareness

AEON integrates a combination of cameras, scanners, and spatial sensing systems that allow it to understand its environment and interact with industrial equipment.

These sensors capture high-resolution data from the factory floor and upload it to cloud-based digital twin platforms. Engineers can then analyze the data using 3D models that replicate the physical production environment.

By connecting the robot to digital twin infrastructure, BMW and Hexagon can monitor operations remotely and adjust robot behavior through software updates.

The system also uses machine learning models that learn from human demonstrations and synthetic training data, enabling the robot to acquire new skills more quickly than traditional industrial robots.

Automakers Explore the Future of Physical AI

BMW’s humanoid robot pilot reflects a broader trend across the automotive industry. Car manufacturers are increasingly exploring humanoid robotics as a way to automate labor-intensive production tasks.

Humanoid robots offer potential advantages over traditional industrial robots because they can operate in environments designed for human workers, using existing tools and workstations without major infrastructure changes.

At the same time, the technology remains in an early stage of deployment. Automakers are testing whether humanoid systems can meet the reliability, safety, and productivity standards required for large-scale manufacturing.

BMW has also established a new Center of Competence for Physical AI in Production to coordinate research and development across its global manufacturing network.

If the Leipzig pilot proves successful, humanoid robots could eventually become a regular presence on automotive production lines as manufacturers continue to integrate AI-driven automation into modern factories.

Automation, Business & Markets, News, Robots & Robotics

Xiaomi Plans New Robot Product as It Expands Push into AI and Chips

Xiaomi is preparing to release a new robotics product this year as the company deepens investment in AI, chips, and operating systems to compete in embodied intelligence.

By Laura Bennett | Edited by Kseniia Klichova
Xiaomi Plans New Robot Product as It Expands Push into AI and Chips
Xiaomi is expanding its robotics strategy by combining self-developed chips, operating systems, and AI models in future robot platforms. Photo: Xiaomi

Xiaomi is preparing to launch a new robotics product later this year as the Chinese technology company accelerates its push into artificial intelligence, chips, and embodied robotics. The upcoming device is expected to integrate Xiaomi’s self-developed semiconductor technology, proprietary operating system, and large AI models into a unified robotics platform.

The announcement signals a deeper commitment to robotics as part of Xiaomi’s long-term technology strategy. Company executives believe humanoid robots could become a significant component of Xiaomi’s industrial operations within the next five years.

The move comes as China’s technology companies race to establish leadership in embodied AI – a field that combines artificial intelligence with machines capable of interacting with the physical world.

A Robotics Platform Built on In-House Technologies

Xiaomi’s robotics initiative is closely tied to its broader investment in core technologies. Over the past five years, the company has spent more than 100 billion yuan on research and development across areas including semiconductors, operating systems, and artificial intelligence.

Executives say the new robotics product will bring these technologies together in a single system. By integrating its own chips and software stack, Xiaomi aims to control key elements of the robotics platform while reducing reliance on external suppliers.

This approach mirrors strategies used by other technology companies seeking to build vertically integrated AI systems.

Last year, Xiaomi introduced its self-developed XRing O1 chip, which the company described as a major milestone in its semiconductor ambitions. The processor is part of a broader effort to strengthen China’s domestic technology capabilities amid global competition in advanced computing.

Robotics Becomes a New Battleground for Tech Companies

Xiaomi’s robotics ambitions place it in direct competition with other Chinese technology and automotive companies expanding into humanoid robotics.

Electric vehicle maker Xpeng is building a manufacturing base for humanoid robots and aims to begin large-scale production in the coming years. Meanwhile, Li Auto has reorganized its research structure to accelerate development of embodied intelligence and autonomous driving technologies.

Across the industry, companies are increasingly viewing robotics as the next major platform after smartphones and electric vehicles.

For Xiaomi, the robotics push builds on its existing expertise in consumer electronics and connected devices. The company’s leadership believes its ability to rapidly commercialize technologies could give it an advantage in bringing robots to market.

“Private technology companies have the advantage of being close to users and market demand,” founder and CEO Lei Jun said in a recent interview, emphasizing the importance of quickly transforming research breakthroughs into scalable products.

From Devices to Embodied AI Ecosystems

Xiaomi’s robotics strategy is also tied to its broader ecosystem of connected devices. The company already produces a wide range of consumer electronics, from smartphones and smart home devices to electric vehicles.

Integrating robotics into this ecosystem could enable new forms of interaction between AI systems and physical environments.

Humanoid robots, for example, could eventually connect with smart home devices, autonomous vehicles, and cloud-based AI services. Such integration would extend Xiaomi’s technology platform beyond screens and vehicles into real-world automation.

Executives have suggested that humanoid robots may eventually be deployed inside Xiaomi’s own manufacturing facilities. If successful, robots could handle repetitive assembly tasks and logistics operations within the company’s factories.

Long-Term Bet on Core Technologies

Xiaomi recently announced plans to invest an additional 200 billion yuan over the next five years to accelerate research in foundational technologies such as semiconductors, operating systems, and artificial intelligence.

Robotics is expected to play a central role in this strategy as companies worldwide compete to develop machines capable of performing complex physical tasks.

While the commercial market for humanoid robots remains in its early stages, the increasing number of companies investing in the technology suggests that embodied AI may become one of the next major platforms in the global technology industry.

Xiaomi’s upcoming robot launch will offer an early indication of how consumer electronics companies plan to translate their expertise in chips, software, and AI into physical machines.

Artificial Intelligence (AI), News, Robots & Robotics, Science & Tech

Noetix Robotics Raises $140 Million to Expand Consumer Humanoid Robots

Beijing-based Noetix Robotics has raised nearly $140 million in a Series B round led by a CATL-backed investment fund as it develops humanoid and biomimetic consumer robots.

By Rachel Whitman | Edited by Kseniia Klichova
Noetix Robotics Raises $140 Million to Expand Consumer Humanoid Robots
Noetix Robotics’ humanoid robot performs during a demonstration event, highlighting advances in motion control and interactive robotics. Photo: Noetix Robotics

Chinese robotics startup Noetix Robotics has raised nearly RMB 1 billion (about $140 million) in a Series B funding round as it accelerates development of humanoid and biomimetic robots aimed at consumer applications.

The financing was led by CD Capital, an industrial investment platform linked to battery manufacturer CATL, with participation from CAS Investment, Jingguosheng Fund, and Unity Ventures. The round is the Beijing-based company's ninth and follows growing interest in humanoid robotics across China's technology sector.

Investors are increasingly betting that consumer-focused robots could become one of the next major markets for embodied artificial intelligence.

Young Engineering Team Driving Rapid Development

Noetix’s leadership and engineering teams are notably young, with most core members born after the mid-1990s and an average team age below 30. The company attributes its rapid development cycles partly to this structure, which allows it to iterate quickly on both hardware and software.

The company demonstrated its rapid prototyping approach when it built its first humanoid robot prototype in just over six weeks. Since then, the engineering team has continued to refine its designs through fast iteration cycles and real-world testing.

One example came during preparations for China’s Lunar New Year Gala television program. The company’s humanoid robot, named Xiao Bumi, underwent more than 20 dance training iterations within a single month to prepare for a stage performance. The project was designed to demonstrate the robot’s ability to adapt to new scenarios while maintaining balance and coordination.

Dual Focus on Bipedal and Biomimetic Robots

While most humanoid robotics companies focus exclusively on bipedal robots designed to resemble human movement, Noetix is pursuing a dual-track strategy. The company develops both traditional humanoid robots and biomimetic humanoids designed to mimic biological motion more closely.

According to the company, biomimetic designs could play a significant role in the future of consumer robotics because they allow for more natural interaction and emotional engagement with users.

This focus on interaction reflects a broader shift in robotics development. As robots move into homes and public spaces, social interaction and emotional resonance may become as important as mechanical performance.

The company says progress in biomimetic systems has reinforced its work on conventional humanoid robots, creating technological overlap between the two product lines.

Investors Bet on Consumer Robotics

The Series B funding round reflects growing investor confidence in the long-term potential of humanoid robots. While industrial robotics remains the largest segment of the robotics market today, many investors believe consumer robots could eventually reach a comparable scale.

China has become one of the most active regions for humanoid robotics development, with startups and major technology companies alike investing heavily in embodied AI.

Noetix says it has developed a full-stack robotics platform covering mechanical design, control systems, and AI software. The company currently holds more than 30 patents related to its robotics technologies.

With the new funding, Noetix plans to continue expanding its humanoid and biomimetic product lines and move closer to commercial deployment.

Although consumer humanoid robots remain in an early stage of development, the latest investment suggests that investors increasingly view the sector as a potential trillion-yuan market in the coming decades.

News, Robots & Robotics, Startups & Venture

Honor Unveils ‘Robot Phone’ with AI-Powered Moving Camera Arm at MWC 2026

Honor introduced a “Robot Phone” at MWC 2026 featuring a 200-megapixel AI tracking camera mounted on a miniature robotic gimbal arm designed for autonomous filming.

By Laura Bennett | Edited by Kseniia Klichova
Honor Unveils ‘Robot Phone’ with AI-Powered Moving Camera Arm at MWC 2026
Honor’s Robot Phone features a miniature robotic camera arm with a 4-DoF gimbal that uses AI tracking to automatically follow subjects. Photo: Honor

At Mobile World Congress 2026 in Barcelona, Chinese smartphone maker Honor introduced an experimental device that blends robotics and mobile computing: the “Robot Phone”. The concept smartphone features a 200-megapixel camera mounted on a miniature robotic gimbal arm capable of tracking users and responding to physical gestures.

The device reflects a broader push by consumer electronics companies to integrate robotics into everyday devices. Rather than limiting AI to software assistants or image processing, Honor’s concept brings mechanical motion and autonomous camera control directly into the smartphone form factor.

The company says the Robot Phone will launch commercially in China in the second half of 2026.

A Robotic Camera Built into a Smartphone

The most distinctive feature of the Robot Phone is its compact 4-degree-of-freedom (4-DoF) gimbal system. Using a custom micromotor assembly, the camera module can extend outward, rotate, and stabilize itself independently of the phone’s body.

This mechanical flexibility allows the device to track subjects in real time. The camera can follow a moving person during video recording or automatically reposition itself during video calls.

AI-powered tracking algorithms enable the system to detect motion, identify subjects, and adjust the camera’s orientation accordingly. The camera can also rotate 90 or 180 degrees for cinematic shots and supports features such as “AI SpinShot”, which creates dynamic rotating video perspectives.
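In broad terms, a subject-tracking gimbal closes a simple control loop: detect where the subject sits in the frame, measure its offset from the centre, and command pan/tilt corrections that reduce that offset. The sketch below illustrates the idea with a plain proportional controller; the function name, field-of-view values, and gain are illustrative assumptions, not Honor’s actual implementation.

```python
# Minimal sketch of one step of a subject-tracking loop for a
# pan/tilt camera gimbal. All names and constants are hypothetical.

def tracking_correction(subject_x, subject_y, frame_w, frame_h,
                        fov_h_deg=78.0, fov_v_deg=60.0, gain=0.5):
    """Return (pan, tilt) corrections in degrees that steer the camera
    so a detected subject drifts back toward the frame centre.

    subject_x, subject_y: pixel position of the detected subject.
    fov_h_deg, fov_v_deg: assumed horizontal/vertical field of view.
    gain: proportional gain; values below 1 damp motion to avoid overshoot.
    """
    # Normalised offset from the frame centre, in [-0.5, 0.5] per axis.
    off_x = subject_x / frame_w - 0.5
    off_y = subject_y / frame_h - 0.5
    # Map the offsets to angular errors and apply the proportional gain.
    pan = gain * off_x * fov_h_deg
    tilt = gain * off_y * fov_v_deg
    return pan, tilt

# A subject already at the frame centre needs no correction.
print(tracking_correction(960, 540, 1920, 1080))  # (0.0, 0.0)
```

In practice such a loop would run per frame against a detector’s bounding box, with smoothing and motor limits layered on top, but the core centre-the-subject logic stays this simple.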

Honor designed the system primarily for creators, vloggers, and social media users who frequently record hands-free video.

AI Interaction Moves from Screen to Motion

Beyond simple tracking, the robotic camera module can respond to user gestures. During demonstrations, the phone’s camera arm was shown nodding or shaking slightly, mimicking human gestures as it followed a user.

This capability reflects a growing trend toward devices that combine AI perception with physical movement. By integrating robotics directly into the smartphone, Honor is exploring new ways for devices to interact with users in physical space rather than through touchscreens alone.

To improve imaging quality, Honor has partnered with ARRI, a well-known manufacturer of professional cinema cameras. The collaboration aims to adapt professional color science and imaging technologies for mobile devices.

If successful, the concept could narrow the gap between smartphone cameras and dedicated filmmaking equipment.

Part of a Broader AI and Robotics Ecosystem

The Robot Phone was introduced alongside several other products at MWC 2026, including the Magic V6 foldable smartphone, the MagicPad 4 tablet, and the MagicBook Pro 14 laptop.

Together, these devices form part of Honor’s strategy to build an AI-powered ecosystem connecting mobile devices, computing platforms, and robotics.

The company also demonstrated a humanoid robot designed for service and assistance roles. The robot is intended for applications such as retail support, workplace inspection, and general customer assistance.

By linking these devices through shared AI systems, Honor aims to create what it calls “augmented human intelligence”, where digital assistants can interact with users in both virtual and physical environments.

Consumer Robotics Begins to Enter Everyday Devices

The Robot Phone illustrates how robotics concepts are beginning to move beyond dedicated robots into mainstream consumer electronics.

Smartphones have long incorporated advanced sensors and AI processing, but the addition of mechanical motion introduces a new dimension of interaction. Cameras that can physically reposition themselves could improve video calls, content creation, and augmented reality experiences.

While it remains to be seen whether robotic camera systems will become a standard feature in smartphones, the concept reflects how robotics, AI, and personal devices are increasingly converging.

As companies experiment with new hardware designs, robotics may gradually become part of everyday consumer technology – not just as standalone machines, but embedded within the devices people carry every day.