Qualcomm Showcases Humanoid Robotics Platform at India AI Impact Summit

Qualcomm unveiled its robotics platform and new humanoid-focused processor at the India AI Impact Summit 2026, signaling its push to become a core infrastructure provider for physical AI systems.

By Rachel Whitman | Edited by Kseniia Klichova
Qualcomm presented its robotics system and Dragonwing IQ-10 processor at the India AI Impact Summit, targeting scalable infrastructure for humanoid and autonomous robots. Photo: Qualcomm

As humanoid robotics transitions from experimental prototypes into commercial platforms, semiconductor companies are positioning themselves as foundational infrastructure providers. Qualcomm this week showcased its robotics system and introduced its Dragonwing IQ-10 processor at the India AI Impact Summit 2026, marking the company’s clearest move yet into humanoid robotics hardware.

The announcement, made at the Bharat Mandapam convention center in New Delhi, reflects a growing industry shift: robotics is becoming a computing problem as much as a mechanical one. Qualcomm’s robotics platform is designed to provide the processing, AI integration, and software foundation required to operate humanoids, autonomous mobile robots, and service machines across diverse environments.

A Processor Designed for Physical AI

At the center of Qualcomm’s robotics push is the Dragonwing IQ-10, its first processor specifically targeting full-size humanoid robots and advanced autonomous mobile robots. The chip represents Qualcomm’s entry into high-performance robotics computing, extending its presence beyond smartphones, automotive systems, and edge AI.

According to Qualcomm representatives, the robotics system integrates a heterogeneous computing architecture, combining multiple types of processors optimized for different workloads such as perception, motion planning, and control. This mixed-criticality design allows robots to handle safety-critical tasks, such as balance and obstacle avoidance, while simultaneously running high-level AI models for perception and decision-making.

This architecture reflects the computational complexity of humanoid robotics. Unlike traditional industrial machines, humanoids require continuous interpretation of visual, auditory, and spatial data, along with real-time motor coordination. That workload requires tightly integrated hardware and software optimized for low latency and high reliability.

Qualcomm’s approach also incorporates an AI data flywheel model, where robots continuously generate operational data that improves future performance. This aligns with broader industry trends, where embodied AI systems improve through real-world interaction rather than static programming.

Positioning for a Fragmented Robotics Ecosystem

Qualcomm’s strategy is to provide modular infrastructure that can scale across multiple robot form factors, from domestic service robots to industrial automation systems. Rather than building complete robots, the company is targeting the computing layer that enables robotics platforms.

This approach mirrors Qualcomm’s historical role in smartphones, where it supplied core processors and connectivity technologies that enabled hardware manufacturers to build consumer devices at scale. In robotics, a similar dynamic may emerge, with semiconductor providers supplying standardized compute platforms while robotics companies focus on mechanical systems and applications.

The robotics industry remains fragmented, with dozens of humanoid startups and established manufacturers developing proprietary platforms. A standardized computing layer could accelerate development by reducing the need for each company to build custom hardware stacks from scratch.

India’s Role in the Global Robotics Landscape

Qualcomm’s announcement came at the India AI Impact Summit, a five-day event bringing together policymakers, technology firms, and global AI leaders. The summit reflects India’s growing role in shaping global AI and automation policy, particularly through initiatives aimed at expanding AI deployment across infrastructure, manufacturing, and public services.

India’s emphasis on scalable AI deployment aligns with Qualcomm’s robotics strategy. The company’s platform is designed to support deployment across environments with high automation demand, including manufacturing, logistics, and service sectors.

As robotics adoption accelerates globally, computing infrastructure is emerging as a critical competitive layer. Advances in processors, edge AI systems, and integrated software platforms will determine how quickly robots can move from development to large-scale deployment.

Qualcomm’s entry into humanoid robotics computing signals that the sector is evolving beyond mechanical engineering into a full-stack computing ecosystem. The companies that define the hardware and software infrastructure may ultimately shape how physical AI scales across industries.


China Establishes First National Standards for Humanoid Robots

China has introduced its first national standard system for humanoid robotics, aiming to unify technical specifications and accelerate commercial deployment across industries.

By Laura Bennett | Edited by Kseniia Klichova
Officials and industry experts gather in Beijing to unveil China’s first national standard system for humanoid robotics, aimed at accelerating commercialization and ensuring safety alignment.

China has formally introduced its first national standard system for humanoid robotics, marking a coordinated effort to structure one of the country’s fastest-growing technology sectors.

The framework was unveiled at the Humanoid Robots and Embodied Intelligence Standardization meeting in Beijing. It establishes unified technical guidelines intended to streamline development, reduce fragmentation, and accelerate the transition from pilot projects to commercial deployment.

The move signals that policymakers view humanoid robotics not as an experimental field, but as an emerging industrial category requiring formal governance.

Six Pillars for Industrial Alignment

The standard system is organized around six core pillars: foundational and common standards, neuromorphic and intelligent computing, limbs and key components, full-system integration, application scenarios, and safety and ethics.

Together, these categories define technical specifications, interface protocols, and evaluation benchmarks. Committee experts involved in the initiative said the goal is to reduce coordination friction between suppliers, lower production costs, and shorten iteration cycles across the value chain.

By clarifying interfaces and performance metrics, the framework is designed to enable interoperability between hardware platforms, software systems, and embodied AI models. It also embeds safety and ethical considerations into early-stage development, reflecting regulatory awareness as robots move into workplaces and homes.

From Prototypes to Scaled Deployment

According to China’s Ministry of Industry and Information Technology, 2024 marked the country’s first year of humanoid robot mass production. More than 140 domestic companies released over 330 models, with deployments expanding into manufacturing, household services, healthcare, and elderly care.

Until now, much of that growth has occurred in a relatively fragmented environment, with companies developing proprietary architectures and evaluation criteria. National standards are expected to impose structure on a rapidly expanding ecosystem.

The framework could also serve a strategic function. As Chinese firms compete globally in embodied AI and humanoid robotics, standardized technical benchmarks may strengthen export readiness and ecosystem coordination.

While many humanoid deployments remain in early stages, the introduction of national standards suggests the industry is entering a new phase, where commercialization and regulatory alignment advance in parallel.


University of Southampton Develops Adaptive Robot Fin for Underwater Stability

Researchers at the University of Southampton have developed a flexible robotic fin with embedded electronic skin that automatically adapts to changing water currents, improving underwater robot stability and efficiency.

By Daniel Krauss | Edited by Kseniia Klichova
The adaptive robotic fin developed at the University of Southampton integrates electronic skin and hydraulic actuation to automatically counteract flow disturbances in underwater environments. Photo: University of Southampton

Autonomous underwater vehicles are built to withstand unpredictable ocean conditions, but their rigid fins often require significant energy to counteract sudden currents and turbulence. Researchers at the University of Southampton are proposing a different approach: fins that sense water flow and adjust their shape in real time.

The team has developed a flexible robotic fin embedded with electronic skin capable of detecting subtle changes in water movement. The system automatically modifies the fin’s stiffness and curvature to stabilize underwater robots while reducing energy consumption.

The research, published in an npj journal under the title “Harnessing proprioception in aquatic soft wings enables hybrid passive-active disturbance rejection,” reflects a broader push toward soft robotics and adaptive control in marine environments.

Inspired by Biological Sensing

The design draws from biological proprioception mechanisms observed in birds and fish. Birds detect airflow changes through sensory feedback in their feathers, while fish rely on lateral line systems and fin rays to perceive water disturbances.

To replicate similar sensing capabilities, the Southampton engineers embedded flexible liquid metal wiring inside a silicone fin. When water flow deforms the fin, the integrated electronic skin registers changes in electrical resistance. These signals are transmitted to a hydraulic system inside the robot’s body, which adjusts internal pressure through connected hoses to alter the fin’s shape.

Rather than relying solely on active propulsion corrections, the system combines passive flexibility with active hydraulic adjustment.
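The sensing-to-actuation chain described above — a resistance change in the electronic skin mapped to a hydraulic pressure correction — can be sketched as a simple proportional feedback loop. The function name, baseline resistance, gain, and pressure limits below are illustrative assumptions, not parameters from the Southampton team's published system:

```python
# Illustrative proportional feedback loop for an adaptive fin.
# Baseline resistance, gain, and pressure limits are placeholder
# values (assumptions), not figures from the published study.

BASELINE_RESISTANCE = 100.0   # ohms, resistance of the undeformed skin (assumed)
GAIN = 0.5                    # kPa of correction per ohm of deviation (assumed)
P_MIN, P_MAX = 0.0, 50.0      # hydraulic actuator's safe pressure range, kPa (assumed)

def pressure_correction(measured_resistance: float) -> float:
    """Map a skin-resistance reading to a hydraulic pressure command.

    Deformation of the fin changes the liquid-metal trace resistance;
    a proportional controller counteracts the deflection by raising
    internal hydraulic pressure, clamped to the actuator's range.
    """
    deviation = measured_resistance - BASELINE_RESISTANCE
    pressure = GAIN * deviation
    return max(P_MIN, min(P_MAX, pressure))

# No flow disturbance: no correction needed.
print(pressure_correction(100.0))  # -> 0.0
# A gust deforms the fin, raising resistance by 20 ohms.
print(pressure_correction(120.0))  # -> 10.0
```

In the real system this loop runs continuously, so the fin's passive compliance absorbs small disturbances while the controller handles larger ones.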

Reducing Energy Use in Turbulent Waters

Rigid AUVs typically expend substantial energy to maintain orientation when struck by waves or shifting currents. According to the researchers, the adaptive fin significantly improves disturbance rejection.

In controlled tests, the fin reduced unwanted buoyancy effects caused by sudden water flow by 87 percent compared with a similar vehicle using rigid fins. The robot demonstrated improved self-stabilization and maneuverability while consuming less energy to maintain position.

The findings suggest potential advantages for underwater inspection, environmental monitoring, and defense applications where energy efficiency and stability are critical.

Technical Constraints Remain

Despite promising results, integration challenges remain. Scaling the flexible system to larger vehicles and embedding it into rigid hull designs could complicate deployment. Long-term durability of the electronic skin and hydraulic components in harsh marine environments also requires further validation.

The researchers note that more robust actuators and structural refinements may help address these constraints.

The project illustrates how bio-inspired sensing and soft robotics are reshaping underwater vehicle design. As offshore energy, marine research, and subsea infrastructure monitoring expand, adaptive control systems such as this may become increasingly relevant to improving endurance and operational stability in dynamic ocean conditions.


MWC 2026 Marks Shift From AI Apps to AI Native Hardware

Mobile World Congress 2026 highlighted a decisive shift as AI moved beyond apps and into physical devices, from humanoid robots and AI glasses to smartphones with mechanical motion systems.

By Rachel Whitman | Edited by Kseniia Klichova
Humanoid robots, AI glasses and AI-integrated smartphones on display at MWC 2026 reflect a broader industry shift toward AI-native hardware design. Photo: MWC

Mobile World Congress 2026 underscored a structural change in the AI industry: artificial intelligence is no longer confined to apps running on smartphones. It is beginning to reshape the hardware itself.

Across the exhibition floor in Barcelona, companies presented humanoid robots controlled entirely by voice, AI glasses positioned as daily computing devices, and smartphones equipped with mechanical camera systems that physically move. The theme was consistent: large AI models are evolving from software layers into defining elements of device architecture.

Smartphone Makers Enter Robotics

Several Chinese smartphone manufacturers used MWC to demonstrate ambitions beyond handsets.

Honor unveiled its first humanoid robot during its global launch event, showcasing AI-driven motion control and multimodal interaction. The demonstration included acrobatic movements and coordinated choreography, signaling technical progress in embodied control systems.

Xiaomi, which introduced its CyberOne humanoid in 2022, did not display a robot on the show floor but reported new milestones. According to the company, its humanoid systems have begun operating in automotive factories, performing tasks such as self-tapping nut installation and material transport. Chairman Lei Jun said large-scale deployment in Xiaomi’s factories could occur within five years.

The move into robotics comes as smartphone growth slows. IDC estimates that China’s smartphone shipments reached roughly 284 million units in 2025, a slight year-on-year decline. For manufacturers with in-house chips, operating systems, and AI models, robotics represents an adjacent growth market built on overlapping technologies.

Lu Weibing, president of Xiaomi’s mobile division, has argued that investments in proprietary silicon, operating systems, and foundational AI are interconnected and transferable to robotics platforms.

Other technology firms are also advancing embodied systems. At MWC, iFlytek demonstrated a humanoid guide robot powered by upgraded multimodal voice interaction, eliminating the need for handheld remote controls. China Mobile presented an unmanned restaurant concept in which embodied robots collaborated on ordering, food preparation, and delivery.

These deployments suggest that large models are increasingly acting as real-time control interfaces rather than conversational add-ons.

AI Glasses and the Search for Monetization

While AI apps saw a surge in daily active users during China’s Spring Festival promotions, retention and revenue models remain uncertain. Several internet companies are now shifting attention toward AI hardware.

Alibaba’s Qwen brand introduced its first AI glasses at MWC, embedding large language models into wearable devices capable of translation, transcription, photography, and object recognition. The devices are positioned for both consumer and professional use.

IDC forecasts that global smart glasses shipments will exceed 23 million units by 2026, including nearly 5 million units in China. Compared with heavily subsidized AI apps, glasses offer a direct hardware revenue stream and clearer monetization path.

iFlytek also debuted lightweight AI glasses weighing approximately 40 grams, emphasizing multimodal recording and translation capabilities.

Redefining the Smartphone Form

AI integration is also altering the smartphone itself.

ZTE showcased AI-powered devices that embed assistants directly into the system layer, enabling cross-application control via natural language. Rather than functioning as standalone apps, these AI agents are integrated into core operating system workflows.

Honor introduced a more experimental concept: a “Robot Phone” featuring a motorized multi-axis gimbal paired with a 200-megapixel sensor. The device can physically rotate and track users during video calls, combining AI vision with mechanical motion.

The common thread across categories is the shift from AI-enabled hardware to AI-defined hardware. Large models are beginning to influence device structure, interaction methods, and mechanical design.

MWC 2026 did not present a single dominant form factor. Instead, it revealed a competitive search for the most natural interface between AI systems and the physical world. Whether that interface proves to be humanoid robots, wearable glasses, or reengineered smartphones remains unsettled. What is clear is that AI is no longer just inside devices. It is beginning to shape what those devices become.


Georgia Tech Researchers Develop Robot Pollinator for Indoor Farms

Researchers at Georgia Tech have developed a robot pollinator that uses computer vision and 3D modeling to automate flower pollination in indoor farms.

By Laura Bennett | Edited by Kseniia Klichova
A prototype robot pollinator developed at Georgia Tech uses computer vision to determine flower orientation before performing targeted pollination. Photo: Georgia Tech Research Institute

Researchers at Georgia Tech have developed a robotic system designed to automate pollination inside indoor farms, addressing one of the most labor-intensive challenges in vertical agriculture.

The prototype, created by engineers at the Georgia Tech Research Institute (GTRI) and the George W. Woodruff School of Mechanical Engineering, uses computer vision and robotic manipulation to pollinate flowering plants without human intervention.

As indoor farming expands in urban environments, automating pollination has become a critical bottleneck in scaling production.

Pollination without Bees

Indoor farms offer several advantages over traditional agriculture, including year-round production, reduced water use, and minimal pesticide reliance. However, enclosed growing environments prevent natural pollinators such as bees from accessing crops.

For many flowering plants grown indoors – including strawberries and tomatoes – farmers must manually transfer pollen using brushes or vibrating tools. The process is repetitive and time-consuming, limiting scalability.

The Georgia Tech team’s robot is designed to pollinate plants that contain both male and female reproductive structures within the same flower. These plants require pollen transfer within a single bloom rather than cross-pollination between separate flowers.

By automating this step, researchers aim to reduce labor demands and increase consistency in crop yields.

Teaching a Robot to Understand Flower Orientation

One of the central technical challenges was enabling the robot to recognize the “pose” of each flower – its orientation, symmetry, and position relative to the stem.

Accurate pose detection is critical because pollen must be delivered precisely to the reproductive structures at the front of the flower. Even small alignment errors can reduce pollination effectiveness.

To solve this, the team developed a computer vision pipeline that reconstructs flowers in 3D from multiple camera images. The 3D model is then converted into depth-enhanced 2D representations that can be processed by object detection algorithms.

The researchers used a real-time object detection system known as YOLO (You Only Look Once) to classify flower features in a single processing pass. By converting 3D data into structured 2D inputs, they leveraged the abundance of training resources available for 2D computer vision systems.

The approach enabled the robot to estimate flower orientation with sufficient precision to approach and manipulate the stem correctly.
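The conversion from 3D reconstruction to depth-enhanced 2D input can be illustrated with a minimal sketch: a per-pixel depth map is normalized and stacked alongside grayscale intensity and a depth-gradient channel, producing a fixed-shape array that a standard 2D detector such as YOLO could consume. The specific channel encoding here is an illustrative assumption, not the GTRI team's actual pipeline:

```python
import numpy as np

def depth_enhanced_input(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Fuse an RGB image and a depth map into a 3-channel 2D input.

    Channels: grayscale intensity, normalized depth, and depth-gradient
    magnitude (a crude proxy for surface orientation). Output is a
    float32 array of shape (H, W, 3) with values in [0, 1].
    """
    gray = rgb.mean(axis=2) / 255.0                        # intensity channel
    d = depth.astype(np.float32)
    d_norm = (d - d.min()) / (np.ptp(d) + 1e-9)            # normalized depth
    gy, gx = np.gradient(d_norm)                           # spatial depth gradients
    grad = np.sqrt(gx ** 2 + gy ** 2)
    grad = grad / (grad.max() + 1e-9)                      # gradient magnitude
    return np.stack([gray, d_norm, grad], axis=2).astype(np.float32)

# Toy example: a flat gray 4x4 image with a left-to-right depth ramp.
rgb = np.full((4, 4, 3), 128, dtype=np.uint8)
depth = np.tile(np.linspace(1.0, 2.0, 4), (4, 1))
fused = depth_enhanced_input(rgb, depth)
print(fused.shape)  # -> (4, 4, 3)
```

Packing depth cues into image-like channels is a common trick for reusing detectors (and their large 2D training corpora) on RGB-D data, which matches the article's point about leveraging 2D training resources.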

From Detection to Physical Interaction

Once the robot identifies the flower’s pose, it grips the stem and applies controlled vibration to dislodge and distribute pollen within the bloom.

Unlike simple mechanical vibration tools, the system integrates perception, positioning, and actuation into a single workflow. This coordination is essential in dense vertical farming environments where flowers vary in size, spacing, and orientation.

The prototype was built in Georgia Tech’s Safe Robotics Lab and remains in testing.

Adding Microscopic Feedback

Beyond basic pollination, the system includes an inspection capability that allows it to evaluate pollination success. The robot can perform close-up imaging of flower structures to assess whether pollen has been effectively transferred.

This feedback loop is a notable feature, as most manual pollination methods offer no immediate verification of success.

The research team has documented its technical approach in a paper accepted to the 2025 IEEE International Conference on Robotics and Automation (ICRA).

Automation Expands in Controlled Agriculture

Indoor farming is often promoted as a solution to urban food supply challenges and climate variability. However, high labor costs and operational complexity have slowed widespread adoption.

Automating tasks such as pollination could help reduce those barriers. Robotics in agriculture has traditionally focused on harvesting and monitoring, but pollination represents a more delicate and technically demanding process.

The Georgia Tech prototype demonstrates how advances in AI perception and robotic control can be applied to biological systems.

While the system remains in early development, it illustrates how robotics may increasingly support food production in controlled environments – where precision, repeatability, and data-driven feedback are essential for scaling output.


Revobots Launches All-Weather Autonomous Patrol Robot for Outdoor Security

Revobots has introduced TASKBOT SCOUT XT, an all-weather autonomous patrol robot designed for outdoor enforcement and campus monitoring under a Robots-as-a-Service model.

By Daniel Krauss | Edited by Kseniia Klichova
Revobots’ TASKBOT SCOUT XT is designed for outdoor patrol, featuring an all-wheel-drive chassis and weather-resistant enclosure. Photo: Campus Innovation

Revobots has introduced an all-weather version of its autonomous patrol robot, expanding its security robotics platform beyond indoor facilities and into outdoor environments.

The new system, called TASKBOT SCOUT XT, is engineered for exterior enforcement and monitoring tasks across campuses, parking lots, and mixed-use spaces. The Phoenix-based company says the robot is designed to address one of the longstanding limitations of autonomous patrol systems: reliable operation in unpredictable weather and uneven terrain.

The launch reflects growing demand for robotics solutions that can supplement security staffing in environments where labor shortages and operational costs continue to rise.

Hardware Upgrades for Outdoor Deployment

SCOUT XT builds on Revobots’ indoor patrol platform but incorporates significant hardware modifications to withstand environmental exposure.

The robot features an IP65-rated enclosure designed to protect against dust and water ingress. Its extended-wheelbase, all-wheel-drive chassis is intended to provide stability across uneven pavement, gravel, and surface transitions.

Outdoor-calibrated vision systems allow the robot to operate in variable lighting conditions, including bright daylight and low-light evening environments. Longer-range perception capabilities are designed to accommodate open spaces with fewer visual landmarks than indoor corridors.

All-terrain wheels further support navigation across cracked pavement, curb transitions, and mixed surfaces common in parking facilities and campus grounds.

Autonomous Operation with Human Oversight

SCOUT XT operates on Revobots’ existing backend infrastructure, including its Robots-as-a-Service subscription model and REVO Pilot human-in-the-loop oversight system.

By default, the robot navigates autonomously, using onboard AI to conduct patrol routes and monitor designated areas. When conditions exceed predefined thresholds – such as ambiguous detections or unusual environmental scenarios – the system can escalate to human supervisors for intervention.
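A hybrid-autonomy escalation policy of this kind typically reduces to a threshold check on each detection event. The sketch below is a generic illustration of the pattern, not Revobots' REVO Pilot logic; the function name, fields, and threshold value are all assumptions:

```python
# Generic human-in-the-loop escalation check for a patrol robot.
# The threshold and event fields are illustrative assumptions,
# not Revobots' actual REVO Pilot parameters.

CONFIDENCE_THRESHOLD = 0.80   # below this, a detection counts as "ambiguous"

def should_escalate(detection_confidence: float, anomaly: bool) -> bool:
    """Return True when an event should route to a human supervisor.

    Escalate on low-confidence (ambiguous) detections or on flagged
    environmental anomalies; otherwise the robot continues its
    patrol route autonomously.
    """
    return anomaly or detection_confidence < CONFIDENCE_THRESHOLD

print(should_escalate(0.95, anomaly=False))  # -> False: continue patrol
print(should_escalate(0.60, anomaly=False))  # -> True: ambiguous detection
print(should_escalate(0.95, anomaly=True))   # -> True: unusual scenario
```

The design choice is to keep the autonomous path as the default and treat human review as the exception, which keeps supervisor workload proportional to genuinely uncertain events.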

This hybrid autonomy model is increasingly common in commercial robotics deployments, particularly in security applications where accountability and reliability are critical.

Campus Deployment Highlights Practical Use Case

Revobots said SCOUT XT recently completed pilot testing at Xavier University in Cincinnati. During the trial, the robot supported automated license plate recognition enforcement across multiple campus parking areas.

The deployment was designed to expand monitoring coverage without increasing staffing levels, a key consideration for educational institutions and other organizations managing large facilities.

Integration with existing campus infrastructure was supported through collaboration with Campus Innovation and its C-Park platform.

The university pilot demonstrates how outdoor patrol robots can supplement traditional security operations, particularly in structured environments such as campuses, business parks, and residential communities.

Expanding the Scope of Security Robotics

Autonomous security robots have typically been deployed indoors, where environmental variables are more predictable. Extending patrol capabilities outdoors introduces challenges including weather exposure, uneven terrain, and dynamic lighting.

By adapting its existing platform rather than building an entirely new system, Revobots is pursuing incremental expansion of its task-adaptive robotics model.

The broader security robotics market is evolving toward service-based deployment models, where customers subscribe to robotics coverage rather than purchase hardware outright. This approach lowers upfront costs and allows providers to maintain centralized oversight and software updates.

As robotics companies seek commercially viable applications, outdoor patrol represents a practical step toward broader real-world autonomy.

While fully autonomous security operations remain a long-term ambition, platforms like SCOUT XT illustrate how robotics companies are addressing specific operational gaps – expanding coverage, improving consistency, and reducing reliance on human patrol staffing in large, open environments.
