Georgia Tech Researchers Develop Robot Pollinator for Indoor Farms

Researchers at Georgia Tech have developed a robot pollinator that uses computer vision and 3D modeling to automate flower pollination in indoor farms.

By Laura Bennett | Edited by Kseniia Klichova
A prototype robot pollinator developed at Georgia Tech uses computer vision to determine flower orientation before performing targeted pollination. Photo: Georgia Tech Research Institute

Researchers at Georgia Tech have developed a robotic system designed to automate pollination inside indoor farms, addressing one of the most labor-intensive challenges in vertical agriculture.

The prototype, created by engineers at the Georgia Tech Research Institute (GTRI) and the George W. Woodruff School of Mechanical Engineering, uses computer vision and robotic manipulation to pollinate flowering plants without human intervention.

As indoor farming expands in urban environments, manual pollination has become a critical bottleneck in scaling production.

Pollination without Bees

Indoor farms offer several advantages over traditional agriculture, including year-round production, reduced water use, and minimal pesticide reliance. However, enclosed growing environments prevent natural pollinators such as bees from accessing crops.

For many flowering plants grown indoors – including strawberries and tomatoes – farmers must manually transfer pollen using brushes or vibrating tools. The process is repetitive and time-consuming, limiting scalability.

The Georgia Tech team’s robot is designed to pollinate plants that contain both male and female reproductive structures within the same flower. These plants require pollen transfer within a single bloom rather than cross-pollination between separate flowers.

By automating this step, researchers aim to reduce labor demands and increase consistency in crop yields.

Teaching a Robot to Understand Flower Orientation

One of the central technical challenges was enabling the robot to recognize the “pose” of each flower – its orientation, symmetry, and position relative to the stem.

Accurate pose detection is critical because pollen must be delivered precisely to the reproductive structures at the front of the flower. Even small alignment errors can reduce pollination effectiveness.

To solve this, the team developed a computer vision pipeline that reconstructs flowers in 3D from multiple camera images. The 3D model is then converted into depth-enhanced 2D representations that can be processed by object detection algorithms.

The researchers used a real-time object detection system known as YOLO (You Only Look Once) to classify flower features in a single processing pass. By converting 3D data into structured 2D inputs, they leveraged the abundance of training resources available for 2D computer vision systems.
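The team's exact pipeline is not reproduced here, but the core step of flattening 3D reconstructions into 2D inputs for a detector can be sketched. In the minimal example below, a camera-frame point cloud is projected onto a depth image that a 2D model such as YOLO could consume; all function names, camera intrinsics, and image dimensions are illustrative assumptions, not the researchers' actual code.

```python
import numpy as np

def depth_image_from_points(points, fx=500.0, fy=500.0,
                            cx=64.0, cy=64.0, size=(128, 128)):
    """Project an (N, 3) camera-frame point cloud to a dense depth
    image that a 2D detector (e.g. a YOLO model) could consume.

    Pixels with no point keep depth 0; nearer points win ties.
    """
    h, w = size
    depth = np.zeros((h, w), dtype=np.float32)
    z = points[:, 2]
    valid = z > 0  # keep only points in front of the camera
    u = (points[valid, 0] * fx / z[valid] + cx).astype(int)
    v = (points[valid, 1] * fy / z[valid] + cy).astype(int)
    in_img = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for uu, vv, zz in zip(u[in_img], v[in_img], z[valid][in_img]):
        if depth[vv, uu] == 0 or zz < depth[vv, uu]:
            depth[vv, uu] = zz  # z-buffer: nearest point wins
    return depth
```

The payoff of such a projection is that the resulting image can be fed to standard 2D object-detection tooling, which is where the abundance of training resources the researchers cite comes into play.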

The approach enabled the robot to estimate flower orientation with sufficient precision to approach and manipulate the stem correctly.

From Detection to Physical Interaction

Once the robot identifies the flower’s pose, it grips the stem and applies controlled vibration to dislodge and distribute pollen within the bloom.

Unlike simple mechanical vibration tools, the system integrates perception, positioning, and actuation into a single workflow. This coordination is essential in dense vertical farming environments where flowers vary in size, spacing, and orientation.

The prototype was built in Georgia Tech’s Safe Robotics Lab and remains in testing.

Adding Microscopic Feedback

Beyond basic pollination, the system includes an inspection capability that allows it to evaluate pollination success. The robot can perform close-up imaging of flower structures to assess whether pollen has been effectively transferred.

This feedback loop is a notable feature, as most manual pollination methods offer no immediate verification of success.

The research team has documented its technical approach in a paper accepted to the 2025 International Conference on Robotics and Automation (ICRA).

Automation Expands in Controlled Agriculture

Indoor farming is often promoted as a solution to urban food supply challenges and climate variability. However, high labor costs and operational complexity have slowed widespread adoption.

Automating tasks such as pollination could help reduce those barriers. Robotics in agriculture has traditionally focused on harvesting and monitoring, but pollination represents a more delicate and technically demanding process.

The Georgia Tech prototype demonstrates how advances in AI perception and robotic control can be applied to biological systems.

While the system remains in early development, it illustrates how robotics may increasingly support food production in controlled environments – where precision, repeatability, and data-driven feedback are essential for scaling output.


China Establishes First National Standards for Humanoid Robots

China has introduced its first national standard system for humanoid robotics, aiming to unify technical specifications and accelerate commercial deployment across industries.

By Laura Bennett | Edited by Kseniia Klichova
Officials and industry experts gather in Beijing to unveil China’s first national standard system for humanoid robotics, aimed at accelerating commercialization and ensuring safety alignment.

China has formally introduced its first national standard system for humanoid robotics, marking a coordinated effort to structure one of the country’s fastest-growing technology sectors.

The framework was unveiled at the Humanoid Robots and Embodied Intelligence Standardization meeting in Beijing. It establishes unified technical guidelines intended to streamline development, reduce fragmentation, and accelerate the transition from pilot projects to commercial deployment.

The move signals that policymakers view humanoid robotics not as an experimental field, but as an emerging industrial category requiring formal governance.

Six Pillars for Industrial Alignment

The standard system is organized around six core pillars: foundational and common standards, neuromorphic and intelligent computing, limbs and key components, full-system integration, application scenarios, and safety and ethics.

Together, these categories define technical specifications, interface protocols, and evaluation benchmarks. Committee experts involved in the initiative said the goal is to reduce coordination friction between suppliers, lower production costs, and shorten iteration cycles across the value chain.

By clarifying interfaces and performance metrics, the framework is designed to enable interoperability between hardware platforms, software systems, and embodied AI models. It also embeds safety and ethical considerations into early-stage development, reflecting regulatory awareness as robots move into workplaces and homes.

From Prototypes to Scaled Deployment

According to China’s Ministry of Industry and Information Technology, 2024 marked the country’s first year of humanoid robot mass production. More than 140 domestic companies released over 330 models, with deployments expanding into manufacturing, household services, healthcare, and elderly care.

Until now, much of that growth has occurred in a relatively fragmented environment, with companies developing proprietary architectures and evaluation criteria. National standards are expected to impose structure on a rapidly expanding ecosystem.

The framework could also serve a strategic function. As Chinese firms compete globally in embodied AI and humanoid robotics, standardized technical benchmarks may strengthen export readiness and ecosystem coordination.

While many humanoid deployments remain in early stages, the introduction of national standards suggests the industry is entering a new phase, where commercialization and regulatory alignment advance in parallel.


University of Southampton Develops Adaptive Robot Fin for Underwater Stability

Researchers at the University of Southampton have developed a flexible robotic fin with embedded electronic skin that automatically adapts to changing water currents, improving underwater robot stability and efficiency.

By Daniel Krauss | Edited by Kseniia Klichova
The adaptive robotic fin developed at the University of Southampton integrates electronic skin and hydraulic actuation to automatically counteract flow disturbances in underwater environments. Photo: University of Southampton

Autonomous underwater vehicles are built to withstand unpredictable ocean conditions, but their rigid fins often require significant energy to counteract sudden currents and turbulence. Researchers at the University of Southampton are proposing a different approach: fins that sense water flow and adjust their shape in real time.

The team has developed a flexible robotic fin embedded with electronic skin capable of detecting subtle changes in water movement. The system automatically modifies the fin’s stiffness and curvature to stabilize underwater robots while reducing energy consumption.

The research, published in npj under the title “Harnessing proprioception in aquatic soft wings enables hybrid passive-active disturbance rejection,” reflects a broader push toward soft robotics and adaptive control in marine environments.

Inspired by Biological Sensing

The design draws from biological proprioception mechanisms observed in birds and fish. Birds detect airflow changes through sensory feedback in their feathers, while fish rely on lateral line systems and fin rays to perceive water disturbances.

To replicate similar sensing capabilities, the Southampton engineers embedded flexible liquid metal wiring inside a silicone fin. When water flow deforms the fin, the integrated electronic skin registers changes in electrical resistance. These signals are transmitted to a hydraulic system inside the robot’s body, which adjusts internal pressure through connected hoses to alter the fin’s shape.

Rather than relying solely on active propulsion corrections, the system combines passive flexibility with active hydraulic adjustment.
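The published control law is not detailed here, but the active half of that loop can be sketched as a simple proportional controller: deformation changes the liquid-metal trace's resistance, and the hydraulic system pushes back against the sensed deviation. The gain, baseline, units, and limits below are illustrative assumptions, not values from the paper.

```python
def pressure_correction(resistance, baseline, gain=0.8,
                        p_min=-1.0, p_max=1.0):
    """Map an e-skin resistance deviation to a hydraulic pressure
    command (arbitrary units). Fin deformation shifts the liquid-metal
    trace resistance away from its resting baseline; the controller
    opposes that shift proportionally, clamped to actuator limits.
    """
    error = resistance - baseline
    command = -gain * error  # push back against the sensed deformation
    return max(p_min, min(p_max, command))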

Reducing Energy Use in Turbulent Waters

Rigid AUVs typically expend substantial energy to maintain orientation when struck by waves or shifting currents. According to the researchers, the adaptive fin significantly improves disturbance rejection.

In controlled tests, the fin reduced unwanted buoyancy effects caused by sudden water flow by 87 percent compared with a similar vehicle using rigid fins. The robot demonstrated improved self-stabilization and maneuverability while consuming less energy to maintain position.

The findings suggest potential advantages for underwater inspection, environmental monitoring, and defense applications where energy efficiency and stability are critical.

Technical Constraints Remain

Despite promising results, integration challenges remain. Scaling the flexible system to larger vehicles and embedding it into rigid hull designs could complicate deployment. Long-term durability of the electronic skin and hydraulic components in harsh marine environments also requires further validation.

The researchers note that more robust actuators and structural refinements may help address these constraints.

The project illustrates how bio-inspired sensing and soft robotics are reshaping underwater vehicle design. As offshore energy, marine research, and subsea infrastructure monitoring expand, adaptive control systems such as this may become increasingly relevant to improving endurance and operational stability in dynamic ocean conditions.


MWC 2026 Marks Shift From AI Apps to AI-Native Hardware

Mobile World Congress 2026 highlighted a decisive shift as AI moved beyond apps and into physical devices, from humanoid robots and AI glasses to smartphones with mechanical motion systems.

By Rachel Whitman | Edited by Kseniia Klichova
Humanoid robots, AI glasses and AI-integrated smartphones on display at MWC 2026 reflect a broader industry shift toward AI-native hardware design. Photo: MWC

Mobile World Congress 2026 underscored a structural change in the AI industry: artificial intelligence is no longer confined to apps running on smartphones. It is beginning to reshape the hardware itself.

Across the exhibition floor in Barcelona, companies presented humanoid robots controlled entirely by voice, AI glasses positioned as daily computing devices, and smartphones equipped with mechanical camera systems that physically move. The theme was consistent: large AI models are evolving from software layers into defining elements of device architecture.

Smartphone Makers Enter Robotics

Several Chinese smartphone manufacturers used MWC to demonstrate ambitions beyond handsets.

Honor unveiled its first humanoid robot during its global launch event, showcasing AI-driven motion control and multimodal interaction. The demonstration included acrobatic movements and coordinated choreography, signaling technical progress in embodied control systems.

Xiaomi, which introduced its CyberOne humanoid in 2022, did not display a robot on the show floor but reported new milestones. According to the company, its humanoid systems have begun operating in automotive factories, performing tasks such as self-tapping nut installation and material transport. Chairman Lei Jun said large-scale deployment in Xiaomi’s factories could occur within five years.

The move into robotics comes as smartphone growth slows. IDC estimates that China’s smartphone shipments reached roughly 284 million units in 2025, a slight year-on-year decline. For manufacturers with in-house chips, operating systems, and AI models, robotics represents an adjacent growth market built on overlapping technologies.

Lu Weibing, president of Xiaomi’s mobile division, has argued that investments in proprietary silicon, operating systems, and foundational AI are interconnected and transferable to robotics platforms.

Other technology firms are also advancing embodied systems. At MWC, iFlytek demonstrated a humanoid guide robot powered by upgraded multimodal voice interaction, eliminating the need for handheld remote controls. China Mobile presented an unmanned restaurant concept in which embodied robots collaborated on ordering, food preparation, and delivery.

These deployments suggest that large models are increasingly acting as real-time control interfaces rather than conversational add-ons.

AI Glasses and the Search for Monetization

While AI apps saw a surge in daily active users during China’s Spring Festival promotions, retention and revenue models remain uncertain. Several internet companies are now shifting attention toward AI hardware.

Alibaba’s Qwen brand introduced its first AI glasses at MWC, embedding large language models into wearable devices capable of translation, transcription, photography, and object recognition. The devices are positioned for both consumer and professional use.

IDC forecasts that global smart glasses shipments will exceed 23 million units by 2026, including nearly 5 million units in China. Compared with heavily subsidized AI apps, glasses offer a direct hardware revenue stream and clearer monetization path.

iFlytek also debuted lightweight AI glasses weighing approximately 40 grams, emphasizing multimodal recording and translation capabilities.

Redefining the Smartphone Form

AI integration is also altering the smartphone itself.

ZTE showcased AI-powered devices that embed assistants directly into the system layer, enabling cross-application control via natural language. Rather than functioning as standalone apps, these AI agents are integrated into core operating system workflows.

Honor introduced a more experimental concept: a “Robot Phone” featuring a motorized multi-axis gimbal paired with a 200-megapixel sensor. The device can physically rotate and track users during video calls, combining AI vision with mechanical motion.

The common thread across categories is the shift from AI-enabled hardware to AI-defined hardware. Large models are beginning to influence device structure, interaction methods, and mechanical design.

MWC 2026 did not present a single dominant form factor. Instead, it revealed a competitive search for the most natural interface between AI systems and the physical world. Whether that interface proves to be humanoid robots, wearable glasses, or reengineered smartphones remains unsettled. What is clear is that AI is no longer just inside devices. It is beginning to shape what those devices become.


Revobots Launches All-Weather Autonomous Patrol Robot for Outdoor Security

Revobots has introduced TASKBOT SCOUT XT, an all-weather autonomous patrol robot designed for outdoor enforcement and campus monitoring under a Robots-as-a-Service model.

By Daniel Krauss | Edited by Kseniia Klichova
Revobots’ TASKBOT SCOUT XT is designed for outdoor patrol, featuring an all-wheel-drive chassis and weather-resistant enclosure. Photo: Campus Innovation

Revobots has introduced an all-weather version of its autonomous patrol robot, expanding its security robotics platform beyond indoor facilities and into outdoor environments.

The new system, called TASKBOT SCOUT XT, is engineered for exterior enforcement and monitoring tasks across campuses, parking lots, and mixed-use spaces. The Phoenix-based company says the robot is designed to address one of the longstanding limitations of autonomous patrol systems: reliable operation in unpredictable weather and uneven terrain.

The launch reflects growing demand for robotics solutions that can supplement security staffing in environments where labor shortages and operational costs continue to rise.

Hardware Upgrades for Outdoor Deployment

SCOUT XT builds on Revobots’ indoor patrol platform but incorporates significant hardware modifications to withstand environmental exposure.

The robot features an IP65-rated enclosure designed to protect against dust and water ingress. Its extended-wheelbase, all-wheel-drive chassis is intended to provide stability across uneven pavement, gravel, and surface transitions.

Outdoor-calibrated vision systems allow the robot to operate in variable lighting conditions, including bright daylight and low-light evening environments. Longer-range perception capabilities are designed to accommodate open spaces with fewer visual landmarks than indoor corridors.

All-terrain wheels further support navigation across cracked pavement, curb transitions, and mixed surfaces common in parking facilities and campus grounds.

Autonomous Operation with Human Oversight

SCOUT XT operates on Revobots’ existing backend infrastructure, including its Robots-as-a-Service subscription model and REVO Pilot human-in-the-loop oversight system.

By default, the robot navigates autonomously, using onboard AI to conduct patrol routes and monitor designated areas. When conditions exceed predefined thresholds – such as ambiguous detections or unusual environmental scenarios – the system can escalate to human supervisors for intervention.
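Revobots has not published its escalation policy, but the human-in-the-loop pattern it describes can be sketched as a confidence-gated router: detections the onboard AI is sure about are handled autonomously, while anything below threshold is handed to a supervisor. The threshold, labels, and return values below are purely illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def route_detection(det, autonomy_threshold=0.85):
    """Decide whether the robot handles a detection on its own or
    escalates it to a human supervisor. The 0.85 threshold and the
    string outcomes are illustrative, not Revobots' actual policy.
    """
    if det.confidence >= autonomy_threshold:
        return "autonomous"
    return "escalate_to_human"
```

The design choice in this pattern is that the autonomy boundary lives in one tunable parameter, which is what lets operators tighten or relax oversight per site without retraining the perception model.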

This hybrid autonomy model is increasingly common in commercial robotics deployments, particularly in security applications where accountability and reliability are critical.

Campus Deployment Highlights Practical Use Case

Revobots said SCOUT XT recently completed pilot testing at Xavier University in Cincinnati. During the trial, the robot supported automated license plate recognition enforcement across multiple campus parking areas.

The deployment was designed to expand monitoring coverage without increasing staffing levels, a key consideration for educational institutions and other organizations managing large facilities.

Integration with existing campus infrastructure was supported through collaboration with Campus Innovation and its C-Park platform.

The university pilot demonstrates how outdoor patrol robots can supplement traditional security operations, particularly in structured environments such as campuses, business parks, and residential communities.

Expanding the Scope of Security Robotics

Autonomous security robots have typically been deployed indoors, where environmental variables are more predictable. Extending patrol capabilities outdoors introduces challenges including weather exposure, uneven terrain, and dynamic lighting.

By adapting its existing platform rather than building an entirely new system, Revobots is pursuing incremental expansion of its task-adaptive robotics model.

The broader security robotics market is evolving toward service-based deployment models, where customers subscribe to robotics coverage rather than purchase hardware outright. This approach lowers upfront costs and allows providers to maintain centralized oversight and software updates.

As robotics companies seek commercially viable applications, outdoor patrol represents a practical step toward broader real-world autonomy.

While fully autonomous security operations remain a long-term ambition, platforms like SCOUT XT illustrate how robotics companies are addressing specific operational gaps – expanding coverage, improving consistency, and reducing reliance on human patrol staffing in large, open environments.


TCL Unveils Tbot Concept to Turn Kids’ Smartwatch Into a Home AI Robot

At MWC 2026, TCL introduced Tbot, a concept desktop robot designed to pair with children’s smartwatches, extending AI support from outdoor tracking to home routines.

By Rachel Whitman | Edited by Kseniia Klichova
TCL’s Tbot concept acts as a magnetic charging dock and AI companion for children’s smartwatches, extending functionality into the home. Photo: TCL

At Mobile World Congress 2026 in Barcelona, TCL presented a concept device that blends wearable technology with home robotics. Called Tbot, the desktop robot is designed to pair with TCL’s children’s smartwatches, acting as both a charging dock and an AI-powered companion.

The concept reflects a broader shift in consumer robotics toward focused, task-specific devices rather than fully autonomous humanoid machines. Instead of building a standalone home robot, TCL is extending the functionality of an existing wearable into a stationary, home-based form.

For now, the Tbot remains a concept with no announced release date or pricing.

Extending the Smartwatch Experience Indoors

Children’s smartwatches have become popular for location tracking, communication, and safety monitoring. However, their functionality typically pauses when the watch is removed for charging.

TCL’s idea is to bridge that gap. The Tbot features a magnetic dock that holds and charges the smartwatch when it is not being worn. During that time, the desktop robot takes over certain AI-driven features.

According to TCL, Tbot can handle morning alarms, homework timers, and bedtime routines. The device is positioned as a supportive assistant rather than a surveillance tool, offering reminders and guidance tailored to children.

By maintaining continuity between outdoor and indoor use, TCL aims to create a unified digital experience across environments.

AI Companion Designed for Routine and Learning

Beyond basic alarms and timers, Tbot is designed to act as a conversational learning companion. Children can ask questions and explore topics, while the system provides age-appropriate responses.

At night, the robot can transition into a sleep-support role, offering calming stories or audio to help children wind down. Parents can configure notifications and receive updates, maintaining oversight without constant direct interaction.

TCL emphasizes that AI features would operate with parental permission and regulatory compliance in mind, reflecting increasing scrutiny around children’s data privacy.

Consumer Robotics Moves Toward Targeted Use Cases

The Tbot concept illustrates a growing trend in consumer robotics: devices focused on narrow, clearly defined roles rather than broad household autonomy.

Rather than competing with smart speakers or building full humanoid assistants, TCL is exploring how robotics can complement wearables. The Tbot’s design integrates charging infrastructure with AI interaction, creating a hybrid between dock, assistant, and companion device.

This approach aligns with a wider industry movement where robotics capabilities are embedded into familiar consumer products instead of introduced as entirely new categories.

Concept Stage Highlights Industry Direction

TCL has not confirmed whether Tbot will enter mass production. The device was presented at MWC as a demonstration of the company’s direction in AI-enabled family technology.

Concept products at major trade shows often serve as signals rather than immediate commercial offerings. In this case, TCL is indicating interest in expanding beyond smartphones and wearables into interactive home robotics.

As AI becomes more integrated into everyday devices, companies are experimenting with more seamless ways to connect physical hardware with digital services.

If Tbot reaches the market, it could represent an early example of robotics moving into family-focused, screen-light applications – where the machine’s role is subtle, supportive, and embedded within existing ecosystems.

For now, Tbot remains a prototype. But it underscores how robotics is increasingly intersecting with consumer electronics, particularly in categories centered on education, safety, and home routines.
