MIT Wristband Lets Users Control Robotic Hands with Natural Movements

MIT researchers have developed an ultrasound wristband that translates muscle activity into real-time control signals for robotic hands, potentially advancing dexterity in humanoid robotics.

By Rachel Whitman | Edited by Kseniia Klichova
An experimental ultrasound wristband developed at MIT tracks muscle and tendon movement in the wrist, allowing users to control robotic hands and virtual objects through natural gestures. Photo: MIT

Researchers at the Massachusetts Institute of Technology have developed a wearable ultrasound wristband that allows users to control robotic hands using their own natural movements, a breakthrough that could improve dexterity in both robotics and virtual reality systems.

The device captures detailed images of muscles and tendons in the wrist as a person moves their fingers and hand. Artificial intelligence software then translates those images into precise hand positions, enabling real-time control of robotic systems.

In demonstrations, users wearing the wristband were able to manipulate a robotic hand wirelessly, performing actions such as playing simple piano notes or shooting a miniature basketball into a hoop. The system also allowed users to control objects in a digital environment, pinching or rotating virtual items with natural gestures.

The research highlights a growing effort to bridge the gap between human motor control and robotic manipulation.

Seeing the Mechanics of the Human Hand

Replicating the dexterity of the human hand has long been one of the most difficult challenges in robotics.

A human hand contains dozens of muscles and joints working together to produce subtle and continuous movements. Capturing those movements accurately enough to control a robot has traditionally required complex camera setups or sensor-filled gloves.

The MIT team approached the problem differently by focusing on the wrist, where the muscles and tendons responsible for finger movement are located.

The wearable device incorporates a miniaturized ultrasound sensor similar to those used in medical imaging. Positioned on the wrist, the sensor continuously captures images of the internal structures that control the hand.

Each ultrasound image reflects the changing state of muscles and tendons as they contract and relax. Because these structures act like strings pulling the fingers, researchers realized the images could be used to infer the exact position of the hand.

AI Translates Motion into Robot Control

To interpret the ultrasound data, the team trained an artificial intelligence model to recognize patterns in the images and associate them with specific hand positions.

Human fingers have 22 degrees of freedom, meaning they can move in many subtle combinations. By analyzing different regions of the ultrasound images, the system can estimate how each finger is positioned and how the hand is oriented.

During training, volunteers performed various gestures while cameras recorded their hand movements. The researchers matched those recorded positions with the ultrasound images to create a dataset used to train the AI system.

Once trained, the algorithm could accurately predict hand movements directly from ultrasound data.
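The training setup described above is a standard supervised pairing: each ultrasound frame becomes an input, and the camera-recorded hand pose at the same instant becomes its label. The sketch below illustrates that idea with a simple least-squares regressor on simulated data; the feature and joint dimensions, and the linear model itself, are illustrative assumptions, not details of the MIT system.

```python
import numpy as np

# Illustrative stand-in for the pipeline described in the article:
# pair each ultrasound frame with the hand pose recorded on camera,
# then fit a model that maps frames to joint positions.
rng = np.random.default_rng(0)

N_FRAMES = 200
FRAME_FEATURES = 64   # e.g., flattened features from one ultrasound image
N_JOINTS = 22         # the degrees of freedom mentioned in the article

# Simulated dataset: frames X paired with recorded poses Y.
X = rng.normal(size=(N_FRAMES, FRAME_FEATURES))
true_W = rng.normal(size=(FRAME_FEATURES, N_JOINTS))
Y = X @ true_W + 0.01 * rng.normal(size=(N_FRAMES, N_JOINTS))

# A least-squares fit stands in for the trained AI model: pose = frame @ W.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Once "trained", the model predicts a pose for a new ultrasound frame.
new_frame = rng.normal(size=(1, FRAME_FEATURES))
predicted_pose = new_frame @ W   # one estimate per degree of freedom
print(predicted_pose.shape)
```

In the real system a deep network replaces the linear map, but the data layout is the same: one pose vector per frame, learned from synchronized recordings.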

In testing, participants used the wristband to perform a range of gestures, including forming the letters of the American Sign Language alphabet and manipulating objects such as bottles, scissors and tennis balls.

Implications for Humanoid Robots

Beyond human-machine interfaces, the researchers believe the wristband could play a significant role in training future humanoid robots.

One challenge in robotics is collecting large datasets that capture how humans manipulate objects in real-world environments. By recording wrist ultrasound data from many people performing everyday tasks, researchers could build datasets that teach robots more natural manipulation skills.

Such data could eventually help robots learn delicate tasks such as assembling electronics, handling tools or even assisting with surgical procedures.

The technology could also replace existing hand-tracking methods used in virtual and augmented reality systems. Compared with camera-based systems, wearable ultrasound sensors are less affected by lighting conditions or visual obstructions.

The MIT team plans to further miniaturize the device and expand the training dataset with more users and gestures.

If successful, wearable ultrasound interfaces could provide a more intuitive way for humans to interact with machines – and potentially offer robots a better model for replicating the remarkable dexterity of the human hand.

News, Robots & Robotics, Science & Tech

Figure AI’s Humanoid Robot Takes the Spotlight at the White House

Figure AI’s third-generation humanoid robot appeared alongside First Lady Melania Trump at a White House summit, highlighting the growing political and commercial attention surrounding humanoid robotics.

By Daniel Krauss | Edited by Kseniia Klichova
Figure AI’s third-generation humanoid robot appeared at the White House during an international summit focused on artificial intelligence and education. Photo: The White House

A humanoid robot from startup Figure AI made an appearance at the White House this week, offering a high-profile glimpse into a technology that many in the industry believe could become the next major frontier for artificial intelligence.

The robot, known as Figure 3, appeared alongside First Lady Melania Trump during the second day of the Fostering the Future Together Global Coalition Summit, an event focused on artificial intelligence and education. During the gathering, the robot greeted attendees in multiple languages and introduced itself as a humanoid built in the United States.

The moment marked one of the most visible public demonstrations of humanoid robotics in a political setting in the United States, underscoring how governments are beginning to treat the technology as a strategic priority.

A Startup With Global Ambitions

Figure AI is one of the most closely watched newcomers in the rapidly expanding humanoid robotics sector. The company was founded in 2022 by entrepreneur Brett Adcock, who previously co-founded electric aircraft company Archer Aviation and the hiring platform Vettery.

The startup’s robots are powered by an internal artificial intelligence system known as Helix, a vision-language-action model designed to translate perception and verbal instructions into physical robotic actions.

Rather than building machines limited to specific industrial tasks, Figure AI is pursuing a more general-purpose approach. The company says its humanoids are intended for a wide range of applications, from manufacturing and logistics to household assistance.

That ambition has attracted major investors. In 2025, the company raised more than $1 billion in funding, bringing its valuation to roughly $39 billion. Backers include technology companies and venture firms such as Nvidia, Intel Capital, Qualcomm Ventures and Salesforce.

From Factories to Homes

Figure AI has already begun testing its robots in industrial environments. One of its earliest commercial partnerships involves working with BMW to deploy humanoid robots in manufacturing facilities, where they assist with tasks such as handling sheet metal components.

The company ultimately hopes to expand beyond factories into logistics operations and eventually consumer environments.

Supporters argue that humanoid robots could become widely useful because they are designed to operate in spaces already built for people. Doors, tools and equipment are typically designed for human bodies, making human-shaped robots potentially easier to integrate into existing workplaces.

The appearance of Figure’s robot at the White House also tied into broader discussions about how AI could reshape education. During the summit, the robot was presented as an example of how humanoid systems might one day act as interactive tutors capable of assisting students at home.

Safety Questions in a Rapidly Growing Field

The company’s growing visibility has also brought scrutiny.

Figure AI is currently involved in a legal dispute with a former head of product safety who alleged he was dismissed after raising concerns about the potential risks posed by powerful humanoid robots. According to the lawsuit, the machines could generate forces capable of causing serious injury.

The company disputes the claims and has countersued, arguing the allegations are false and that the employee was terminated for performance reasons.

The case reflects a broader challenge facing the humanoid robotics industry. As machines become stronger and more autonomous, companies must demonstrate that they can operate safely alongside humans.

Despite these debates, investor enthusiasm for humanoid robotics continues to grow. Companies around the world are racing to develop robots capable of performing real-world tasks, and the appearance of one such machine at the White House signals how quickly the technology is moving from research labs into the center of economic and political conversations.

For Figure AI, the moment offered a powerful showcase for a company still relatively young but already aiming to deploy thousands of robots in the years ahead.

Artificial Intelligence (AI), News, Robots & Robotics

Lucid Bots Raises $20 Million to Scale Autonomous Cleaning Robots

Lucid Bots has raised $20 million in Series B funding to expand its autonomous exterior cleaning platform, as demand grows for robots that can perform labor-intensive industrial tasks.

By Laura Bennett | Edited by Kseniia Klichova
Lucid Bots’ Sherpa drone cleans building exteriors autonomously, part of a growing platform of robots designed to automate industrial cleaning tasks. Photo: Lucid Bots

Lucid Bots, a U.S. robotics company focused on automating exterior cleaning work, has raised $20 million in a Series B funding round as demand grows for robots capable of performing physically demanding industrial tasks.

The round, co-led by Cubit Capital and Idea Fund Partners, brings the Charlotte-based company’s total funding to $34 million. The capital will be used to expand manufacturing capacity, develop new autonomous systems and scale the company’s robotics platform across the United States.

Lucid Bots has built a business around a simple premise: many industries need automation not for digital workflows but for repetitive physical labor. Exterior cleaning – including window washing and pressure washing on large commercial buildings – has emerged as an early target.

Robots for Dangerous and Labor-Intensive Work

The company’s flagship product, the Sherpa drone, is designed to clean building exteriors using a spraying system that allows operators to wash windows and facades without scaffolding or rope access.

Lucid Bots says the drone can complete jobs two to five times faster than traditional cleaning methods, reducing both labor costs and safety risks for workers.

The company recently expanded its portfolio with Lavo AI, an autonomous pressure-washing robot designed to operate on ground-level surfaces such as sidewalks, parking structures and industrial facilities.

Together the systems form the foundation of Lucid’s broader platform for automated exterior maintenance.

Instead of selling robots outright, the company offers its technology through a subscription service called Lucid Refresh, which bundles robots, software, operator training and support into a single robotics-as-a-service package.

For cleaning companies, the model allows them to take on projects that would otherwise require significant capital investment or specialized equipment.

Building a Robotics Platform Around Data

Lucid Bots’ growth reflects a broader shift in robotics toward platforms that combine hardware with operational data and cloud software.

The company says its robots have collectively logged hundreds of thousands of hours of real-world cleaning operations across different building types, weather conditions and materials.

That operational dataset feeds into the company’s AI systems, allowing robots to improve their performance over time as they encounter new scenarios.

The result, Lucid argues, is a compounding advantage in autonomy. As more robots are deployed and more jobs are completed, the underlying models become more effective at adapting to real-world conditions.

The company says its network of operators has already completed more than $75 million in cleaning jobs using Lucid systems.

Industrial Robotics Expands Beyond Factories

Lucid Bots’ expansion highlights how robotics is moving into industries historically considered difficult to automate.

While industrial robots have long dominated manufacturing lines, many service industries still rely heavily on manual labor. Tasks such as cleaning building exteriors involve complex environments, variable surfaces and changing weather conditions that are difficult for traditional automation systems.

Recent advances in sensors, autonomy software and cloud-connected robotics platforms are beginning to change that equation.

Lucid Bots now has nearly 1,000 robots deployed nationwide, serving customers ranging from independent cleaning operators to large commercial facilities and organizations including Disney and Sunbelt Rentals.

The company manufactures its systems at a 25,000-square-foot facility in Charlotte, North Carolina, positioning itself to meet growing demand from customers seeking domestically produced robotic equipment.

For investors, the company’s traction suggests that industrial service robotics may become one of the next large markets for embodied AI systems.

Rather than replacing workers outright, the technology is often framed as augmenting human labor by automating the most hazardous or physically demanding tasks.

As industries continue searching for ways to address labor shortages and improve safety, robots that perform real-world physical work may increasingly become a core part of the automation landscape.

Business & Markets, News, Robots & Robotics

San Jose Airport Tests Humanoid Robot to Assist Travelers

San José Mineta International Airport has launched a pilot program using an AI-powered humanoid robot to greet passengers, answer questions and provide real-time multilingual support.

By Rachel Whitman | Edited by Kseniia Klichova
The humanoid robot José assists passengers at San José Mineta International Airport as part of a pilot program testing AI-powered customer service in busy public spaces. Photo: IntBot / X

San José Mineta International Airport has begun testing an AI-powered humanoid robot designed to help travelers navigate the terminal, marking another example of robotics moving into public-facing service roles.

The robot, named José, is stationed in Terminal B near Gate 24 and serves as a digital concierge for passengers. Developed by Silicon Valley startup IntBot, the system can greet travelers, answer questions and provide real-time information about flights and airport services.

The deployment is part of a four-month pilot program aimed at evaluating whether humanoid robots can meaningfully improve passenger experience and operational efficiency in a busy transportation hub.

Airports have long experimented with automation, but the arrival of physically embodied AI assistants signals a new phase in the adoption of robotics for customer-facing roles.

A Multilingual Digital Concierge

José is designed to interact naturally with passengers using conversational AI and contextual reasoning. The robot can communicate in more than 50 languages, allowing it to assist international travelers who may struggle to navigate airport signage or find staff who speak their language.

The system combines speech recognition, natural language processing and physical presence to operate in crowded environments where traditional kiosks or mobile apps may be less effective.

Passengers can ask questions about directions, boarding gates, airport services or local transportation. The robot responds verbally while using its screen and gestures to provide additional guidance.

Airport officials say the goal is not to replace human staff but to provide an additional layer of assistance during busy travel periods.

Airports as Testing Grounds for Robotics

San José’s decision to test a humanoid robot reflects a broader trend in which airports are becoming experimental spaces for robotics and artificial intelligence.

Airports present a challenging environment for automation: they are crowded, multilingual and constantly changing. Successfully operating in such conditions requires machines that can navigate complex social and logistical situations.

For technology companies, these environments provide valuable real-world data about how robots interact with people.

The pilot also reflects San José’s role as a gateway to Silicon Valley, where many robotics and AI companies are developing new applications for embodied intelligence.

City officials say the program aligns with the region’s focus on practical technology deployment rather than purely experimental demonstrations.

Measuring the Impact of Service Robots

Over the next four months, airport officials and IntBot engineers will evaluate how travelers respond to the robot and whether it improves customer service outcomes.

Metrics will likely include passenger engagement, accessibility improvements and the system’s ability to handle large volumes of questions during peak travel periods.

For IntBot, the deployment represents an opportunity to demonstrate that socially intelligent robots can function reliably in complex public environments.

If the pilot proves successful, similar systems could appear in other transportation hubs, shopping centers or public facilities where multilingual assistance is valuable.

More broadly, the experiment highlights how robotics developers are shifting their focus from controlled industrial settings to everyday environments where machines must interact directly with people.

Airports, with their mix of logistical complexity and constant passenger flow, may offer an early glimpse of how humanoid service robots will eventually integrate into public infrastructure.

News, Robots & Robotics, Startups & Venture

Humanoid Robot Debuts at White House AI Education Summit

A humanoid robot appeared at the White House during an international AI education summit hosted by First Lady Melania Trump, highlighting growing interest in humanoid systems as future learning tools.

By Laura Bennett | Edited by Kseniia Klichova

A humanoid robot appeared at the White House this week during an international summit focused on artificial intelligence and education, offering a glimpse of how policymakers increasingly view robotics as part of future learning systems.

The demonstration took place during the Fostering the Future Together Global Coalition Summit, where First Lady Melania Trump hosted representatives and first spouses from 45 countries. The gathering is described by organizers as the largest international meeting convened by a U.S. First Lady at the White House.

During the event, officials from several nations presented strategies for integrating emerging technologies into education systems. Alongside policy discussions, the summit featured the introduction of an American-built humanoid robot called Figure 3, marking a rare moment in which a humanoid machine was formally presented in a diplomatic setting at the White House.

The appearance of the robot reflected a broader theme of the summit: the idea that artificial intelligence could move beyond screens and become embodied in machines designed to interact directly with people.

Humanoid Robots as Future Educators

In her remarks, Trump outlined three technological shifts she believes will shape the next generation of education: personalized learning powered by artificial intelligence, the emergence of humanoid robots as teaching tools, and the economic role of education-driven technology.

The concept centers on AI systems capable of adapting lessons to individual students. In the vision described during the summit, humanoid robots could eventually act as patient, always-available tutors capable of tailoring instruction based on a student’s pace, knowledge level and even emotional state.

Advocates of the approach argue that AI-powered learning assistants could expand access to high-quality education and help students develop stronger analytical and problem-solving skills.

The summit framed humanoid robotics as a natural extension of AI technologies that have already transformed digital tools such as chatbots and online tutoring systems.

Because human environments are built around human bodies, proponents say robots with humanlike form factors could interact with students more naturally than traditional devices.

Robotics Enters the Policy Conversation

The event reflects a growing shift in how governments are approaching artificial intelligence.

Until recently, most policy debates around AI focused on software systems such as language models or data platforms. The appearance of a humanoid robot at the White House suggests robotics is increasingly entering the policy conversation as AI systems begin to operate in the physical world.

That shift is happening alongside rapid investment in humanoid robotics by both startups and major technology companies. Firms including Tesla, Figure AI, Apptronik and several Chinese robotics developers are racing to build machines capable of performing real-world tasks.

For governments, the implications extend beyond education. Humanoid robots are frequently discussed as future tools for industries such as logistics, healthcare, manufacturing and home services.

At the summit, speakers emphasized that technological optimism must be balanced with safeguards around safety, child protection and responsible use.

AI Education as Strategic Policy

Beyond the robotics demonstration, the summit highlighted the increasing geopolitical importance of AI education.

Participants discussed how artificial intelligence could influence economic growth, workforce development and national competitiveness. Several speakers argued that expanding access to AI education will be critical for preparing younger generations for a technology-driven economy.

The summit also emphasized collaboration between governments and the private sector, with technology companies playing a central role in developing the platforms and infrastructure that underpin AI systems.

For robotics developers, the event signals that humanoid systems are beginning to move from research labs and technology conferences into the realm of public policy.

Whether humanoid robots ultimately become common tools in education remains uncertain. But their appearance at a diplomatic gathering in Washington illustrates how quickly the conversation around artificial intelligence is shifting from software to machines designed to operate alongside humans.

News, Robots & Robotics, Science & Tech

Westlake Robotics Unveils Titan o1 Humanoid Powered by General Action AI Model

Chinese robotics company Westlake Robotics has introduced Titan o1, a humanoid robot powered by its General Action Expert foundation model designed to replicate human movement in real time.

By Daniel Krauss | Edited by Kseniia Klichova
Westlake University’s new Library complex and observation tower at sunset in Hangzhou. Photo: Westlake University

Westlake Robotics has unveiled a new humanoid robot called Titan o1 that is powered by a proprietary artificial intelligence system designed to replicate human movements in real time.

The Hangzhou-based company presented the robot during a demonstration in which an operator wearing a motion-capture suit performed a series of movements that the humanoid mirrored almost instantly. The robot reproduced the operator’s gestures with close synchronization, from arm swings and torso rotations to coordinated kicks.

The system is driven by Westlake Robotics’ internally developed General Action Expert model, known as GAE, which the company describes as a foundation model for physical actions.

The development reflects a broader trend in robotics toward building general-purpose AI systems capable of controlling machines across a wide range of tasks rather than relying on narrowly programmed motion routines.

A Foundation Model for Robot Motion

Titan o1’s capabilities are built around the GAE model, which processes signals from human operators and translates them into coordinated robotic motion.

During the demonstration, the robot reproduced complex physical actions within milliseconds of the operator performing them. According to the company, the system can adapt to different operators and motion styles without requiring task-specific programming.

Researchers involved in the project describe the model as functioning similarly to a cerebellum in the human body. In biological systems, the cerebellum plays a central role in coordinating movement, maintaining balance and ensuring fluid motion.

The GAE model is intended to provide a comparable function for robots, allowing them to interpret signals and generate appropriate physical responses even when encountering movements they have not previously executed.

Toward Scalable Humanoid Control

One of the key design goals of the system is scalability across different robotic platforms.

Westlake Robotics says the GAE model is designed with “cross-embodiment” capability, meaning the same AI model can potentially control robots with different structures or sizes. That approach could allow the same intelligence layer to operate across fleets of machines.

The company also demonstrated how a single operator could control multiple robots performing identical tasks simultaneously. Such a capability could be useful in industrial or service settings where coordinated groups of robots are required.

China’s Growing Focus on Embodied AI

The launch of Titan o1 comes as China accelerates development of humanoid robotics and embodied artificial intelligence.

Chinese companies have increasingly focused on integrating large-scale AI models with robotics hardware, an approach similar to efforts underway in the United States and Europe to combine foundation models with physical machines.

In these systems, the goal is not only to recognize objects or generate language but also to translate AI reasoning into coordinated physical actions.

For robotics developers, the challenge is to create AI models capable of understanding and generating movement in the same way large language models process text.

Whether approaches like Westlake Robotics’ General Action Expert can scale beyond demonstrations remains to be seen. But the company’s unveiling highlights how the global race to build intelligent humanoid machines is increasingly centered on software architectures that bridge AI reasoning and physical motion.

Artificial Intelligence (AI), News, Robots & Robotics, Science & Tech

Neura Robotics Seeks €1 Billion Funding Round Backed by Amazon and Qatari Investor

German humanoid robotics startup Neura Robotics is reportedly seeking around €1 billion in new funding at a €4 billion valuation, with backing from Amazon and a Qatari investor highlighting rising global interest in physical AI.

By Rachel Whitman | Edited by Kseniia Klichova
Neura Robotics is developing AI-powered humanoid and collaborative robots aimed at industrial and service applications as global investors increase funding for physical AI companies. Photo: Neura Robotics

German robotics startup Neura Robotics is reportedly seeking roughly €1 billion in fresh funding at a valuation of about €4 billion, in a deal that would rank among the largest capital raises yet for a humanoid robotics company.

According to reports, the financing effort is attracting backing from Amazon and a prominent Qatari investor, highlighting how global capital is increasingly shifting toward robotics as artificial intelligence expands beyond software into physical machines.

If completed near the reported terms, the funding round would represent a significant milestone for Europe’s emerging robotics sector and underscore the scale of investment now flowing into embodied AI technologies.

Investors Bet on the Physical AI Economy

Neura Robotics is positioning itself as a developer of AI-powered robots capable of working alongside humans in industrial and service environments.

The company focuses on cognitive robotics, combining perception, motion control and machine learning systems designed to allow robots to interact safely with people and adapt to changing conditions. Its portfolio includes collaborative robots and humanoid platforms intended for tasks ranging from manufacturing to logistics.

The size of the planned funding round reflects how investors increasingly view robotics as a key next phase of the AI industry.

For years, artificial intelligence investment was concentrated primarily in cloud software and data infrastructure. Now the focus is shifting toward machines that can apply those capabilities in the physical world.

Amazon’s reported involvement is particularly notable because large technology companies often play an influential role in determining which robotics platforms gain traction. Tech giants can act simultaneously as investors, customers and strategic partners.

Capital Intensity Shapes the Robotics Race

The scale of Neura Robotics’ fundraising also illustrates the financial realities of building robotics companies.

Unlike software startups, robotics firms must fund extensive hardware engineering, manufacturing development and safety testing before products reach large-scale commercialization. These processes require far more capital and longer development cycles than typical AI software ventures.

Seeking funding equal to roughly a quarter of the company’s valuation suggests investors expect years of heavy investment before profitability becomes realistic.

That dynamic has begun to reshape the competitive landscape of robotics. Companies with strong financial backing are increasingly positioned to dominate development of complex systems such as humanoid robots.

The pattern resembles other capital-intensive industries such as aerospace or automotive manufacturing, where long development timelines and high engineering costs tend to concentrate leadership among a smaller number of well-funded players.

The Humanoid Robot Market Heats Up

The potential financing round also highlights the accelerating race to build commercially viable humanoid robots.

Startups and technology giants alike are betting that human-shaped machines could eventually automate tasks in warehouses, factories, logistics networks and service industries.

But the path to that future remains uncertain. Demonstrations of humanoid robots have improved dramatically in recent years, yet large-scale adoption still depends on achieving reliable performance, meeting safety standards and reducing costs to competitive levels.

For Neura Robotics and its backers, the bet is that advances in AI models combined with improved robotics hardware will eventually make such systems economically practical.

If billion-euro funding rounds become more common, the robotics sector could soon enter a new phase in which the competition is defined not only by technological breakthroughs but also by access to long-term capital.

Artificial Intelligence (AI), Business & Markets, News, Robots & Robotics, Startups & Venture

Decathlon’s Robot Warehouses Triple Efficiency as Automation Expands Across Retail Logistics

Decathlon says robotic warehouse systems from Exotec have significantly boosted productivity across several European facilities, highlighting both the promise and tensions of logistics automation.

By Daniel Krauss | Edited by Kseniia Klichova
Autonomous robots from Exotec move through high-density warehouse shelving to retrieve products for Decathlon’s retail distribution network. Photo: Exotec

Decathlon, the world’s largest sporting goods retailer, says automation is transforming the performance of its European distribution network as fleets of warehouse robots dramatically increase productivity.

The company reported significant operational gains across seven automated facilities where robots developed by French robotics company Exotec now handle a large share of sorting and picking tasks. At one warehouse in Portugal, the system doubled order preparation capacity from 57,000 to 114,000 shipments.

The deployment illustrates how robotics is rapidly reshaping retail logistics, a sector already under pressure to deliver faster fulfillment while controlling costs.

Robots Replace Miles of Walking

At Decathlon’s distribution center in Northampton, England, warehouse employees once walked more than six miles per shift while manually collecting products from shelves.

Today a fleet of roughly 100 robots performs much of that work.

The machines move autonomously through a three-dimensional storage grid, climbing shelves more than 12 meters high to retrieve inventory. They transport boxes directly to human workers stationed at packing stations, where items are sorted into shipments destined for stores.

The result, according to Decathlon’s logistics leadership, is a system that operates more than three times as efficiently as the previous manual process.

Robotic arms have also taken over physically demanding tasks such as unloading pallets and placing stock into the warehouse system. Boxes weighing up to 25 kilograms are lifted using suction mechanisms and routed into the storage grid.

The warehouse software maintains a detailed map of tens of thousands of storage locations, allowing robots to locate products and deliver them to workers with minimal delay.
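The core of such a system is an index mapping each product to its storage locations. A minimal sketch in Python illustrates the idea; the class and method names here are hypothetical, and real warehouse control software such as Exotec's is far more sophisticated.

```python
from collections import defaultdict

class StorageGrid:
    """Toy index of warehouse storage locations, keyed by product SKU.

    Illustrative only: names and structure are hypothetical, not
    Exotec's actual software.
    """

    def __init__(self):
        self._slots = defaultdict(list)  # sku -> list of (aisle, level, bin)

    def store(self, sku, location):
        """Record that a box of this SKU was placed at a grid location."""
        self._slots[sku].append(location)

    def locate(self, sku):
        """Return all known locations for a SKU (empty list if none)."""
        return list(self._slots[sku])

    def pick(self, sku):
        """Pop one location for a robot to retrieve; None if out of stock."""
        locations = self._slots.get(sku)
        return locations.pop() if locations else None

grid = StorageGrid()
grid.store("TENT-2P", (12, 9, 3))   # aisle 12, level 9, bin 3
grid.store("TENT-2P", (4, 2, 7))
print(grid.pick("TENT-2P"))  # → (4, 2, 7), one location handed to a robot
```

A production system would layer routing, congestion avoidance and robot scheduling on top of this kind of lookup, but the principle of a continuously updated location index is the same.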

Productivity Gains and Workforce Debate

While the automation has improved efficiency, it has also altered the nature of warehouse work.

Tasks that once required dozens of workers now need far fewer people. Decathlon says picking operations that previously required 50 to 60 employees can now be performed by about a dozen staff members supported by robotic systems.

Company executives argue that the technology is not primarily about reducing headcount but about shifting workers into different roles.

Employees who previously handled manual picking tasks are increasingly being trained for other positions such as equipment maintenance, customer service or repair services in the company’s growing circular economy operations.

Warehouse staff also report improvements in working conditions. Robots have eliminated tasks that required constant walking, climbing ladders or lifting heavy boxes from high shelves.

Still, labor groups remain skeptical. Union representatives say large-scale warehouse automation inevitably reduces the number of frontline logistics jobs, even if companies redeploy some workers elsewhere.

Automation’s Expanding Role in Logistics

The changes at Decathlon reflect a broader transformation underway across the logistics sector.

Warehouses are rapidly adopting robotic systems capable of navigating storage grids, retrieving inventory and preparing shipments with minimal human intervention. The shift is being driven by rising e-commerce demand, labor shortages and pressure to accelerate delivery times.

In the United Kingdom alone, logistics accounts for about eight percent of national employment and contributes roughly £170 billion to the economy. Many of these jobs are concentrated in distribution hubs such as the so-called logistics “golden triangle” in central England.

Automation is already reshaping the skill profile required for these roles. Employers increasingly seek workers with technical and analytical capabilities to operate, monitor and maintain automated systems.

Yet robotics has limits. Decathlon executives note that automated systems operate within fixed capacity constraints. Unlike human teams, robots cannot simply work overtime during sudden spikes in demand.

Even so, the company views robotics as essential for scaling its supply chain.

As retailers compete to move goods faster and more efficiently, automated warehouses like Decathlon’s are becoming a preview of how logistics networks may operate in the coming decade.


Amazon Acquires Fauna Robotics as It Expands Into Humanoid Machines

Amazon has acquired startup Fauna Robotics, maker of the small humanoid robot Sprout, as the company broadens its push into robotics beyond warehouses and into homes and delivery.

By Laura Bennett | Edited by Kseniia Klichova
Fauna Robotics’ Sprout humanoid robot is designed as a compact, approachable machine aimed at consumer and developer applications. Photo: Fauna Robotics

Amazon has acquired Fauna Robotics, a startup developing compact humanoid robots designed for consumer and developer applications, marking the company’s latest move to expand its robotics ambitions beyond warehouses and logistics.

The acquisition brings Fauna’s roughly 50-person team into Amazon’s organization, where the company says it will continue operating as Fauna Robotics, an Amazon company. Financial terms of the deal were not disclosed.

The move highlights Amazon’s growing interest in humanoid robotics as a potential new platform for automation in both commercial and household settings.

A Smaller, “Approachable” Humanoid

Fauna Robotics launched in 2024 with a focus on building humanoid robots designed to feel accessible and safe around people. Its first product, called Sprout, is a bipedal robot standing about 3 feet 6 inches tall and weighing roughly 50 pounds.

Priced at around $50,000, the robot was designed to be developer-friendly and relatively easy to integrate into software platforms. The company positioned Sprout as an approachable humanoid platform capable of operating in environments where traditional industrial robots would appear intimidating or impractical.

The startup was founded by engineers previously working at major technology companies including Meta and Google. Early customers reportedly included companies exploring robotics applications in entertainment and advanced research.

For Amazon, acquiring Fauna adds a humanoid robotics team at a time when the technology is attracting increasing investment across the technology sector.

A Broader Robotics Expansion

Amazon has spent more than a decade building one of the largest robotics operations in the private sector, primarily focused on warehouse automation.

That effort accelerated after the company’s 2012 acquisition of Kiva Systems, which became the foundation for Amazon Robotics and helped transform fulfillment centers with fleets of mobile warehouse robots.

More recently, the company has begun expanding robotics development beyond warehouses.

Just days before the Fauna deal, Amazon confirmed the acquisition of Swiss robotics startup Rivr, which develops robots designed to assist with doorstep delivery tasks. The move reflects Amazon’s interest in automating portions of last-mile logistics, one of the most expensive stages of the e-commerce supply chain.

The company has also experimented with consumer robotics. In 2021 Amazon launched Astro, a mobile home robot designed for security monitoring and household assistance, though the device has remained limited to a small invitation-only program.

Entering the Humanoid Robot Race

The Fauna acquisition places Amazon more directly into the emerging humanoid robot market, which has attracted increasing attention from both startups and large technology companies.

Tesla is developing its Optimus humanoid robot with plans to manufacture the machines at scale. Other companies pursuing similar systems include Figure AI, Apptronik, Agility Robotics and Norway-based 1X.

Many of these robots are aimed initially at industrial environments such as factories or warehouses. Others are exploring longer-term consumer applications ranging from home assistance to retail service roles.

Amazon has not disclosed how it plans to deploy Fauna’s technology, but the company’s combination of logistics infrastructure, consumer devices and cloud services could offer multiple pathways for humanoid robots.

If the technology matures, such machines could potentially assist with warehouse tasks, support delivery operations or eventually perform limited duties inside homes.

For now, the acquisition appears to be another step in Amazon’s broader strategy to build robotics capabilities across its business ecosystem.

As companies race to develop practical humanoid machines, the combination of robotics hardware, artificial intelligence and real-world deployment data may determine which platforms ultimately reach large-scale adoption.

Google DeepMind Partners with Agile Robots to Bring Gemini AI into Industrial Robotics

Google DeepMind is partnering with Munich-based Agile Robots to integrate its Gemini Robotics models with industrial robot hardware, expanding Google’s push into real-world AI deployment.

By Daniel Krauss | Edited by Kseniia Klichova
Agile Robots’ industrial robotic systems are expected to integrate Google DeepMind’s Gemini Robotics models, enabling new AI capabilities for manufacturing automation. Photo: Agile Robots

Google is expanding its push into robotics through a new partnership between its DeepMind division and Munich-based robotics company Agile Robots, signaling a broader strategy to bring artificial intelligence from digital environments into physical industrial systems.

The collaboration will integrate Google’s Gemini Robotics foundation models with Agile Robots’ hardware platforms, including intelligent robotic arms and humanoid robots used in manufacturing environments. The goal is to improve robotic performance through large-scale deployment, real-world data collection and iterative model training.

The partnership reflects a growing consensus across the technology industry that robotics may become one of the most important applications of advanced AI models.

From Foundation Models to Physical Machines

DeepMind introduced Gemini Robotics and its extended-reasoning variant in 2025 as part of a broader effort to adapt generative AI models for physical control systems. Unlike traditional industrial automation software, these models are designed to translate high-level instructions into real-world robotic actions.
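Conceptually, such a model sits inside a control loop that repeatedly maps fresh camera input and a text instruction to a low-level action. The sketch below is purely illustrative: `VisionLanguageActionModel`, `Action` and all parameters are hypothetical stand-ins, not the Gemini Robotics API.

```python
# Conceptual sketch of an instruction-following control loop.
# All names here are hypothetical stand-ins, not the actual
# Gemini Robotics interface.

from dataclasses import dataclass

@dataclass
class Action:
    joint_deltas: tuple  # small joint-angle adjustments, in radians
    gripper: float       # 0.0 = fully open, 1.0 = fully closed

class VisionLanguageActionModel:
    """Maps (camera image, text instruction) -> low-level action."""
    def predict(self, image, instruction):
        # A real model runs a large neural network here; this stub
        # just closes the gripper as a placeholder behaviour.
        return Action(joint_deltas=(0.0,) * 7, gripper=1.0)

def control_loop(model, instruction, get_image, apply_action, steps=3):
    """Re-plan from a fresh camera frame at every control step."""
    for _ in range(steps):
        action = model.predict(get_image(), instruction)
        apply_action(action)

executed = []
control_loop(
    VisionLanguageActionModel(),
    "pick up the red cube",
    get_image=lambda: None,        # stand-in for a camera frame
    apply_action=executed.append,  # stand-in for the robot's motor interface
)
print(len(executed))  # 3 actions issued
```

The point of the loop structure is that the model is re-queried continuously with new sensor data, which is what lets these systems react to a changing environment rather than replaying a fixed program.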

Integrating such models with physical hardware requires large amounts of real-world operational data, something software companies alone cannot easily generate.

Agile Robots offers a potential pathway to that data. The company has deployed more than 20,000 robotic systems globally, primarily focused on sensor-based industrial automation and advanced manipulation tasks.

Through the partnership, Gemini Robotics models will be tested and refined using these deployed systems. Engineers expect the combination of AI models and production environments to accelerate the development of robots capable of adapting to complex manufacturing scenarios.

According to Google DeepMind’s robotics leadership, the collaboration is intended to help build the next generation of AI systems capable of operating in the physical world.

Google’s Expanding Robotics Strategy

The partnership with Agile Robots is part of a wider series of robotics initiatives inside Google.

DeepMind has already announced collaborations with several robotics companies aimed at training AI models on real-world machines. Earlier this year the company revealed a research effort with Boston Dynamics to develop AI systems for the Atlas humanoid robot.

Google is also reorganizing its internal robotics efforts. Intrinsic, a robotics software company originally housed within Alphabet’s experimental “Other Bets” portfolio, has recently been moved closer to Google’s core operations. The company aims to build a standardized software platform for industrial robotics, sometimes described internally as an attempt to create the equivalent of Android for robots.

The increasing attention to robotics reflects a broader industry shift. While large language models initially transformed digital tasks such as search, coding and content generation, many researchers believe the next major frontier lies in enabling AI systems to act in the physical world.

Manufacturing as the First Large Market

For Google and its partners, manufacturing is likely to be the first large-scale proving ground for AI-powered robotics.

Industrial environments offer structured workflows, well-defined tasks and large amounts of operational data, making them ideal testing grounds for advanced autonomy systems.

Companies like Agile Robots have already built a significant presence in these environments, deploying robotic systems across factories and industrial facilities. Integrating AI models into these systems could allow robots to perform more flexible manipulation tasks, adapt to changing conditions and collaborate more effectively with human workers.

At the same time, the strategy positions Google to compete with other technology companies moving aggressively into robotics. Amazon continues to expand automation across its logistics network, while Tesla is investing heavily in humanoid robots designed for industrial and eventually consumer use.

Within the broader AI ecosystem, robotics is increasingly seen as the next stage of development for foundation models.

If successful, partnerships like the one between Google DeepMind and Agile Robots could help transform generative AI systems from tools that interpret language and images into platforms capable of controlling machines in the real world.

OpenAI Shuts Down Sora Video App to Refocus on Robotics and World Models

OpenAI will discontinue its standalone Sora video app months after launch, redirecting resources toward world simulation research aimed at advancing robotics and physical AI.

By Rachel Whitman | Edited by Kseniia Klichova
OpenAI’s Sora video generation system demonstrated highly realistic simulated environments, technology the company now sees as foundational for training robots to understand the physical world. Photo: Kseniia Klichova / RobotsBeat

OpenAI has announced plans to shut down its standalone Sora video-generation app only months after its public launch, signaling a strategic shift toward robotics and simulation technologies designed to model the physical world.

The company confirmed the move in a message posted by the official Sora account, thanking users who had created videos with the platform and acknowledging that the decision would disappoint many in the community. The shutdown affects both the consumer-facing application and the developer API used in professional creative workflows.

While OpenAI did not initially provide detailed reasoning, company representatives later said the Sora team would redirect its work toward world simulation research aimed at advancing robotics and other systems capable of performing real-world tasks.

The decision underscores how generative media technologies, often seen primarily as creative tools, are increasingly tied to broader efforts to build embodied artificial intelligence.

From Viral App to Strategic Pivot

Sora first drew global attention when OpenAI demonstrated its ability to generate realistic video clips from text prompts. The system quickly became one of the most widely discussed generative AI tools after ChatGPT, producing scenes that convincingly simulated lighting, motion and physical interactions.

In 2025 the company expanded the technology into a dedicated mobile app designed as a social platform for AI video creation. Users could generate short clips in a range of visual styles, remix content and share the results in a feed resembling short-form video apps.

The tool rapidly went viral online, with creators producing surreal or hyper-realistic scenes that circulated widely on social media. For many observers, Sora represented a major leap in generative AI’s ability to simulate physical environments.

Yet the product’s momentum slowed in early 2026. Competition from other AI video platforms intensified, while limits on free usage and high computational costs constrained broader adoption. Analytics firms reported falling downloads and declining engagement after the initial surge.

Maintaining such systems is also extremely resource intensive. Video generation models require significant computing infrastructure both for training and for real-time inference, forcing companies to make difficult decisions about where to allocate resources.

Why Video Models Matter for Robotics

OpenAI’s explanation for the shutdown offers a glimpse into how the company views the deeper role of generative video technology.

Researchers increasingly see video generation systems as an important step toward building AI models capable of understanding physical reality. By learning to simulate how objects move, interact and respond to forces, such models can form the basis of “world models” used to train robots.

In robotics research, simulated environments are already widely used to train machines before they are deployed in the real world. More advanced simulation models could dramatically accelerate that process, allowing robots to learn complex behaviors from large-scale synthetic data.

The same underlying technology that enables text-to-video generation can therefore also help teach machines how the physical world behaves.
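The idea can be sketched in a few lines of Python. Here the "world model" is a trivial hand-written function standing in for a learned video or dynamics network; the structure of the rollout loop is what matters, not the toy dynamics.

```python
# Toy "world model": a next-state predictor used to roll out imagined
# trajectories for training, instead of running trials on a real robot.
# The linear dynamics below are a placeholder for a learned network.

def world_model(state, action):
    """Predict the next state from the current state and an action."""
    position, velocity = state
    new_velocity = velocity + 0.1 * action  # action nudges velocity
    new_position = position + new_velocity  # simple kinematics
    return (new_position, new_velocity)

def rollout(model, state, actions):
    """Simulate a whole action sequence without touching hardware."""
    trajectory = [state]
    for action in actions:
        state = model(state, action)
        trajectory.append(state)
    return trajectory

# Imagine pushing right for three steps, then coasting.
trajectory = rollout(world_model, (0.0, 0.0), [1.0, 1.0, 1.0, 0.0])
print(trajectory[-1])
```

A robot-learning system would evaluate thousands of such imagined rollouts to score candidate behaviors before any of them run on a physical machine, which is why better world models translate directly into cheaper, faster robot training.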

For OpenAI, shifting Sora’s development toward world simulation aligns with a broader industry trend toward embodied AI systems that operate outside purely digital environments.

Implications for the AI Industry

The shutdown also highlights the volatile economics of generative AI products. Consumer-facing tools can achieve rapid popularity, but sustaining them requires large investments in computing infrastructure and ongoing model development.

OpenAI has increasingly consolidated features inside its core ChatGPT platform rather than maintaining multiple standalone applications. Folding Sora’s capabilities into broader systems may reduce operational overhead while preserving the underlying technology.

The move also brings an abrupt end to a high-profile partnership announced in late 2025 that would have integrated Disney intellectual property into the Sora platform. That deal reportedly included plans for a substantial investment tied to the collaboration.

For Hollywood, Sora had represented both opportunity and disruption. Studios explored using the technology for pre-visualization and concept design, while industry groups raised concerns about deepfakes and the impact on creative labor.

Now the center of gravity appears to be shifting away from media production and toward physical-world applications.

If OpenAI’s strategy proves successful, the technology behind Sora may ultimately have greater impact in robotics laboratories and industrial automation systems than it ever did in social media feeds.

China’s Robot Trainers Are Teaching Humanoids How to Work

A new profession is emerging in China as human trainers teach humanoid robots real-world tasks, helping bridge the gap between laboratory demonstrations and practical deployment.

By Laura Bennett | Edited by Kseniia Klichova
At training facilities across China, human instructors demonstrate tasks such as cooking and object handling to humanoid robots, generating data needed to prepare machines for real-world work. Photo: Hubei Humanoid Robot Innovation Center

Inside a robotics lab in central China, a humanoid robot stands beside its human instructor, mirroring every motion. When the trainer raises his arm, the robot lifts its own. When he grips a coffee grinder and pours hot water into a cup, the robot repeats the sequence step by step.

The exercise is not a demonstration but part of a daily routine for a new category of worker emerging in the robotics industry: the humanoid robot trainer.

At the Hubei Humanoid Robot Innovation Center, one of China’s largest facilities dedicated to humanoid robotics development, trainers spend their days teaching machines how to perform everyday tasks through physical demonstrations and repeated trial-and-error learning. The work reflects a broader shift in robotics as companies move beyond laboratory prototypes toward preparing humanoids for practical jobs.

Training Robots for Real-World Tasks

Robot trainers use motion capture suits, sensors and virtual reality systems to translate human movements into training data that robots can learn from. By demonstrating actions such as grasping objects, walking through environments or preparing food, trainers help engineers collect detailed information about motion trajectories, force application and tactile interaction.

This data is then used to refine control algorithms and embodied AI models so robots can reproduce the actions independently.

The process is far more labor-intensive than it might appear. Even simple movements can require hundreds or thousands of repetitions before robots perform them reliably.

In a typical workday, trainers spend hours repeating and correcting actions while engineers capture usable data from only a fraction of those trials. Tasks that humans execute effortlessly, such as adjusting a grip on a utensil or coordinating several actions in sequence, remain challenging for robotic systems.

The complexity arises because robots must learn not just individual commands but continuous sequences of actions that unfold in dynamic environments.
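The collection pipeline itself is simple to sketch, even though the work is laborious: run many demonstrations, score each one, and keep only the usable fraction. The Python below is an illustrative toy, with a random quality score standing in for the engineers' review of motion-capture data.

```python
import random

def collect_demonstrations(demonstrate, n_trials, min_quality=0.8):
    """Run repeated demonstrations, keeping only high-quality trials.

    `demonstrate` returns (trajectory, quality_score); in practice the
    score would come from engineers reviewing motion-capture data.
    """
    kept = []
    for _ in range(n_trials):
        trajectory, quality = demonstrate()
        if quality >= min_quality:
            kept.append(trajectory)
    return kept

def noisy_demo():
    # Stand-in for one motion-capture demonstration of a pouring task;
    # the uniform random score models how often a trial is usable.
    quality = random.random()
    return (["reach", "grasp", "pour"], quality)

random.seed(0)  # make the toy run reproducible
usable = collect_demonstrations(noisy_demo, n_trials=1000)
print(f"{len(usable)} usable trials out of 1000")
```

With a 0.8 quality threshold, only about a fifth of trials survive the filter, which mirrors the article's point: hours of repetition yield a comparatively small amount of training-grade data.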

A New Workforce Around Humanoid Robots

The growing demand for robot trainers reflects the rapid expansion of China’s humanoid robotics industry.

According to recruitment data from the job platform Zhaopin, postings for humanoid robotics roles surged more than fourfold in the first months of 2025 compared with the previous year. The number of job seekers entering the sector rose at nearly the same pace.

China has become a central player in the emerging humanoid robotics market. Industry estimates suggest the country accounted for roughly 90 percent of global humanoid robot shipments in 2025.

Analysts expect that figure to grow as companies increase production and expand commercial applications. Morgan Stanley projects that China’s annual humanoid robot sales could reach around 28,000 units in 2026, roughly doubling from earlier levels.

Robot training centers are appearing across multiple provinces, including Anhui, Zhejiang and Shandong, as companies build the data pipelines required to scale humanoid deployment.

Preparing Robots for Everyday Environments

The rise of robot trainers highlights an often overlooked aspect of robotics development: the need for extensive real-world training data.

While advances in artificial intelligence have significantly improved robot perception and decision-making, translating those capabilities into reliable physical behavior remains difficult. Robots must learn how to manipulate objects of different shapes, move through cluttered spaces and respond to unexpected changes.

Facilities like the Hubei innovation center attempt to replicate real environments where robots may eventually operate. Training areas simulate hospitals, supermarkets, kitchens and office spaces, allowing engineers to expose robots to a wide range of scenarios.

The approach reflects China’s broader strategy of accelerating the commercialization of embodied AI technologies. Government plans for the coming decade identify robotics and intelligent manufacturing as key drivers of economic growth.

For the human trainers involved, however, the task remains more immediate: teaching machines how to perform the small actions that underpin everyday work.

Their efforts illustrate a fundamental reality of the robotics industry. Before humanoid robots can enter homes, factories or hospitals, someone has to teach them how the real world works.


Oxford Spinout Stateful Robotics Raises $4.8 Million to Build Long-Horizon AI for Robots

Oxford spinout Stateful Robotics has raised $4.8 million to develop an AI platform designed to give robots memory and long-horizon planning capabilities for real-world industrial deployments.

By Daniel Krauss | Edited by Kseniia Klichova
Stateful Robotics is developing an AI layer designed to help robots remember past events, track task progress and plan over longer time horizons in industrial environments. Photo: Stateful Robotics

Stateful Robotics, a University of Oxford spinout developing software to improve robotic decision-making in real-world environments, has raised $4.8 million in a pre-seed funding round aimed at tackling one of the industry’s persistent challenges: giving robots the ability to plan and adapt over long time horizons.

The round was led by Amadeus Capital Partners and Oxford Science Enterprises, with additional investment from serial entrepreneur Stan Boland, founder of autonomous vehicle startup Five. The funding will support the expansion of Stateful Robotics’ engineering team and accelerate deployment of its platform with industrial partners.

The company is focused on what its founders describe as a missing layer in modern robotics: a persistent intelligence system that allows machines to remember previous events, track the progress of tasks, and adapt their behavior as conditions change.

The Limits of Today’s Robot Intelligence

Over the past several years, large AI models have significantly improved robot perception and environment understanding. Foundation models can now interpret visual scenes, recognize objects and generate task instructions with far greater sophistication than earlier systems.

But these advances have not solved a fundamental operational challenge. Most robotic systems still treat each decision as an isolated event, without maintaining a structured memory of what happened previously in a deployment environment.

In practical terms, that means robots often struggle when conditions deviate from carefully scripted workflows. A blocked corridor in a warehouse, a delayed delivery, or unexpected changes in lighting can disrupt operations because the system lacks the context needed to reason about longer-term consequences.

Stateful Robotics is attempting to address that limitation by introducing what it calls a “stateful” intelligence layer. Instead of evaluating each task independently, the platform maintains a continuously updated model of the environment, task history and operational performance.

According to the company’s founders, this persistent representation allows robots to learn from past outcomes and plan more effectively across hours or days of activity rather than reacting moment by moment.
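A minimal sketch shows the difference between stateless and stateful decision-making. This is a hypothetical design for illustration, not Stateful Robotics' actual platform: a layer that remembers failed routes and steers subsequent decisions around them.

```python
from collections import deque

class StatefulLayer:
    """Minimal sketch of a persistent 'stateful' decision layer.

    Hypothetical design, not Stateful Robotics' platform. It keeps a
    rolling history of task outcomes and uses it to shape the next
    decision, instead of treating each task in isolation.
    """

    def __init__(self, history_size=100):
        self.history = deque(maxlen=history_size)  # (route, succeeded)
        self.blocked = set()                       # routes to avoid

    def record(self, route, succeeded):
        """Log an outcome and update the remembered world state."""
        self.history.append((route, succeeded))
        if succeeded:
            self.blocked.discard(route)  # route is clear again
        else:
            self.blocked.add(route)

    def choose_route(self, candidates):
        """Prefer routes with no remembered failures."""
        open_routes = [r for r in candidates if r not in self.blocked]
        return open_routes[0] if open_routes else candidates[0]

layer = StatefulLayer()
layer.record("corridor-A", succeeded=False)  # e.g. a blocked corridor
print(layer.choose_route(["corridor-A", "corridor-B"]))  # → corridor-B
```

A stateless controller would keep retrying corridor-A every time it was listed first; the stateful layer routes around the remembered blockage until a later success clears it, which is the behavior described above scaled down to a dozen lines.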

A Software Layer for Scalable Robotics

The startup was founded by CEO Kirsty Lloyd-Jukes, previously chief executive of Oxford autonomous driving spinout Latent Logic, which was acquired by Waymo in 2019. She co-founded the company alongside robotics researchers Professor Nick Hawes, Professor David Parker and Dr Bruno Lacerda, whose work at Oxford has focused on autonomous decision-making and verification under uncertainty.

Their platform is designed to integrate data from robots, operational infrastructure and task management systems into a unified model that tracks how work actually unfolds in a real environment.

This model evolves continuously as robots operate, enabling systems to anticipate disruptions, adjust plans and coordinate across fleets or human-robot teams.

The concept addresses a growing concern in the robotics industry: that while hardware platforms have matured significantly, large-scale deployment is still constrained by the reliability of autonomy software.

Investors increasingly view the intelligence layer above robotics hardware as a key determinant of whether robots can operate productively in complex environments such as logistics hubs, factories, hospitals and infrastructure sites.

Bridging Research and Industrial Deployment

Backers of Stateful Robotics argue that long-horizon reasoning could become essential as robotics moves beyond controlled industrial settings into mixed human-machine environments.

In early industrial robotics deployments, machines typically operate in tightly structured spaces where tasks are repetitive and predictable. Newer generations of mobile robots, however, are expected to navigate dynamic environments alongside human workers and other autonomous systems.

That transition requires systems capable of maintaining situational awareness over extended periods, tracking evolving conditions and adjusting operational plans accordingly.

Stateful Robotics says its platform is already being tested with pilot customers in sectors including infrastructure and logistics. The company aims to position its technology as a software layer that can sit above existing robotic platforms rather than requiring entirely new hardware.

If successful, the approach could help address a long-standing bottleneck in robotics: turning promising pilot deployments into reliable systems that can scale across large industrial environments.

For investors, the bet reflects a broader shift in robotics toward embodied AI software that enables machines to operate continuously in complex real-world contexts.


Radiation-Resistant Wi-Fi Chip Could Expand Robots in Nuclear Cleanup

Researchers in Japan have developed a radiation-tolerant Wi-Fi receiver capable of operating in extreme nuclear environments, potentially enabling wireless robots for decommissioning tasks.

By Rachel Whitman | Edited by Kseniia Klichova
Robots deployed at Fukushima Daiichi highlight the growing role of robotics in nuclear cleanup, where new radiation-tolerant wireless chips could enable fully untethered machines. Photo: Kseniia Klichova / RobotsBeat

A new radiation-resistant wireless chip developed by researchers in Japan could enable a new generation of untethered robots designed for nuclear cleanup operations.

Engineers at the Institute of Science Tokyo have created a 2.4 GHz Wi-Fi receiver capable of operating under radiation levels that would normally disable conventional electronics. The development addresses one of the persistent challenges facing robotics inside nuclear facilities: reliable communication in extreme radiation environments.

Robots have already become a critical tool in nuclear disaster response and decommissioning, particularly at Japan’s Fukushima Daiichi nuclear power plant following the 2011 earthquake and tsunami. Most of those machines, however, rely on wired connections to maintain control and communications with operators outside hazardous areas.

That reliance on cables restricts movement, complicates navigation through damaged structures, and limits how many robots can operate simultaneously.

Why Radiation Breaks Wireless Systems

High levels of gamma radiation create severe reliability problems for semiconductor electronics. Radiation can trap electrical charges inside insulating layers within transistors, leading to signal degradation, increased noise and eventual failure of communication circuits.

Wireless systems are especially vulnerable because they must detect weak signals and maintain stable amplification across sensitive radio-frequency components.

To address the issue, the research team redesigned the Wi-Fi receiver architecture to reduce the number of vulnerable components. Fewer transistors mean fewer sites where radiation-induced charge buildup can occur.

In one key modification, the engineers replaced a transistor traditionally used in the variable gain amplifier with an inductor. Because inductors are passive components rather than active semiconductor devices, they are far less sensitive to radiation exposure.
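The design logic described above can be sketched with a toy reliability model: if radiation-induced failures hit active devices roughly independently, the chance that a receiver survives a given cumulative dose falls with every additional transistor, so swapping an active device for a passive inductor removes one potential failure site. The device counts and per-device failure probability below are purely hypothetical illustrations, not figures from the Science Tokyo chip.

```python
# Toy model: why reducing the number of active transistors improves
# radiation tolerance. All numbers are hypothetical illustrations.

def survival_probability(n_transistors: int, p_fail_per_device: float) -> float:
    """Probability that no transistor has failed at a given cumulative dose,
    assuming independent radiation-induced failures with the same
    per-device failure probability."""
    return (1.0 - p_fail_per_device) ** n_transistors

# Suppose each transistor has a 2% chance of accumulating enough trapped
# charge to fail at some reference dose (a made-up figure).
p = 0.02
baseline = survival_probability(40, p)  # hypothetical original design
reduced = survival_probability(30, p)   # same design with 10 fewer active devices

print(f"baseline survival: {baseline:.3f}, reduced-count survival: {reduced:.3f}")
```

Under this simple independence assumption, every active device removed multiplies the survival probability by 1/(1 - p), which is the intuition behind minimizing transistor count in the receiver architecture.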

The chip also integrates a low-noise amplifier designed to strengthen weak incoming signals and maintain stable wireless communication even as radiation gradually affects the circuit.

Toward Untethered Robots in Nuclear Facilities

Testing showed that the chip could tolerate cumulative radiation exposure of up to 500 kilograys, levels associated with the intense gamma radiation emitted by fuel debris during nuclear decommissioning.

Even at those doses, the receiver’s signal gain declined only slightly while noise levels increased marginally, leaving overall performance comparable to standard commercial Wi-Fi receivers operating in normal environments.
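As a rough way to picture what a slight gain decline and marginally higher noise mean over a 500-kilogray exposure, the sketch below models receiver gain and noise figure as linear functions of cumulative dose. The starting values and slopes are invented for illustration only; the article does not report the measured curves.

```python
# Illustrative degradation model for a radiation-exposed receiver.
# Baseline gain, noise figure, and slopes are hypothetical values,
# chosen only to show "slight" degradation over 0-500 kGy.

def gain_db(dose_kgy: float, g0: float = 20.0, slope: float = -0.004) -> float:
    """Receiver gain in dB after a cumulative dose in kilograys."""
    return g0 + slope * dose_kgy

def noise_figure_db(dose_kgy: float, nf0: float = 3.0, slope: float = 0.002) -> float:
    """Receiver noise figure in dB after a cumulative dose in kilograys."""
    return nf0 + slope * dose_kgy

for dose in (0, 250, 500):
    print(f"{dose:>3} kGy: gain {gain_db(dose):.1f} dB, "
          f"noise figure {noise_figure_db(dose):.1f} dB")
```

With these hypothetical slopes, a full 500 kGy exposure costs only 2 dB of gain and 1 dB of noise figure, which is the kind of margin that would leave a receiver comparable to an unirradiated commercial part.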

According to the researchers, this level of resilience could make it possible to deploy wireless robots and drones inside heavily contaminated facilities without relying on physical communication cables.

Removing cables could dramatically expand how robots are used during nuclear cleanup operations. Autonomous or remotely operated machines could move through complex structures more freely, operate in coordinated groups, and conduct inspections or debris removal in areas too dangerous for human workers.

The potential impact extends beyond Fukushima. Many aging nuclear facilities around the world face decades-long decommissioning projects where radiation exposure remains a constant hazard.

Radiation-tolerant wireless systems could enable fleets of robots to perform inspection, mapping and dismantling tasks while reducing the exposure risk for human operators.

The research also reflects a broader trend in robotics toward designing electronics specifically for extreme environments, from nuclear reactors and deep oceans to space exploration.

For nuclear cleanup efforts in particular, reliable wireless communication may become a key enabling technology for scaling robotic operations inside some of the most hostile environments on Earth.


Draganfly and Palladyne AI Advance Autonomous Drone Swarm Systems

Draganfly and Palladyne AI have completed a key integration milestone combining SwarmOS autonomy software with mission-ready drone hardware, advancing distributed drone swarm capabilities for defense applications.

By Daniel Krauss | Edited by Kseniia Klichova Published:
Draganfly drone platforms integrated with Palladyne AI’s SwarmOS autonomy software represent a step toward decentralized drone swarms capable of operating in contested environments. Photo: Draganfly

Draganfly and Palladyne AI have reached a technical integration milestone that could accelerate the development of autonomous drone swarms for defense operations. The companies confirmed that Palladyne’s SwarmOS autonomy platform has been successfully integrated with Draganfly’s drone systems and validated through flight simulation testing.

The milestone represents an early step toward deploying distributed drone swarms capable of operating with limited communications and minimal centralized control – a capability that military planners increasingly view as critical for modern battlefield environments.

Unlike traditional drone systems that rely on pre-programmed flight paths or centralized command, the integrated platform is designed to allow drones to coordinate autonomously using onboard intelligence and real-time collaboration.

Decentralized Autonomy at the Edge

At the core of the joint system is Palladyne AI’s Decentralized Edge Collaborative Autonomy architecture, known as DECA, implemented through the SwarmOS platform.

The system is designed to give each drone in a swarm the ability to interpret its environment, make tactical decisions, and coordinate with other units in real time. Rather than relying on a constant connection to a command center, the drones exchange information locally and adapt collectively as mission conditions change.

This architecture is intended to address one of the central challenges in deploying drone swarms in contested environments: communications degradation. In scenarios where GPS signals are jammed or command links are disrupted, decentralized autonomy allows the swarm to continue operating.

The system is also designed to dynamically reconfigure itself if individual drones fail or are lost during a mission, allowing the remaining units to continue executing tasks without direct human control.
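The reconfiguration behavior described above can be illustrated with a minimal decentralized-assignment sketch: each surviving drone applies the same deterministic rule to its local view of the swarm, so all units converge on the same task split without a central controller. This is a generic illustration of the idea, not Palladyne AI's actual SwarmOS or DECA logic.

```python
# Minimal sketch of decentralized task reallocation in a drone swarm.
# Generic illustration only; drone and task names are hypothetical.

def reassign_tasks(alive_drones: set[str], tasks: list[str]) -> dict[str, str]:
    """Run locally by every surviving drone. Sorting peers and tasks,
    then claiming tasks round-robin, is deterministic: each drone
    computes the identical assignment from the same shared state,
    so no central controller is needed."""
    alive = sorted(alive_drones)
    return {task: alive[i % len(alive)] for i, task in enumerate(sorted(tasks))}

tasks = ["inspect-bridge", "map-north", "map-south", "relay-comms"]
before = reassign_tasks({"d1", "d2", "d3"}, tasks)
after_loss = reassign_tasks({"d1", "d3"}, tasks)  # drone d2 lost mid-mission

print("before loss:", before)
print("after loss: ", after_loss)
```

When d2 drops out, its tasks are absorbed by the remaining drones on the next local recomputation; no unit waits for instructions from a command link.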

Cameron Chell, chief executive of Draganfly, described the integration milestone as a proof point that collaborative autonomy between drones is moving from concept toward operational capability.

Ben Wolff, president and CEO of Palladyne AI, emphasized that the goal of the system is not simply synchronized flight formations but distributed decision-making within the swarm.

Military Interest in Scalable Autonomous Systems

The collaboration comes as defense agencies are increasing investment in autonomous systems capable of operating in complex environments where communications may be unreliable or adversaries actively disrupt networks.

Drone swarms are increasingly viewed as a potential force multiplier for intelligence, surveillance and reconnaissance missions, logistics operations, and tactical support roles.

The ability to deploy dozens or potentially hundreds of autonomous drones that can coordinate without continuous command links could significantly expand operational flexibility while reducing risk to human operators.

Draganfly has already been involved in defense-related deployments, including work supporting U.S. Air Force Special Operations Command as well as projects in surveillance, mapping and tactical drone operations.

The integration of Palladyne’s autonomy software with Draganfly’s hardware platforms reflects a broader shift in the drone industry toward combining advanced AI decision systems with mature aerial robotics hardware.

A Growing Market for Autonomous Swarms

While autonomous drone swarms have long been explored in research programs, operational deployment has been limited by technical challenges in coordination, reliability and communication resilience.

Advances in edge computing, sensor fusion and distributed artificial intelligence are beginning to address those limitations. The result is growing interest from defense agencies seeking systems that can operate across air, ground and maritime domains with minimal human intervention.

For companies like Draganfly and Palladyne AI, the integration milestone is less about a single product than about validating the architecture required for scalable autonomous systems.

If the approach proves reliable in real-world environments, it could help shift drone operations from remotely piloted platforms toward collaborative autonomous fleets capable of executing complex missions with limited oversight.
