CES 2026: Kodiak and Bosch Partner to Scale Autonomous Trucking Hardware
Kodiak has entered a strategic agreement with Bosch to scale production-grade autonomous trucking hardware, aiming to accelerate commercial deployment of driverless trucks.
Kodiak AI has announced a strategic agreement with Bosch to scale the manufacturing of production-grade autonomous trucking hardware, marking a significant step toward large-scale deployment of driverless trucks. The collaboration was revealed ahead of CES 2026, where a Kodiak Driver-powered autonomous truck is on display at Bosch’s booth in Las Vegas.
The partnership focuses on building a redundant, automotive-grade platform that integrates hardware, firmware, and software interfaces required to deploy Kodiak’s AI-powered virtual driver at scale. By combining Kodiak’s autonomy software with Bosch’s manufacturing expertise and supply chain capabilities, the companies aim to move autonomous trucking beyond pilots and into sustained commercial operations.
Scaling Physical AI for Heavy-Duty Trucks
Kodiak’s autonomous system, known as the Kodiak Driver, is designed as a unified platform that blends AI-driven perception and planning software with modular, vehicle-agnostic hardware. The system can be integrated either directly on a truck production line or through aftermarket upfitters, giving fleet operators flexibility in how autonomous capability is deployed.
Under the agreement, Bosch will support the development of a redundant autonomous hardware platform, supplying key components such as sensors, steering systems, and other vehicle actuation technologies. These components are designed to meet automotive-grade reliability standards, a critical requirement for long-haul trucking applications where uptime and safety are paramount.
“Advancing the deployment of driverless trucks and physical AI requires not only robust autonomy software, but also manufacturing experience and a resilient supply chain,” said Don Burnette, founder and chief executive of Kodiak. He emphasized that Bosch’s industrial scale and system-level integration expertise are essential for commercial success.
From Commercial Pilots to Industrial Scale
Kodiak has already deployed trucks operating without human drivers in commercial service, positioning the company as one of the few autonomous trucking developers with real-world revenue-generating operations. The new agreement is intended to build on that foundation by enabling higher-volume production and standardized hardware configurations.
Bosch’s role extends beyond component supply. As the world’s largest automotive supplier, the company brings decades of experience in industrialization, quality assurance, and global manufacturing. This expertise is expected to help Kodiak transition from limited deployments to repeatable, scalable production suitable for fleet-wide adoption.
Paul Thomas, president of Bosch in North America and president of Bosch Mobility Americas, said the collaboration allows Bosch to deepen its understanding of real-world autonomous vehicle requirements while contributing production-grade systems to the broader autonomous mobility ecosystem.
CES 2026 and the Push Toward Autonomous Freight
Autonomous trucking emerged as a key theme at CES 2026, with increasing emphasis on commercialization rather than experimental prototypes. Kodiak and Bosch used the event to highlight how physical AI systems are moving into operational environments where reliability, redundancy, and cost efficiency matter as much as technical performance.
The Kodiak Driver-powered truck on display demonstrates how the integrated platform brings together sensors, compute, and vehicle control into a single autonomous system. Unlike many earlier demonstrations, the focus is on readiness for deployment rather than future concepts.
Industry analysts view the partnership as a sign that autonomous trucking is entering a more mature phase, where partnerships with established automotive suppliers are essential to overcoming manufacturing and regulatory hurdles.
Broader Implications for Autonomous Logistics
For Kodiak, the deal supports its long-term vision of becoming a trusted provider of autonomous ground transportation across commercial and public-sector applications. The company has also positioned its technology for use in government and national security contexts, where reliability and safety standards are especially stringent.
The collaboration underscores a broader trend in robotics and automation, where autonomy developers increasingly rely on established industrial partners to bridge the gap between software innovation and large-scale deployment. As physical AI systems move from test routes to highways and supply chains, the ability to manufacture and support hardware at scale becomes a decisive competitive advantage.
With CES 2026 as the backdrop, the Kodiak-Bosch agreement signals growing confidence that autonomous trucking is transitioning from experimentation to infrastructure, setting the stage for wider adoption in the years ahead.
CES 2026: LG Showcases CLOiD Home Robot That Cooks, Folds Laundry, and Manages Chores
LG Electronics demonstrated its AI-powered CLOiD home robot at CES 2026, highlighting autonomous cooking, laundry folding, and dishwasher management as part of its Zero Labor Home vision.
LG Electronics has unveiled its most advanced home robotics concept to date with the public debut of LG CLOiD, an AI-powered household robot designed to take over everyday domestic chores. Presented at CES 2026, the robot reflects LG’s long-term Zero Labor Home strategy, which aims to reduce the physical and mental effort required to manage a modern household through intelligent automation.
Unlike earlier home robots focused on narrow tasks, CLOiD is positioned as a general-purpose domestic assistant. LG demonstrated the robot performing a range of coordinated activities, including preparing simple meals, handling laundry from start to finish, and managing dishwashing tasks. The company says CLOiD is designed to operate as part of a fully connected home rather than as a standalone device.
Demonstrating End-to-End Household Automation
During live demonstrations, CLOiD retrieved food items from a refrigerator, placed pastries into an oven, and initiated cooking processes without human intervention. After occupants left the home, the robot was shown starting laundry cycles, transferring clothes to a dryer, and folding and stacking garments once complete. CLOiD also demonstrated the ability to unload a dishwasher and organize clean dishes.
These scenarios were designed to show how the robot understands sequences of tasks rather than executing isolated commands. CLOiD uses contextual awareness to determine when chores should begin and how appliances should be operated, adapting its actions to household routines and user preferences.
LG emphasized that the robot’s value lies in orchestration. Rather than replacing individual appliances, CLOiD coordinates them, acting as a mobile control layer that connects cooking, cleaning, and laundry into a single automated workflow.
Hardware Built for Domestic Environments
CLOiD features a wheeled base for stability and safe operation in homes with children or pets. The robot’s torso can raise or lower to adjust its working height, enabling it to reach objects on countertops, inside appliances, or closer to the floor. Two articulated arms, each with seven degrees of freedom, provide human-like reach and dexterity.
Each hand includes five independently controlled fingers, allowing CLOiD to grasp delicate items such as glassware as well as heavier objects like laundry baskets. LG selected a wheeled design over a bipedal form to reduce cost, improve reliability, and lower the risk of tipping during operation.
The navigation system builds on LG’s experience with robotic vacuum cleaners and autonomous home platforms. CLOiD can move smoothly between rooms, avoid obstacles, and precisely position itself for manipulation tasks in kitchens and laundry areas.
Physical AI and Smart Home Integration
At the core of CLOiD is LG’s Physical AI framework, which combines vision-based perception, language understanding, and action planning. The robot uses visual data from onboard cameras to recognize appliances, objects, and environments. This information is translated into structured understanding and then into physical actions, such as opening doors, transferring items, or adjusting appliance settings.
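In outline, a perception-to-action loop of that kind can be sketched as follows. Everything in the sketch, from the function names to the canned kitchen scene and action primitives, is an illustrative assumption; LG has not published CLOiD’s software interfaces.

```python
# A minimal sketch of a perception -> understanding -> action loop of the
# kind described for CLOiD. Every name here is an illustrative assumption;
# LG has not published CLOiD's software interfaces.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str                              # e.g. "oven_door", "pastry_tray"
    position: tuple[float, float, float]    # (x, y, z) in the robot's frame

def perceive(frame: bytes) -> list[DetectedObject]:
    # Vision stage (stubbed): a real system would run object detection on
    # camera frames; here we return a canned kitchen scene.
    return [DetectedObject("oven_door", (1.2, 0.4, 0.9)),
            DetectedObject("pastry_tray", (0.8, 0.1, 1.0))]

def plan(goal: str, scene: list[DetectedObject]) -> list[str]:
    # Understanding/planning stage (stubbed): map a spoken goal plus the
    # perceived scene to an ordered list of action primitives.
    if goal == "bake pastries" and any(o.label == "pastry_tray" for o in scene):
        return ["open oven_door", "insert pastry_tray",
                "close oven_door", "start oven"]
    return []

def execute(action: str) -> None:
    # Action stage (stubbed): dispatch one primitive to arms, base, or an
    # appliance over the smart-home connection.
    print(f"executing: {action}")

def run_chore(goal: str) -> None:
    # Re-perceiving between steps is what would let the plan adapt if the
    # environment changes mid-task, rather than replaying a fixed script.
    for action in plan(goal, perceive(b"")):
        execute(action)

run_chore("bake pastries")
```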
CLOiD’s head functions as a mobile AI home hub, housing its processor, sensors, display, speakers, and voice-based generative AI. The robot communicates with users through spoken dialogue and expressive visual cues while continuously learning household layouts and routines.
Deep integration with LG’s ThinQ and ThinQ ON platforms allows CLOiD to control and coordinate smart appliances across the home. This connectivity enables more complex automation scenarios, such as preparing meals based on available ingredients or scheduling chores around user absences.
Robotics Components and Long-Term Strategy
Alongside CLOiD, LG introduced AXIUM, a new family of robotic actuators designed for service robots and physical AI systems. Actuators control motion and force within robotic joints and are considered one of the most critical and cost-intensive components in advanced robots.
LG says its background in appliance component manufacturing provides an advantage in producing lightweight, compact, and high-torque actuators suitable for home robotics. Modular actuator designs also allow customization across different robot configurations and use cases.
Looking ahead, LG plans to expand robotics capabilities across both standalone home robots and robotized appliances. The company envisions refrigerators that open automatically as users approach and appliances that actively coordinate with home robots to complete tasks autonomously.
“The LG CLOiD home robot is designed to naturally engage with and understand the humans it serves, providing an optimized level of household help,” said Steve Baek, president of the LG Home Appliance Solution Company. “We will continue our efforts to achieve our Zero Labor Home vision.”
At CES 2026, LG positioned CLOiD as a glimpse into a future where household labor is largely delegated to intelligent machines, allowing people to spend more time on activities beyond routine chores.
World’s Smallest Programmable Autonomous Robots Can Swim, Sense, and Think
Researchers at the University of Pennsylvania and the University of Michigan unveiled microscopic robots that are fully programmable, autonomous, and capable of sensing and reacting to their environment over extended periods.
Researchers at the University of Pennsylvania and the University of Michigan have developed what they describe as the world’s smallest fully programmable autonomous robots, pushing robotics into a microscopic frontier. Each robot measures roughly 200 by 300 by 50 micrometers, smaller than a grain of salt, yet integrates computing, sensing, and propulsion into a single untethered system. The robots are designed to operate independently without external control, marking a significant step forward in microscale robotics.
Unlike earlier microrobots that depended on external magnetic fields or tethered power sources for control, these robots are fully autonomous. Their onboard electronics are powered by light, enabling the robots to sense their surroundings and make basic decisions on their own. In laboratory demonstrations, the robots were able to swim in liquid environments and adjust their motion without human intervention.
The robots can be produced using established semiconductor fabrication techniques, allowing them to be manufactured at scale. Researchers estimate the cost at roughly one cent per robot when produced in large quantities. Once activated, the devices can continue operating for months, making them suitable for long-duration experiments or deployments at microscopic scales.
Autonomous Microscale Motion and Control
Movement at microscopic scales presents unique challenges because fluid resistance dominates over inertia. To address this, the robots use an electrochemical propulsion method rather than mechanical parts. By generating electric fields, the robots interact with ions in the surrounding liquid, creating movement without the need for motors or moving limbs.
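A rough Reynolds-number estimate makes the point concrete. Using the stated size (about 300 micrometers) and speed (about one body length per second), and assuming water-like density and viscosity:

$$\mathrm{Re} = \frac{\rho v L}{\mu} \approx \frac{(10^{3}\,\mathrm{kg/m^{3}})(3\times10^{-4}\,\mathrm{m/s})(3\times10^{-4}\,\mathrm{m})}{10^{-3}\,\mathrm{Pa\cdot s}} \approx 0.1,$$

well below 1, so viscous drag overwhelms inertia and a swimmer at this scale cannot coast between strokes the way a fish or a propeller-driven craft can.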
This approach allows the robots to swim at speeds of roughly one body length per second. The lack of moving components makes the robots mechanically robust and resistant to damage during handling. Researchers demonstrated that the devices could be transferred between samples using standard laboratory tools without losing functionality.
The propulsion method also enables precise directional control. By adjusting electrical signals, the robots can change direction, stop, or follow preprogrammed movement patterns. This capability is essential for future applications that require coordinated motion or navigation through confined environments.
Tiny Brains and Sensing Capabilities
A key breakthrough lies in the integration of a complete computing system at such a small scale. The robots include a processor, memory, and sensors embedded directly on the chip. Power is supplied by microscopic solar cells that generate approximately 75 nanowatts under LED illumination, an extremely small energy budget compared to consumer electronics.
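To put that budget in perspective, compare it with a typical smartphone’s draw (roughly 5 W is assumed here as a reference point):

$$\frac{5\,\mathrm{W}}{75\,\mathrm{nW}} = \frac{5}{7.5\times10^{-8}} \approx 7\times10^{7},$$

so the robot’s entire sensing, computing, and propulsion budget is nearly a hundred million times smaller than a phone’s.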
Despite these constraints, the robots are capable of basic sensing and decision-making. They can detect temperature changes with high sensitivity and alter their behavior in response. Researchers also demonstrated simple communication by encoding information through movement patterns that can be observed under a microscope.
These capabilities allow the robots to respond dynamically rather than follow fixed paths. While the onboard intelligence is limited compared to larger robotic systems, it represents a major step toward autonomous behavior at microscopic dimensions.
Potential Applications and Next Steps
The researchers see strong potential for applications in biomedicine, where microscopic robots could one day monitor cellular environments or deliver targeted therapies. Their small size allows them to operate in spaces inaccessible to conventional devices, including narrow fluid channels and delicate biological systems.
In manufacturing and materials science, the robots could assist in assembling or inspecting microscale components. Because the platform is compatible with standard chip manufacturing processes, it could be adapted for large-scale production and customized for specific industrial tasks.
The current demonstrations were conducted in controlled laboratory conditions, and the researchers emphasize that further work is needed to expand functionality. Future efforts will focus on improving sensing, increasing computational complexity, and enabling operation in more complex environments. Even at this early stage, the work establishes a foundation for autonomous robotics at scales comparable to biological microorganisms.
UPS Buys Hundreds of Robots to Automate Truck Unloading Operations
UPS has purchased hundreds of warehouse robots designed to unload packages from trucks, expanding automation across its U.S. logistics network to address labor strain and efficiency demands.
United Parcel Service has taken another major step toward warehouse automation by purchasing hundreds of robots designed to unload packages from delivery trucks. The move reflects growing pressure on large logistics operators to increase throughput while reducing reliance on physically demanding manual labor. UPS says the robots will be deployed across multiple facilities in the United States.
Truck unloading is among the most physically taxing tasks in parcel logistics, requiring workers to handle thousands of packages per shift in confined trailer spaces. By automating this stage of the workflow, UPS aims to improve worker safety while maintaining consistent processing speeds during peak demand periods. The company has increasingly focused on automation as parcel volumes fluctuate and labor availability tightens.
Automating One of Logistics’ Hardest Jobs
The robots are designed to operate inside standard truck trailers, identifying packages of varying shapes and sizes and transferring them onto conveyor systems. Using machine vision and AI-based grasping systems, the robots can adapt to mixed loads without requiring pre-sorted shipments. This flexibility allows them to function in existing facilities without major structural changes.
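In outline, such systems typically run a sense-pick-place loop against the parcel wall at the back of the trailer. The sketch below illustrates that generic pattern only; it is not UPS’s or any vendor’s actual software.

```python
# Generic sketch of a trailer-unloading loop: perceive the parcel wall,
# pick the best graspable candidate, place it on the conveyor, repeat.
# This illustrates the pattern only, not any vendor's actual software.
from dataclasses import dataclass

@dataclass
class Parcel:
    id: int
    height: float      # meters above the trailer floor
    graspable: bool    # e.g. enough exposed surface for the gripper

# Toy trailer contents; a real system re-segments depth data each cycle.
trailer = [Parcel(1, 1.8, True), Parcel(2, 1.1, True), Parcel(3, 0.2, False)]

def detect_parcels() -> list[Parcel]:
    # Vision stage (stubbed): no pre-sorted load is assumed.
    return trailer

def pick_and_convey(p: Parcel) -> None:
    trailer.remove(p)
    print(f"picking parcel {p.id} -> conveyor")

def unload_trailer() -> None:
    while True:
        candidates = [p for p in detect_parcels() if p.graspable]
        if not candidates:
            break
        # Take the highest graspable parcel first so the stack stays stable.
        pick_and_convey(max(candidates, key=lambda p: p.height))

unload_trailer()
print(f"{len(trailer)} parcel(s) left for a human overseer")
```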
UPS says the robotic unloading systems are capable of operating continuously and can handle thousands of packages per hour. While human workers will continue to oversee operations, the robots are intended to take over repetitive lifting and stacking tasks that have historically contributed to injuries and high turnover.
The company has been testing robotic unloading technology for several years through pilot programs. The decision to move forward with a large-scale purchase suggests those trials met internal benchmarks for reliability, safety, and return on investment.
Scaling Automation Across the Network
UPS operates one of the world’s largest logistics networks, processing millions of packages per day. Even small efficiency gains at individual facilities can translate into significant cost savings at scale. Automating truck unloading also helps standardize operations across sites, reducing performance variability tied to staffing levels.
The robots will be integrated into UPS facilities alongside existing automation systems, including conveyor networks, sorting machines, and warehouse management software. This layered approach allows UPS to automate specific bottlenecks without redesigning entire hubs.
While the company did not disclose the total value of the purchase, large-scale robotic deployments of this kind typically involve multi-year investments. Industry analysts view the move as part of a broader shift among parcel carriers toward targeted automation rather than full end-to-end robotic warehouses.
Labor, Safety, and the Future of Parcel Handling
UPS has emphasized that automation is intended to complement its workforce rather than replace it. By reducing the physical strain of unloading tasks, the company aims to reassign workers to roles that require oversight, coordination, and problem-solving.
Warehouse robotics adoption has accelerated across the logistics industry as operators confront rising service expectations, tight delivery timelines, and ongoing labor challenges. Robots capable of unloading trucks address one of the most difficult remaining manual processes in parcel handling.
As UPS continues deploying these systems, their performance in live operations will likely influence similar investments across the sector. The expansion underscores how robotics is moving deeper into everyday logistics tasks, shifting from experimental pilots to large-scale, operational deployments.
Humanoid Robots Take Center Stage at Silicon Valley Humanoids Summit as Doubts Persist
Humanoid robots dominated discussions at a Silicon Valley Humanoids Summit, but investors and engineers raised concerns about scalability, costs, and real-world deployment timelines.
Humanoid robots were the headline attraction at the Silicon Valley Humanoids Summit, where startups and researchers showcased rapid progress in mobility, perception, and manipulation. Demonstrations highlighted robots walking autonomously, handling objects, and interacting with human-built environments. Despite the enthusiasm, discussions repeatedly returned to unresolved challenges around cost, reliability, and commercial readiness.
The summit reflected a broader surge of interest in humanoid robotics driven by advances in artificial intelligence, sensors, and actuators. Investors, engineers, and corporate buyers attended sessions focused on how humanoid form factors could operate in warehouses, factories, and service environments. Yet many participants cautioned that impressive demonstrations do not always translate into scalable products.
Progress Meets Practical Constraints
Several companies presented humanoid robots designed to work in logistics and manufacturing, emphasizing their ability to navigate spaces built for humans without infrastructure changes. Developers argued that bipedal robots could eventually replace or support workers in tasks ranging from material handling to inspection. The appeal lies in flexibility, with a single robot potentially performing many roles rather than one specialized task.
However, experts at the summit noted that humanoid robots remain expensive to build and maintain. Power consumption, mechanical wear, and software robustness continue to limit continuous operation. While some robots can perform short demonstrations reliably, sustaining performance across long shifts in unpredictable environments remains a significant hurdle.
There was also skepticism about whether humanoid robots offer clear advantages over existing automation. In many warehouses and factories, wheeled robots, conveyors, and fixed automation already deliver higher efficiency at lower cost. Critics argued that humanoid designs may only make economic sense in narrow use cases where human-like mobility is essential.
Market Expectations and Deployment Reality
The summit highlighted growing tension between investor expectations and deployment timelines. Several startups predicted rapid adoption within the next few years, pointing to pilot programs and early commercial agreements. Others urged caution, warning that widespread deployment would likely take longer due to safety certification, workforce integration, and total cost of ownership considerations.
Labor dynamics were a recurring theme. Proponents suggested humanoid robots could help address worker shortages and reduce injury risk in physically demanding roles. Skeptics countered that deploying complex robots introduces new maintenance and oversight requirements that may offset labor savings, at least in the near term.
Regulatory uncertainty also surfaced during discussions. Humanoid robots operating alongside humans raise questions about liability, workplace standards, and certification processes. Industry observers noted that clear regulatory frameworks will be critical before large fleets can be deployed in active industrial settings.
A Sector at a Crossroads
By the end of the summit, humanoid robots had clearly captured attention, but consensus remained elusive. The technology is advancing rapidly, and real-world pilots are expanding, yet doubts persist about near-term scalability and economic viability. Many attendees described the current moment as a transition from hype-driven excitement to a more sober evaluation of practical constraints.
The discussions underscored that humanoid robotics is no longer a speculative concept, but neither is it a solved problem. As companies continue to test deployments and refine designs, the coming years are likely to determine whether humanoid robots become a mainstream industrial tool or remain a niche solution reserved for specific environments.
Mercado Libre Signs Deal with Agility Robotics to Deploy Digit Humanoid Robots
Mercado Libre has entered a commercial agreement with Agility Robotics to deploy Digit humanoid robots in its logistics operations, starting with a pilot facility in Texas.
Mercado Libre, Latin America’s largest commerce and fintech ecosystem, has signed a commercial agreement with Agility Robotics to deploy the Digit humanoid robot in its logistics operations. The partnership marks one of the first commercial deployments of humanoid robots in large-scale e-commerce fulfillment tied to a Latin American operator. Initial deployment will take place at a Mercado Libre facility in San Antonio, Texas.
The companies say the collaboration is aimed at exploring how humanoid robots can support fulfillment workflows, improve workplace ergonomics, and address labor shortages in logistics. While the first deployment is limited to a U.S. site, both sides plan to evaluate broader use cases across Mercado Libre’s warehouse network in Latin America.
Digit Enters Live Commerce Operations
Digit is a human-scale bipedal robot designed to walk through existing warehouse aisles, lift and carry totes, and operate alongside human workers without requiring major infrastructure changes. Agility Robotics says Digit is already commercially deployed and has moved more than 100,000 totes in live commerce environments, demonstrating reliability in production settings.
At Mercado Libre, Digit will initially focus on tasks that support order fulfillment. These include repetitive and physically demanding activities that are often difficult to staff consistently. By automating such roles, the companies aim to reduce injury risk and free human workers for higher-value tasks.
“At Mercado Libre, we are constantly exploring how emerging technologies can elevate our operations and improve the experience for our employees and millions of users,” said Agustin Costa, Senior Vice President of Shipping at Mercado Libre. “Our partnership with Agility Robotics and the deployment of Digit in our facilities is a significant step forward in our vision to create a safer, more efficient, and adaptable logistics network.”
Costa added that the company is particularly interested in how humanoid robots can complement existing teams rather than replace them. The goal, he said, is to test how robotics can drive the next evolution of commerce logistics in the region.
Automation, AI, and Labor Challenges
Digit is designed to fill high-turnover roles using a combination of onboard autonomy and cloud-based fleet management. The robot leverages artificial intelligence to learn tasks, adapt to new workflows, and operate continuously in structured warehouse environments. Agility Robotics pairs Digit with Agility ARC, its cloud automation platform for deploying and managing fleets of robots.
Through this platform, Digit can coordinate with other automated systems such as autonomous mobile robots, conveyor belts, and warehouse management software. This approach allows companies to add humanoid robots to existing operations without disrupting current automation investments.
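The coordination pattern resembles other fleet-orchestration layers: a shared task queue fed by the warehouse management system, with work assigned to whichever robot is idle. The sketch below shows that generic pattern only; Agility ARC’s actual interfaces are not public.

```python
# Generic fleet-orchestration pattern: a task queue fed by the warehouse
# management system and drained by idle robots. Illustrative only;
# Agility ARC's actual interfaces are not public.
from collections import deque

class Robot:
    def __init__(self, name: str):
        self.name, self.busy = name, False

    def assign(self, task: str) -> None:
        self.busy = True
        print(f"{self.name} <- {task}")

task_queue: deque[str] = deque()

def on_wms_event(task: str) -> None:
    # The warehouse management system pushes work as orders arrive.
    task_queue.append(task)

def dispatch(fleet: list[Robot]) -> None:
    # Hand queued tasks to idle robots; busy robots are skipped until
    # they report completion (omitted in this sketch).
    for robot in fleet:
        if not robot.busy and task_queue:
            robot.assign(task_queue.popleft())

fleet = [Robot("digit-1"), Robot("digit-2")]
on_wms_event("move tote A17 to outbound conveyor")
on_wms_event("move tote B03 to staging")
dispatch(fleet)
```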
Agility Robotics positions Digit as a response to chronic labor shortages in logistics and manufacturing, where repetitive manual tasks often struggle to attract long-term workers. By handling physically taxing work, the company argues, humanoid robots can help stabilize throughput while improving safety and consistency.
“We are incredibly proud to be partnering with Mercado Libre to support their workforce and operations through the deployment of Agility’s humanoid robot Digit,” said Daniel Diez, Chief Business Officer of Agility Robotics. “Mercado Libre has demonstrated that it is a true innovator in both commerce and fintech, and we are excited to integrate our autonomous humanoid robots capable of performing meaningful work and delivering real value into their facilities.”
A Growing Market for Humanoid Robots
Mercado Libre joins a growing list of companies deploying Agility’s humanoid robots, including logistics provider GXO, German industrial manufacturer Schaeffler, and Amazon. These deployments signal increasing confidence that humanoid robots are moving beyond research and pilot programs into real-world industrial use.
While the current agreement focuses on evaluation and early deployment, the companies suggest the partnership could expand if results meet operational and economic expectations. For Mercado Libre, the project offers a way to test advanced automation while maintaining flexibility across a geographically diverse logistics network.
The deployment underscores a broader trend in robotics, where humanoid form factors are being tested not as novelties but as practical tools designed to work within human-built environments. As large logistics operators seek scalable solutions to labor and efficiency challenges, commercially deployed humanoid robots like Digit may play an increasingly visible role.
Engineers Use AI to Fine-Tune Robotic Prosthesis for Natural Hand Dexterity
Researchers at the University of Utah used artificial intelligence to improve control of a robotic prosthetic hand, reducing cognitive effort while increasing grip precision and stability.
Engineers at the University of Utah have developed an artificial intelligence system that significantly improves the dexterity and intuitiveness of robotic prosthetic hands. By combining advanced sensors with machine learning, the researchers enabled a prosthesis to grasp objects in a way that more closely resembles natural human movement. The approach reduces the mental effort required by users while increasing grip precision and reliability.
For many prosthesis users, even simple tasks such as holding a cup or picking up a small object require deliberate finger-by-finger control. This added cognitive burden is one of the main reasons advanced prosthetic devices are often abandoned. The Utah team focused on restoring the subconscious, automatic aspects of grasping that most people take for granted.
Sensors and AI Enable Autonomous Grasping
The researchers modified a commercially available prosthetic hand by equipping it with custom fingertips capable of sensing both pressure and proximity. Optical proximity sensors allow the fingers to detect objects before physical contact, while pressure sensors provide feedback once an object is grasped. Together, these inputs give the prosthesis a form of artificial touch.
An artificial neural network was trained on grasping postures using proximity data from each finger. This allows the prosthetic hand to autonomously position its fingers at the correct distance to form a stable grip. Because each finger operates with its own sensor, the system adjusts all digits in parallel, producing precise and adaptable grasping behavior across objects of different shapes and sizes.
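A minimal sketch of that kind of mapping is shown below: a small network takes one proximity reading per finger and outputs a pre-shaping posture for all fingers at once. The architecture, synthetic data, and training setup are assumptions for illustration, not the published model.

```python
# Minimal sketch: a small network maps per-finger proximity readings to
# a pre-shaping posture for all fingers at once. Architecture, data, and
# hyperparameters are illustrative assumptions, not the published model.
import torch
import torch.nn as nn

N_FINGERS = 5

net = nn.Sequential(                          # proximity in, posture out
    nn.Linear(N_FINGERS, 32), nn.ReLU(),
    nn.Linear(32, N_FINGERS), nn.Sigmoid(),   # normalized flexion [0, 1]
)

# Synthetic training pairs: a nearer object (smaller reading) should
# produce a more closed (higher-flexion) pre-shape for that finger.
prox = torch.rand(1024, N_FINGERS)            # 0 = touching, 1 = far away
target = 1.0 - prox                           # toy ground-truth posture

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(prox), target)
    loss.backward()
    opt.step()

# Each control tick maps the latest readings to a posture, so all
# fingers adjust in parallel as the hand approaches an object.
with torch.no_grad():
    posture = net(torch.tensor([[0.2, 0.3, 0.9, 0.8, 0.5]]))
print(posture)
```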
In testing, participants using the AI-assisted prosthesis demonstrated greater grip security and precision compared to conventional control methods. They were also able to complete tasks using different grip styles without extensive training, suggesting the system adapts naturally to user intent.
Sharing Control Between Human and Machine
A central design challenge was ensuring that artificial intelligence supported the user rather than competing for control. To solve this, the researchers implemented a bioinspired framework that shares control between the human and the AI system. The prosthesis assists with fine motor adjustments while allowing the user to initiate, modify, or stop actions freely.
“What we don’t want is the user fighting the machine for control,” said Marshall Trout, a postdoctoral researcher involved in the work. “Here, the machine improved the precision of the user while also making the tasks easier.”
The system blends rapid reactive responses, such as preventing excessive grip force, with higher-level planning that anticipates how objects should be grasped. This mirrors how humans naturally coordinate instinctive reactions with learned motor patterns.
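One common way to realize such blending is to mix the user’s and the machine’s commands with a fixed weight and add a reactive force cap on top; the weight and threshold below are illustrative assumptions, not the study’s published controller.

```python
# Illustrative shared-control blend: the user's intent and the AI's fine
# correction are mixed, and a reactive layer caps grip force. The weight
# and limit values are assumptions, not the study's controller.

MAX_GRIP_FORCE = 8.0   # newtons, assumed safety threshold
ALPHA = 0.6            # assumed weight on the user's command

def blended_command(user_cmd: float, ai_cmd: float,
                    measured_force: float, prev_cmd: float) -> float:
    """Return the next finger-flexion command in [0, 1]."""
    cmd = ALPHA * user_cmd + (1.0 - ALPHA) * ai_cmd   # shared control
    if measured_force >= MAX_GRIP_FORCE:
        cmd = min(cmd, prev_cmd)   # reactive layer: never squeeze harder
    return max(0.0, min(1.0, cmd))

# The user stays in charge: with the AI's suggestion held fixed, raising
# or lowering user_cmd always moves the output in the same direction.
print(blended_command(user_cmd=0.9, ai_cmd=0.6, measured_force=3.0, prev_cmd=0.7))
print(blended_command(user_cmd=0.9, ai_cmd=0.6, measured_force=9.5, prev_cmd=0.7))
```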
Study Leadership and Future Directions
The study was led by engineering professor Jacob A. George together with Trout at the Utah NeuroRobotics Lab and was published in the journal Nature Communications. The research involved experiments with four transradial amputees, whose amputations occurred between the elbow and wrist.
Participants completed standardized dexterity tests as well as everyday activities requiring fine motor control. Tasks such as lifting a lightweight plastic cup, which require careful force modulation, became more reliable with AI assistance.
“As lifelike as bionic arms are becoming, controlling them is still not easy or intuitive,” Trout said. “Nearly half of all users will abandon their prosthesis, often citing poor controls and cognitive burden.”
George emphasized that the long-term goal is to embed intelligence directly into prosthetic devices so users can interact with objects more naturally. The team is now exploring how this AI-driven grasping approach could be combined with implanted neural interfaces, enabling thought-based control and the return of tactile sensations. By merging sensing, intelligence, and neural input, the researchers aim to make robotic prostheses feel less like tools and more like natural extensions of the human body.
AI-Powered Robotic Dog Uses Memory and Vision for Search-and-Rescue Missions
Engineering students at Texas A&M University developed an AI-powered robotic dog that sees, remembers, and plans routes autonomously, targeting search-and-rescue and disaster response missions.
Researchers at Texas A&M University have developed an AI-powered robotic dog designed to operate in complex, unpredictable environments using memory-driven navigation and human-like decision-making. Built by graduate engineering students, the robot is capable of seeing, remembering where it has been, and responding dynamically to new situations. The system is aimed primarily at search-and-rescue and disaster response missions, where conditions are often chaotic and GPS signals are unavailable.
Unlike conventional robotic systems that rely on pre-mapped environments or simple obstacle avoidance, the robotic dog integrates vision, memory, and language-based reasoning. It understands voice commands, analyzes camera input in real time, and plans routes autonomously. The developers say this combination allows the robot to behave more like a human responder than a traditional machine.
Memory-Driven Navigation With Multimodal AI
At the core of the system is a memory-driven navigation architecture powered by a custom multimodal large language model (MLLM). The model interprets visual data captured by onboard cameras and combines it with stored environmental memory to make navigation decisions. This enables the robot to recall previously traveled paths and reuse them, improving efficiency and reducing redundant exploration.
A hybrid control structure allows the robot to balance reactive behavior with high-level planning. It can quickly respond to immediate hazards, such as avoiding collisions, while simultaneously reasoning about longer-term navigation goals. According to the research team, this mirrors how humans navigate unfamiliar spaces by combining instinctive reactions with deliberate planning.
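Sketched in code, the idea looks roughly like this: a fast reactive layer runs every tick and can override a slower deliberative layer that consults stored route memory. All names, thresholds, and the model call below are illustrative assumptions; the team’s actual pseudocode and MLLM interface are not public.

```python
# Hybrid control sketch: a fast reactive layer overrides a slower
# deliberative layer that reuses remembered routes. All names and the
# model call are illustrative assumptions, not the team's pseudocode.
from collections import deque

route_memory: dict[tuple, list] = {}   # (start, goal) -> waypoint list

def reactive_layer(min_obstacle_dist: float) -> str | None:
    # Runs every control tick: immediate, model-free responses.
    return "stop_and_back_off" if min_obstacle_dist < 0.3 else None

def deliberative_layer(start, goal, query_model) -> list:
    # Slower path: reuse a remembered route if one exists, otherwise ask
    # the multimodal model to propose waypoints from current camera input.
    if (start, goal) not in route_memory:
        route_memory[(start, goal)] = query_model(start, goal)
    return route_memory[(start, goal)]

def navigate(start, goal, query_model, sense, move) -> None:
    plan = deque(deliberative_layer(start, goal, query_model))
    while plan:
        hazard = reactive_layer(sense())
        if hazard:
            move(hazard)            # reflexive override takes priority
            continue
        move(plan.popleft())        # otherwise follow the remembered plan

# Toy demo: one hazard reading, then a clear path.
readings = iter([0.2, 1.0, 1.0, 1.0])
navigate("dock", "room_b",
         query_model=lambda s, g: ["hallway", "doorway", g],
         sense=lambda: next(readings),
         move=lambda step: print("->", step))
```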
“Some academic and commercial systems have integrated language or vision models into robotics,” said Sandun Vitharana, an engineering technology master’s student involved in the project. “However, we haven’t seen an approach that leverages MLLM-based memory navigation in the structured way we describe, especially with custom pseudocode guiding decision logic.”
The robot’s navigation system was designed specifically for unstructured and unpredictable environments, such as disaster zones or remote areas. Traditional autonomous navigation methods often struggle in these conditions due to changing layouts, debris, and limited visibility.
From Disaster Response to Broader Applications
The project was led by Vitharana and Sanjaya Mallikarachchi, an interdisciplinary engineering doctoral student, with guidance from faculty at Texas A&M University. With support from the National Science Foundation, the team explored how multimodal AI models could be deployed at the edge, rather than relying on cloud-based processing.
“Moving forward, this kind of control structure will likely become a common standard for human-like robots,” Mallikarachchi said.
Beyond search-and-rescue operations, the researchers see broader potential applications for the technology. The robot’s ability to navigate large, complex spaces could make it useful in hospitals, warehouses, and other industrial facilities. Its memory-based system may also assist people with visual impairments, conduct reconnaissance in hazardous areas, or support exploration tasks where human access is limited.
Dr. Isuru Godage, an assistant professor in the Department of Engineering Technology and Industrial Distribution, emphasized the importance of deploying advanced AI directly on robotic platforms. “The core of our vision is deploying MLLM at the edge, which gives our robotic dog the immediate, high-level situational awareness previously impossible,” Godage said. “Our goal is to ensure this technology is not just a tool, but a truly first responder-ready system for unmapped environments.”
The robot was recently demonstrated at the 22nd International Conference on Ubiquitous Robots, where the team presented experimental results and system design details. The work highlights how advances in multimodal AI are beginning to reshape autonomous robotics, moving systems closer to adaptive, human-like behavior in real-world conditions.
MIT’s Aerial Microrobot Matches Insect Speed with AI-Controlled Flight
MIT engineers developed an AI-controlled aerial microrobot capable of flying with speed and agility comparable to insects, marking a major advance in micro-scale robotics.
Engineers at MIT have demonstrated an aerial microrobot capable of flying with speed and agility approaching that of real insects, overcoming a long-standing limitation in micro-scale robotics. The insect-sized robot can execute aggressive maneuvers, including repeated midair somersaults, while maintaining stability even in windy conditions. The breakthrough was enabled by a new artificial intelligence-based control system that dramatically improves flight performance.
Tiny flying robots have long been viewed as promising tools for applications such as search-and-rescue operations, where they could navigate through narrow gaps and unstable environments inaccessible to larger drones. Until now, however, aerial microrobots have been constrained to slow, smooth flight paths due to limitations in control systems and onboard computation.
AI-Based Control Unlocks Agile Flight
The MIT team designed a two-part, AI-driven control framework that balances high performance with real-time efficiency. Compared to previous versions of the robot, the new system increased flight speed by approximately 450 percent and acceleration by about 250 percent. In testing, the robot completed 10 consecutive somersaults in just 11 seconds while staying within a few centimeters of its intended trajectory.
“We want to be able to use these robots in scenarios that more traditional quadcopter robots would have trouble flying into, but that insects could navigate,” said Kevin Chen, an associate professor of electrical engineering and computer science at MIT and co-senior author of the study. “With our bioinspired control framework, the flight performance of our robot is comparable to insects in terms of speed, acceleration, and pitching angle.”
The microrobot itself is roughly the size of a microcassette and weighs less than a paperclip. It uses flexible artificial muscles to rapidly flap oversized wings, generating lift and enabling sharp directional changes. While recent hardware improvements made more aggressive flight possible, earlier versions of the robot relied on manually tuned controllers that limited overall performance.
From Predictive Planning to Real-Time Action
To overcome these constraints, Chen’s team collaborated with researchers led by Jonathan P. How in MIT’s Department of Aeronautics and Astronautics. Together, they developed a two-step control strategy that combines predictive planning with machine learning.
The first component is a model-predictive controller that uses a mathematical model of the robot’s dynamics to plan optimal flight maneuvers. This controller can account for uncertainties in aerodynamics, external disturbances such as wind, and physical limits on force and torque. While powerful, it is too computationally demanding to run continuously in real time.
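In generic form (the team’s exact cost terms and constraints differ in detail), such a controller repeatedly solves a finite-horizon optimization:

$$\min_{u_0,\ldots,u_{N-1}} \sum_{k=0}^{N-1}\left(\lVert x_k - x_k^{\mathrm{ref}}\rVert_Q^2 + \lVert u_k\rVert_R^2\right) \quad \text{subject to} \quad x_{k+1} = f(x_k, u_k),\quad u_{\min}\le u_k \le u_{\max},$$

where $f$ is the model of the robot’s dynamics, the weights $Q$ and $R$ trade tracking accuracy against control effort, and the input bounds encode force and torque limits. Only the first planned input is applied before the horizon is re-solved, which makes the scheme robust to disturbances but also computationally heavy.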
To address this, the researchers used the predictive controller to train a deep-learning-based policy through imitation learning. The resulting AI model captures the behavior of the high-performance planner in a compact form that can run extremely fast, allowing the robot to respond instantly to changing conditions.
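A minimal behavior-cloning sketch of that distillation step is shown below: an expensive expert (a stand-in for the predictive controller) labels sampled states offline, and a small network learns to reproduce its outputs. The dimensions, the toy expert, and the hyperparameters are illustrative assumptions, not the paper’s training code.

```python
# Behavior cloning: distill a slow expert controller (a stand-in for
# the model-predictive controller) into a fast neural policy. Shapes,
# the toy expert, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM, CTRL_DIM = 12, 4   # assumed: pose/velocity in, wing commands out

def expert(x: torch.Tensor) -> torch.Tensor:
    # Stand-in for the predictive controller: too slow to run at the
    # control rate, but usable offline to label training states.
    return torch.tanh(0.5 * x[:, :CTRL_DIM])

policy = nn.Sequential(               # small enough for fast inference
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, CTRL_DIM), nn.Tanh(),
)

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for step in range(1000):
    states = torch.randn(256, STATE_DIM)        # sampled flight states
    with torch.no_grad():
        labels = expert(states)                 # expert actions, offline
    loss = nn.functional.mse_loss(policy(states), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The distilled policy is a single small forward pass, cheap enough to
# evaluate at a high control rate and react instantly to disturbances.
action = policy(torch.randn(1, STATE_DIM))
print(action)
```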
“If small errors creep in and you try to repeat a flip multiple times, the robot will crash,” How said. “We need robust flight control.”
Toward Real-World Deployment
The robot’s agility allowed it to demonstrate flight behaviors commonly seen in insects, including rapid pitch-and-stop maneuvers known as saccades. These movements help insects stabilize their vision and orient themselves in space, and similar capabilities could support future onboard sensing.
“This bio-mimicking flight behavior could help us when we start putting cameras and sensors on board the robot,” Chen said. “This work signals a paradigm shift. It shows that we can build control architectures that are both high-performing and computationally efficient, even at insect scale.”
The research team plans to focus next on adding onboard sensors and autonomy so the robots can operate outside laboratory motion-capture environments. Coordinated flight among multiple microrobots and collision avoidance are also areas of active investigation.
The study was published in the journal Science Advances and highlights how advanced AI control methods are enabling microrobots to move beyond experimental demonstrations toward practical, real-world use.