Archives
CES 2026: Caterpillar and NVIDIA Push Physical AI Into Heavy Industry
Caterpillar and NVIDIA deepened their partnership at CES 2026, outlining how Physical AI will transform construction, mining, manufacturing, and industrial supply chains.
CES 2026 marked another milestone in the rise of Physical AI, with Caterpillar and NVIDIA unveiling an expanded collaboration aimed at reshaping heavy industry. The partnership signals how artificial intelligence is moving beyond digital workflows and into the machines, factories, and jobsites that power the global economy.
“As AI moves beyond data to reshape the physical world, it is unlocking new opportunities for innovation – from job sites and factory floors to offices,” said Joe Creed, CEO of Caterpillar. “Caterpillar is committed to solving our customers’ toughest challenges by leading with advanced technology in our machines and every aspect of business. Our collaboration with NVIDIA is accelerating that progress like never before.”
For Caterpillar, the collaboration is about embedding intelligence directly into iron. For NVIDIA, it extends its AI platforms into some of the most demanding physical environments on earth – construction zones, mines, and industrial plants – where reliability, safety, and scale matter more than novelty.
Machines Built for the AI Era
At the core of the partnership is NVIDIA’s Jetson Thor platform, which Caterpillar plans to deploy across construction, mining, and power-generation equipment. Running advanced AI models at the edge allows Cat machines to process massive volumes of sensor data in real time, enabling smarter decision-making in unpredictable environments.
This shift lays the groundwork for AI-assisted and autonomous operations at scale. Caterpillar described future machines as part of a “digital nervous system” for jobsites, where fleets continuously analyze conditions, adapt to terrain, and optimize productivity. In-cab AI features will also play a growing role, providing operators with real-time coaching, safety alerts, and performance insights tailored to specific tasks and environments.
Rather than replacing operators, Caterpillar is positioning AI as an augmentation layer – one that helps crews work faster, safer, and with greater confidence as jobsites become more complex.
Debuting the Cat AI Assistant
One of the most visible announcements at CES 2026 was the debut of the Cat AI Assistant. Designed as a proactive digital partner, the assistant integrates voice-based interaction directly into Caterpillar’s onboard and digital systems. Built using NVIDIA’s Riva speech models, it delivers natural, conversational responses while drawing on Caterpillar’s own equipment and maintenance data.
In practical terms, this means operators and fleet managers can ask questions about machine health, parts, troubleshooting, or maintenance schedules and receive context-aware guidance instantly. Inside the cab, voice activation can adjust settings, guide diagnostics, and connect users to the right tools without interrupting work.
The assistant reflects a broader trend at CES 2026: Physical AI systems are increasingly conversational, intuitive, and embedded directly into workflows rather than accessed through separate dashboards.
NVIDIA AI Factory and the Reinvention of Industrial Operations
Beyond the jobsite, Caterpillar is leveraging NVIDIA AI Factory to transform manufacturing and supply chain operations. AI Factory provides the accelerated computing infrastructure, software frameworks, and AI libraries needed to train, deploy, and continuously improve large-scale industrial AI systems.
Caterpillar is using this infrastructure to automate and optimize core manufacturing processes such as production forecasting, scheduling, and quality control. By running these workloads on AI Factory, Caterpillar can process vast datasets faster, adapt to changing demand, and improve resilience across its global production network.
A major component of this effort is the creation of physically accurate digital twins of Caterpillar factories using NVIDIA Omniverse and OpenUSD technologies. These digital environments allow teams to simulate factory layouts, test production changes, and optimize workflows before implementing them in the real world — reducing downtime, risk, and cost.
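The article does not detail the pipeline behind these twins, but the core idea of OpenUSD is to describe assets and layouts as structured, layered scene data that Omniverse and other tools can load and simulate. The following is a minimal, hypothetical sketch using the open-source pxr Python bindings; the prim names and the custom sim:cycleTimeSeconds attribute are invented for illustration and are not Caterpillar's actual asset schema.

```python
# Minimal OpenUSD sketch: describing one factory cell as layered scene data
# that a digital-twin pipeline could load into Omniverse. All names here are
# illustrative assumptions, not a real Caterpillar hierarchy.
from pxr import Usd, UsdGeom, Gf, Sdf

stage = Usd.Stage.CreateNew("factory_cell.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.z)

# Root transform for one assembly cell on the factory floor.
cell = UsdGeom.Xform.Define(stage, "/Factory/AssemblyCell_01")

# A placeholder conveyor represented as a simple box; a real twin would
# reference detailed CAD-derived assets instead.
conveyor = UsdGeom.Cube.Define(stage, "/Factory/AssemblyCell_01/Conveyor")
conveyor.AddTranslateOp().Set(Gf.Vec3d(0.0, 2.0, 0.5))
conveyor.GetSizeAttr().Set(1.0)

# Custom metadata a scheduling simulation might read (hypothetical attribute).
attr = conveyor.GetPrim().CreateAttribute(
    "sim:cycleTimeSeconds", Sdf.ValueTypeNames.Float)
attr.Set(42.0)

stage.GetRootLayer().Save()
```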
Physical AI Moves From Concept to Infrastructure
The Caterpillar–NVIDIA collaboration fits squarely into the broader narrative of CES 2026, where Physical AI emerged as a unifying theme across robotics, autonomy, logistics, and heavy industry. From autonomous construction equipment to AI-driven factories, intelligence is becoming embedded directly into physical systems.
By combining Caterpillar’s century-long experience in industrial machinery with NVIDIA’s AI platforms and AI Factory infrastructure, the two companies are signaling that Physical AI is no longer experimental. It is becoming foundational infrastructure for how industries build, move, and power the world.
As Caterpillar CEO Joe Creed noted, AI is no longer just analyzing data – it is actively reshaping how work gets done. In heavy industry, that transformation is now moving at full speed.
Humanoid Builds HMND 01 Alpha in 7 Months Using NVIDIA Robotics Stack
London-based startup Humanoid moved from concept to a functional alpha prototype of its HMND 01 robot in seven months, compressing a development cycle that typically takes up to two years.
London-based robotics startup Humanoid has compressed the traditional hardware development timeline by moving from concept to a functional alpha prototype of its HMND 01 system in just seven months.
The milestone stands in contrast to the typical 18 to 24 months required to develop comparable humanoid or industrial robotic platforms, highlighting how simulation-first development and edge AI are reshaping robotics engineering.
The HMND 01 Alpha program includes two robot variants: a wheeled platform designed for near-term industrial deployment and a bipedal system intended primarily for research and future service or household applications.
Both platforms are currently undergoing field tests and proof-of-concept demonstrations, including a recent industrial evaluation with automotive supplier Schaeffler.
At the center of Humanoid’s accelerated development cycle is a tightly integrated software and hardware stack built on NVIDIA robotics technologies.
Edge Compute and Foundation Models at the Core
The HMND 01 Alpha robots use NVIDIA Jetson Thor as their primary edge computing platform. By consolidating compute, sensing, and control onto a single high-performance system, Humanoid simplified its internal architecture and wiring while improving manufacturability and field serviceability.
Jetson Thor allows the robots to run large robotic foundation models directly on-device rather than relying on cloud processing. This enables real-time execution of vision-language-action models that support perception, reasoning, and task execution in dynamic environments.
Humanoid reported that training these models using NVIDIA’s AI infrastructure has reduced post-training processing times to just a few hours. This faster turnaround significantly shortens the loop between data collection, model refinement, and deployment on physical robots, allowing the company to iterate at software speed rather than hardware speed.
Simulation-First Development and Hardware Optimization
Humanoid’s workflow is built around a simulation-to-reality pipeline using NVIDIA Isaac Lab and Isaac Sim. Engineers use Isaac Lab to train reinforcement learning policies for locomotion and manipulation, while Isaac Sim provides a high-fidelity environment for testing navigation, perception, and full-body control.
Through a custom hardware-in-the-loop validation system, Humanoid created digital twins that mirror the software interfaces of the physical robots. This allows middleware, control logic, teleoperation, and SLAM systems to be tested virtually before deployment on real hardware. According to the company, new control policies can be trained from scratch and deployed onto physical robots within roughly 24 hours.
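Humanoid has not published its middleware, but the underlying pattern is common in hardware-in-the-loop work: expose the same control interface from both the digital twin and the physical robot so planning, teleoperation, and SLAM code can be exercised against either backend. The sketch below illustrates that pattern under stated assumptions; the interface and class names are invented and do not represent Humanoid's actual stack.

```python
# Illustrative hardware-in-the-loop pattern: one control interface, two backends.
from abc import ABC, abstractmethod
from typing import Callable, Sequence

class RobotInterface(ABC):
    @abstractmethod
    def read_joint_positions(self) -> Sequence[float]: ...
    @abstractmethod
    def send_joint_targets(self, targets: Sequence[float]) -> None: ...

class SimulatedRobot(RobotInterface):
    """Digital-twin backend; would wrap a physics simulator such as Isaac Sim."""
    def __init__(self, num_joints: int = 7):
        self._state = [0.0] * num_joints
    def read_joint_positions(self):
        return list(self._state)
    def send_joint_targets(self, targets):
        self._state = list(targets)  # stand-in for stepping the simulator

class PhysicalRobot(RobotInterface):
    """Hardware backend; would talk to real actuators over the robot's bus."""
    def read_joint_positions(self):
        raise NotImplementedError("hardware driver omitted in this sketch")
    def send_joint_targets(self, targets):
        raise NotImplementedError("hardware driver omitted in this sketch")

def run_policy(robot: RobotInterface,
               policy: Callable[[Sequence[float]], Sequence[float]],
               steps: int = 100) -> None:
    # The same loop runs unchanged against either backend, which is the point
    # of mirroring the software interface in the digital twin.
    for _ in range(steps):
        obs = robot.read_joint_positions()
        robot.send_joint_targets(policy(obs))

run_policy(SimulatedRobot(), lambda obs: [p * 0.9 for p in obs], steps=10)
```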
Simulation also plays a direct role in mechanical engineering decisions. During development of the bipedal robot, Humanoid evaluated six different leg configurations in simulation, analyzing torque requirements, joint stability, and mass distribution before committing to physical prototypes.
Engineers also optimized actuator selection, sensor placement, and camera positioning using simulated perception data, reducing the risk of blind spots and interference in industrial settings.
These physics-based simulations contributed to the robots’ performance during early industrial trials and helped avoid costly redesigns later in the development cycle.
Toward Software-Defined Robotics Standards
Humanoid views HMND 01 as part of a broader shift toward software-defined robotics. The company is working with NVIDIA to move away from legacy industrial communication standards and toward modern networking architectures designed for AI-enabled robots.
“NVIDIA’s open robotics development platform helps the industry move past legacy industrial communication standards and make the most of modern networking capabilities,” said Jarad Cannon, chief technology officer of Humanoid.
He added that the company is collaborating on a new robotics networking system built on Jetson Thor and the Holoscan Sensor Bridge, with the goal of enabling more flexible and scalable robot architectures.
Founded in 2024 by Artem Sokolov, Humanoid has grown to more than 200 engineers and researchers across offices in London, Boston, and Vancouver. The company reports 20,500 pre-orders, six completed proof-of-concept projects, and three active pilot programs.
While the bipedal HMND 01 remains focused on research and long-term service robotics, the wheeled variant is positioned for near-term industrial use. Humanoid’s strategy emphasizes early deployment in operational environments to gather real-world data and continuously refine its software-defined architecture, signaling a shift in how humanoid and industrial robots are developed and brought to market.
CES 2026: Doosan Bobcat Unveils RX3 Autonomous Loader and AI Jobsite Tech
Doosan Bobcat introduced the RX3 autonomous concept loader and a suite of AI-powered jobsite technologies at CES 2026, signaling a shift toward smart, electrified construction equipment.
Doosan Bobcat has unveiled a new generation of autonomous and AI-enabled construction technologies at CES 2026, headlined by the RX3 autonomous concept loader and a growing ecosystem of intelligent jobsite systems. The announcements reflect the company’s push to integrate autonomy, electrification, and artificial intelligence into compact construction equipment designed for real-world deployment.
Presented during CES Media Day in Las Vegas, the technologies are part of what Bobcat describes as a “Smart Construction Jobsite,” where machines assist operators, reduce complexity, and improve safety and productivity. While several systems remain in concept or prototype form, the company emphasized that many are moving steadily toward commercialization.
RX3 Autonomous Concept Loader
The Bobcat RogueX3, or RX3, represents the third generation of Bobcat’s autonomous loader concept. The electric-powered machine is designed to match the size and footprint of existing manned Bobcat equipment, allowing it to operate on existing jobsites without major workflow changes. It uses tracked mobility to provide traction across uneven or challenging surfaces while operating quietly and without emissions.
A key feature of the RX3 is its modular design. The platform can be configured with or without a cab, equipped with wheels or tracks, and paired with different lift arms depending on the task. Bobcat said the concept could ultimately support multiple powertrains, including electric, diesel, hybrid, and hydrogen, offering flexibility as energy infrastructure evolves.
“For nearly 70 years, Bobcat has led the compact equipment industry by solving real problems for real people,” said Scott Park, vice chairman and CEO of Doosan Bobcat. “As jobsites become more complex, we’re responding with intelligent systems that help people accomplish more, faster, and smarter.”
Bobcat is also working with Agtonomy as a technology partner, using Agtonomy’s perception and fleet management software to enable autonomous and semi-autonomous operation in agricultural and construction contexts.
AI Comes Into the Cab
Alongside the RX3, Doosan Bobcat introduced the Bobcat Jobsite Companion, described as the compact equipment industry’s first AI voice control system. Powered by a proprietary large language model running entirely onboard, the system allows operators to manage more than 50 machine functions using natural voice commands.
Operators can adjust attachment settings, engine speed, lighting, and other machine functions without taking their hands off the controls. Because the system does not rely on cloud connectivity, it can respond in real time even on remote or connectivity-limited jobsites.
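Bobcat has not disclosed how Jobsite Companion maps recognized speech to machine functions, but a common pattern for fully on-device voice control is to route parsed intents through a fixed registry of permitted commands with clamped parameter ranges. The sketch below is a hypothetical illustration of that pattern; the function names, ranges, and Intent schema are invented.

```python
# Illustrative sketch only: routing a parsed voice intent through a fixed
# registry of permitted machine functions. Not Bobcat's actual command set.
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class Intent:
    function: str            # e.g. "set_engine_speed", produced by the on-device model
    value: Optional[float]   # numeric argument, if the command takes one

def set_engine_speed(rpm: Optional[float]) -> str:
    rpm = 1200.0 if rpm is None else max(800.0, min(rpm, 2400.0))  # hypothetical safe range
    return f"engine speed set to {rpm:.0f} rpm"

def toggle_work_lights(_: Optional[float]) -> str:
    return "work lights toggled"

# Only functions registered here can ever be triggered by voice.
COMMAND_REGISTRY: Dict[str, Callable[[Optional[float]], str]] = {
    "set_engine_speed": set_engine_speed,
    "toggle_work_lights": toggle_work_lights,
}

def dispatch(intent: Intent) -> str:
    handler = COMMAND_REGISTRY.get(intent.function)
    if handler is None:
        return "command not recognized"
    return handler(intent.value)

print(dispatch(Intent("set_engine_speed", 1500)))        # engine speed set to 1500 rpm
print(dispatch(Intent("open_the_pod_bay_doors", None)))  # command not recognized
```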
“Jobsite Companion lowers the barrier to entry for new operators while helping experienced professionals work faster and more precisely,” said Joel Honeyman, vice president of global innovation at Doosan Bobcat.
Bobcat also announced Service.AI, an AI-powered support platform designed for dealers and technicians. The system provides instant access to diagnostics, repair manuals, service histories, and troubleshooting guidance, aiming to reduce downtime and speed up maintenance.
Safety, Displays, and Energy Systems
Doosan Bobcat showcased several additional technologies that support its smart jobsite vision. A radar-based collision warning and avoidance system uses imaging radar to monitor surroundings and can automatically slow or stop a machine to prevent accidents.
The company also revealed an advanced display concept using transparent MicroLED screens integrated into cab windows. These displays overlay 360-degree camera views, machine performance data, alerts, and asset tracking directly into the operator’s field of vision.
Powering these systems is the Bobcat Standard Unit Pack, or BSUP, a modular and rugged battery system designed for harsh construction environments. The fast-charging packs are scalable across Bobcat’s equipment lineup and are intended to support broader electrification efforts, including potential use by other manufacturers.
Toward a Smarter Jobsite
Doosan Bobcat said the technologies unveiled at CES 2026 form an integrated ecosystem rather than isolated features. By combining AI, autonomy, electrification, and connectivity, the company aims to redefine how compact equipment is operated and supported.
“We’ll combine AI, autonomy, electrification, and connectivity to create new jobsite standards,” Park said during the Media Day presentation.
While the RX3 and several systems remain concept-stage, Bobcat’s messaging at CES emphasized near-term impact rather than distant vision. The company framed these developments as practical steps toward safer, more productive jobsites where intelligent machines actively support human workers.
CES 2026: Mobileye to Acquire Mentee Robotics for $900M to Accelerate Physical AI Push
Mobileye agreed to acquire humanoid robotics startup Mentee Robotics for $900 million, expanding its autonomy technology from vehicles into Physical AI and general-purpose humanoid robots.
Mobileye has agreed to acquire Mentee Robotics in a $900 million transaction that marks a major strategic shift beyond autonomous driving and into humanoid robotics and Physical AI. Announced during CES 2026 in Las Vegas, the deal positions Mobileye to apply its autonomy technology to machines designed to work directly alongside humans in physical environments.
The acquisition combines Mobileye’s large-scale perception, planning, and safety systems with Mentee’s vertically integrated humanoid robot platform. Together, the companies aim to build general-purpose robots capable of understanding context, inferring intent, and executing tasks safely and autonomously in real-world settings such as factories, warehouses, and industrial facilities.
From Vehicle Autonomy to Embodied Intelligence
Mobileye’s core business has been built around vision-based autonomy for vehicles, with systems designed to interpret complex scenes, predict behavior, and make safety-critical decisions. Those same challenges increasingly define humanoid robotics, where machines must navigate spaces built for people while interacting with objects, equipment, and coworkers.
The company said the acquisition represents a decisive move toward Physical AI, a class of systems that not only perceive the world but also act within it reliably and at scale. Mobileye’s autonomy stack has evolved beyond navigation toward context-aware and intent-aware reasoning, providing a foundation for robots that can operate productively without constant supervision.
The move also reflects Mobileye’s effort to diversify as competition intensifies in autonomous driving and commercialization timelines extend. By expanding into humanoid robotics, the company gains exposure to a parallel and fast-growing market where autonomy software may become the primary differentiator.
Mentee’s Humanoid Platform and Learning Approach
Founded four years ago, Mentee Robotics has developed a third-generation humanoid robot designed for scalable deployment rather than laboratory demonstrations. The platform is vertically integrated, with in-house development of hardware, embedded systems, and AI software.
Mentee’s approach emphasizes rapid learning and adaptability. Its robots are trained primarily in simulation, reducing reliance on large-scale real-world data collection and minimizing the gap between simulated and physical performance. The system is designed to acquire new skills through limited human demonstrations and intent cues, rather than continuous teleoperation.
This learning framework enables autonomous, end-to-end task execution, including locomotion, navigation, and safe manipulation of rigid objects. In demonstrations, Mentee robots have shown the ability to perform multi-step material handling tasks with stability and accuracy, supporting the company’s focus on real-world utility.
Deal Structure and Commercial Roadmap
Under the terms of the agreement, Mobileye will pay $900 million for Mentee Robotics, consisting of approximately $612 million in cash and up to 26.2 million shares of Mobileye Class A stock, subject to adjustments. The transaction is expected to close in the first quarter of 2026, pending customary approvals.
Mentee will operate as an independent unit within Mobileye, allowing continuity while gaining access to Mobileye’s AI infrastructure and production expertise. First customer proof-of-concept deployments are planned for 2026, with autonomous operation as a core requirement. Series production and broader commercialization are targeted for 2028.
Mobileye said the acquisition will modestly increase operating expenses in 2026 but aligns with its long-term growth strategy.
CES 2026 and the Rise of Physical AI
Physical AI emerged as a central theme at CES 2026, with humanoid robots, service robots, and embodied AI systems moving beyond concept stages. The Mobileye-Mentee announcement underscored how autonomy is becoming a shared foundation across vehicles and robots, rather than a domain-specific technology.
Mobileye highlighted strong momentum in its core automotive business, citing a $24.5 billion revenue pipeline over the next eight years. Company executives framed the acquisition as a way to extend that success into a second transformative market without abandoning its safety-first philosophy.
“Today marks a new chapter for robotics and automotive AI,” said Mobileye President and CEO Amnon Shashua. “By combining Mentee’s breakthroughs in humanoid robotics with Mobileye’s expertise in autonomy and productization, we have an opportunity to lead Physical AI at a global scale.”
Mentee CEO Lior Wolf said the partnership accelerates the company’s mission to deliver safe, cost-effective humanoid robots capable of meaningful work in human environments.
As CES 2026 made clear, the race to define Physical AI is accelerating. With this acquisition, Mobileye signals that the next phase of autonomy may unfold not just on roads, but across factories, warehouses, and workplaces worldwide.
CES 2026: Qualcomm Unveils Dragonwing Robotics Platform to Power Physical AI
Qualcomm introduced a comprehensive robotics technology stack at CES 2026, unveiling new processors and partnerships aimed at scaling Physical AI from service robots to full-size humanoids.
Qualcomm has expanded its ambitions beyond chips for smartphones and vehicles, unveiling a full-stack robotics platform at CES 2026 designed to power the next generation of Physical AI. The company introduced a comprehensive architecture that integrates hardware, software, and AI models to support robots ranging from household assistants to industrial autonomous mobile robots and full-size humanoids.
The announcement reflects a growing industry shift toward general-purpose robotics, where machines are expected to reason, adapt, and act safely in human environments. Qualcomm positioned its new platform as a bridge between laboratory prototypes and deployable systems, emphasizing power efficiency, scalability, and safety-grade performance as key enablers.
Dragonwing IQ10 and the “Brain of the Robot”
At the center of Qualcomm’s robotics push is the Dragonwing IQ10 Series, its latest premium-tier processor designed specifically for advanced robotics workloads. The company describes IQ10 as a high-performance, energy-efficient system-on-chip capable of serving as the primary compute engine for humanoid robots and sophisticated AMRs.
Built on Qualcomm’s experience in edge AI and low-power computing, the processor is optimized for mixed-criticality systems where perception, planning, and control must run simultaneously with strict safety requirements. The IQ10 expands Qualcomm’s existing robotics roadmap, which already supports a range of commercial robots through earlier Dragonwing processors.
The architecture enables advanced perception and motion planning using end-to-end AI models, including vision-language and vision-language-action systems. These capabilities are intended to support generalized manipulation, natural human-robot interaction, and continuous learning across diverse environments.
From Prototypes to Scalable Physical AI
Qualcomm framed its robotics platform as an end-to-end solution rather than a single chip. The architecture combines heterogeneous edge computing, AI acceleration, machine learning operations, and a data flywheel for collecting and retraining models. Developer tools and software frameworks are designed to shorten development cycles and reduce the complexity of deploying robots at scale.
This approach targets what Qualcomm described as the “last-mile” problem in robotics, where promising demonstrations often fail to translate into reliable, mass-produced systems. By providing a unified stack that scales across form factors, Qualcomm aims to accelerate adoption in retail, logistics, manufacturing, and service robotics.
“As pioneers in energy-efficient, high-performance Physical AI systems, we know what it takes to make complex robotics systems perform reliably, safely, and at scale,” said Nakul Duggal, executive vice president and group general manager at Qualcomm Technologies. He added that the company’s focus is on moving intelligent machines out of controlled environments and into real-world use.
Partnerships Across the Robotics Ecosystem
Qualcomm also highlighted a growing network of robotics partners adopting its platform. The company is working with manufacturers and integrators including Advantech, APLUX, AutoCore, Booster, Robotec.ai, and VinMotion to bring deployment-ready robots to market.
Humanoid robotics company Figure is collaborating with Qualcomm to define next-generation compute architectures as it scales its humanoid platforms. Brett Adcock, founder and chief executive of Figure, said Qualcomm’s combination of compute performance and power efficiency is a key building block in realizing general-purpose humanoid robots designed for industrial work.
Qualcomm said its Dragonwing processors already power several humanoid platforms in development, and discussions are underway with major industrial automation players on future robotics solutions.
CES 2026 Demonstrations and Industry Direction
At CES 2026, Qualcomm showcased robots powered by its Dragonwing processors, including VinMotion’s Motion 2 humanoid and Booster’s K1 Geek. The company also demonstrated a commercially available robotics development kit designed for rapid prototyping and deployment across multiple applications.
Additional demonstrations focused on teleoperation tools and AI data pipelines that enable robots to continuously acquire new skills. These capabilities underscore Qualcomm’s emphasis on lifelong learning and adaptability as defining characteristics of Physical AI.
The CES debut positions Qualcomm as a foundational technology provider for embodied intelligence, competing not just with chipmakers but with full-stack autonomy platforms. As humanoids and service robots move closer to commercial deployment, Qualcomm is betting that power-efficient, safety-grade compute will be a decisive advantage.
With Physical AI emerging as a central theme at CES 2026, Qualcomm’s announcement signals that the race to define the underlying infrastructure for intelligent machines is accelerating, and that robotics is becoming a core pillar of the company’s long-term strategy.
CES 2026: Samsung Unveils ‘Companion to AI Living’ Vision for Everyday AI
Samsung unveiled its “Companion to AI Living” vision at CES 2026, outlining how AI will connect entertainment, home appliances, health, and services into a unified ecosystem.
Samsung Electronics opened CES 2026 with a broad statement about the future of consumer technology, positioning artificial intelligence not as a feature but as the foundation of everyday living. At its annual First Look event in Las Vegas, the company introduced its “Companion to AI Living” vision, outlining how AI will connect devices, services, and experiences across the home.
Rather than focusing on a single product category, Samsung framed AI as a unifying layer across its ecosystem, spanning displays, home appliances, mobile devices, wearables, and services. Company executives emphasized that scale, connectivity, and on-device intelligence allow Samsung to move beyond basic automation toward more contextual and personalized experiences.
AI as the Core of the Entertainment Experience
Samsung’s display business showcased how AI is reshaping entertainment into a more interactive and lifestyle-oriented experience. The centerpiece of the lineup was a 130-inch Micro RGB display, which Samsung described as a major leap in screen size and color accuracy, driven by independent red, green, and blue light sources.
Supporting this hardware is Vision AI Companion, an AI system designed to act as an entertainment assistant rather than a passive interface. The system can recommend content, adjust sound and picture settings, and respond to natural language requests across Samsung’s 2026 TV lineup. AI-driven modes tailor experiences for sports, movies, and gaming, allowing users to fine-tune crowd noise, commentary, or background audio through voice commands.
Samsung also highlighted how Vision AI Companion extends beyond viewing. Users can ask for recipes based on food shown on screen, receive music recommendations to match their mood, or send content and instructions to other connected devices throughout the home. The goal, Samsung said, is to turn displays into active participants in daily routines.
Smart Homes That Anticipate Daily Needs
In the home appliance segment, Samsung presented AI-powered devices as companions that reduce friction in everyday tasks. Executives noted that SmartThings now serves more than 430 million users, giving Samsung a large data foundation to personalize experiences across households.
The Family Hub refrigerator remains central to this strategy. With an upgraded AI Vision system built on Google Gemini, the refrigerator can more accurately recognize and track food items, support meal planning, and automate grocery-related decisions. Features such as recipe recommendations, video-to-recipe conversion, and weekly food reports are designed to simplify decision-making rather than add complexity.
Samsung also showcased updates across laundry and home care. The Bespoke AI Laundry Combo removes the need to transfer loads between machines, while the latest AirDresser model uses air and steam to reduce wrinkles automatically. In floor care, the Bespoke AI Jet Bot Steam Ultra combines vision, 3D sensing, and conversational voice control to clean, monitor pets, and detect unusual activity while homeowners are away.
From Reactive Care to Proactive Wellbeing
Samsung’s long-term vision extends into digital health, where AI shifts care from reactive responses to proactive prevention. By connecting phones, wearables, appliances, and home devices, Samsung aims to detect early signs of health issues and provide personalized guidance for sleep, exercise, and nutrition.
The company described future scenarios in which connected devices suggest meals aligned with health goals, flag unusual patterns in mobility or sleep, and enable secure sharing of health data with providers through integrated platforms. Samsung also highlighted ongoing research into dementia detection, using wearables to identify subtle changes in movement, speech, and engagement over time.
Security remains a key pillar of this ecosystem. Samsung emphasized that Knox and Knox Matrix underpin its AI strategy, protecting user data across devices and continuously adapting to emerging AI-related risks.
By presenting AI as a companion woven into daily life rather than a collection of isolated tools, Samsung used CES 2026 to signal a shift toward more holistic, software-driven experiences. The company’s message was clear: the next phase of consumer technology will be defined not by individual devices, but by how intelligently they work together.
CES 2026: Boston Dynamics and Google Reunite to Power Next-Gen Atlas Humanoid
Boston Dynamics and Google have renewed their collaboration at CES 2026, combining advanced AI with the next generation of the Atlas humanoid robot.
Boston Dynamics and Google have reunited to showcase a new phase in humanoid robotics, unveiling progress on the next-generation Atlas robot at CES 2026. The collaboration brings together Boston Dynamics’ expertise in dynamic robot hardware with Google’s latest advances in artificial intelligence, signaling a renewed push toward more capable, adaptable humanoid systems.
The updated Atlas platform reflects a shift away from purely mechanical demonstrations toward robots that can understand context, plan actions, and learn from experience. At CES, the companies highlighted how AI-driven perception and decision-making are being integrated directly into Atlas, moving the humanoid closer to real-world industrial and commercial applications.
A Humanoid Built for Industrial Tasks
The new Atlas stands approximately 6.2 feet tall and features a reach of about 7.5 feet, allowing it to operate effectively in warehouses, factories, and logistics facilities designed for human workers. Its fully electric architecture supports quieter operation and improved energy efficiency compared to earlier hydraulic designs.
Atlas is capable of lifting payloads of up to roughly 110 pounds, enabling it to handle heavy objects such as totes, containers, and industrial components. The robot incorporates fully rotational joints across its body and offers a total of 56 degrees of freedom, supporting complex, whole-body movements and precise manipulation.
A newly designed four-fingered hand improves dexterity and grasp versatility, allowing Atlas to interact with a wide range of objects. The system is sealed to an industrial IP67 standard, providing protection against dust and water and making it suitable for harsh operating environments.
Power, Autonomy, and Control
Battery life for the new Atlas is rated at approximately four hours under typical operation. The robot is designed to swap its own battery packs without human assistance, reducing downtime and enabling longer deployment cycles in industrial settings.
Boston Dynamics highlighted multiple modes of operation for Atlas. The robot can function fully autonomously using AI-driven perception and planning, be remotely operated through a virtual reality interface, or be supervised and controlled using a tablet-based system. This flexibility allows customers to choose different levels of autonomy depending on task complexity and operational requirements.
By integrating Google’s AI technologies, Atlas gains enhanced perception, object recognition, and decision-making capabilities. The robot can interpret complex environments, adjust its actions in real time, and learn from repeated interactions rather than relying solely on predefined scripts.
Renewed Partnership and Market Implications
The collaboration marks a symbolic reunion between Boston Dynamics and Google, which previously worked together during Google’s ownership of the robotics firm more than a decade ago. This time, the focus is firmly on combining mature hardware with scalable AI systems that can support sustained commercial deployment.
Boston Dynamics positioned Atlas as a platform designed to operate within existing human-built environments without requiring major infrastructure changes. The goal is to reduce friction between robots and real-world workplaces, accelerating adoption in logistics, manufacturing, and material handling.
While the companies did not announce deployment timelines or customers at CES, the presentation signaled confidence that humanoid robots are moving closer to practical use. Challenges remain around cost, long-term durability, and large-scale fleet management, but the updated Atlas reflects a clear shift toward production readiness.
The CES 2026 debut suggests that Boston Dynamics and Google see humanoid robots as a cornerstone technology for the next generation of automation. By combining advanced mechanics with AI-driven autonomy, the partners aim to move Atlas beyond spectacle and into everyday industrial operations.
CES 2026: Kodiak and Bosch Partner to Scale Autonomous Trucking Hardware
Kodiak has entered a strategic agreement with Bosch to scale production-grade autonomous trucking hardware, aiming to accelerate commercial deployment of driverless trucks.
Kodiak AI has announced a strategic agreement with Bosch to scale the manufacturing of production-grade autonomous trucking hardware, marking a significant step toward large-scale deployment of driverless trucks. The collaboration was revealed ahead of CES 2026, where a Kodiak Driver-powered autonomous truck will be displayed at Bosch’s booth in Las Vegas.
The partnership focuses on building a redundant, automotive-grade platform that integrates hardware, firmware, and software interfaces required to deploy Kodiak’s AI-powered virtual driver at scale. By combining Kodiak’s autonomy software with Bosch’s manufacturing expertise and supply chain capabilities, the companies aim to move autonomous trucking beyond pilots and into sustained commercial operations.
Scaling Physical AI for Heavy-Duty Trucks
Kodiak’s autonomous system, known as the Kodiak Driver, is designed as a unified platform that blends AI-driven perception and planning software with modular, vehicle-agnostic hardware. The system can be integrated either directly on a truck production line or through aftermarket upfitters, giving fleet operators flexibility in how autonomous capability is deployed.
Under the agreement, Bosch will support the development of a redundant autonomous hardware platform, supplying key components such as sensors, steering systems, and other vehicle actuation technologies. These components are designed to meet automotive-grade reliability standards, a critical requirement for long-haul trucking applications where uptime and safety are paramount.
“Advancing the deployment of driverless trucks and physical AI requires not only robust autonomy software, but also manufacturing experience and a resilient supply chain,” said Don Burnette, founder and chief executive of Kodiak. He emphasized that Bosch’s industrial scale and system-level integration expertise are essential for commercial success.
From Commercial Pilots to Industrial Scale
Kodiak has already deployed trucks operating without human drivers in commercial service, positioning the company as one of the few autonomous trucking developers with real-world revenue-generating operations. The new agreement is intended to build on that foundation by enabling higher-volume production and standardized hardware configurations.
Bosch’s role extends beyond component supply. As the world’s largest automotive supplier, the company brings decades of experience in industrialization, quality assurance, and global manufacturing. This expertise is expected to help Kodiak transition from limited deployments to repeatable, scalable production suitable for fleet-wide adoption.
Paul Thomas, president of Bosch in North America and president of Bosch Mobility Americas, said the collaboration allows Bosch to deepen its understanding of real-world autonomous vehicle requirements while contributing production-grade systems to the broader autonomous mobility ecosystem.
CES 2026 and the Push Toward Autonomous Freight
Autonomous trucking emerged as a key theme at CES 2026, with increasing emphasis on commercialization rather than experimental prototypes. Kodiak and Bosch used the event to highlight how Physical AI systems are moving into operational environments where reliability, redundancy, and cost efficiency matter as much as technical performance.
The Kodiak Driver-powered truck on display demonstrates how the integrated platform brings together sensors, compute, and vehicle control into a single autonomous system. Unlike many earlier demonstrations, the focus is on readiness for deployment rather than future concepts.
Industry analysts view the partnership as a sign that autonomous trucking is entering a more mature phase, where partnerships with established automotive suppliers are essential to overcoming manufacturing and regulatory hurdles.
Broader Implications for Autonomous Logistics
For Kodiak, the deal supports its long-term vision of becoming a trusted provider of autonomous ground transportation across commercial and public-sector applications. The company has also positioned its technology for use in government and national security contexts, where reliability and safety standards are especially stringent.
The collaboration underscores a broader trend in robotics and automation, where autonomy developers increasingly rely on established industrial partners to bridge the gap between software innovation and large-scale deployment. As Physical AI systems move from test routes to highways and supply chains, the ability to manufacture and support hardware at scale becomes a decisive competitive advantage.
With CES 2026 as the backdrop, the Kodiak-Bosch agreement signals growing confidence that autonomous trucking is transitioning from experimentation to infrastructure, setting the stage for wider adoption in the years ahead.
CES 2026: LG Showcases CLOiD Home Robot That Cooks, Folds Laundry, and Manages Chores
LG Electronics demonstrated its AI-powered CLOiD home robot at CES 2026, highlighting autonomous cooking, laundry folding, and dishwasher management as part of its Zero Labor Home vision.
LG Electronics has unveiled its most advanced home robotics concept to date with the public debut of LG CLOiD, an AI-powered household robot designed to take over everyday domestic chores. Presented at CES 2026, the robot reflects LG’s long-term Zero Labor Home strategy, which aims to reduce the physical and mental effort required to manage a modern household through intelligent automation.
Unlike earlier home robots focused on narrow tasks, CLOiD is positioned as a general-purpose domestic assistant. LG demonstrated the robot performing a range of coordinated activities, including preparing simple meals, handling laundry from start to finish, and managing dishwashing tasks. The company says CLOiD is designed to operate as part of a fully connected home rather than as a standalone device.
Demonstrating End-to-End Household Automation
During live demonstrations, CLOiD retrieved food items from a refrigerator, placed pastries into an oven, and initiated cooking processes without human intervention. After occupants left the home, the robot was shown starting laundry cycles, transferring clothes to a dryer, and folding and stacking garments once complete. CLOiD also demonstrated the ability to unload a dishwasher and organize clean dishes.
These scenarios were designed to show how the robot understands sequences of tasks rather than executing isolated commands. CLOiD uses contextual awareness to determine when chores should begin and how appliances should be operated, adapting its actions to household routines and user preferences.
LG emphasized that the robot’s value lies in orchestration. Rather than replacing individual appliances, CLOiD coordinates them, acting as a mobile control layer that connects cooking, cleaning, and laundry into a single automated workflow.
Hardware Built for Domestic Environments
CLOiD features a wheeled base for stability and safe operation in homes with children or pets. The robot’s torso can raise or lower to adjust its working height, enabling it to reach objects on countertops, inside appliances, or closer to the floor. Two articulated arms, each with seven degrees of freedom, provide a human-like range of motion.
Each hand includes five independently controlled fingers, allowing CLOiD to grasp delicate items such as glassware as well as heavier objects like laundry baskets. LG selected a wheeled design over a bipedal form to reduce cost, improve reliability, and lower the risk of tipping during operation.
The navigation system builds on LG’s experience with robotic vacuum cleaners and autonomous home platforms. CLOiD can move smoothly between rooms, avoid obstacles, and precisely position itself for manipulation tasks in kitchens and laundry areas.
Physical AI and Smart Home Integration
At the core of CLOiD is LG’s Physical AI framework, which combines vision-based perception, language understanding, and action planning. The robot uses visual data from onboard cameras to recognize appliances, objects, and environments. This information is translated into structured understanding and then into physical actions, such as opening doors, transferring items, or adjusting appliance settings.
CLOiD’s head functions as a mobile AI home hub, housing its processor, sensors, display, speakers, and voice-based generative AI. The robot communicates with users through spoken dialogue and expressive visual cues while continuously learning household layouts and routines.
Deep integration with LG’s ThinQ and ThinQ ON platforms allows CLOiD to control and coordinate smart appliances across the home. This connectivity enables more complex automation scenarios, such as preparing meals based on available ingredients or scheduling chores around user absences.
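LG has not published CLOiD's software architecture, but the described flow from camera perception into structured understanding and then into appliance commands follows a familiar perceive-plan-act pattern. The following heavily simplified sketch assumes invented task names and an invented appliance interface; a real system would issue commands through a platform such as ThinQ rather than printing them.

```python
# Conceptual perceive -> plan -> act loop for a chore-orchestrating home robot.
# This is not LG's implementation; all names are illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    detected_objects: List[str]   # e.g. derived from onboard cameras
    occupants_home: bool          # e.g. derived from connected home sensors

@dataclass
class Action:
    appliance: str
    command: str

def plan(obs: Observation) -> List[Action]:
    """Turn structured scene understanding into an ordered chore sequence."""
    actions: List[Action] = []
    if not obs.occupants_home and "full_laundry_basket" in obs.detected_objects:
        actions.append(Action("washer", "start_cycle"))
    if "clean_dishes" in obs.detected_objects:
        actions.append(Action("dishwasher", "unload"))
    return actions

def act(actions: List[Action]) -> None:
    for a in actions:
        # A real robot would call the smart-home platform here; this sketch logs.
        print(f"{a.appliance}: {a.command}")

act(plan(Observation(["full_laundry_basket", "clean_dishes"], occupants_home=False)))
```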
Robotics Components and Long-Term Strategy
Alongside CLOiD, LG introduced AXIUM, a new family of robotic actuators designed for service robots and physical AI systems. Actuators control motion and force within robotic joints and are considered one of the most critical and cost-intensive components in advanced robots.
LG says its background in appliance component manufacturing provides an advantage in producing lightweight, compact, and high-torque actuators suitable for home robotics. Modular actuator designs also allow customization across different robot configurations and use cases.
Looking ahead, LG plans to expand robotics capabilities across both standalone home robots and robotized appliances. The company envisions refrigerators that open automatically as users approach and appliances that actively coordinate with home robots to complete tasks autonomously.
“The LG CLOiD home robot is designed to naturally engage with and understand the humans it serves, providing an optimized level of household help,” said Steve Baek, president of the LG Home Appliance Solution Company. “We will continue our efforts to achieve our Zero Labor Home vision.”
At CES 2026, LG positioned CLOiD as a glimpse into a future where household labor is largely delegated to intelligent machines, allowing people to spend more time on activities beyond routine chores.
World’s Smallest Programmable Autonomous Robots Can Swim, Sense, and Think
Researchers at the University of Pennsylvania and the University of Michigan unveiled microscopic robots that are fully programmable, autonomous, and capable of sensing and reacting to their environment over extended periods.
Researchers at the University of Pennsylvania and the University of Michigan have developed what they describe as the world’s smallest fully programmable autonomous robots, pushing robotics into a microscopic frontier. Each robot measures roughly 200 by 300 by 50 micrometers, smaller than a grain of salt, yet integrates computing, sensing, and propulsion into a single untethered system. The robots are designed to operate independently without external control, marking a significant step forward in microscale robotics.
Unlike earlier microrobots that relied on magnetic fields or external power sources, these robots are fully autonomous. They are powered by light, which activates onboard electronics and enables them to sense their surroundings and make basic decisions. In laboratory demonstrations, the robots were able to swim in liquid environments and adjust their motion without human intervention.
The robots can be produced using established semiconductor fabrication techniques, allowing them to be manufactured at scale. Researchers estimate the cost at roughly one cent per robot when produced in large quantities. Once activated, the devices can continue operating for months, making them suitable for long-duration experiments or deployments at microscopic scales.
Autonomous Microscale Motion and Control
Movement at microscopic scales presents unique challenges because fluid resistance dominates over inertia. To address this, the robots use an electrochemical propulsion method rather than mechanical parts. By generating electric fields, the robots interact with ions in the surrounding liquid, creating movement without the need for motors or moving limbs.
This approach allows the robots to swim at speeds of roughly one body length per second. The lack of moving components makes the robots mechanically robust and resistant to damage during handling. Researchers demonstrated that the devices could be transferred between samples using standard laboratory tools without losing functionality.
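For a sense of scale, taking the robot's longest stated dimension of about 300 micrometers as one body length, the reported speed works out roughly as follows (back-of-the-envelope arithmetic based only on the figures above):

```latex
v \approx 1~\text{body length/s} \times 300~\mu\text{m} \approx 0.3~\text{mm/s},
\qquad
t_{1\,\text{cm}} \approx \frac{10~\text{mm}}{0.3~\text{mm/s}} \approx 33~\text{s}.
```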
The propulsion method also enables precise directional control. By adjusting electrical signals, the robots can change direction, stop, or follow preprogrammed movement patterns. This capability is essential for future applications that require coordinated motion or navigation through confined environments.
Tiny Brains and Sensing Capabilities
A key breakthrough lies in the integration of a complete computing system at such a small scale. The robots include a processor, memory, and sensors embedded directly on the chip. Power is supplied by microscopic solar cells that generate approximately 75 nanowatts under LED illumination, an extremely small energy budget compared to consumer electronics.
Despite these constraints, the robots are capable of basic sensing and decision-making. They can detect temperature changes with high sensitivity and alter their behavior in response. Researchers also demonstrated simple communication by encoding information through movement patterns that can be observed under a microscope.
These capabilities allow the robots to respond dynamically rather than follow fixed paths. While the onboard intelligence is limited compared to larger robotic systems, it represents a major step toward autonomous behavior at microscopic dimensions.
Potential Applications and Next Steps
The researchers see strong potential for applications in biomedicine, where microscopic robots could one day monitor cellular environments or deliver targeted therapies. Their small size allows them to operate in spaces inaccessible to conventional devices, including narrow fluid channels and delicate biological systems.
In manufacturing and materials science, the robots could assist in assembling or inspecting microscale components. Because the platform is compatible with standard chip manufacturing processes, it could be adapted for large-scale production and customized for specific industrial tasks.
The current demonstrations were conducted in controlled laboratory conditions, and the researchers emphasize that further work is needed to expand functionality. Future efforts will focus on improving sensing, increasing computational complexity, and enabling operation in more complex environments. Even at this early stage, the work establishes a foundation for autonomous robotics at scales comparable to biological microorganisms.
UPS Buys Hundreds of Robots to Automate Truck Unloading Operations
UPS has purchased hundreds of warehouse robots designed to unload packages from trucks, expanding automation across its U.S. logistics network to address labor strain and efficiency demands.
United Parcel Service has taken another major step toward warehouse automation by purchasing hundreds of robots designed to unload packages from delivery trucks. The move reflects growing pressure on large logistics operators to increase throughput while reducing reliance on physically demanding manual labor. UPS says the robots will be deployed across multiple facilities in the United States.
Truck unloading is among the most physically taxing tasks in parcel logistics, requiring workers to handle thousands of packages per shift in confined trailer spaces. By automating this stage of the workflow, UPS aims to improve worker safety while maintaining consistent processing speeds during peak demand periods. The company has increasingly focused on automation as parcel volumes fluctuate and labor availability tightens.
Automating One of Logistics’ Hardest Jobs
The robots are designed to operate inside standard truck trailers, identifying packages of varying shapes and sizes and transferring them onto conveyor systems. Using machine vision and AI-based grasping systems, the robots can adapt to mixed loads without requiring pre-sorted shipments. This flexibility allows them to function in existing facilities without major structural changes.
UPS says the robotic unloading systems are capable of operating continuously and can handle thousands of packages per hour. While human workers will continue to oversee operations, the robots are intended to take over repetitive lifting and stacking tasks that have historically contributed to injuries and high turnover.
The company has been testing robotic unloading technology for several years through pilot programs. The decision to move forward with a large-scale purchase suggests those trials met internal benchmarks for reliability, safety, and return on investment.
Scaling Automation Across the Network
UPS operates one of the world’s largest logistics networks, processing millions of packages per day. Even small efficiency gains at individual facilities can translate into significant cost savings at scale. Automating truck unloading also helps standardize operations across sites, reducing performance variability tied to staffing levels.
The robots will be integrated into UPS facilities alongside existing automation systems, including conveyor networks, sorting machines, and warehouse management software. This layered approach allows UPS to automate specific bottlenecks without redesigning entire hubs.
While the company did not disclose the total value of the purchase, large-scale robotic deployments of this kind typically involve multi-year investments. Industry analysts view the move as part of a broader shift among parcel carriers toward targeted automation rather than full end-to-end robotic warehouses.
Labor, Safety, and the Future of Parcel Handling
UPS has emphasized that automation is intended to complement its workforce rather than replace it. By reducing the physical strain of unloading tasks, the company aims to reassign workers to roles that require oversight, coordination, and problem-solving.
Warehouse robotics adoption has accelerated across the logistics industry as operators confront rising service expectations, tight delivery timelines, and ongoing labor challenges. Robots capable of unloading trucks address one of the most difficult remaining manual processes in parcel handling.
As UPS continues deploying these systems, their performance in live operations will likely influence similar investments across the sector. The expansion underscores how robotics is moving deeper into everyday logistics tasks, shifting from experimental pilots to large-scale, operational deployments.
Humanoid Robots Take Center Stage at Silicon Valley Humanoids Summit as Doubts Persist
Humanoid robots dominated discussions at the Humanoids Summit in Silicon Valley, but investors and engineers raised concerns about scalability, costs, and real-world deployment timelines.
Humanoid robots were the headline attraction at the Silicon Valley Humanoids Summit, where startups and researchers showcased rapid progress in mobility, perception, and manipulation. Demonstrations highlighted robots walking autonomously, handling objects, and interacting with human-built environments. Despite the enthusiasm, discussions repeatedly returned to unresolved challenges around cost, reliability, and commercial readiness.
The summit reflected a broader surge of interest in humanoid robotics driven by advances in artificial intelligence, sensors, and actuators. Investors, engineers, and corporate buyers attended sessions focused on how humanoid form factors could operate in warehouses, factories, and service environments. Yet many participants cautioned that impressive demonstrations do not always translate into scalable products.
Progress Meets Practical Constraints
Several companies presented humanoid robots designed to work in logistics and manufacturing, emphasizing their ability to navigate spaces built for humans without infrastructure changes. Developers argued that bipedal robots could eventually replace or support workers in tasks ranging from material handling to inspection. The appeal lies in flexibility, with a single robot potentially performing many roles rather than one specialized task.
However, experts at the summit noted that humanoid robots remain expensive to build and maintain. Power consumption, mechanical wear, and software robustness continue to limit continuous operation. While some robots can perform short demonstrations reliably, sustaining performance across long shifts in unpredictable environments remains a significant hurdle.
There was also skepticism about whether humanoid robots offer clear advantages over existing automation. In many warehouses and factories, wheeled robots, conveyors, and fixed automation already deliver higher efficiency at lower cost. Critics argued that humanoid designs may only make economic sense in narrow use cases where human-like mobility is essential.
Market Expectations and Deployment Reality
The summit highlighted growing tension between investor expectations and deployment timelines. Several startups predicted rapid adoption within the next few years, pointing to pilot programs and early commercial agreements. Others urged caution, warning that widespread deployment would likely take longer due to safety certification, workforce integration, and total cost of ownership considerations.
Labor dynamics were a recurring theme. Proponents suggested humanoid robots could help address worker shortages and reduce injury risk in physically demanding roles. Skeptics countered that deploying complex robots introduces new maintenance and oversight requirements that may offset labor savings, at least in the near term.
Regulatory uncertainty also surfaced during discussions. Humanoid robots operating alongside humans raise questions about liability, workplace standards, and certification processes. Industry observers noted that clear regulatory frameworks will be critical before large fleets can be deployed in active industrial settings.
A Sector at a Crossroads
By the end of the summit, humanoid robots had clearly captured attention, but consensus remained elusive. The technology is advancing rapidly, and real-world pilots are expanding, yet doubts persist about near-term scalability and economic viability. Many attendees described the current moment as a transition from hype-driven excitement to a more sober evaluation of practical constraints.
The discussions underscored that humanoid robotics is no longer a speculative concept, but neither is it a solved problem. As companies continue to test deployments and refine designs, the coming years are likely to determine whether humanoid robots become a mainstream industrial tool or remain a niche solution reserved for specific environments.
Mercado Libre Signs Deal with Agility Robotics to Deploy Digit Humanoid Robots
Mercado Libre has entered a commercial agreement with Agility Robotics to deploy Digit humanoid robots in its logistics operations, starting with a pilot facility in Texas.
Mercado Libre, Latin America’s largest commerce and fintech ecosystem, has signed a commercial agreement with Agility Robotics to deploy the Digit humanoid robot in its logistics operations. The partnership marks one of the first commercial deployments of humanoid robots in large-scale e-commerce fulfillment tied to a Latin American operator. Initial deployment will take place at a Mercado Libre facility in San Antonio, Texas.
The companies say the collaboration is aimed at exploring how humanoid robots can support fulfillment workflows, improve workplace ergonomics, and address labor shortages in logistics. While the first deployment is limited to a U.S. site, both sides plan to evaluate broader use cases across Mercado Libre’s warehouse network in Latin America.
Digit Enters Live Commerce Operations
Digit is a human-scale bipedal robot designed to walk through existing warehouse aisles, lift and carry totes, and operate alongside human workers without requiring major infrastructure changes. Agility Robotics says Digit is already commercially deployed and has moved more than 100,000 totes in live commerce environments, demonstrating reliability in production settings.
At Mercado Libre, Digit will initially focus on tasks that support order fulfillment. These include repetitive and physically demanding activities that are often difficult to staff consistently. By automating such roles, the companies aim to reduce injury risk and free human workers for higher-value tasks.
“At Mercado Libre, we are constantly exploring how emerging technologies can elevate our operations and improve the experience for our employees and millions of users,” said Agustin Costa, Senior Vice President of Shipping at Mercado Libre. “Our partnership with Agility Robotics and the deployment of Digit in our facilities is a significant step forward in our vision to create a safer, more efficient, and adaptable logistics network.”
Costa added that the company is particularly interested in how humanoid robots can complement existing teams rather than replace them. The goal, he said, is to test how robotics can drive the next evolution of commerce logistics in the region.
Automation, AI, and Labor Challenges
Digit is designed to fill high-turnover roles using a combination of onboard autonomy and cloud-based fleet management. The robot leverages artificial intelligence to learn tasks, adapt to new workflows, and operate continuously in structured warehouse environments. Agility Robotics pairs Digit with Agility ARC, its cloud automation platform for deploying and managing fleets of robots.
Through this platform, Digit can coordinate with other automated systems such as autonomous mobile robots, conveyor belts, and warehouse management software. This approach allows companies to add humanoid robots to existing operations without disrupting current automation investments.
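Agility has not published the ARC interface details behind this announcement, so the sketch below is purely illustrative: a minimal Python dispatcher, with assumed class and method names, showing the kind of coordination described above, in which a cloud fleet layer hands queued tote tasks to whichever idle robot is available alongside other automation.

```python
# Hypothetical sketch of a cloud fleet layer dispatching tote-handling tasks.
# None of these names come from Agility ARC; they are illustrative only.
from collections import deque
from dataclasses import dataclass


@dataclass
class Robot:
    robot_id: str
    kind: str            # "humanoid" or "amr"
    busy: bool = False


@dataclass
class ToteTask:
    tote_id: str
    pickup: str          # e.g. a conveyor discharge lane
    dropoff: str         # e.g. a putwall or staging area


class FleetDispatcher:
    """Assigns queued tote tasks to the first idle robot in the fleet."""

    def __init__(self, robots):
        self.robots = robots
        self.queue = deque()

    def submit(self, task: ToteTask):
        self.queue.append(task)

    def dispatch(self):
        assignments = []
        while self.queue:
            robot = next((r for r in self.robots if not r.busy), None)
            if robot is None:
                break  # no idle robots; remaining tasks stay queued
            task = self.queue.popleft()
            robot.busy = True
            assignments.append((robot.robot_id, task.tote_id))
        return assignments


if __name__ == "__main__":
    fleet = FleetDispatcher([Robot("digit-01", "humanoid"), Robot("amr-07", "amr")])
    fleet.submit(ToteTask("T-1001", pickup="conveyor-3", dropoff="putwall-A"))
    fleet.submit(ToteTask("T-1002", pickup="conveyor-1", dropoff="staging-B"))
    print(fleet.dispatch())  # [('digit-01', 'T-1001'), ('amr-07', 'T-1002')]
```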
Agility Robotics positions Digit as a response to chronic labor shortages in logistics and manufacturing, where repetitive manual tasks often struggle to attract long-term workers. By handling physically taxing work, the company argues, humanoid robots can help stabilize throughput while improving safety and consistency.
“We are incredibly proud to be partnering with Mercado Libre to support their workforce and operations through the deployment of Agility’s humanoid robot Digit,” said Daniel Diez, Chief Business Officer of Agility Robotics. “Mercado Libre has demonstrated that it is a true innovator in both commerce and fintech, and we are excited to integrate our autonomous humanoid robots capable of performing meaningful work and delivering real value into their facilities.”
A Growing Market for Humanoid Robots
Mercado Libre joins a growing list of companies deploying Agility’s humanoid robots, including logistics provider GXO, German industrial manufacturer Schaeffler, and Amazon. These deployments signal increasing confidence that humanoid robots are moving beyond research and pilot programs into real-world industrial use.
While the current agreement focuses on evaluation and early deployment, the companies suggest the partnership could expand if results meet operational and economic expectations. For Mercado Libre, the project offers a way to test advanced automation while maintaining flexibility across a geographically diverse logistics network.
The deployment underscores a broader trend in robotics, where humanoid form factors are being tested not as novelties but as practical tools designed to work within human-built environments. As large logistics operators seek scalable solutions to labor and efficiency challenges, commercially deployed humanoid robots like Digit may play an increasingly visible role.
Engineers Use AI to Fine-Tune Robotic Prosthesis for Natural Hand Dexterity
Researchers at the University of Utah used artificial intelligence to improve control of a robotic prosthetic hand, reducing cognitive effort while increasing grip precision and stability.
Engineers at the University of Utah have developed an artificial intelligence system that significantly improves the dexterity and intuitiveness of robotic prosthetic hands. By combining advanced sensors with machine learning, the researchers enabled a prosthesis to grasp objects in a way that more closely resembles natural human movement. The approach reduces the mental effort required by users while increasing grip precision and reliability.
For many prosthesis users, even simple tasks such as holding a cup or picking up a small object require deliberate finger-by-finger control. This added cognitive burden is one of the main reasons advanced prosthetic devices are often abandoned. The Utah team focused on restoring the subconscious, automatic aspects of grasping that most people take for granted.
Sensors and AI Enable Autonomous Grasping
The researchers modified a commercially available prosthetic hand by equipping it with custom fingertips capable of sensing both pressure and proximity. Optical proximity sensors allow the fingers to detect objects before physical contact, while pressure sensors provide feedback once an object is grasped. Together, these inputs give the prosthesis a form of artificial touch.
An artificial neural network was trained on grasping postures using proximity data from each finger. This allows the prosthetic hand to autonomously position its fingers at the correct distance to form a stable grip. Because each finger operates with its own sensor, the system adjusts all digits in parallel, producing precise and adaptable grasping behavior across objects of different shapes and sizes.
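The article does not describe the network's architecture or training setup, so the following is only a minimal sketch of the general idea: a small neural network, trained here on synthetic data, that maps per-finger proximity readings to pre-grasp flexion commands and updates all digits in parallel. The sensor count, layer sizes, units, and synthetic targets are assumptions, not the Utah team's model.

```python
# Minimal sketch (not the published model): a tiny neural network that maps
# per-finger proximity readings to pre-grasp flexion commands.
import numpy as np

rng = np.random.default_rng(0)
N_FINGERS = 5

# Synthetic data: proximity in [0, 1] (1 = object at the fingertip).
# Assumed target: closer objects call for more finger flexion (also in [0, 1]).
X = rng.uniform(0.0, 1.0, size=(2000, N_FINGERS))
Y = np.clip(0.9 * X + 0.05 * rng.normal(size=X.shape), 0.0, 1.0)

# One hidden layer, trained with plain gradient descent on mean squared error.
W1 = rng.normal(scale=0.5, size=(N_FINGERS, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, N_FINGERS)); b2 = np.zeros(N_FINGERS)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid keeps output in [0, 1]
    return out, h

lr = 0.5
for _ in range(300):
    pred, h = forward(X)
    err = pred - Y                               # gradient of MSE, up to a constant
    d_out = err * pred * (1 - pred)
    gW2 = h.T @ d_out / len(X); gb2 = d_out.mean(axis=0)
    d_h = (d_out @ W2.T) * (1 - h**2)
    gW1 = X.T @ d_h / len(X);  gb1 = d_h.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# At run time, one forward pass positions all five digits in parallel.
proximity = np.array([0.8, 0.6, 0.3, 0.2, 0.1])  # thumb ... pinky (assumed order)
flexion, _ = forward(proximity)
print(np.round(flexion, 2))
```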
In testing, participants using the AI-assisted prosthesis demonstrated greater grip security and precision compared to conventional control methods. They were also able to complete tasks using different grip styles without extensive training, suggesting the system adapts naturally to user intent.
Sharing Control Between Human and Machine
A central design challenge was ensuring that artificial intelligence supported the user rather than competing for control. To solve this, the researchers implemented a bioinspired framework that shares control between the human and the AI system. The prosthesis assists with fine motor adjustments while allowing the user to initiate, modify, or stop actions freely.
“What we don’t want is the user fighting the machine for control,” said Marshall Trout, a postdoctoral researcher involved in the work. “Here, the machine improved the precision of the user while also making the tasks easier.”
The system blends rapid reactive responses, such as preventing excessive grip force, with higher-level planning that anticipates how objects should be grasped. This mirrors how humans naturally coordinate instinctive reactions with learned motor patterns.
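The published framework is not reproduced in this article, so the snippet below is only a schematic of shared control under assumed constants: the user's command leads, the AI contributes a bounded correction toward a stable grasp, and a fast reactive rule backs off when fingertip force approaches a cap.

```python
# Schematic shared-control step (not the published controller). The user's
# command dominates; the AI contributes a bounded correction; a reactive rule
# caps grip force. All constants are illustrative assumptions.

MAX_FORCE_N = 10.0        # assumed safe fingertip force
AI_AUTHORITY = 0.3        # fraction of the command the AI may adjust

def shared_control_step(user_cmd: float, ai_cmd: float, measured_force: float) -> float:
    """Blend user and AI grip commands (both in [0, 1]) into one motor command."""
    # High-level blend: user intent leads, the AI nudges toward a stable grasp.
    blended = (1.0 - AI_AUTHORITY) * user_cmd + AI_AUTHORITY * ai_cmd

    # Reactive layer: if fingertip force is already at the cap, back off
    # regardless of what either the user or the AI requested.
    if measured_force >= MAX_FORCE_N:
        blended = min(blended, 0.5 * user_cmd)

    return max(0.0, min(1.0, blended))

# Example: the user squeezes hard, the AI suggests easing off, force is high.
print(shared_control_step(user_cmd=0.9, ai_cmd=0.6, measured_force=11.0))  # 0.45
```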
Study Leadership and Future Directions
The study was led by engineering professor Jacob A. George together with Trout at the Utah NeuroRobotics Lab and was published in the journal Nature Communications. The research involved experiments with four transradial amputees, whose amputations occurred between the elbow and wrist.
Participants completed standardized dexterity tests as well as everyday activities requiring fine motor control. Tasks such as lifting a lightweight plastic cup, which require careful force modulation, became more reliable with AI assistance.
“As lifelike as bionic arms are becoming, controlling them is still not easy or intuitive,” Trout said. “Nearly half of all users will abandon their prosthesis, often citing poor controls and cognitive burden.”
George emphasized that the long-term goal is to embed intelligence directly into prosthetic devices so users can interact with objects more naturally. The team is now exploring how this AI-driven grasping approach could be combined with implanted neural interfaces, enabling thought-based control and the return of tactile sensations. By merging sensing, intelligence, and neural input, the researchers aim to make robotic prostheses feel less like tools and more like natural extensions of the human body.
AI-Powered Robotic Dog Uses Memory and Vision for Search-and-Rescue Missions
Engineering students at Texas A&M University developed an AI-powered robotic dog that sees, remembers, and plans routes autonomously, targeting search-and-rescue and disaster response missions.
Researchers at Texas A&M University have developed an AI-powered robotic dog designed to operate in complex, unpredictable environments using memory-driven navigation and human-like decision-making. Built by graduate engineering students, the robot is capable of seeing, remembering where it has been, and responding dynamically to new situations. The system is aimed primarily at search-and-rescue and disaster response missions, where conditions are often chaotic and GPS signals are unavailable.
Unlike conventional robotic systems that rely on pre-mapped environments or simple obstacle avoidance, the robotic dog integrates vision, memory, and language-based reasoning. It understands voice commands, analyzes camera input in real time, and plans routes autonomously. The developers say this combination allows the robot to behave more like a human responder than a traditional machine.
Memory-Driven Navigation With Multimodal AI
At the core of the system is a memory-driven navigation architecture powered by a custom multimodal large language model. The model interprets visual data captured by onboard cameras and combines it with stored environmental memory to make navigation decisions. This enables the robot to recall previously traveled paths and reuse them, improving efficiency and reducing redundant exploration.
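Implementation details are not given here, so the following is a minimal sketch of the memory idea alone, under an assumed toy graph: routes the robot has already traveled are cached by start and goal and reused before any new exploration, with a plain breadth-first search standing in for the robot's real navigation stack.

```python
# Minimal sketch of route memory (not the Texas A&M implementation): cache
# previously traveled routes and reuse them before exploring again. The toy
# graph, cache keys, and BFS fallback are illustrative assumptions.
from collections import deque

class RouteMemory:
    def __init__(self, graph):
        self.graph = graph          # adjacency list: node -> list of neighbors
        self.remembered = {}        # (start, goal) -> cached path

    def plan(self, start, goal):
        key = (start, goal)
        if key in self.remembered:          # reuse a remembered route
            return self.remembered[key], "recalled"
        path = self._bfs(start, goal)       # otherwise explore anew
        if path:
            self.remembered[key] = path
        return path, "explored"

    def _bfs(self, start, goal):
        frontier, seen = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.graph.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

corridors = {"lobby": ["hall"], "hall": ["lobby", "lab", "stairs"],
             "lab": ["hall"], "stairs": ["hall"]}
memory = RouteMemory(corridors)
print(memory.plan("lobby", "lab"))   # first request: explored
print(memory.plan("lobby", "lab"))   # second request: recalled from memory
```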
A hybrid control structure allows the robot to balance reactive behavior with high-level planning. It can quickly respond to immediate hazards, such as avoiding collisions, while simultaneously reasoning about longer-term navigation goals. According to the research team, this mirrors how humans navigate unfamiliar spaces by combining instinctive reactions with deliberate planning.
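The team's actual control structure is not reproduced in the article, so this is only a schematic of the hybrid idea: a fast reactive check runs every cycle and can veto or replace the next step of a slower high-level plan, which in the real system would come from the MLLM-based memory navigator. The step names and hazard check are hypothetical.

```python
# Schematic hybrid control loop (not the team's code): a reactive hazard check
# can override each step of a slower high-level plan. In the real system the
# plan would come from the MLLM-based planner; here it is a fixed list.

def reactive_override(step: str, obstacle_ahead: bool) -> str:
    """Fast layer: runs every cycle and vetoes unsafe steps."""
    if obstacle_ahead and step == "move_forward":
        return "stop_and_replan"
    return step

def run_mission(plan, hazard_readings):
    """Execute the high-level plan step by step, subject to reactive vetoes."""
    return [reactive_override(step, hazard)
            for step, hazard in zip(plan, hazard_readings)]

# Assumed planner output and a simulated hazard sensor stream.
plan = ["move_forward", "move_forward", "turn_left", "move_forward"]
hazards = [False, True, False, False]
print(run_mission(plan, hazards))
# ['move_forward', 'stop_and_replan', 'turn_left', 'move_forward']
```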
“Some academic and commercial systems have integrated language or vision models into robotics,” said Sandun Vitharana, an engineering technology master’s student involved in the project. “However, we haven’t seen an approach that leverages MLLM-based memory navigation in the structured way we describe, especially with custom pseudocode guiding decision logic.”
The robot’s navigation system was designed specifically for unstructured and unpredictable environments, such as disaster zones or remote areas. Traditional autonomous navigation methods often struggle in these conditions due to changing layouts, debris, and limited visibility.
From Disaster Response to Broader Applications
The project was led by Vitharana and Sanjaya Mallikarachchi, an interdisciplinary engineering doctoral student, with guidance from faculty at Texas A&M University. With support from the National Science Foundation, the team explored how multimodal AI models could be deployed at the edge, rather than relying on cloud-based processing.
“Moving forward, this kind of control structure will likely become a common standard for human-like robots,” Mallikarachchi said.
Beyond search-and-rescue operations, the researchers see broader potential applications for the technology. The robot’s ability to navigate large, complex spaces could make it useful in hospitals, warehouses, and other industrial facilities. Its memory-based system may also assist people with visual impairments, conduct reconnaissance in hazardous areas, or support exploration tasks where human access is limited.
Dr. Isuru Godage, an assistant professor in the Department of Engineering Technology and Industrial Distribution, emphasized the importance of deploying advanced AI directly on robotic platforms. “The core of our vision is deploying MLLM at the edge, which gives our robotic dog the immediate, high-level situational awareness previously impossible,” Godage said. “Our goal is to ensure this technology is not just a tool, but a truly first responder-ready system for unmapped environments.”
The robot was recently demonstrated at the 22nd International Conference on Ubiquitous Robots, where the team presented experimental results and system design details. The work highlights how advances in multimodal AI are beginning to reshape autonomous robotics, moving systems closer to adaptive, human-like behavior in real-world conditions.