Archives
Davos 2026: Elon Musk Predicts Robot-Dominated Future in WEF Debut
Elon Musk told World Economic Forum attendees that humanoid robots could outnumber humans and enter mass markets as early as 2027, reshaping labor, economics, and daily life.
Elon Musk used his first appearance at the World Economic Forum to deliver some of his boldest predictions yet about the future of robotics, artificial intelligence, and human labor. Speaking to business and political leaders in Davos, Musk said humanoid robots could become widespread within the next few years and eventually surpass humans in number.
“Humanoid robots are coming very fast,” Musk said, predicting that commercially viable systems could reach the market by 2027. He described a future in which robots perform much of the physical work currently done by humans, fundamentally altering economic structures and labor markets worldwide.
Musk framed robotics as the next major technological wave after AI software, arguing that intelligence without a physical body is only part of the equation. Once AI systems can move, manipulate objects, and operate autonomously in the real world, he said, their economic impact will expand dramatically.
Robots, Labor, and Economic Disruption
According to Musk, the widespread adoption of humanoid robots could eliminate many forms of manual labor while creating entirely new economic models. He suggested that abundance driven by robotics could eventually reduce the cost of goods and services to near zero, forcing societies to rethink employment and income distribution.
“Once you have robots that can do everything humans can do physically, the output of the economy becomes enormous,” Musk said. He added that this transition could make traditional notions of scarcity less relevant, though it would also require careful management to avoid social disruption.
"It's better for your quality of life to be an optimist who's wrong, than a pessimist who's right," says @elonmusk in his final remarks at #WEF26. pic.twitter.com/OB2CFZXMDo
— World Economic Forum (@wef) January 22, 2026
Musk acknowledged that such a shift would raise serious questions about jobs and inequality, but argued that technological progress has historically created more opportunity than it has destroyed. He pointed to automation in manufacturing and agriculture as precedents, though he conceded that humanoid robots would operate at a far broader scale.
Humanoid Robots by 2027
Musk’s timeline places humanoid robots closer to mass deployment than many industry analysts currently expect. He reiterated that Tesla is working toward commercializing its Optimus humanoid robot, initially targeting factory and industrial environments before expanding to broader applications.
He said early versions of humanoid robots would likely focus on repetitive and physically demanding tasks, helping to address labor shortages in manufacturing, logistics, and construction. Over time, Musk suggested, these systems could move into service roles and even household environments.
While Musk has previously set aggressive timelines that slipped, his Davos remarks reflect growing confidence across the technology sector that physical AI is approaching a tipping point. Advances in perception, motion control, and large-scale AI models are making robots more adaptable and economically viable.
Beyond Robotics
In addition to robotics, Musk also touched on broader themes including artificial general intelligence, space exploration, and long-term human survival. He reiterated his belief that AI development must be managed carefully to ensure it benefits humanity, while also expressing optimism that technological progress can solve many global challenges.
Despite skepticism from some attendees, Musk’s appearance underscored why robotics and physical AI have become central topics at Davos. As governments and corporations grapple with slowing productivity growth and demographic shifts, robots are increasingly viewed as both an opportunity and a disruption.
Musk concluded by suggesting that the question is no longer whether robots will become ubiquitous, but how societies choose to integrate them. If his predictions hold, the next decade could redefine not just technology, but the structure of human work itself.
Airbus Orders Six-Figure Humanoid Robots From UBTech for Aircraft Manufacturing
Airbus has placed a six-figure order for humanoid robots from UBTech, marking one of the largest industrial deployments of humanoid robotics in aerospace manufacturing to date.
Airbus has taken a significant step toward automation in aircraft production by placing a six-figure order for humanoid robots from UBTech Robotics. The purchase represents one of the largest commercial commitments yet for humanoid robots in heavy manufacturing and highlights how physical AI is beginning to move from pilot projects into real industrial workflows.
Meet the first humanoid robotic worker at SANY RE!
Watch UBTECH Walker S2 in action at China's first 5G-enabled wind power smart factory, where every move is a step toward a cleaner, more automated tomorrow.
From precise sorting to adaptive manipulation, this is the new face of…
— UBTECH Robotics (@UBTECHRobotics) January 20, 2026
The robots are expected to be deployed across Airbus facilities to support repetitive and physically demanding tasks involved in aircraft assembly. While Airbus has long used industrial automation, the move toward humanoid robots reflects a shift toward more flexible systems that can operate in environments originally designed for human workers.
UBTech’s shares surged following reports of the deal, underlining growing investor confidence that humanoid robotics is transitioning from experimentation to scalable industrial use.
From Industrial Automation to Humanoid Labor
Unlike traditional industrial robots that are fixed in place and optimized for a narrow set of motions, humanoid robots are designed to navigate complex workspaces, manipulate tools, and interact with equipment built for human hands. For aerospace manufacturing, where production lines involve tight spaces, variable tasks, and frequent reconfiguration, this flexibility is increasingly valuable.
Meet UBTECH New-Gen of #Industrial #Humanoid #Robot: #WalkerS2 makes industry-leading breakthroughs!
• Building the dual-loop AI system: #BrainNet 2.0 & Co-Agent
• 3mins autonomous battery swapping & 24/7 continuous operation
• "Human-eye" Binocular Stereo Vision Perception pic.twitter.com/uPhxoYP5w0— UBTECH Robotics (@UBTECHRobotics) July 23, 2025
Airbus has been evaluating humanoid robots as a way to address labor shortages, improve ergonomics, and increase consistency in assembly operations. The robots are expected to assist with material handling, inspection, and other repetitive processes that can strain human workers over long shifts.
The order places Airbus among a small but growing group of global manufacturers experimenting with humanoid robotics at meaningful scale, alongside automotive and logistics operators.
UBTech’s Industrial Push
UBTech, best known for its humanoid robot platforms designed for research and service applications, has been expanding aggressively into industrial markets. The company’s latest humanoid systems are built to operate autonomously in factory environments, combining computer vision, motion planning, and AI-driven manipulation.
The Airbus deal signals growing confidence in UBTech’s ability to meet industrial reliability and safety requirements, which remain a major barrier for humanoid robots operating alongside human workers. Aerospace manufacturing, in particular, demands high precision, repeatability, and compliance with strict safety standards.
For UBTech, the agreement represents a major validation of its strategy to position humanoid robots as a practical workforce augmentation tool rather than a futuristic novelty.
A Broader Shift Toward Physical AI
The Airbus order reflects a wider trend across global manufacturing, where companies are exploring physical AI systems that can reason, adapt, and act in real-world environments. Unlike conventional automation, humanoid robots promise to reduce the need for costly factory redesigns by fitting into existing workflows.
As labor markets tighten and production complexity increases, manufacturers are increasingly willing to test new forms of automation that offer both flexibility and scalability. Humanoid robots, while still early in their adoption curve, are emerging as a potential bridge between human labor and fully automated systems.
For Airbus, the deployment is expected to begin incrementally, with performance data guiding future expansion. If successful, humanoid robots could become a permanent fixture on aircraft assembly lines, reshaping how aerospace manufacturing is performed.
The deal underscores a turning point for the humanoid robotics industry: major industrial players are no longer just experimenting. They are beginning to place real orders.
Asus Halts New Smartphone Development to Pivot Toward AI and Robotics
Asus is winding down new smartphone development as it redirects resources toward AI computing, robotics, and physical AI systems for industrial and enterprise markets.
Asus is scaling back new smartphone development as part of a broader strategic shift toward artificial intelligence, robotics, and advanced computing systems. The move reflects changing priorities inside the Taiwanese technology company, which is reallocating engineering talent and capital away from consumer handsets toward higher-growth segments tied to physical AI and intelligent automation.
Asus will continue to sell and support existing smartphone models in select markets, but it no longer plans to invest heavily in new flagship phone platforms. The decision follows years of intense competition in the global smartphone market, where margins have tightened and growth has slowed, particularly outside of Apple and Samsung’s ecosystems.
Company executives have indicated that future innovation efforts will focus on AI infrastructure, edge computing, robotics platforms, and intelligent devices designed for enterprise and industrial use cases.
From Consumer Phones to Physical AI
Asus has been steadily expanding its footprint in AI hardware, including servers optimized for accelerated computing, edge AI platforms, and embedded systems used in robotics and automation. These systems are increasingly deployed in factories, logistics centers, healthcare environments, and smart infrastructure projects.
The company has also invested in robotics-related research, including autonomous mobile systems, AI vision platforms, and human-machine interfaces. Rather than building consumer-facing robots, Asus is positioning itself as an enabling technology provider, supplying the compute, sensing, and control systems that power physical AI applications.
This transition mirrors a broader industry shift, where growth is increasingly concentrated in AI-driven systems that operate in the physical world. Robotics, autonomous machines, and intelligent infrastructure require high-performance, energy-efficient computing platforms, an area where Asus believes it can compete more effectively than in consumer smartphones.
Robotics and Edge AI as Growth Drivers
Asus’ robotics strategy centers on edge intelligence, where AI models run directly on devices rather than relying on cloud infrastructure. This approach is critical for robots and autonomous systems that must operate with low latency, high reliability, and strong data privacy guarantees.
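To make the edge-intelligence idea concrete, here is a minimal sketch of on-device inference using ONNX Runtime. The model file name, input layout, and function are illustrative assumptions, not an Asus product or API:

```python
# Illustrative sketch of on-device (edge) inference with ONNX Runtime.
# The model file and input layout are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

# Load a pre-exported vision model directly on the edge device;
# no network round-trip to a cloud service is involved.
session = ort.InferenceSession("perception_model.onnx",
                               providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

def infer(frame: np.ndarray) -> np.ndarray:
    """Run one low-latency inference pass on a camera frame."""
    # Assumes the model expects a batched NCHW float32 tensor.
    batch = frame.astype(np.float32)[np.newaxis, ...]
    (output,) = session.run(None, {input_name: batch})
    return output
```

Keeping the whole loop local is what delivers the latency and privacy properties described above: the data never leaves the robot.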
The company’s hardware portfolio now includes AI-ready industrial PCs, robotic controllers, and edge servers designed to support computer vision, motion planning, and real-time decision-making. These systems are being adopted across manufacturing, smart cities, and healthcare automation.
By winding down new smartphone development, Asus frees up resources to deepen partnerships in robotics ecosystems and accelerate product cycles in AI-driven markets. Industry analysts view this as a pragmatic move, given the capital-intensive nature of smartphone development and the uncertain returns in a saturated market.
A Broader Industry Realignment
Asus’ pivot comes amid a wider reassessment of consumer electronics strategies across the technology sector. As smartphones mature, many manufacturers are looking beyond handsets for long-term growth, turning instead to AI infrastructure, robotics, and intelligent systems that can scale across industries.
Physical AI, which combines perception, reasoning, and action in real-world environments, is emerging as a central theme in this transition. Robotics platforms require continuous upgrades in compute performance, sensing accuracy, and software integration, creating recurring demand for specialized hardware and systems.
For Asus, the shift represents a move from volume-driven consumer markets toward fewer, higher-value deployments. While smartphones once defined the company’s consumer identity, its future growth is increasingly tied to the machines, factories, and autonomous systems that will shape the next phase of industrial digitization.
The decision underscores a growing consensus in the technology industry: the next major wave of innovation will not be defined by screens in pockets, but by intelligent machines operating alongside humans in the physical world.
Elon Musk Says Tesla’s Robots Will Surpass Its EV Business
Elon Musk says Tesla’s humanoid robots could eventually become more valuable than its electric vehicle business, positioning robotics as the company’s long-term growth engine.
Elon Musk has once again pushed Tesla’s ambitions beyond cars, predicting that the company’s humanoid robots could ultimately outgrow its electric vehicle business. Speaking in recent public remarks and investor discussions, Musk framed robotics not as a side project, but as a future pillar that could redefine Tesla’s identity over the coming decade.
Tesla’s humanoid robot, known as Optimus, is designed to perform repetitive and physically demanding tasks in factories, warehouses, and eventually homes. Musk has repeatedly argued that the long-term economic value of autonomous labor far exceeds that of vehicle manufacturing, particularly as global labor shortages intensify and wages rise across industrial economies.
While Tesla remains one of the world’s most valuable automakers, Musk suggested that robots could unlock an entirely new market measured in trillions of dollars. Unlike cars, which are constrained by consumer purchasing power and replacement cycles, general-purpose robots could be deployed continuously across manufacturing, logistics, healthcare, and domestic services.
Robots as Tesla’s Next Growth Engine
Tesla’s robotics program draws heavily from the same core technologies that power its vehicles. The Optimus platform uses Tesla’s full self-driving neural networks, computer vision systems, and custom AI chips, allowing the company to reuse years of autonomy research. Musk has emphasized that this software-first approach gives Tesla a structural advantage over robotics startups that must build perception, planning, and control systems from scratch.
Optimus is expected to operate initially inside Tesla’s own factories, handling material transport and basic assembly tasks. These controlled environments allow the company to train robots at scale while generating real operational value. Musk has indicated that internal deployments could begin ramping before broader commercial availability.
Over time, Tesla envisions Optimus evolving into a general-purpose worker capable of understanding instructions, navigating complex spaces, and manipulating objects with human-like dexterity. If successful, this would place Tesla among a small group of companies attempting to commercialize humanoid robots at scale.
Physical AI Beyond Autonomous Vehicles
Musk’s comments reflect a broader industry shift toward what many executives now call physical AI — systems that can perceive, reason, and act in the real world. Unlike digital AI products, physical AI must meet far higher safety, reliability, and cost constraints, especially when operating alongside humans.
Tesla’s strategy mirrors developments across the robotics sector, where companies are racing to combine large-scale AI models with real-world embodiment. Musk argues that once robots reach sufficient intelligence and reliability, manufacturing capacity becomes the primary constraint, not demand.
He has suggested that a mature robotics business could eventually dwarf Tesla’s vehicle revenues, even if EV sales continue to grow. In Musk’s framing, cars may become just one application of a much larger AI and robotics platform.
Skepticism and Execution Risk
Despite the bold vision, significant challenges remain. Humanoid robots must operate safely in unpredictable environments, manipulate a vast range of objects, and perform tasks reliably over long periods. Battery life, actuator durability, and cost-efficient manufacturing all remain open questions.
Analysts also note that Tesla has a history of ambitious timelines that often slip. While Optimus prototypes have demonstrated walking, object handling, and basic autonomy, large-scale commercial deployment is still unproven.
Nevertheless, Musk’s prediction underscores Tesla’s long-term direction. Rather than viewing robotics as experimental, Tesla is positioning humanoid robots as a central business line that could redefine how work is performed across the global economy.
If Tesla succeeds, the company best known for electric cars could ultimately be remembered for something far more transformative: machines that replace human labor at scale.
NEURA Robotics and Bosch Join Forces to Scale German-Made Humanoid Robotics
NEURA Robotics and Bosch have formed a strategic partnership to industrialize humanoid robots and physical AI in Germany, combining real-world data collection with large-scale manufacturing expertise.
NEURA Robotics and Bosch have announced a strategic partnership aimed at accelerating the industrial deployment of humanoid robots and physical AI technologies developed in Germany. The collaboration brings together NEURA’s fast-moving robotics platform and Bosch’s manufacturing scale, signaling a coordinated European push into one of the most competitive emerging technology markets.
“NEURA aims to position Europe as the global leader in one of the most significant future markets, humanoid robotics,” said David Reger, founder and CEO of NEURA Robotics. “Our mission is to set the global benchmark for physical AI and humanoid robotics, establishing a European alternative to the major platform players in the U.S. and China. The partnership with Bosch is a powerful signal that Germany and Europe are investing in next-generation technologies developed independently.”
Reger said access to real-world physical training data remains the largest constraint in robotics development. “Physical training data is the biggest challenge in robotics; no one has it,” he said. “At NEURA, we have turned this challenge into our competitive advantage, and now, with Bosch, we have the opportunity to capture, structure, and leverage real-world data.”
The partnership is positioned around a shared objective: moving humanoid robots from experimental systems into reliable, scalable tools for real-world work environments. Both companies describe humanoid robotics as a technological shift comparable in impact to the rise of the personal computer or smartphone.
Building the Data Foundation for Humanoid Robots
A central pillar of the collaboration is the joint collection of real-world physical data inside Bosch facilities. Using advanced sensor suits, the partners will capture human motion, task execution, and environmental interaction data during everyday industrial work. This type of physical training data is scarce but critical for teaching humanoid robots how to move, manipulate objects, and operate safely alongside people.
By grounding robot learning in real workplace conditions rather than purely simulated environments, NEURA and Bosch aim to accelerate deployment timelines and improve reliability. The data will feed directly into NEURA’s AI models, enabling faster learning cycles and more adaptable robotic behavior across diverse tasks.
In parallel, the companies will co-develop AI-based core software, functional robotics modules, and intuitive user interfaces designed for industrial use. This software collaboration is intended to bridge perception, reasoning, and physical action into a cohesive operating layer for humanoid robots.
From Scale-Up Innovation to Industrial Production
Bosch will play a key role in supporting NEURA’s transition from development to large-scale production. This includes optimizing manufacturing workflows, scaling embedded software, and potentially supplying robotic components such as motors and actuators. The partnership also leaves room for Bosch to support final assembly and motor production for future humanoid platforms.
NEURA enters the collaboration with a reported order book exceeding one billion euros and is actively expanding its production capacity. The company’s industrial scaling effort is led by executives with deep experience in Bosch’s own manufacturing systems, reinforcing the operational alignment between the two organizations.
The focus on industrialization reflects a broader industry shift. Customers are increasingly demanding robots that can deliver consistent performance, meet safety requirements, and integrate into existing facilities without extensive redesign.
An Open Ecosystem for Physical AI
At the software level, NEURA is advancing an open robotics ecosystem known as the Neuraverse. The concept centers on connected humanoid robots that share skills, data, and learned behaviors across a distributed network. Improvements made by one robot can propagate across the fleet through software updates, creating a continuous feedback loop between deployment and development.
Combined with Bosch’s manufacturing and systems expertise, this approach is designed to accelerate innovation while maintaining industrial reliability. Rather than closed, application-specific robots, the partners are betting on adaptable, general-purpose systems that improve over time.
The partnership underscores a broader ambition to establish a European alternative in humanoid robotics, at a time when major efforts are concentrated in the United States and China. By pairing real-world data acquisition with scalable production, NEURA and Bosch are positioning Germany as a central hub for the next phase of physical AI.
Skild AI Raises $1.4B, Reaching $14B Valuation in Physical AI Bet
Skild AI raised nearly $1.4 billion in a funding round led by SoftBank, valuing the robotics AI company at more than $14 billion as it scales a unified foundation model for robots.
Skild AI has raised close to $1.4 billion in new funding, pushing the Pittsburgh-based robotics AI company to a valuation exceeding $14 billion. The round was led by SoftBank Group and included participation from NVIDIA Ventures, Macquarie Capital, Jeff Bezos, and a wide range of strategic and institutional investors, underscoring growing confidence in foundational AI models for the physical world.
The financing positions Skild AI as one of the most highly valued companies in the emerging Physical AI sector. Rather than building robots themselves, the company is focused on developing a general-purpose robotics foundation model, known as the Skild Brain, designed to operate across virtually any robotic body and task.
Building a General Brain for Robots
At the core of Skild AI’s strategy is the idea of an omni-bodied intelligence. Unlike traditional robotics systems that are tightly coupled to specific hardware, the Skild Brain is designed to control robots without prior knowledge of their exact physical form. The model can operate humanoids, quadrupeds, mobile manipulators, tabletop arms, and other machines that can move.
Skild AI says this approach allows robots to perform a wide range of activities, from everyday household tasks such as cleaning, loading a dishwasher, or cooking simple meals, to physically demanding work like navigating unstable terrain or handling heavy payloads. The company’s long-term vision is that any machine capable of motion could eventually be operated by the same underlying AI brain.
A central challenge in robotics AI is the lack of large-scale, standardized training data. Unlike language models, which can be trained on vast amounts of text, robots do not have an equivalent “internet of robotics.” Skild AI addresses this gap by pre-training its model on a combination of human video data and large-scale physics-based simulations, allowing it to learn general physical behavior without being tied to a single robot design.
Adaptation Without Retraining
One of Skild AI’s key technical claims is that the Skild Brain can adapt in real time to unexpected changes without retraining or fine-tuning. This includes scenarios such as damaged limbs, jammed wheels, increased payloads, or being deployed on an entirely new robotic body.
According to the company, this adaptability is driven by in-context learning. When the model encounters a new environment or embodiment where its initial actions fail, it adjusts its behavior based on live experience. Skild AI describes this as a major departure from conventional robotics approaches, which often require extensive retraining for each new scenario.
“The Skild Brain can control robots it has never trained on, adapting in real time to extreme changes in form or environments,” said Deepak Pathak, CEO and co-founder of Skild AI. He added that forcing the model to adapt rather than memorize is critical to building intelligence that works reliably in the real world.
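As an illustration of the concept only (not Skild AI's actual interface), a control loop with in-context adaptation might look like the sketch below: the policy weights stay frozen, and a rolling window of recent experience is fed back in as context instead of triggering retraining:

```python
# Hedged sketch of in-context adaptation. `policy` and `robot` are
# hypothetical stand-ins, not Skild AI's actual API.
from collections import deque

HISTORY = 32  # context window of recent (observation, action, outcome) steps

def control_loop(policy, robot, steps=1000):
    context = deque(maxlen=HISTORY)
    obs = robot.observe()
    for _ in range(steps):
        # The frozen model conditions on live history; a jammed wheel or
        # an unfamiliar body shows up in the context and shifts behavior
        # without any gradient update.
        action = policy.act(obs, context=list(context))
        outcome = robot.step(action)
        context.append((obs, action, outcome))
        obs = outcome.next_observation
```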
Rapid Growth and Commercial Focus
Skild AI is also reporting rapid commercial traction. The company said it grew from zero to approximately $30 million in revenue within a few months in 2025 and is deploying its technology across multiple enterprise settings. Current use cases include security and facility inspection, warehouse operations, manufacturing, data centers, construction, and delivery tasks.
While consumer robotics remains a long-term goal, Skild AI is prioritizing enterprise and industrial deployments, where robots can be rolled out at scale and generate continuous data to improve the model. The company believes this creates a reinforcing data flywheel, allowing the Skild Brain to improve with every deployment regardless of hardware type or task.
Abhinav Gupta, co-founder and president of Skild AI, said this generality is essential for building intelligent systems that can operate safely and dynamically alongside humans. He described omni-bodied learning as a foundational requirement for bringing advanced AI into everyday physical environments.
Investors Signal Strategic Importance
The breadth of the investor group reflects both commercial and strategic interest in Physical AI. In addition to financial investors, the round included strategic backers such as Samsung, LG, Schneider, CommonSpirit, and Salesforce, pointing to potential applications across manufacturing, healthcare, and enterprise automation.
SoftBank Investment Advisers described Skild AI as foundational infrastructure for the future of robotics, while other investors emphasized the long-term economic and strategic significance of solving intelligence for the physical world.
Founded in 2023, Skild AI operates across Pittsburgh, the San Francisco Bay Area, and Bengaluru. With its latest funding, the company plans to scale training of its foundation model and expand deployments, aiming to establish a shared intelligence layer for robots across industries as Physical AI moves closer to mainstream adoption.
Wing and Walmart Expand Drone Delivery to 150 Stores in Coast-to-Coast Push
Wing and Walmart will add drone delivery to 150 more stores, aiming to reach over 40 million Americans and build the largest residential drone delivery network in the U.S.
Wing and Walmart are significantly expanding their drone delivery partnership, announcing plans to add drone service to 150 additional Walmart stores across the United States over the next year. The move is designed to transform drone delivery from a regional convenience into a nationwide retail option, ultimately reaching more than 40 million Americans.
The expansion builds on years of testing and commercial operations in select markets, particularly in the Dallas-Fort Worth area and Metro Atlanta. In those regions, drone delivery has moved beyond novelty status and become a routine part of shopping behavior for many customers. According to the companies, usage has accelerated sharply, with deliveries tripling over the past six months and a core group of customers placing multiple orders per week.
By 2027, Wing and Walmart expect to operate more than 270 drone delivery locations across the country, forming what they describe as the largest residential drone delivery network in the world.
From Regional Pilots to National Coverage
The next phase of expansion will introduce drone delivery to major metropolitan areas including Los Angeles, St. Louis, Cincinnati, and Miami. These additions build on previously announced rollouts in cities such as Houston, Orlando, Tampa, and Charlotte. Operations in Houston are scheduled to begin in mid-January, marking one of the largest single-market launches to date.
The companies said the goal is not simply geographic growth, but consistency and scale. Drone delivery is being integrated directly into Walmart’s existing store operations, allowing orders to be fulfilled from local inventory rather than specialized distribution centers. This approach shortens delivery times and reduces friction for customers placing last-minute or urgent orders.
Wing’s drones are primarily used for lightweight items, including groceries, household essentials, and over-the-counter medications. Orders are delivered in minutes, often faster than traditional same-day or curbside options.
Why Drone Delivery Is Gaining Traction
Executives from both companies argue that the growing adoption reflects a shift in consumer expectations rather than a short-term experiment. Walmart sees drone delivery as a way to address immediate needs, particularly for time-sensitive purchases that do not justify a full shopping trip.
“Drone delivery plays an important role in our ability to deliver what customers want, exactly when they want it,” said Greg Cathey, Walmart’s senior vice president of digital fulfillment transformation. He pointed to strong adoption in existing markets as evidence that customers are willing to embrace the technology when it is reliable and easy to use.
Wing, which is owned by Alphabet, has spent years refining its aircraft, navigation systems, and air traffic coordination to support dense residential operations. The company emphasizes that its drones are designed to operate autonomously while meeting strict safety and noise standards, a key factor in securing regulatory approvals and community acceptance.
The Economics of Ultra-Fast Delivery
The scale of the expansion highlights a broader shift in retail logistics. While drone delivery has often been viewed as expensive or experimental, Wing and Walmart argue that high utilization changes the equation. In markets where demand is strong, drones can complete many short trips per hour, lowering the cost per delivery and reducing reliance on human drivers.
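A back-of-envelope calculation shows why utilization dominates the math. The figures below are purely hypothetical and do not come from Wing or Walmart:

```python
# Hypothetical cost model: a fixed hourly operating cost spread over
# more trips per hour is what changes the per-delivery economics.
hourly_cost = 60.0  # $/hour: aircraft amortization, oversight, energy (assumed)

for deliveries_per_hour in (1, 3, 6):
    cost_per_delivery = hourly_cost / deliveries_per_hour
    print(f"{deliveries_per_hour} trips/hour -> ${cost_per_delivery:.2f} per delivery")

# 1 trip/hour -> $60.00; 6 trips/hour -> $10.00 under these assumptions.
```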
“We believe even the smallest package deserves the speed and reliability of a great delivery service,” said Adam Woodworth, chief executive of Wing. He said working with Walmart has allowed the company to demonstrate that drone delivery can operate as part of everyday retail, not just as a premium or niche offering.
Industry analysts note that Walmart’s national footprint gives the partnership a structural advantage. Thousands of stores located close to residential neighborhoods make it easier to launch drone services without building new infrastructure. If successful, the model could pressure other retailers to accelerate their own investments in autonomous delivery.
A Glimpse of Retail’s Next Phase
The coast-to-coast rollout signals growing confidence that drone delivery can move beyond pilot programs and into mainstream commerce. While regulatory hurdles and airspace coordination remain challenges, the scale of this expansion suggests that large retailers now see drones as a practical complement to trucks, vans, and gig-economy drivers.
For consumers, the promise is simple: faster access to everyday essentials. For the retail industry, the partnership represents a test of whether autonomous delivery can reliably operate at national scale.
As Wing and Walmart extend their network from Los Angeles to Miami, the question may no longer be whether drone delivery works, but how quickly it becomes an expected part of shopping in American cities.
CES 2026 Puts Physical AI and Robots at the Center of Tech’s Next Wave
CES 2026 marked a turning point for Physical AI, as humanoids, autonomous machines, and AI-driven platforms moved from experiments to scalable, real-world systems.
CES 2026 made one thing clear: artificial intelligence is no longer just software. This year’s show was defined by Physical AI, a category where intelligence is embedded directly into machines that move, lift, drive, and operate in the real world. From humanoid robots and autonomous trucks to construction equipment and household helpers, robotics shifted from the fringes of CES to its core narrative.
What distinguished CES 2026 from previous years was not the novelty of robots, but their readiness. Across industries, companies emphasized scale, safety, and integration, signaling that Physical AI is moving beyond pilots and into sustained deployment.
From Vision to Deployment Across Industries
In construction, Doosan Bobcat showcased how AI and autonomy are reshaping worksites. Its RX3 autonomous concept loader illustrated how compact equipment can operate quietly, electrically, and with modular configurations, while AI-driven systems like Jobsite Companion and collision avoidance highlighted how intelligence is being embedded directly into machines operators already use.
Logistics and transportation saw similar momentum. Kodiak AI and Bosch presented a production-grade autonomous trucking platform designed for scale, emphasizing redundant hardware and automotive-grade components. The message was clear: autonomy is no longer confined to test routes but is being engineered for industrial reliability.
At the software and platform level, Mobileye’s acquisition of Mentee Robotics underscored how autonomy stacks developed for vehicles are converging with humanoid robotics. The deal positioned Physical AI as a shared foundation across cars and robots, built on perception, planning, and safety systems that can operate in human environments.
Humanoids Grow Up
Humanoid robots were among the most visible symbols of CES 2026’s shift toward Physical AI. Boston Dynamics, working again with Google, revealed a product-ready version of Atlas, highlighting industrial-grade specifications, multiple control modes, and AI-driven autonomy. The focus was less on acrobatics and more on repeatable, real-world work.
In the home, LG Electronics introduced its CLOiD home robot as part of a broader “Zero Labor Home” vision. Rather than a standalone gadget, CLOiD was positioned as a mobile AI hub capable of coordinating appliances, understanding routines, and performing household tasks through vision-based Physical AI.
These examples reflected a broader trend: humanoids are no longer pitched as distant futures, but as platforms designed to fit into existing environments, whether warehouses, factories, or homes.
Platforms and Chips Power the Physical AI Stack
Underpinning many of these robots were platform providers positioning themselves as the infrastructure layer for Physical AI. Qualcomm used CES to introduce its Dragonwing robotics platform and IQ10 processor, framing them as the “brain of the robot” for everything from service robots to full-size humanoids. The emphasis on power efficiency, edge AI, and safety-grade performance highlighted how critical compute platforms are becoming as robots scale.
Across the show floor, the same themes repeated: end-to-end stacks, simulation-first development, and software-defined robotics. Companies increasingly described robots as evolving systems, improved through data and deployment rather than fixed-function machines.
A Turning Point for CES and Robotics
CES has long been known for bold prototypes, but CES 2026 felt more grounded. The conversation shifted from what robots might do someday to how they are being deployed today. Partnerships, acquisitions, and production-ready platforms dominated announcements, reflecting a more mature phase of the robotics cycle.
Physical AI now sits at the intersection of several forces: advances in foundation models, cheaper and more capable hardware, and rising demand for automation across labor-constrained industries. CES 2026 captured that convergence in real time.
If earlier CES editions introduced robots as curiosities, CES 2026 presented them as infrastructure. As Physical AI moves from show floors into jobsites, roads, and homes, this year’s event may be remembered as the moment robotics stopped being a sideshow and became one of technology’s main stages.
CES 2026: Caterpillar and NVIDIA Push Physical AI Into Heavy Industry
Caterpillar and NVIDIA deepened their partnership at CES 2026, outlining how Physical AI will transform construction, mining, manufacturing, and industrial supply chains.
CES 2026 marked another milestone in the rise of Physical AI, with Caterpillar and NVIDIA unveiling an expanded collaboration aimed at reshaping heavy industry. The partnership signals how artificial intelligence is moving beyond digital workflows and into the machines, factories, and jobsites that power the global economy.
“As AI moves beyond data to reshape the physical world, it is unlocking new opportunities for innovation – from job sites and factory floors to offices,” said Joe Creed, CEO of Caterpillar. “Caterpillar is committed to solving our customers’ toughest challenges by leading with advanced technology in our machines and every aspect of business. Our collaboration with NVIDIA is accelerating that progress like never before.”
For Caterpillar, the collaboration is about embedding intelligence directly into iron. For NVIDIA, it extends its AI platforms into some of the most demanding physical environments on earth – construction zones, mines, and industrial plants – where reliability, safety, and scale matter more than novelty.
Machines Built for the AI Era
At the core of the partnership is NVIDIA’s Jetson Thor platform, which Caterpillar plans to deploy across construction, mining, and power-generation equipment. Running advanced AI models at the edge allows Cat machines to process massive volumes of sensor data in real time, enabling smarter decision-making in unpredictable environments.
This shift lays the groundwork for AI-assisted and autonomous operations at scale. Caterpillar described future machines as part of a “digital nervous system” for jobsites, where fleets continuously analyze conditions, adapt to terrain, and optimize productivity. In-cab AI features will also play a growing role, providing operators with real-time coaching, safety alerts, and performance insights tailored to specific tasks and environments.
Rather than replacing operators, Caterpillar is positioning AI as an augmentation layer – one that helps crews work faster, safer, and with greater confidence as jobsites become more complex.
Debuting the Cat AI Assistant
One of the most visible announcements at CES 2026 was the debut of the Cat AI Assistant. Designed as a proactive digital partner, the assistant integrates voice-based interaction directly into Caterpillar’s onboard and digital systems. Built using NVIDIA’s Riva speech models, it delivers natural, conversational responses while drawing on Caterpillar’s own equipment and maintenance data.
In practical terms, this means operators and fleet managers can ask questions about machine health, parts, troubleshooting, or maintenance schedules and receive context-aware guidance instantly. Inside the cab, voice activation can adjust settings, guide diagnostics, and connect users to the right tools without interrupting work.
The assistant reflects a broader trend at CES 2026: Physical AI systems are increasingly conversational, intuitive, and embedded directly into workflows rather than accessed through separate dashboards.
NVIDIA AI Factory and the Reinvention of Industrial Operations
Beyond the jobsite, Caterpillar is leveraging NVIDIA AI Factory to transform manufacturing and supply chain operations. AI Factory provides the accelerated computing infrastructure, software frameworks, and AI libraries needed to train, deploy, and continuously improve large-scale industrial AI systems.
Caterpillar is using this infrastructure to automate and optimize core manufacturing processes such as production forecasting, scheduling, and quality control. By running these workloads on AI Factory, Caterpillar can process vast datasets faster, adapt to changing demand, and improve resilience across its global production network.
A major component of this effort is the creation of physically accurate digital twins of Caterpillar factories using NVIDIA Omniverse and OpenUSD technologies. These digital environments allow teams to simulate factory layouts, test production changes, and optimize workflows before implementing them in the real world — reducing downtime, risk, and cost.
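For a flavor of what authoring such a twin involves, here is a minimal OpenUSD sketch using the public pxr Python API; the stage path and prim names are invented placeholders, not Caterpillar assets:

```python
# Minimal OpenUSD sketch: author a trivially simple "factory" stage that
# downstream tools (e.g. Omniverse) can open. Names are placeholders.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("factory_line.usda")

# A transform prim groups one assembly cell; a cube stands in for a machine.
cell = UsdGeom.Xform.Define(stage, "/FactoryLine/Cell01")
machine = UsdGeom.Cube.Define(stage, "/FactoryLine/Cell01/Machine")
machine.GetSizeAttr().Set(2.0)  # size in stage units (metric assumed)

# Moving the cell in the twin is how layout changes get tested virtually
# before anyone touches the physical floor.
UsdGeom.XformCommonAPI(cell.GetPrim()).SetTranslate((10.0, 0.0, 0.0))

stage.GetRootLayer().Save()
```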
Physical AI Moves From Concept to Infrastructure
The Caterpillar–NVIDIA collaboration fits squarely into the broader narrative of CES 2026, where Physical AI emerged as a unifying theme across robotics, autonomy, logistics, and heavy industry. From autonomous construction equipment to AI-driven factories, intelligence is becoming embedded directly into physical systems.
By combining Caterpillar’s century-long experience in industrial machinery with NVIDIA’s AI platforms and AI Factory infrastructure, the two companies are signaling that Physical AI is no longer experimental. It is becoming foundational infrastructure for how industries build, move, and power the world.
As Caterpillar CEO Joe Creed noted, AI is no longer just analyzing data – it is actively reshaping how work gets done. In heavy industry, that transformation is now moving at full speed.
Humanoid Builds HMND 01 Alpha in 7 Months Using NVIDIA Robotics Stack
London-based startup Humanoid moved from concept to a functional alpha prototype of its HMND 01 robot in seven months, compressing a development cycle that typically takes up to two years.
London-based robotics startup Humanoid has compressed the traditional hardware development timeline by moving from concept to a functional alpha prototype of its HMND 01 system in just seven months.
The milestone stands in contrast to the typical 18 to 24 months required to develop comparable humanoid or industrial robotic platforms, highlighting how simulation-first development and edge AI are reshaping robotics engineering.
The HMND 01 Alpha program includes two robot variants: a wheeled platform designed for near-term industrial deployment and a bipedal system intended primarily for research and future service or household applications.
Both platforms are currently undergoing field tests and proof-of-concept demonstrations, including a recent industrial evaluation with automotive supplier Schaeffler.
At the center of Humanoid’s accelerated development cycle is a tightly integrated software and hardware stack built on NVIDIA robotics technologies.
Edge Compute and Foundation Models at the Core
The HMND 01 Alpha robots use NVIDIA Jetson Thor as their primary edge computing platform. By consolidating compute, sensing, and control onto a single high-performance system, Humanoid simplified internal architecture, wiring, manufacturability, and field serviceability.
Jetson Thor allows the robots to run large robotic foundation models directly on-device rather than relying on cloud processing. This enables real-time execution of vision-language-action models that support perception, reasoning, and task execution in dynamic environments.
Humanoid reported that training these models using NVIDIA’s AI infrastructure has reduced post-training processing times to just a few hours. This faster turnaround significantly shortens the loop between data collection, model refinement, and deployment on physical robots, allowing the company to iterate at software speed rather than hardware speed.
Simulation-First Development and Hardware Optimization
Humanoid’s workflow is built around a simulation-to-reality pipeline using NVIDIA Isaac Lab and Isaac Sim. Engineers use Isaac Lab to train reinforcement learning policies for locomotion and manipulation, while Isaac Sim provides a high-fidelity environment for testing navigation, perception, and full-body control.
Through a custom hardware-in-the-loop validation system, Humanoid created digital twins that mirror the software interfaces of the physical robots. This allows middleware, control logic, teleoperation, and SLAM systems to be tested virtually before deployment on real hardware. According to the company, new control policies can be trained from scratch and deployed onto physical robots within roughly 24 hours.
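The shape of such a simulation-first workflow can be sketched generically. The snippet below uses a gymnasium-style environment and a stand-in policy object, not the actual Isaac Lab or Isaac Sim APIs:

```python
# Schematic sim-first RL loop; `policy` with act/update methods is a
# generic stand-in for an RL algorithm, not NVIDIA's API.
import gymnasium as gym

def train(env: gym.Env, policy, episodes: int = 500):
    for _ in range(episodes):
        obs, _ = env.reset()
        done = False
        while not done:
            action = policy.act(obs)
            next_obs, reward, terminated, truncated, _ = env.step(action)
            policy.update(obs, action, reward, next_obs)  # learning step
            obs, done = next_obs, terminated or truncated
    # Because the digital twin mirrors the robot's software interfaces,
    # a policy trained this way can be deployed on hardware unchanged.
    return policy
```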
Simulation also plays a direct role in mechanical engineering decisions. During development of the bipedal robot, Humanoid evaluated six different leg configurations in simulation, analyzing torque requirements, joint stability, and mass distribution before committing to physical prototypes.
Engineers also optimized actuator selection, sensor placement, and camera positioning using simulated perception data, reducing the risk of blind spots and interference in industrial settings.
These physics-based simulations contributed to the robots’ performance during early industrial trials and helped avoid costly redesigns later in the development cycle.
Toward Software-Defined Robotics Standards
Humanoid views HMND 01 as part of a broader shift toward software-defined robotics. The company is working with NVIDIA to move away from legacy industrial communication standards and toward modern networking architectures designed for AI-enabled robots.
“NVIDIA’s open robotics development platform helps the industry move past legacy industrial communication standards and make the most of modern networking capabilities,” said Jarad Cannon, chief technology officer of Humanoid.
He added that the company is collaborating on a new robotics networking system built on Jetson Thor and the Holoscan Sensor Bridge, with the goal of enabling more flexible and scalable robot architectures.
Founded in 2024 by Artem Sokolov, Humanoid has grown to more than 200 engineers and researchers across offices in London, Boston, and Vancouver. The company reports 20,500 pre-orders, six completed proof-of-concept projects, and three active pilot programs.
While the bipedal HMND 01 remains focused on research and long-term service robotics, the wheeled variant is positioned for near-term industrial use. Humanoid’s strategy emphasizes early deployment in operational environments to gather real-world data and continuously refine its software-defined architecture, signaling a shift in how humanoid and industrial robots are developed and brought to market.
CES 2026: Doosan Bobcat Unveils RX3 Autonomous Loader and AI Jobsite Tech
Doosan Bobcat introduced the RX3 autonomous concept loader and a suite of AI-powered jobsite technologies at CES 2026, signaling a shift toward smart, electrified construction equipment.
Doosan Bobcat has unveiled a new generation of autonomous and AI-enabled construction technologies at CES 2026, headlined by the RX3 autonomous concept loader and a growing ecosystem of intelligent jobsite systems. The announcements reflect the company’s push to integrate autonomy, electrification, and artificial intelligence into compact construction equipment designed for real-world deployment.
Presented during CES Media Day in Las Vegas, the technologies are part of what Bobcat describes as a “Smart Construction Jobsite,” where machines assist operators, reduce complexity, and improve safety and productivity. While several systems remain in concept or prototype form, the company emphasized that many are moving steadily toward commercialization.
RX3 Autonomous Concept Loader
The Bobcat RogueX3, or RX3, represents the third generation of Bobcat’s autonomous loader concept. The electric-powered machine is designed to match the size and footprint of existing manned Bobcat equipment, allowing it to operate in current jobsites without major workflow changes. It uses tracked mobility to provide traction across uneven or challenging surfaces while operating quietly and without emissions.
A key feature of the RX3 is its modular design. The platform can be configured with or without a cab, equipped with wheels or tracks, and paired with different lift arms depending on the task. Bobcat said the concept could ultimately support multiple powertrains, including electric, diesel, hybrid, and hydrogen, offering flexibility as energy infrastructure evolves.
“For nearly 70 years, Bobcat has led the compact equipment industry by solving real problems for real people,” said Scott Park, vice chairman and CEO of Doosan Bobcat. “As jobsites become more complex, we’re responding with intelligent systems that help people accomplish more, faster, and smarter.”
Bobcat is also working with Agtonomy as a technology partner, using its perception and fleet management software to enable autonomous and semi-autonomous operation in agricultural and construction contexts.
AI Comes Into the Cab
Alongside the RX3, Doosan Bobcat introduced the Bobcat Jobsite Companion, described as the compact equipment industry’s first AI voice control system. Powered by a proprietary large language model running entirely onboard, the system allows operators to manage more than 50 machine functions using natural voice commands.
Operators can adjust attachment settings, engine speed, lighting, and other functions without taking their hands off the controls. Because the system does not rely on cloud connectivity, it can respond in real time even in remote or connectivity-limited jobsites.
“Jobsite Companion lowers the barrier to entry for new operators while helping experienced professionals work faster and more precisely,” said Joel Honeyman, vice president of global innovation at Doosan Bobcat.
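Conceptually, such a system needs a dispatch layer between parsed voice intents and machine functions. The sketch below is a hypothetical illustration of that layer, not Bobcat's implementation; all names are invented:

```python
# Hypothetical dispatch layer: an onboard model parses speech into an
# intent, and a registry maps intents to machine functions.
from typing import Callable

def set_engine_speed(rpm: int) -> None:
    print(f"engine speed -> {rpm} rpm")

def set_work_lights(on: bool) -> None:
    print(f"work lights -> {'on' if on else 'off'}")

# Registry of voice-controllable functions (the article cites 50+).
COMMANDS: dict[str, Callable] = {
    "engine_speed": set_engine_speed,
    "work_lights": set_work_lights,
}

def dispatch(intent: str, **params) -> None:
    """Route a parsed intent, e.g. from an onboard LLM, to a function."""
    handler = COMMANDS.get(intent)
    if handler is None:
        raise ValueError(f"unknown command: {intent}")
    handler(**params)

# Example: "raise engine speed to 1800" -> parsed intent plus parameters.
dispatch("engine_speed", rpm=1800)
dispatch("work_lights", on=True)
```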
Bobcat also announced Service.AI, an AI-powered support platform designed for dealers and technicians. The system provides instant access to diagnostics, repair manuals, service histories, and troubleshooting guidance, aiming to reduce downtime and speed up maintenance.
Safety, Displays, and Energy Systems
Doosan Bobcat showcased several additional technologies that support its smart jobsite vision. A radar-based collision warning and avoidance system uses imaging radar to monitor surroundings and can automatically slow or stop a machine to prevent accidents.
The company also revealed an advanced display concept using transparent MicroLED screens integrated into cab windows. These displays overlay 360-degree camera views, machine performance data, alerts, and asset tracking directly into the operator’s field of vision.
Powering these systems is the Bobcat Standard Unit Pack, or BSUP, a modular and rugged battery system designed for harsh construction environments. The fast-charging packs are scalable across Bobcat’s equipment lineup and are intended to support broader electrification efforts, including potential use by other manufacturers.
Toward a Smarter Jobsite
Doosan Bobcat said the technologies unveiled at CES 2026 form an integrated ecosystem rather than isolated features. By combining AI, autonomy, electrification, and connectivity, the company aims to redefine how compact equipment is operated and supported.
“We’ll combine AI, autonomy, electrification, and connectivity to create new jobsite standards,” Park said during the Media Day presentation.
While the RX3 and several systems remain concept-stage, Bobcat’s messaging at CES emphasized near-term impact rather than distant vision. The company framed these developments as practical steps toward safer, more productive jobsites where intelligent machines actively support human workers.
CES 2026: Mobileye to Acquire Mentee Robotics for $900M to Accelerate Physical AI Push
Mobileye agreed to acquire humanoid robotics startup Mentee Robotics for $900 million, expanding its autonomy technology from vehicles into Physical AI and general-purpose humanoid robots.
Mobileye has agreed to acquire Mentee Robotics in a $900 million transaction that marks a major strategic shift beyond autonomous driving and into humanoid robotics and Physical AI. Announced during CES 2026 in Las Vegas, the deal positions Mobileye to apply its autonomy technology to machines designed to work directly alongside humans in physical environments.
The acquisition combines Mobileye’s large-scale perception, planning, and safety systems with Mentee’s vertically integrated humanoid robot platform. Together, the companies aim to build general-purpose robots capable of understanding context, inferring intent, and executing tasks safely and autonomously in real-world settings such as factories, warehouses, and industrial facilities.
From Vehicle Autonomy to Embodied Intelligence
Mobileye’s core business has been built around vision-based autonomy for vehicles, with systems designed to interpret complex scenes, predict behavior, and make safety-critical decisions. Those same challenges increasingly define humanoid robotics, where machines must navigate spaces built for people while interacting with objects, equipment, and coworkers.
The company said the acquisition represents a decisive move toward Physical AI, a class of systems that not only perceive the world but also act within it reliably and at scale. Mobileye’s autonomy stack has evolved beyond navigation toward context-aware and intent-aware reasoning, providing a foundation for robots that can operate productively without constant supervision.
The move also reflects Mobileye’s effort to diversify as competition intensifies in autonomous driving and commercialization timelines extend. By expanding into humanoid robotics, the company gains exposure to a parallel growing market where autonomy software may become the primary differentiator.
Mentee’s Humanoid Platform and Learning Approach
Founded four years ago, Mentee Robotics has developed a third-generation humanoid robot designed for scalable deployment rather than laboratory demonstrations. The platform is vertically integrated, with in-house development of hardware, embedded systems, and AI software.
Mentee’s approach emphasizes rapid learning and adaptability. Its robots are trained primarily in simulation, reducing reliance on large-scale real-world data collection and minimizing the gap between simulated and physical performance. The system is designed to acquire new skills through limited human demonstrations and intent cues, rather than continuous teleoperation.
This learning framework enables autonomous, end-to-end task execution, including locomotion, navigation, and safe manipulation of rigid objects. In demonstrations, Mentee robots have shown the ability to perform multi-step material handling tasks with stability and accuracy, supporting the company’s focus on real-world utility.
Deal Structure and Commercial Roadmap
Under the terms of the agreement, Mobileye will pay $900 million for Mentee Robotics, consisting of approximately $612 million in cash and up to 26.2 million shares of Mobileye Class A stock, subject to adjustments. The transaction is expected to close in the first quarter of 2026, pending customary approvals.
Mentee will operate as an independent unit within Mobileye, allowing continuity while gaining access to Mobileye’s AI infrastructure and production expertise. First customer proof-of-concept deployments are planned for 2026, with autonomous operation as a core requirement. Series production and broader commercialization are targeted for 2028.
Mobileye said the acquisition will modestly increase operating expenses in 2026 but aligns with its long-term growth strategy.
CES 2026 and the Rise of Physical AI
Physical AI emerged as a central theme at CES 2026, with humanoid robots, service robots, and embodied AI systems moving beyond concept stages. The Mobileye-Mentee announcement underscored how autonomy is becoming a shared foundation across vehicles and robots, rather than a domain-specific technology.
Mobileye highlighted strong momentum in its core automotive business, citing a $24.5 billion revenue pipeline over the next eight years. Company executives framed the acquisition as a way to extend that success into a second transformative market without abandoning its safety-first philosophy.
“Today marks a new chapter for robotics and automotive AI,” said Mobileye President and CEO Amnon Shashua. “By combining Mentee’s breakthroughs in humanoid robotics with Mobileye’s expertise in autonomy and productization, we have an opportunity to lead Physical AI at a global scale.”
Mentee CEO Lior Wolf said the partnership accelerates the company’s mission to deliver safe, cost-effective humanoid robots capable of meaningful work in human environments.
As CES 2026 made clear, the race to define Physical AI is accelerating. With this acquisition, Mobileye signals that the next phase of autonomy may unfold not just on roads, but across factories, warehouses, and workplaces worldwide.
CES 2026: Qualcomm Unveils Dragonwing Robotics Platform to Power Physical AI
Qualcomm introduced a comprehensive robotics technology stack at CES 2026, unveiling new processors and partnerships aimed at scaling Physical AI from service robots to full-size humanoids.
Qualcomm has expanded its ambitions beyond chips for smartphones and vehicles, unveiling a full-stack robotics platform at CES 2026 designed to power the next generation of Physical AI. The company introduced a comprehensive architecture that integrates hardware, software, and AI models to support robots ranging from household assistants to industrial autonomous mobile robots (AMRs) and full-size humanoids.
The announcement reflects a growing industry shift toward general-purpose robotics, where machines are expected to reason, adapt, and act safely in human environments. Qualcomm positioned its new platform as a bridge between laboratory prototypes and deployable systems, emphasizing power efficiency, scalability, and safety-grade performance as key enablers.
Dragonwing IQ10 and the “Brain of the Robot”
At the center of Qualcomm’s robotics push is the Dragonwing IQ10 Series, its latest premium-tier processor designed specifically for advanced robotics workloads. The company describes IQ10 as a high-performance, energy-efficient system-on-chip capable of serving as the primary compute engine for humanoid robots and sophisticated AMRs.
Built on Qualcomm’s experience in edge AI and low-power computing, the processor is optimized for mixed-criticality systems where perception, planning, and control must run simultaneously with strict safety requirements. The IQ10 expands Qualcomm’s existing robotics roadmap, which already supports a range of commercial robots through earlier Dragonwing processors.
The architecture enables advanced perception and motion planning using end-to-end AI models, including vision-language and vision-language-action systems. These capabilities are intended to support generalized manipulation, natural human-robot interaction, and continuous learning across diverse environments.
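Qualcomm has not published the programming model for these systems, but a vision-language-action pipeline reduces, in outline, to a perception-to-actuation loop: camera frames and a natural-language instruction go into one network, and joint-level actions come out at a fixed control rate. The sketch below is a hypothetical illustration of that loop; none of the classes correspond to Qualcomm APIs.

```python
# Illustrative vision-language-action (VLA) control loop.
# All classes are hypothetical stand-ins, not Qualcomm's SDK.
import time
from dataclasses import dataclass

@dataclass
class Action:
    joint_targets: list  # one target per actuated joint

class VLAModel:
    """Stand-in for an on-device vision-language-action network."""
    def predict(self, frame, instruction):
        # A real model would run quantized inference on the NPU here.
        return Action(joint_targets=[0.0] * 12)

class Camera:
    def read(self):
        return b"frame"  # placeholder for an image tensor

class Robot:
    def __init__(self, steps=3):
        self.steps = steps
    def task_done(self):
        self.steps -= 1
        return self.steps < 0
    def apply(self, action):
        print("applying joints:", action.joint_targets[:3], "...")

def control_loop(camera, robot, model, instruction, hz=10.0):
    while not robot.task_done():
        frame = camera.read()                       # perception input
        action = model.predict(frame, instruction)  # end-to-end inference
        robot.apply(action)                         # actuation
        time.sleep(1.0 / hz)                        # fixed-rate scheduling

control_loop(Camera(), Robot(), VLAModel(), "pick up the red tote")
```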
From Prototypes to Scalable Physical AI
Qualcomm framed its robotics platform as an end-to-end solution rather than a single chip. The architecture combines heterogeneous edge computing, AI acceleration, machine learning operations, and a data flywheel for collecting and retraining models. Developer tools and software frameworks are designed to shorten development cycles and reduce the complexity of deploying robots at scale.
This approach targets what Qualcomm described as the “last-mile” problem in robotics, where promising demonstrations often fail to translate into reliable, mass-produced systems. By providing a unified stack that scales across form factors, Qualcomm aims to accelerate adoption in retail, logistics, manufacturing, and service robotics.
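The “data flywheel” in that stack is, in outline, a closed loop: deployed robots log task episodes, curated logs are used to retrain the model, and the updated model is pushed back to the fleet. A minimal sketch of the cycle, with every name hypothetical:

```python
# Hypothetical data-flywheel cycle: fleet logs -> curation ->
# retraining -> redeploy. All names are illustrative.

class Robot:
    def __init__(self):
        self._logs = [{"episode": 1, "quality": 0.9},
                      {"episode": 2, "quality": 0.4}]
        self.policy = "policy-v1"
    def drain_logs(self):
        logs, self._logs = self._logs, []
        return logs
    def update(self, policy):
        self.policy = policy

def retrain(base_policy, episodes):
    # Placeholder: a real pipeline would fine-tune on the episodes.
    return f"{base_policy}+ft({len(episodes)} eps)"

def flywheel_cycle(fleet, base_policy):
    episodes = [ep for r in fleet for ep in r.drain_logs()]
    curated = [ep for ep in episodes if ep["quality"] >= 0.8]  # drop noisy runs
    new_policy = retrain(base_policy, curated)
    for r in fleet:
        r.update(new_policy)  # push to every robot in the fleet
    return new_policy

print(flywheel_cycle([Robot(), Robot()], "policy-v1"))
```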
“As pioneers in energy-efficient, high-performance Physical AI systems, we know what it takes to make complex robotics systems perform reliably, safely, and at scale,” said Nakul Duggal, executive vice president and group general manager at Qualcomm Technologies. He added that the company’s focus is on moving intelligent machines out of controlled environments and into real-world use.
Partnerships Across the Robotics Ecosystem
Qualcomm also highlighted a growing network of robotics partners adopting its platform. The company is working with manufacturers and integrators including Advantech, APLUX, AutoCore, Booster, Robotec.ai, and VinMotion to bring deployment-ready robots to market.
Humanoid robotics company Figure is collaborating with Qualcomm to define next-generation compute architectures as it scales its humanoid platforms. Brett Adcock, founder and chief executive of Figure, said Qualcomm’s combination of compute performance and power efficiency is a key building block in realizing general-purpose humanoid robots designed for industrial work.
Qualcomm said its Dragonwing processors already power several humanoid platforms in development, and discussions are underway with major industrial automation players on future robotics solutions.
CES 2026 Demonstrations and Industry Direction
At CES 2026, Qualcomm showcased robots powered by its Dragonwing processors, including VinMotion’s Motion 2 humanoid and Booster’s K1 Geek. The company also demonstrated a commercially available robotics development kit designed for rapid prototyping and deployment across multiple applications.
Additional demonstrations focused on teleoperation tools and AI data pipelines that enable robots to continuously acquire new skills. These capabilities underscore Qualcomm’s emphasis on lifelong learning and adaptability as defining characteristics of Physical AI.
The CES debut positions Qualcomm as a foundational technology provider for embodied intelligence, competing not just with chipmakers but with full-stack autonomy platforms. As humanoids and service robots move closer to commercial deployment, Qualcomm is betting that power-efficient, safety-grade compute will be a decisive advantage.
With Physical AI emerging as a central theme at CES 2026, Qualcomm’s announcement signals that the race to define the underlying infrastructure for intelligent machines is accelerating, and that robotics is becoming a core pillar of the company’s long-term strategy.
CES 2026: Samsung Unveils ‘Companion to AI Living’ Vision for Everyday AI
Samsung unveiled its “Companion to AI Living” vision at CES 2026, outlining how AI will connect entertainment, home appliances, health, and services into a unified ecosystem.
Samsung Electronics opened CES 2026 with a broad statement about the future of consumer technology, positioning artificial intelligence not as a feature but as the foundation of everyday living. At its annual First Look event in Las Vegas, the company introduced its “Companion to AI Living” vision, outlining how AI will connect devices, services, and experiences across the home.
Rather than focusing on a single product category, Samsung framed AI as a unifying layer across its ecosystem, spanning displays, home appliances, mobile devices, wearables, and services. Company executives emphasized that scale, connectivity, and on-device intelligence allow Samsung to move beyond basic automation toward more contextual and personalized experiences.
AI as the Core of the Entertainment Experience
Samsung’s display business showcased how AI is reshaping entertainment into a more interactive and lifestyle-oriented experience. The centerpiece of the lineup was a 130-inch Micro RGB display, which Samsung described as a major leap in screen size and color accuracy, driven by independent red, green, and blue light sources.
Supporting this hardware is Vision AI Companion, an AI system designed to act as an entertainment assistant rather than a passive interface. The system can recommend content, adjust sound and picture settings, and respond to natural language requests across Samsung’s 2026 TV lineup. AI-driven modes tailor experiences for sports, movies, and gaming, allowing users to fine-tune crowd noise, commentary, or background audio through voice commands.
Samsung also highlighted how Vision AI Companion extends beyond viewing. Users can ask for recipes based on food shown on screen, receive music recommendations to match their mood, or send content and instructions to other connected devices throughout the home. The goal, Samsung said, is to turn displays into active participants in daily routines.
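Samsung has not published Vision AI Companion’s interface, but the behavior it describes amounts to mapping recognized utterances onto device settings. A hypothetical sketch of such an intent handler for the sports audio mix, where the phrases, channel names, and ranges are all invented:

```python
# Hypothetical voice-intent handler for an AI-driven audio mix.
# Not Samsung's API; intents, channels, and ranges are illustrative.

AUDIO_MIX = {"crowd": 0.5, "commentary": 0.5, "background": 0.5}

INTENTS = {
    "lower crowd noise":     ("crowd", -0.2),
    "boost the commentary":  ("commentary", +0.2),
    "mute background music": ("background", -1.0),
}

def handle_utterance(utterance):
    """Match a recognized utterance to a mix adjustment and apply it."""
    for phrase, (channel, delta) in INTENTS.items():
        if phrase in utterance.lower():
            level = AUDIO_MIX[channel] + delta
            AUDIO_MIX[channel] = min(1.0, max(0.0, level))  # clamp to [0, 1]
    return AUDIO_MIX

print(handle_utterance("Please lower crowd noise during the match"))
```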
Smart Homes That Anticipate Daily Needs
In the home appliance segment, Samsung presented AI-powered devices as companions that reduce friction in everyday tasks. Executives noted that SmartThings now serves more than 430 million users, giving Samsung a large data foundation to personalize experiences across households.
The Family Hub refrigerator remains central to this strategy. With an upgraded AI Vision system built on Google Gemini, the refrigerator can more accurately recognize and track food items, support meal planning, and automate grocery-related decisions. Features such as recipe recommendations, video-to-recipe conversion, and weekly food reports are designed to simplify decision-making rather than add complexity.
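The Family Hub integration itself is proprietary, but the public Gemini API gives a sense of how image-based food recognition can be prototyped. A minimal sketch using Google’s google-generativeai Python SDK follows; the model name, prompt, and file path are assumptions, not details of Samsung’s system.

```python
# Prototype food recognition against the public Gemini API.
# This is a generic sketch, not Samsung's Family Hub integration.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # substitute a real key
model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model name

def list_food_items(photo_path):
    """Ask the model to enumerate food items visible in a fridge photo."""
    image = Image.open(photo_path)
    prompt = "List every food item visible in this photo, one per line."
    response = model.generate_content([prompt, image])
    return response.text

print(list_food_items("fridge_shelf.jpg"))
```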
Samsung also showcased updates across laundry and home care. The Bespoke AI Laundry Combo removes the need to transfer loads between machines, while the latest AirDresser model uses air and steam to reduce wrinkles automatically. In floor care, the Bespoke AI Jet Bot Steam Ultra combines vision, 3D sensing, and conversational voice control to clean, monitor pets, and detect unusual activity while homeowners are away.
From Reactive Care to Proactive Wellbeing
Samsung’s long-term vision extends into digital health, where AI shifts care from reactive responses to proactive prevention. By connecting phones, wearables, appliances, and home devices, Samsung aims to detect early signs of health issues and provide personalized guidance for sleep, exercise, and nutrition.
The company described future scenarios in which connected devices suggest meals aligned with health goals, flag unusual patterns in mobility or sleep, and enable secure sharing of health data with providers through integrated platforms. Samsung also highlighted ongoing research into dementia detection, using wearables to identify subtle changes in movement, speech, and engagement over time.
Security remains a key pillar of this ecosystem. Samsung emphasized that Knox and Knox Matrix underpin its AI strategy, protecting user data across devices and continuously adapting to emerging AI-related risks.
By presenting AI as a companion woven into daily life rather than a collection of isolated tools, Samsung used CES 2026 to signal a shift toward more holistic, software-driven experiences. The company’s message was clear: the next phase of consumer technology will be defined not by individual devices, but by how intelligently they work together.
CES 2026: Boston Dynamics and Google Reunite to Power Next-Gen Atlas Humanoid
Boston Dynamics and Google have renewed their collaboration at CES 2026, combining advanced AI with the next generation of the Atlas humanoid robot.
Boston Dynamics and Google have reunited to showcase a new phase in humanoid robotics, unveiling progress on the next-generation Atlas robot at CES 2026. The collaboration brings together Boston Dynamics’ expertise in dynamic robot hardware with Google’s latest advances in artificial intelligence, signaling a renewed push toward more capable, adaptable humanoid systems.
The updated Atlas platform reflects a shift away from purely mechanical demonstrations toward robots that can understand context, plan actions, and learn from experience. At CES, the companies highlighted how AI-driven perception and decision-making are being integrated directly into Atlas, moving the humanoid closer to real-world industrial and commercial applications.
A Humanoid Built for Industrial Tasks
The new Atlas stands approximately 6.2 feet tall and features a reach of about 7.5 feet, allowing it to operate effectively in warehouses, factories, and logistics facilities designed for human workers. Its fully electric architecture supports quieter operation and improved energy efficiency compared to earlier hydraulic designs.
Atlas is capable of lifting payloads of up to roughly 110 pounds, enabling it to handle heavy objects such as totes, containers, and industrial components. The robot incorporates fully rotational joints across its body and offers a total of 56 degrees of freedom, supporting complex, whole-body movements and precise manipulation.
A newly designed four-fingered hand improves dexterity and grasp versatility, allowing Atlas to interact with a wide range of objects. The system is sealed to an industrial IP67 standard, providing protection against dust and water and making it suitable for harsh operating environments.
Power, Autonomy, and Control
Battery life for the new Atlas is rated at approximately four hours under typical operation. The robot is designed to swap its own battery packs without human assistance, reducing downtime and enabling longer deployment cycles in industrial settings.
Boston Dynamics highlighted multiple modes of operation for Atlas. The robot can function fully autonomously using AI-driven perception and planning, be remotely operated through a virtual reality interface, or be supervised and controlled using a tablet-based system. This flexibility allows customers to choose different levels of autonomy depending on task complexity and operational requirements.
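Boston Dynamics has not published Atlas’s control API, but the three modes it describes map naturally onto a dispatcher that routes command authority between the planner and a human operator. A hypothetical sketch:

```python
# Hypothetical dispatcher across Atlas-style autonomy modes.
# Boston Dynamics has not published this API; names are invented.
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()  # AI-driven perception and planning
    VR_TELEOP = auto()   # operator drives via a VR interface
    SUPERVISED = auto()  # tablet oversight; human approves each plan

class Planner:
    def plan(self):
        return "move_tote_to_shelf"

class Operator:
    def vr_input(self):
        return "mirror_operator_pose"
    def approve(self, cmd):
        return True  # a real UI would prompt the supervisor

def next_command(mode, planner, operator):
    if mode is Mode.AUTONOMOUS:
        return planner.plan()
    if mode is Mode.VR_TELEOP:
        return operator.vr_input()
    cmd = planner.plan()  # SUPERVISED: plan, then gate on approval
    return cmd if operator.approve(cmd) else None

print(next_command(Mode.SUPERVISED, Planner(), Operator()))
```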
By integrating Google’s AI technologies, Atlas gains enhanced perception, object recognition, and decision-making capabilities. The robot can interpret complex environments, adjust its actions in real time, and learn from repeated interactions rather than relying solely on predefined scripts.
Renewed Partnership and Market Implications
The collaboration marks a symbolic reunion between Boston Dynamics and Google, which previously worked together during Google’s ownership of the robotics firm more than a decade ago. This time, the focus is firmly on combining mature hardware with scalable AI systems that can support sustained commercial deployment.
Boston Dynamics positioned Atlas as a platform designed to operate within existing human-built environments without requiring major infrastructure changes. The goal is to reduce friction between robots and real-world workplaces, accelerating adoption in logistics, manufacturing, and material handling.
While the companies did not announce deployment timelines or customers at CES, the presentation signaled confidence that humanoid robots are moving closer to practical use. Challenges remain around cost, long-term durability, and large-scale fleet management, but the updated Atlas reflects a clear shift toward production readiness.
The CES 2026 debut suggests that Boston Dynamics and Google see humanoid robots as a cornerstone technology for the next generation of automation. By combining advanced mechanics with AI-driven autonomy, the partners aim to move Atlas beyond spectacle and into everyday industrial operations.