Humanoid Robots May Soon Challenge Human Sprint Records

Advances in humanoid robotics are pushing machines closer to elite human sprinting speeds. Researchers say new systems may soon approach or surpass the pace of the fastest human runners.

By Daniel Krauss | Edited by Kseniia Klichova
Humanoid robot prototype runs on an athletics track during testing, illustrating how advances in robotics locomotion and AI control are pushing machines closer to human sprinting performance. Photo: Kseniia Klichova / RobotsBeat

Humanoid robots are approaching a milestone that until recently belonged exclusively to elite human athletes: running as fast as the world’s best sprinters.

Recent advances in robotics suggest that machines capable of matching – or potentially surpassing – human sprint speeds may soon become a reality. According to Wang Xingxing, founder of Chinese robotics company Unitree Robotics, improvements in mechanical design and AI-driven control systems are rapidly closing the gap between human and robotic running performance.

Speaking at the Yabuli China Entrepreneurs Forum, Wang said humanoid robots could eventually complete the 100-meter dash in under 10 seconds, a benchmark associated with world-class sprinters.

While no humanoid robot has yet achieved such performance in real-world conditions, research prototypes are moving closer to that level.

The Engineering Race for Speed

The latest developments illustrate how quickly humanoid locomotion technology is advancing.

Earlier this year, researchers from Zhejiang University and Shanghai-based JingShi Technology unveiled a full-size humanoid robot called “Bolt” capable of reaching speeds of roughly 10 meters per second. That performance places it within striking distance of the pace achieved by Olympic champion Usain Bolt during his record-setting 100-meter sprint.

Bolt’s 2009 world record of 9.58 seconds corresponds to an average speed of roughly 10.4 meters per second (100 meters divided by 9.58 seconds), with peak velocities during the race reported at more than 12 meters per second.

Closing that remaining gap requires solving a series of complex engineering problems.

Unlike wheeled robots, humanoid machines must maintain stability while balancing on two legs during rapid acceleration and stride cycles. High-speed running demands precise synchronization between sensors, actuators, and control algorithms, ensuring that each step maintains balance while maximizing propulsion.

Small errors in timing or force distribution can destabilize the robot, making high-speed locomotion one of the most difficult challenges in humanoid robotics.
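
To make that concrete, the sketch below reduces the problem to a toy proportional-derivative controller that pushes torso pitch back toward vertical. The gains and the single-joint model are illustrative assumptions only, not any particular robot's control stack.

```python
# Illustrative only: a tiny proportional-derivative (PD) balance rule of the
# kind used, in far more elaborate form, to keep a torso upright while running.
def pd_torque(pitch_rad, pitch_rate, kp=300.0, kd=30.0):
    """Corrective torque that drives torso pitch back toward vertical."""
    return -kp * pitch_rad - kd * pitch_rate

# A robot leaning 0.1 rad forward and still tipping at 0.5 rad/s gets a strong
# restoring torque; the sign flips once it overshoots the other way.
print(pd_torque(0.10, 0.5))    # -45.0
print(pd_torque(-0.05, -0.2))  # 21.0
```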

Beyond Athletic Benchmarks

Although comparisons to Olympic sprinters capture attention, the implications extend beyond sports.

High-speed locomotion is closely tied to broader advances in embodied AI, the field focused on creating machines that can move, perceive, and interact with the physical world.

Robots capable of running quickly and stably would be better suited for tasks such as search-and-rescue operations, disaster response, and industrial inspections where mobility across complex terrain is essential.

At the same time, researchers acknowledge that achieving elite athletic performance does not necessarily translate into practical autonomy.

Wang noted that one of the biggest hurdles for humanoid robotics remains generalization. Many robots perform well in controlled environments or under carefully trained conditions but struggle when encountering unpredictable terrain or dynamic obstacles.

Bridging that gap between laboratory performance and real-world reliability remains a central challenge for the industry.

Even so, the rapid progress in humanoid locomotion highlights how quickly robotics is evolving. As advances in hardware, control algorithms, and machine learning converge, machines are steadily approaching physical capabilities that once seemed uniquely human.

Whether robots ultimately break sprint records may be less important than what the race represents: a new phase in which artificial intelligence is no longer confined to digital tasks but increasingly expressed through physical movement.


McDonald’s Tests Humanoid Robots in Shanghai Restaurant

A McDonald’s restaurant in Shanghai is testing humanoid robots for customer-facing roles, offering a glimpse into how service robotics could gradually enter everyday retail environments.

By Laura Bennett | Edited by Kseniia Klichova
McDonald’s restaurant exterior as the global fast-food chain experiments with robotics and automation in select locations, including humanoid service robots in pilot programs. Photo: McDonald’s

A McDonald’s restaurant in Shanghai has begun testing humanoid robots in customer-facing roles, offering a visible example of how service robotics may gradually enter everyday retail environments.

Videos circulating online show bipedal robots greeting customers, interacting with diners, and assisting with basic front-of-house tasks. The machines are supplied by Chinese robotics company Keenon Robotics, which has previously developed delivery robots and service robots used in restaurants and hotels.

While the robots in the trial remain relatively limited in capability, the experiment reflects a broader shift underway in the hospitality industry as companies explore automation to address labor challenges and rising operational costs.

The test also highlights how robotics is increasingly moving from industrial environments into highly visible consumer spaces.

Service Robots Move into Restaurants

Restaurants have long experimented with automation, though most deployments so far have relied on specialized machines rather than humanoid systems.

Robotic kitchen equipment, autonomous floor cleaners, and mobile delivery robots are already used in some restaurants and hotels. Keenon Robotics itself has deployed wheeled delivery robots in thousands of hospitality venues across Asia.

Humanoid robots introduce a different approach. Because they are designed with a human-like form, they can interact with spaces built for people, including counters, seating areas, and walkways. In theory, that allows them to perform a wider range of tasks without requiring significant redesign of restaurant layouts.

In the Shanghai pilot, the robots appear primarily focused on greeting guests and providing entertainment rather than handling complex food preparation or order management. Their presence functions as both a service experiment and a marketing attraction, drawing curiosity from customers.

For large restaurant chains, even incremental automation could eventually reduce pressure on staffing for repetitive customer-service roles.

A Labor Puzzle for Service Industries

The experiment also reflects a broader labor dynamic that has emerged in China and other major economies.

While youth unemployment remains elevated in some regions, service industries often report difficulty filling low-wage or repetitive roles. Restaurant work, which frequently involves long hours and physically demanding tasks, has become less attractive to younger workers.

At the same time, China’s population is aging, shrinking the available workforce over the long term.

These overlapping trends have encouraged companies to explore robotics as a way to fill operational gaps rather than fully replace human workers.

In practice, most robotics deployments in hospitality are likely to remain hybrid systems for the foreseeable future, with human employees handling complex interactions and decision-making while robots take on routine or customer-engagement roles.

The Long Road to Fully Automated Restaurants

Despite the attention generated by humanoid robots, the technology remains far from running an entire restaurant.

Current systems still struggle with tasks that humans perform effortlessly, such as dexterous food preparation, nuanced customer interaction, and navigating crowded spaces during peak hours.

As a result, industry observers expect service robotics to evolve gradually. Robots may first appear as greeters, food runners, or cleaning assistants before taking on more complex responsibilities.

For companies like McDonald’s, pilot programs provide a way to test how customers respond to robots while evaluating their reliability in real operational settings.

Even limited deployments in such high-visibility environments suggest a broader trend: as robotics technology improves, machines may increasingly appear not only in factories and warehouses but also in the everyday places where people work, shop, and eat.

US Lawmakers Raise Security Concerns over Chinese AI and Robotics Systems

U.S. lawmakers are examining potential security risks posed by Chinese AI and robotics systems entering global markets. A congressional hearing highlighted concerns over infrastructure exposure and technology supply chains.

By Rachel Whitman | Edited by Kseniia Klichova
A humanoid robot developed by Chinese robotics companies performs a demonstration as U.S. lawmakers examine potential security risks tied to foreign AI and robotics technologies. Photo: Unitree Robotics

As robotics and artificial intelligence systems move deeper into industrial and public infrastructure, U.S. policymakers are beginning to examine the national security implications of foreign-developed technologies operating inside critical systems.

A congressional hearing held by the House Subcommittee on Cybersecurity and Infrastructure Protection this week focused on the potential risks posed by AI, robotics, and autonomous sensing technologies developed by companies linked to China. Lawmakers raised concerns that rapidly advancing robotics platforms, including humanoid robots, could eventually become embedded in sensitive infrastructure sectors.

The hearing reflects a growing geopolitical dimension in the global race to develop advanced robotics and AI. As these systems transition from research environments into real-world deployment across manufacturing, logistics, and infrastructure, governments are increasingly evaluating how technological supply chains intersect with national security.

Chairman Andy Ogles said the hearing aimed to assess whether existing procurement safeguards and supply chain policies are adequate to manage risks associated with foreign-developed AI and robotics technologies.

Robotics Enters the Security Debate

The discussion centered on Chinese technology firms developing AI systems and robotic platforms that are gaining traction internationally.

Among the companies mentioned during the hearing were AI developer DeepSeek and robotics manufacturer Unitree Robotics, which has become widely known for producing agile quadruped and humanoid robots used in research and industrial settings.

Lawmakers expressed concern that technologies developed within China’s technology ecosystem could potentially expose U.S. infrastructure to security vulnerabilities if widely deployed in government networks or critical industrial systems.

These concerns mirror debates already underway in other technology sectors such as telecommunications and semiconductors, where governments have increasingly scrutinized supply chains tied to strategic technologies.

In the case of robotics, however, the stakes may extend beyond data security. Autonomous systems are increasingly being integrated into physical infrastructure, including logistics networks, manufacturing plants, and energy facilities.

As a result, policymakers are beginning to evaluate whether robotics hardware and software could introduce operational risks if deployed in sensitive environments.

Humanoid Robotics Adds a New Dimension

The discussion comes at a time when humanoid robotics development is accelerating globally, with companies in the United States, Europe, and China racing to commercialize machines designed to operate in human environments.

Chinese robotics companies have been particularly active in this area, producing increasingly capable quadruped and humanoid systems that are gaining attention across the research and robotics industries.

While many of these machines are currently used for experimentation or demonstration, lawmakers suggested that future versions could potentially be integrated into sectors such as logistics, construction, or security operations.

That possibility raises questions about oversight and regulatory frameworks governing the deployment of robotics technologies developed abroad.

Experts participating in the hearing emphasized that evaluating such risks requires balancing national security concerns with the global nature of technology supply chains. Robotics development increasingly involves international collaborations, open-source software ecosystems, and globally distributed manufacturing.

A Broader Technology Competition

The hearing highlights how robotics and physical AI are becoming part of a wider geopolitical competition over emerging technologies.

Governments around the world are investing heavily in domestic robotics development, viewing intelligent machines as a strategic capability that could influence economic productivity, industrial competitiveness, and military logistics.

For the United States, policymakers are now examining how to strengthen domestic robotics manufacturing and supply chains while ensuring that critical infrastructure remains secure.

The debate reflects a broader reality of the AI era: as intelligent machines begin to operate in the physical world, the question of who builds those systems – and where their technologies originate – is becoming a matter not only of economic opportunity but also national security.

RealSense Demonstrates Autonomous Humanoid Navigation with LimX Dynamics at GTC

RealSense and LimX Dynamics demonstrated autonomous humanoid navigation at NVIDIA GTC, highlighting how dense 3D perception systems could enable safer operation of legged robots in human environments.

By Daniel Krauss | Edited by Kseniia Klichova
A LimX Dynamics humanoid robot navigates using RealSense depth perception technology during a demonstration at NVIDIA GTC, highlighting advances in 3D awareness for humanoid robotics. Photo: RealSense

As humanoid robots move closer to real-world deployment, one of the most difficult engineering challenges remains basic mobility: safely navigating complex environments designed for humans.

At NVIDIA’s GTC conference, RealSense demonstrated a perception system designed to address that problem, showing autonomous humanoid navigation developed in collaboration with Shenzhen-based robotics company LimX Dynamics. The system combines dense 3D depth sensing with visual simultaneous localization and mapping (vSLAM) to help legged robots understand and move through complex spaces.

The demonstration reflects a broader shift in robotics development. While much attention has focused on robot manipulation and artificial intelligence reasoning, reliable perception and navigation remain essential prerequisites for robots to operate safely alongside people.

RealSense CEO Nadav Orbach described the challenge as one of responsibility as much as technical capability. For robots working in human environments, perception systems must function as the machine’s “visual cortex”, enabling accurate localization, obstacle avoidance, and stable motion in constantly changing surroundings.

Why Humanoids Need a Different Navigation Stack

Navigation systems for traditional wheeled robots rely heavily on two-dimensional sensing and relatively predictable movement across flat surfaces. Technologies such as lidar and wheel odometry have proven effective for applications including robotic vacuums, warehouse robots, and autonomous carts.

Humanoid and legged robots introduce a more complex problem. Instead of moving along a fixed plane, they operate in three-dimensional space with shifting points of contact, uneven terrain, and dynamic obstacles.

Tasks such as stepping over objects, navigating stairs, or adjusting foot placement on irregular surfaces require far more detailed spatial awareness. Standard sensing systems used for wheeled robots often lack the full 3D context required for stable locomotion.

RealSense’s system addresses that gap using dense depth cameras that generate a detailed 3D understanding of the environment. Combined with visual SLAM algorithms and odometry data, the robot can continuously map its surroundings while tracking its own position.

The system demonstrated with LimX Dynamics integrates RealSense depth cameras with NVIDIA’s cuVSLAM technology, allowing the robot to localize itself and plan motion in real time.
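
As a rough illustration of the sensing side of such a system, the sketch below streams depth frames from a RealSense camera using the publicly available pyrealsense2 SDK and hands them to a placeholder localization step. The estimate_pose function is a stand-in, since the article does not detail how the cuVSLAM integration works.

```python
# Minimal sketch (not RealSense's or LimX's actual stack): stream depth frames
# from a RealSense camera and pass them to a placeholder localization step.
import pyrealsense2 as rs

def estimate_pose(depth_frame):
    """Hypothetical stand-in for a vSLAM/odometry module (e.g. cuVSLAM)."""
    width, height = depth_frame.get_width(), depth_frame.get_height()
    # Sample the distance to whatever lies directly ahead of the camera.
    center_distance_m = depth_frame.get_distance(width // 2, height // 2)
    return {"obstacle_ahead_m": center_distance_m}

pipeline = rs.pipeline()
config = rs.config()
# 640x480 depth stream at 30 frames per second, 16-bit depth values.
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    for _ in range(300):  # roughly 10 seconds of frames
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        print(estimate_pose(depth))
finally:
    pipeline.stop()
```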

Bridging Simulation and Real-World Robotics

A key part of the project involved training and validating the navigation system in simulation before deploying it on a physical humanoid robot.

LimX Dynamics used NVIDIA Isaac Lab as a high-fidelity simulation environment where reinforcement learning models could practice locomotion and navigation behaviors. The simulated environment allowed engineers to test complex movement scenarios and refine control policies before transferring them to the physical robot.

This “simulation-first” approach is increasingly common in robotics development as companies attempt to close the gap between experimental demonstrations and reliable real-world performance.

By validating navigation behavior in simulation, developers can expose robots to a wide range of environments and edge cases that would be difficult or unsafe to reproduce physically.

In the case of humanoid robots, that includes scenarios such as uneven terrain, changing floor heights, or obstacles moving unpredictably through the robot’s path.
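
A simulation-first loop of this kind can be sketched with the open-source Gymnasium API and one of its standard humanoid locomotion tasks. This shows the general workflow only; it is not NVIDIA Isaac Lab's actual interface.

```python
# Illustrative sketch of a simulation-first training loop using Gymnasium.
# "Humanoid-v4" is a standard MuJoCo locomotion task used here as a stand-in.
import gymnasium as gym

env = gym.make("Humanoid-v4")
obs, info = env.reset(seed=0)

for step in range(1_000):
    action = env.action_space.sample()  # a trained policy would act here
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        # An episode ends when the simulated robot falls or times out;
        # many such episodes run before any hardware is touched.
        obs, info = env.reset()

env.close()
```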

Perception as a Bottleneck for Humanoid Deployment

The demonstration highlights a broader challenge facing the emerging humanoid robotics industry.

While companies are making rapid progress in mechanical design and AI control systems, real-world deployment depends heavily on perception technologies capable of interpreting complex environments in real time.

Dense 3D sensing enables robots to identify hazards such as edges, stairs, and sudden elevation changes, reducing the risk of falls or unstable movements. It also allows robots to move in ways that appear more predictable and understandable to people sharing the same space.

As humanoids move from laboratory demonstrations to industrial or service environments, reliable perception may become one of the most critical components determining whether robots can safely operate in public settings.

RealSense, which was spun out from Intel in 2025, has been building depth sensing technologies for more than a decade. Its systems are already used across autonomous mobile robots, industrial automation, and healthcare devices.

The company’s latest demonstration suggests that the next phase of robotics development will depend not only on better robot bodies or smarter AI models, but also on perception systems capable of giving machines a stable and trustworthy understanding of the physical world.


Gecko Robotics Wins Largest US Navy Robotics Contract for Fleet Maintenance

Gecko Robotics has secured its largest U.S. Navy contract to deploy inspection robots that build digital twins of naval vessels. The systems aim to reduce costly maintenance delays and improve fleet readiness.

By Rachel Whitman | Edited by Kseniia Klichova
A Gecko Robotics inspection robot crawls along a ship surface collecting structural data used to build a digital twin of naval vessels for predictive maintenance. Photo: Gecko Robotics

The U.S. Navy is expanding its use of robotics to monitor and maintain its fleet, awarding Gecko Robotics its largest defense contract to date as the military seeks to reduce maintenance delays and improve ship readiness.

The Pittsburgh-based robotics company announced a five-year agreement with the U.S. Navy and the U.S. General Services Administration to deploy its inspection robots and software across naval vessels. The deal begins with an initial $54 million award and has a potential ceiling of $71 million.

Under the agreement, Gecko’s robots will begin inspecting ships in the Navy’s Pacific Fleet, initially covering 18 vessels. The machines are designed to crawl across ship surfaces and enter confined areas to gather detailed structural data that human inspectors often struggle to access safely.

The data collected by the robots feeds into software that creates digital replicas of each vessel, enabling engineers to track the condition of critical systems and identify maintenance issues before they lead to failures.

Digital Twins for Ship Maintenance

A central element of the project is the creation of digital twins – continuously updated digital models that represent the real-world condition of complex assets.

Gecko’s robots use sensors and imaging systems to scan ship structures, capturing measurements related to corrosion, coating degradation, and structural wear. That information is used to generate a detailed model of the vessel’s condition.

According to Gecko founder and CEO Jake Loosararian, the digital models allow maintenance teams to understand the health of ships in far greater detail than traditional inspection methods.

Once a ship’s digital representation exists, engineers can monitor changes over time and plan repairs more precisely. The goal is to reduce the amount of time ships spend in maintenance yards while preventing unexpected failures that could disrupt operations.
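
As a minimal illustration of what such a digital record might look like, the sketch below tracks time-stamped hull-thickness readings for one location and flags excessive wear. The field names and the 25 percent threshold are assumptions for the example, not Gecko's actual data model.

```python
# Minimal illustration (not Gecko's software): a digital-twin record that keeps
# time-stamped hull-thickness readings per location and flags wear trends.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ThicknessReading:
    measured_on: date
    thickness_mm: float

@dataclass
class HullLocation:
    location_id: str
    nominal_thickness_mm: float
    readings: list[ThicknessReading] = field(default_factory=list)

    def wear_fraction(self) -> float:
        """Fraction of original material lost at the latest reading."""
        if not self.readings:
            return 0.0
        latest = max(self.readings, key=lambda r: r.measured_on)
        return 1.0 - latest.thickness_mm / self.nominal_thickness_mm

    def needs_repair(self, threshold: float = 0.25) -> bool:
        # The threshold is illustrative; real criteria come from class rules.
        return self.wear_fraction() >= threshold

plate = HullLocation("frame-42-port", nominal_thickness_mm=12.0)
plate.readings.append(ThicknessReading(date(2024, 5, 1), 11.1))
plate.readings.append(ThicknessReading(date(2025, 5, 1), 9.6))
print(round(plate.wear_fraction(), 2), plate.needs_repair())  # 0.2 False
```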

The approach reflects a broader shift across industrial sectors toward predictive maintenance systems that combine robotics, sensor networks, and digital modeling.

Addressing Fleet Readiness Challenges

For the Navy, maintenance efficiency has become a critical operational issue.

Officials have set a goal of achieving roughly 80 percent fleet readiness by 2027. Currently, however, maintenance cycles leave a large portion of ships unavailable at any given time. Aging vessels, complex systems, and lengthy inspection processes contribute to the challenge.

Maintenance for the Navy’s fleet costs an estimated $13 billion to $20 billion annually, and delays in repair schedules can cascade across shipyards and deployment timelines.

By using robotic inspection systems, the Navy hopes to gather more comprehensive data while reducing the time required for manual inspections. Robots can operate in confined spaces and hazardous environments, allowing inspections to occur more frequently and with less disruption to operations.

Gecko Robotics had already been working with the Navy for several years before the new agreement. The relationship began when a Navy engineer requested an evaluation of the company’s technology, leading to pilot programs that demonstrated the robots’ ability to capture detailed structural data.

The new contract expands that collaboration into a broader deployment across naval assets.

Robotics Expands into Infrastructure Monitoring

The Navy contract highlights a growing role for robotics in monitoring large-scale infrastructure.

Gecko Robotics originally developed its systems to inspect industrial assets such as power plants, refineries, and heavy manufacturing facilities. Many of these environments share characteristics with naval vessels, including difficult-to-access structures and high safety requirements.

The company’s long-term vision is to create continuously updated digital models of critical infrastructure, enabling operators to detect problems early and schedule repairs without waiting for major failures.

For defense organizations responsible for maintaining complex fleets, such predictive systems could reshape how maintenance cycles are managed.

If the approach proves effective at scale, inspection robots and digital twin platforms may become a standard component of asset management not only for military fleets but also for power generation, transportation infrastructure, and industrial facilities.

Foxconn Turns to Physical AI Robotics as AI Server Boom Reshapes Manufacturing

Foxconn is expanding its robotics strategy as demand for AI infrastructure surges. The company is working with partners including Skild AI, ABB, and NVIDIA to deploy intelligent robots in electronics assembly.

By Daniel Krauss | Edited by Kseniia Klichova
Robotic assembly systems operate on an electronics production line at Foxconn, where the company is exploring AI-powered robotics platforms to support the growing demand for AI hardware. Photo: Foxconn

The rapid expansion of artificial intelligence infrastructure is reshaping the world’s largest electronics manufacturing operations, and Foxconn is increasingly turning to robotics to keep pace.

The Taiwanese manufacturing giant, formally known as Hon Hai Precision Industry, said strong demand for AI servers is expected to drive growth in 2026, even as geopolitical tensions and supply chain pressures continue to affect global technology markets. At the same time, the company is expanding partnerships aimed at introducing AI-driven robotics into its production lines.

Foxconn has traditionally been known for assembling consumer electronics, most notably Apple’s iPhone. But the company has spent the past several years shifting toward higher-value sectors including AI infrastructure, electric vehicles, and advanced manufacturing automation.

Chairman Young Liu told analysts that demand for AI servers remains strong and is expected to accelerate further. AI-related hardware has become one of the fastest-growing segments of the company’s business, reflecting the global surge in spending on data centers and machine learning infrastructure.

Robotics Moves into Electronics Assembly

As manufacturing volumes for AI hardware grow, Foxconn is experimenting with new robotics systems designed to increase precision and throughput in complex assembly tasks.

The company is piloting an AI robotics platform developed with ABB and NVIDIA, aimed at bringing advanced perception and decision-making capabilities to industrial robots working on electronics assembly lines. The system uses simulation tools and digital twin technology to model factory operations before deploying robots on the production floor.

Another initiative involves integrating a generalized robotics intelligence system developed by Skild AI. The technology is designed as a shared “robot brain” that can be deployed across different types of robots and tasks, allowing machines to adapt to multiple workflows without extensive reprogramming.

Foxconn plans to use the system to support electronics assembly processes tied to its AI hardware production, particularly as the complexity of advanced computing components continues to rise.

The push reflects a broader shift in robotics, where manufacturers are moving away from narrowly programmed automation toward AI-enabled systems that can adapt to changing production environments.

AI Hardware Demand Reshapes Foxconn’s Business

Foxconn’s move into robotics coincides with a major change in its revenue mix driven by AI infrastructure.

Cloud and networking products, which include AI servers, now represent a significantly larger share of the company’s business than in previous years. The segment accounted for roughly 40 percent of revenue in 2025, up from about 30 percent the year before.

The growth comes as technology companies worldwide increase spending on computing power required to train and operate large AI models. Foxconn is one of the key manufacturers producing servers used in these systems, including hardware built for NVIDIA’s AI platforms.

The company reported net profit of NT$189.4 billion in 2025, a 24 percent increase from the previous year, with total revenue reaching NT$8.1 trillion.

At the same time, executives acknowledged that the broader environment remains uncertain. Tariffs, geopolitical tensions, and supply chain disruptions continue to affect global technology manufacturing. Rising energy prices linked to international conflicts have also introduced cost pressures across logistics and industrial operations.

Despite those challenges, Liu said the company expects strong growth in AI server shipments, forecasting high double-digit quarter-on-quarter increases in AI rack demand early in 2026.

For Foxconn, the combination of AI infrastructure demand and robotics deployment signals a strategic shift in how electronics manufacturing will evolve. As factories become more automated and AI-driven, manufacturers may increasingly rely on intelligent robotic systems not just for efficiency but to manage the growing complexity of advanced computing hardware.

RoboForce Raises $52 Million to Deploy Physical AI Robots for Industrial Labor

RoboForce has raised $52 million in a round led by YZi Labs to expand deployment of its TITAN physical AI robots. The company is targeting labor shortages across sectors including solar energy, logistics, mining, and data center construction.

By Rachel Whitman | Edited by Kseniia Klichova
RoboForce’s TITAN industrial robot is designed for demanding field environments such as solar construction and logistics infrastructure, where companies face growing labor shortages. Photo: RoboForce

A new robotics startup focused on industrial labor automation has raised $52 million to accelerate deployment of physical AI systems designed for some of the most demanding jobs in modern infrastructure.

Silicon Valley-based RoboForce announced the funding round led by YZi Labs, with additional backing from investors including technology entrepreneurs and institutional partners. The company is developing a full-stack robotics platform aimed at replacing or augmenting human labor in sectors such as renewable energy construction, logistics, mining, and data center development.

The investment signals growing investor interest in what many industry leaders describe as the next phase of artificial intelligence: machines capable of operating in the physical world rather than purely digital environments.

RoboForce’s flagship system, known as TITAN, is designed to work in environments where heat, repetition, and safety risks make human labor increasingly difficult to sustain. The company says it has already received letters of intent representing demand for more than 11,000 robots as it transitions from pilot deployments to larger-scale production.

Automation for the Hardest Industrial Jobs

The company’s founding thesis emerged from firsthand observation of labor-intensive industrial work.

Co-founder and CEO Leo Ma, who previously worked on autonomous systems and mobility technologies, has described visiting numerous industrial sites where the same challenge repeatedly appeared: physically demanding jobs that were difficult to staff consistently.

Solar energy construction offers a clear example. Utility-scale solar installations require workers to secure millions of panels across large outdoor sites, often in extreme heat. In the United States alone, labor shortages contributed to delays affecting tens of gigawatts of solar capacity in recent years.

Similar gaps exist across logistics hubs, mining operations, and infrastructure construction. These jobs require endurance, precision, and safety compliance, but often struggle to attract or retain workers.

RoboForce is positioning its robots as a solution to this structural workforce gap. TITAN is designed for millimeter-level precision and sustained operation in harsh environments, allowing it to perform tasks such as assembly, installation, and materials handling in large industrial projects.

Building a Physical AI Data Flywheel

Beyond the hardware itself, the company’s strategy centers on what it calls a “physical AI data flywheel”.

Each deployed robot collects operational data from real-world environments. That data feeds back into RoboForce’s foundation model, allowing the system to improve its capabilities over time and adapt to new industrial tasks.

The concept mirrors trends in autonomous vehicles and large-scale AI systems, where real-world data becomes a key competitive advantage. The more robots operating in the field, the faster the learning cycle accelerates.

RoboForce is developing its platform in collaboration with NVIDIA’s robotics ecosystem. Its systems use NVIDIA Jetson Thor for edge computing while relying on Isaac simulation tools, Isaac Lab training frameworks, and Cosmos world models to train robotic behaviors before deploying them in physical environments.

The approach allows robots to practice tasks in simulation and refine them with real-world feedback, narrowing the gap between experimental demonstrations and production deployment.

The company’s visibility increased when NVIDIA CEO Jensen Huang highlighted RoboForce’s technology during a keynote presentation at GTC, framing AI-powered robotics as a key driver of a broader industrial transformation.

Investors Bet on Physical AI Infrastructure

For YZi Labs, which manages more than $10 billion in assets, the investment reflects a growing conviction that robotics will become a central layer of future infrastructure.

Ella Zhang, managing partner and head of the firm, said the investment aligns with the belief that the next wave of AI innovation will extend beyond digital applications into machines that interact directly with the physical world.

Zhang will join RoboForce’s board as part of the investment.

The company was founded in 2023 by engineers and researchers from organizations including Carnegie Mellon University, the University of Michigan, Amazon Robotics, Google, Waymo, Tesla Robotics, and ABB.

The funding will be used to expand the company’s robot foundation models, scale manufacturing of its robotic systems, and convert existing pilot programs into full production deployments.

For the broader robotics sector, the deal reflects a wider shift in how automation is framed. Rather than focusing solely on factory efficiency, a growing number of companies are targeting labor-intensive sectors where workforce shortages threaten economic growth.

If those systems prove reliable at scale, robots may increasingly become a structural component of infrastructure development itself – helping build the energy systems, data centers, and logistics networks that underpin the global economy.

NVIDIA Expands Physical AI Ecosystem to Accelerate Real-World Robotics Deployment

NVIDIA is expanding its robotics platform with new world models, simulation frameworks, and partnerships with leading robot manufacturers. The move aims to accelerate the deployment of AI-powered robots across manufacturing, logistics, healthcare, and humanoid robotics.

By Laura Bennett | Edited by Kseniia Klichova
NVIDIA CEO Jensen Huang presents new physical AI infrastructure at GTC, highlighting partnerships with global robotics manufacturers building next-generation intelligent machines. Photo: NVIDIA

The race to bring artificial intelligence into the physical world is accelerating, and NVIDIA is positioning itself at the center of the emerging robotics stack.

In announcements around its recent GTC conference, the company unveiled a broader physical AI platform combining simulation software, world models, and robotics foundation models designed to support the development and deployment of intelligent machines. The initiative is backed by partnerships with major robotics companies including ABB Robotics, FANUC, KUKA, Agility Robotics, Figure, Universal Robots, and Yaskawa.

The effort reflects a wider shift across the robotics industry. As robots become more autonomous and adaptable, companies are moving beyond traditional automation toward systems that can perceive environments, reason about tasks, and act with greater flexibility.

NVIDIA founder and CEO Jensen Huang framed the shift as a structural change in industrial technology. “Physical AI has arrived,” Huang said, arguing that many industrial companies will increasingly operate as robotics companies as intelligent machines become embedded in manufacturing, logistics, infrastructure, and transportation systems.

Building the Infrastructure for Robot Intelligence

The company’s robotics strategy centers on providing the underlying computational and software infrastructure required to train and operate intelligent robots at scale.

New components include updated NVIDIA Isaac simulation frameworks, the Cosmos family of world models, and Isaac GR00T robot foundation models designed to help robots learn generalized skills across different environments. Together, these tools allow developers to generate synthetic environments, train policies in simulation, and transfer those behaviors to real machines.

Simulation plays a central role. Industrial robotics companies including ABB, FANUC, Yaskawa, and KUKA are integrating NVIDIA’s Omniverse and Isaac technologies to create digital twins of production lines, allowing engineers to design and test robotic systems virtually before deploying them on factory floors.

The companies are also incorporating NVIDIA Jetson edge computing modules into their controllers to enable real-time AI inference directly on robots. With millions of industrial robots already operating globally, these integrations aim to gradually layer advanced intelligence onto existing automation infrastructure.

The approach reflects a broader industry consensus that robotics development will increasingly rely on large-scale simulation, synthetic data generation, and foundation models rather than traditional rule-based programming.

A Push Toward General Purpose Robot Brains

Another key focus of the initiative is the development of generalized robotic intelligence.

Companies such as Skild AI and FieldAI are using NVIDIA’s Cosmos world models and Isaac simulation environments to train AI systems that can operate across different robotic embodiments. Instead of building task-specific software for every application, developers are attempting to create “robot brains” capable of adapting to new environments and tasks with limited retraining.

One of the most visible deployment efforts involves Skild AI working with ABB Robotics and Universal Robots to integrate generalized AI systems into widely deployed industrial and collaborative robots. The goal is to expand automation into more dynamic tasks that traditionally required human adaptability.

Skild AI is also collaborating with Foxconn on assembly systems used in NVIDIA’s Blackwell chip production lines. These systems rely on AI-driven dual-arm manipulators designed to perform highly precise electronics assembly operations.

The broader strategy aligns with NVIDIA’s belief that the next generation of robots will combine the reliability of industrial automation with the adaptability of modern AI systems.

Humanoid Robots and Surgical Systems Join the Platform

Beyond industrial automation, NVIDIA’s ecosystem now extends into humanoid robotics and healthcare.

Developers including Agility Robotics, Figure, NEURA Robotics, and AGIBOT are using the company’s simulation tools and robotics models to accelerate development of humanoid robots capable of operating in human environments. Building such machines requires integrating perception, locomotion, dexterous manipulation, and decision-making within tightly constrained safety requirements.

Healthcare robotics is another area of expansion. Companies including CMR Surgical, Johnson & Johnson MedTech, and Medtronic are using NVIDIA simulation and computing platforms to train and validate AI-assisted surgical systems before clinical deployment.

These applications require particularly strict validation processes, making simulation and digital twin technology especially valuable.

The expansion of NVIDIA’s robotics ecosystem comes as demand for AI computing continues to surge. Huang recently projected that AI chip sales could eventually reach $1 trillion annually as industries transition toward what he described as a new computing era driven by AI systems embedded across both digital and physical infrastructure.

For robotics, the implication is clear: as machines become more capable of perceiving and interacting with the real world, the boundary between AI software and industrial hardware is increasingly dissolving. Companies that control the infrastructure connecting those layers may shape how quickly intelligent machines move from research labs into everyday operations.

Tennis-Playing Humanoid Robot Learns from Imperfect Data and Beats Its Creator

Researchers from Tsinghua University and Galbot have developed a humanoid robot that learned to play tennis using imperfect human motion data. Within days of deployment, the robot improved enough to outperform its human creator.

By Daniel Krauss | Edited by Kseniia Klichova
A Unitree G1 humanoid robot trained through the LATENT learning system returns a tennis ball during testing, demonstrating how robots can learn complex athletic motion from imperfect human data. Photo: Tsinghua University / Galbot

The ability to teach robots complex physical skills has long depended on carefully curated training data and highly controlled demonstrations. A new research project suggests that requirement may be loosening.

Researchers from Tsinghua University and robotics company Galbot have demonstrated a humanoid robot capable of learning tennis using imperfect human motion clips rather than idealized training data. The system, called LATENT, enabled a Unitree G1 humanoid robot to improve rapidly in real-world play – eventually defeating the researcher who trained it.

The project highlights a growing shift in robotics toward learning systems that can extract usable behaviors from messy, incomplete demonstrations. Instead of relying on precise instruction, the robot learns how to combine imperfect examples into effective motion strategies.

According to project lead Zhikai Zhang, the robot’s progress was strikingly fast. On its first day of real-world deployment it failed to return a single serve. By the final day of testing, Zhang reported that he could no longer beat the robot in rallies.

Teaching Robots with Imperfect Demonstrations

Traditional robotic skill learning typically requires large datasets of clean, carefully labeled demonstrations. Capturing these datasets can be expensive and time consuming, particularly for tasks involving dynamic whole-body motion such as sports.

The LATENT system approaches the problem differently. Instead of relying on perfect motion capture data, the system trains on fragmented human tennis clips that include errors, inconsistencies, and incomplete movements.

From these clips, the model constructs what the researchers call a latent action space – a structured representation of primitive movements extracted from imperfect examples. A higher-level AI policy then functions as a coordinating controller, selecting and refining those primitive actions to produce effective gameplay behavior.
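
A highly simplified sketch of that two-level structure is shown below: a small set of learned primitive embeddings standing in for the latent action space, and a policy network that blends them based on the current observation. The dimensions and network layout are assumptions for illustration, not the published LATENT architecture.

```python
# Illustrative sketch (not the published LATENT code): a small "latent action
# space" of motion primitives plus a high-level policy that blends them.
import torch
import torch.nn as nn

NUM_PRIMITIVES = 8   # e.g. footwork, forehand, backhand variants (assumed)
LATENT_DIM = 16      # size of each learned primitive embedding (assumed)
OBS_DIM = 32         # ball state plus robot proprioception (assumed)

class HighLevelPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        # Learned embeddings standing in for primitives distilled from
        # imperfect human tennis clips.
        self.primitives = nn.Embedding(NUM_PRIMITIVES, LATENT_DIM)
        self.selector = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_PRIMITIVES),
        )

    def forward(self, obs):
        # Soft weights over primitives, then a blended latent command that a
        # low-level whole-body controller would turn into joint targets.
        weights = torch.softmax(self.selector(obs), dim=-1)
        return weights @ self.primitives.weight

policy = HighLevelPolicy()
latent_command = policy(torch.randn(1, OBS_DIM))
print(latent_command.shape)  # torch.Size([1, 16])
```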

The training process occurs first in simulation, where the robot practices thousands of interactions without risk. Once the controller stabilizes, the policy is transferred to a physical robot using sim-to-real techniques, allowing the learned behaviors to operate on the Unitree G1 humanoid platform.

This approach allows the robot to learn usable skills even when the input demonstrations are flawed, something that more traditional robotics pipelines struggle to accommodate.

Why Messy Data May Be the Future of Robot Training

While a tennis-playing robot may look like a demonstration project, the underlying method addresses a broader challenge in robotics: scaling physical skill acquisition.

Most robots today still depend on highly structured environments and carefully engineered training pipelines. In real-world settings such as warehouses, construction sites, or disaster response zones, collecting perfect demonstrations is rarely practical.

Learning from imperfect data could significantly lower the barrier to training robots for complex tasks. Instead of requiring a flawless example every time, robots could observe ordinary human activity and gradually assemble functional behaviors.

This shift mirrors trends already underway in large-scale AI systems, where models increasingly learn from vast amounts of noisy real-world data rather than tightly curated datasets.

For robotics, the implications are particularly significant because physical tasks often involve unpredictable conditions, subtle motor control, and continuous feedback from the environment. Systems that can tolerate imperfect training signals may adapt more quickly to these realities.

A Testbed for Embodied AI

Sports have become a useful proving ground for embodied AI research because they combine perception, motion planning, balance control, and fast decision making.

Tennis, in particular, requires precise timing, whole-body coordination, and real-time reaction to an unpredictable opponent. Successfully sustaining rallies demonstrates that a robot can integrate visual perception with dynamic locomotion and arm control.

In this case, the tennis court served as a compact test environment for evaluating how well the LATENT system could convert imperfect demonstrations into coordinated action.

The research team has made the project details and code publicly available, allowing other researchers to replicate and extend the approach.

If the underlying method proves scalable, it could influence how robots are trained for tasks far beyond sports – from industrial manipulation to collaborative human-robot work. Instead of waiting for perfect datasets, robots may increasingly learn from the same imperfect movements humans produce every day.


Pokémon Go Data Is Now Training Delivery Robots

Location data collected from millions of Pokémon Go players is now being used to train delivery robots, highlighting the unexpected role of consumer AR games in robotics development.

By Laura Bennett | Edited by Kseniia Klichova
A small delivery robot navigates a city sidewalk as spatial data originally collected through mobile AR games helps improve robotic navigation in urban environments. Photo: Coco Robotics

Millions of people who spent years chasing virtual Pokémon through city streets unknowingly helped create one of the largest real-world datasets now being used to train robots.

Niantic Spatial, a company spun out of the augmented reality developer behind Pokémon Go, has partnered with Coco Robotics to improve navigation for urban delivery robots. The collaboration uses spatial mapping data collected from players of Niantic’s games to help robots move through complex city environments.

The project highlights an unexpected intersection between gaming, artificial intelligence, and robotics: the same technology used to place digital creatures on a phone screen can also guide autonomous vehicles through real-world streets.

Turning AR Gameplay into Spatial Intelligence

When Pokémon Go launched in 2016, millions of players explored cities while using their smartphones to capture virtual creatures layered onto real-world environments.

Behind the scenes, the game relied on Niantic’s Visual Positioning System (VPS), a technology designed to understand a user’s location by analyzing surrounding landmarks rather than relying solely on GPS signals.

Players contributed to this system by scanning buildings, monuments, and other public spaces from different angles using their phones.

Over time, these scans created detailed three-dimensional maps of real-world locations.

The data helped Niantic improve AR accuracy in its games, but it also built a massive spatial dataset describing how cities look from ground level – exactly the type of information robots need to navigate sidewalks and intersections.

The Same Problem as Catching Pikachu

Niantic Spatial now aims to apply that dataset to robotics.

The company’s first robotics partnership is with Coco Robotics, which operates a fleet of small autonomous delivery robots designed to transport food and groceries through city streets.

Coco’s robots currently operate in several cities, including Los Angeles, Chicago, Miami, Jersey City, and Helsinki.

Navigating urban environments is one of the hardest problems in robotics. Tall buildings interfere with GPS signals, sidewalks are crowded with pedestrians, and conditions change constantly.

Niantic’s VPS technology helps solve this problem by allowing robots to identify their exact location by comparing camera images to its spatial database of landmarks.

According to Niantic Spatial CEO John Hanke, the underlying technical challenge is surprisingly similar to what players experienced in the AR game.

In both cases, software must understand how objects move through a complex physical world.
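
The sketch below illustrates the general idea of visual localization against a landmark database using the open-source OpenCV library: extract features from the robot's camera frame, match them against stored landmark images, and return the location of the best match. The file paths, coordinates, and thresholds are placeholders, and this is not Niantic's VPS implementation.

```python
# Illustrative sketch (not Niantic's VPS): match a robot's camera frame against
# stored landmark images using ORB features, and localize to the best match.
import cv2

def load_gray(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(path)
    return img

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Hypothetical landmark database: image file -> known sidewalk location.
landmarks = {
    "corner_5th_and_main.jpg": (34.0522, -118.2437),
    "plaza_fountain.jpg": (34.0510, -118.2410),
}

def localize(frame_path):
    _, frame_desc = orb.detectAndCompute(load_gray(frame_path), None)
    best_location, best_score = None, 0
    for image_path, location in landmarks.items():
        _, landmark_desc = orb.detectAndCompute(load_gray(image_path), None)
        matches = matcher.match(frame_desc, landmark_desc)
        # Count only reasonably close descriptor matches.
        score = sum(1 for m in matches if m.distance < 40)
        if score > best_score:
            best_location, best_score = location, score
    return best_location

print(localize("robot_camera_frame.jpg"))  # placeholder input frame
```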

A Dataset Built by Millions of Players

Much of the data powering the system was collected indirectly by players.

Niantic introduced features that encouraged users to photograph and scan locations in exchange for rewards within the game, such as items or rare Pokémon.

These contributions helped build a detailed visual model of cities under different lighting conditions, weather, and viewing angles.

While Niantic has long acknowledged that its games collect environmental data, the revelation that these datasets are now helping train robots may surprise some players.

Still, for robotics developers, such datasets are extremely valuable.

Unlike simulation environments or small-scale robotics experiments, Niantic’s data reflects the messy complexity of real-world cities.

Gaming Data Meets Urban Robotics

The collaboration illustrates a broader trend in robotics development: companies are increasingly relying on large-scale datasets collected outside traditional robotics research.

Consumer technologies – including smartphones, cameras, and games – are generating vast amounts of real-world visual information that can help train autonomous systems.

For delivery robots attempting to navigate dense urban environments, that data may prove critical.

If the partnership succeeds, the hours players spent exploring parks, sidewalks, and landmarks in search of digital creatures may end up helping robots find their way through the same streets.

In other words, catching Pikachu may have helped teach a robot how to deliver pizza.


Samsung Targets Robotic Hands as the Next Breakthrough in Humanoid Robotics

Samsung has launched a dedicated robotics research group focused on developing advanced robotic hands, betting that dexterity will be the key to unlocking practical humanoid robots.

By Rachel Whitman | Edited by Kseniia Klichova
A robotic hand prototype designed for precision manipulation illustrates the growing focus on dexterous robotics systems for manufacturing and humanoid robots. Photo: Kseniia Klichova / RobotsBeat

Samsung Electronics is placing a major strategic bet on one of the most difficult problems in robotics: building robot hands capable of manipulating objects with human-like precision.

The company recently established a new research group called Hand Lab within its Future Robotics Task Force. The initiative focuses on developing advanced robotic hands that could eventually enable humanoid robots and automated manufacturing systems to handle delicate tasks currently performed by humans.

Industry analysts view the move as a signal that Samsung intends to compete more aggressively in the emerging humanoid robotics market.

While robots have become increasingly capable of walking, navigating environments, and maintaining balance, engineers say the real challenge lies elsewhere – dexterous manipulation.

Why Robotic Hands Matter

In robotics research, the ability to move like a human is no longer the primary obstacle.

Modern robots can climb stairs, recover from falls, and navigate complex environments with increasing reliability. But performing tasks that humans consider simple – tightening a screw, picking up a fragile object, or assembling small components – remains extremely difficult.

These tasks require a combination of force control, tactile feedback, and coordinated finger motion that traditional industrial robots struggle to achieve.

Most factory robots rely on simple grippers designed for highly structured environments. Humanoid robots, however, must interact with tools, components, and devices originally designed for human hands.

The result is a growing consensus within robotics research that the future of humanoid robots depends heavily on hand design.

Samsung’s decision to create a specialized research group dedicated to robotic hands reflects this shift in priorities.

A Tendon-Driven Approach to Dexterity

According to industry reports, Samsung’s robotic hand project is exploring a tendon-driven design, a system inspired by the anatomy of the human hand.

Instead of placing motors directly inside each finger, artificial tendons – cables running through the arm – pull and control finger movements. This architecture allows for smoother motion, finer force control, and potentially greater energy efficiency.

The approach is significantly more complex to engineer and manufacture than conventional robotic grippers, which is why most industrial robots avoid it.

However, tendon-driven systems can produce more natural and adaptable movements, making them well suited for humanoid robotics.

Samsung also plans to incorporate tactile sensors that allow robotic fingers to detect pressure, texture, and contact forces. These signals could feed into machine-learning systems that help robots adjust their grip in real time.

Such capabilities are considered essential for what researchers increasingly call physical AI – systems that combine artificial intelligence with real-world robotic interaction.
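
As a rough sketch of the control idea, the example below closes a grip loop around a simulated tendon-driven finger, tightening until a fingertip force reading reaches a target. The toy force model and the numbers are assumptions for illustration, not Samsung's design.

```python
# Illustrative sketch (not Samsung's design): a closed-loop grip routine that
# tightens a simulated tendon until a tactile reading reaches the target force.

class SimulatedFinger:
    """Stand-in for a tendon-driven finger with a fingertip force sensor."""
    def __init__(self):
        self.tension = 0.0  # 0.0 = slack, 1.0 = maximum pull

    def set_tendon_tension(self, value: float) -> None:
        self.tension = min(1.0, max(0.0, value))

    def read_contact_force(self) -> float:
        # Toy model: contact force (newtons) grows with tendon tension
        # once the finger has closed onto the object.
        return max(0.0, 10.0 * (self.tension - 0.3))

def grasp(finger: SimulatedFinger, target_force_n: float = 2.0, step: float = 0.02):
    # Pull a little harder each cycle and re-check the tactile sensor.
    while finger.read_contact_force() < target_force_n:
        finger.set_tendon_tension(finger.tension + step)
    return finger.tension

finger = SimulatedFinger()
print(round(grasp(finger), 2))  # ~0.5: tension needed for about 2 N of force
```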

Building a Robotics Ecosystem

Samsung’s focus on robotic manipulation is part of a broader strategy to build a vertically integrated robotics ecosystem.

Over the past several years, the company has expanded its investments in robotics technology across multiple business units.

Samsung SDI is developing batteries tailored for robotics systems, while Samsung Electro-Mechanics is working on actuators and components for robotic motion.

The company also acquired a controlling stake in Korean robotics developer Rainbow Robotics, known for its humanoid and dual-arm robotic platforms.

Together, these initiatives could allow Samsung to integrate hardware, sensors, computing, and AI into a unified robotics platform.

The company has also outlined a longer-term plan to create AI-powered autonomous factories by 2030, where intelligent robots perform tasks ranging from logistics and inspection to complex assembly.

In such environments, robotic hands capable of delicate manipulation could become the key enabling technology.

Global Competition Intensifies

Samsung’s push into robotic manipulation also reflects rising global competition in humanoid robotics.

China’s robotics sector is expanding rapidly, with analysts projecting that tens of thousands of humanoid robots could be produced annually within the next few years.

Chinese manufacturers have already achieved scale in service robots such as delivery and cleaning machines, often competing on cost.

Samsung appears to be taking a different approach – focusing on technological differentiation rather than mass production.

If the company succeeds in developing robotic hands capable of human-level dexterity, it could unlock new applications not only in electronics manufacturing but also across logistics, construction, and industrial automation.

Within robotics circles, engineers often summarize the challenge with a simple observation:

Many robots can walk.

Very few can truly use their hands.

Samsung’s new Hand Lab is designed to change that.

Humanoid Robot Escorted by Police After Startling Pedestrian in Macau

A humanoid robot used for promotional activities in Macau was escorted away by police after startling a pedestrian, highlighting new challenges as robots increasingly appear in public spaces.

By Daniel Krauss | Edited by Kseniia Klichova
A humanoid robot stands on a city street during a public demonstration as police intervene after the machine startled a pedestrian in Macau. Photo: Kseniia Klichova / RobotsBeat

A humanoid robot operating in a residential neighborhood in Macau was escorted away by police after startling an elderly pedestrian, an incident that has sparked debate about how robots should operate in public spaces.

The episode occurred near a housing complex in the Patane district when a Unitree G1 humanoid robot was reportedly following behind a woman walking along the street. According to local reports, the 70-year-old pedestrian became alarmed after noticing the robot behind her while she was checking her phone.

A video circulating online shows the woman confronting the robot while several onlookers watch. Police later arrived at the scene and escorted the robot away from the area.

The woman was later taken to a hospital after reporting that she felt unwell. Authorities said she was examined and discharged, and that no physical contact or injuries occurred.

A Promotional Robot in the Wrong Place

Local officials said the robot belonged to an educational center and had been used for promotional activities in the neighborhood.

According to representatives from the organization operating the machine, the encounter was accidental. The woman had stopped in the middle of the walkway while looking at her phone, and the robot – unable to navigate around her – remained stationary behind her until she turned around and noticed it.

The robot’s illuminated sensors and late-evening timing may have contributed to the surprise.

Police returned the machine to its operator, who was advised to exercise greater caution when using robots in public areas.

Public Robots and Social Reactions

The incident quickly spread online after video footage showed officers walking the humanoid robot away from the scene in what many viewers described as an unusual “perp walk”.

The moment triggered widespread discussion across social media platforms.

Some observers treated the situation humorously, joking that the robot had been “arrested”. Others used the moment to raise more serious questions about safety, consent, and etiquette when robots interact with people in public environments.

Humanoid robots such as the Unitree G1 have become increasingly visible in Chinese cities and online videos, where they are often used for demonstrations, entertainment, and social media content.

However, their presence in everyday public spaces remains relatively new.

The Challenge of Robots in Public Spaces

As robotics technology becomes more accessible, incidents like the Macau encounter highlight the growing need for guidelines governing how machines operate in shared environments.

Unlike industrial robots or delivery machines that follow defined paths, humanoid robots can move freely through public spaces designed for human interaction.

That flexibility raises new questions about awareness, human perception, and social expectations.

Even when operating safely, robots can surprise or confuse people who are not accustomed to encountering autonomous machines in everyday settings.

Researchers studying human-robot interaction note that designing robots that behave predictably – and communicate their intentions clearly – will be essential as machines become more visible in public life.

For now, the Macau incident serves as a reminder that the social side of robotics may be just as important as the technical one.


STMicroelectronics Plans Humanoid Robots and Worker Retraining to Modernize Chip Factories

STMicroelectronics plans to deploy humanoid robots and retrain workers in older semiconductor plants as the European chipmaker looks to boost efficiency and avoid factory closures.

By Rachel Whitman | Edited by Kseniia Klichova Published: Updated:
A robotic system handles silicon wafer carriers inside a semiconductor fabrication facility as chipmakers explore automation to modernize aging plants. Photo: STMicroelectronics

European semiconductor manufacturer STMicroelectronics is preparing to introduce humanoid robots into its production facilities while retraining employees for new roles, as the company attempts to modernize aging chip factories without shutting them down.

The plan was outlined by company executives during a semiconductor industry conference in Sopot, Poland, where STMicroelectronics’ manufacturing leadership demonstrated early robotics deployments designed to automate repetitive production tasks.

The initiative reflects the growing role of robotics in semiconductor manufacturing as companies attempt to maintain competitiveness against newer, highly automated facilities in Asia.

Automating Repetitive Tasks in Older Fabs

According to STMicroelectronics, robots are being introduced to handle physically demanding or repetitive work inside fabrication plants.

One demonstration showed a robotic system loading silicon wafer carriers into production equipment, a task that requires precision and repetition across continuous shifts.

Executives indicated that the company could deploy more than 100 humanoid robots across its facilities over the next few years.

The machines are expected to work alongside existing automation systems already used in semiconductor manufacturing.

While fabs are already among the most automated industrial environments, certain tasks still require human operators, particularly in older plants that were built before current automation technologies became standard.

Humanoid robots are being explored as a way to bridge that gap without requiring complete redesigns of existing factories.

Modernizing Aging Semiconductor Facilities

The strategy comes as European semiconductor companies face increasing competition from highly automated production lines in countries such as China.

Many of Europe’s chip fabrication plants were built decades ago and require substantial investment to remain competitive.

However, rebuilding or replacing existing fabs can be prohibitively expensive, particularly in regions where regulatory processes and labor negotiations add complexity to large industrial projects.

By introducing robotics and improving workforce skills, STMicroelectronics hopes to extend the lifespan of its older facilities while improving productivity.

The company has stated that maintaining manufacturing capacity in Europe remains a priority.

Workforce Transition and Skills Development

Alongside robotics deployment, the company plans to retrain workers for higher-skilled roles in semiconductor manufacturing.

The shift reflects a broader transformation taking place across industrial sectors where automation is increasingly changing the nature of factory work.

Instead of eliminating jobs entirely, the company aims to move employees into positions that require technical expertise in operating and maintaining advanced manufacturing systems.

Industry executives say these skills are already in short supply.

Humanoid robots are expected to cover certain repetitive roles across multiple shifts. In factories that operate continuously, a single robot could potentially replace several shift positions performing the same task.
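
The arithmetic behind that claim is straightforward. The figures below are illustrative assumptions made for this article, not numbers from STMicroelectronics.

    HOURS_PER_WEEK = 24 * 7      # a fab that runs continuously
    HOURS_PER_OPERATOR = 40      # assumed weekly hours for one shift position
    ROBOT_UPTIME = 0.9           # assumed availability after charging and maintenance

    covered_hours = HOURS_PER_WEEK * ROBOT_UPTIME
    positions_replaced = covered_hours / HOURS_PER_OPERATOR
    print(f"One robot covers roughly {positions_replaced:.1f} shift positions per week")
    # With these assumptions, a single robot stands in for close to four positions.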

However, company leaders emphasize that the goal of the initiative is to improve efficiency rather than close facilities.

Robotics and Europe’s Semiconductor Strategy

The move also intersects with broader debates about Europe’s semiconductor strategy.

Government programs such as the European Chips Act are designed primarily to support new semiconductor projects rather than upgrades to older facilities.

Industry groups are now advocating for expanded support in a potential “Chips Act 2.0”, arguing that existing production infrastructure should also receive investment.

For companies like STMicroelectronics, robotics could become a key tool for keeping older manufacturing plants economically viable while avoiding costly closures.

As semiconductor manufacturing becomes more complex and globally competitive, automation may increasingly determine whether long-established facilities remain part of the industry’s future.

Rivian-Founded Mind Robotics Secures $500 Million for Industrial AI

Mind Robotics, a startup spun out of electric vehicle maker Rivian, has raised $500 million to develop AI-powered industrial robots designed for more adaptable factory automation.

By Daniel Krauss | Edited by Kseniia Klichova Published:
RJ Scaringe, founder and CEO of Rivian and chairman of Mind Robotics, the industrial AI robotics startup that recently raised $500 million to develop next-generation factory automation systems. Photo: Rivian

A robotics startup spun out of electric vehicle manufacturer Rivian has raised $500 million to build a new generation of industrial robots powered by artificial intelligence.

The company, called Mind Robotics, announced the Series A round this week, bringing its total funding to approximately $615 million only months after its launch. The investment values the company at around $2 billion and was co-led by venture firms Accel and Andreessen Horowitz.

Mind Robotics was created in late 2025 by Rivian founder and CEO RJ Scaringe, who now serves as chairman of the robotics startup.

The company’s goal is to address one of the biggest limitations in modern factory automation: the difficulty robots have performing tasks that require dexterity, adaptability, and real-world reasoning.

A Different Approach to Industrial Robotics

Industrial robots have been used in manufacturing for decades, but most systems remain limited to highly structured tasks such as welding, assembly, or material handling.

These machines perform best when working with predictable objects and fixed production lines.

Mind Robotics is attempting to develop robots capable of operating in more dynamic manufacturing environments where parts vary, conditions change, and tasks require human-like manipulation.

The startup plans to build AI systems that allow robots to interpret their surroundings and adapt their movements in real time.

Unlike many robotics startups that are focusing on humanoid machines, Mind Robotics is taking a more traditional approach to hardware design.

Scaringe has suggested that the company’s focus is on practical factory automation rather than building robots designed to resemble humans.

Training Robots with Factory Data

One advantage the startup brings to the robotics industry is access to manufacturing data from Rivian’s electric vehicle factories.

These facilities provide a real-world environment where robotic systems can be trained and tested on production tasks.

The company aims to use this data to develop AI models that help robots understand physical interactions and perform tasks requiring precision and adaptability.

According to Mind Robotics, much of the value generated inside factories today still depends on human workers performing tasks that traditional automation cannot easily replicate.

By combining robotics hardware with AI models capable of learning from real-world data, the company hopes to automate a broader range of manufacturing activities.

A Growing Investment Wave in Physical AI

The large funding round reflects growing investor interest in robotics companies building AI-driven physical systems.

Over the past year, venture capital firms have increasingly backed startups focused on what many researchers call physical AI – systems that combine machine learning with robots operating in the real world.

Mind Robotics is part of a broader shift toward integrating artificial intelligence directly into industrial automation.

Scaringe has said the company expects to deploy significant numbers of its robots within factories before the end of the year, suggesting an aggressive timeline for moving from research to deployment.

Ties to Rivian’s Technology Ecosystem

Although Mind Robotics operates as an independent company, its relationship with Rivian could extend beyond manufacturing data.

Rivian has recently developed custom semiconductor chips designed to run autonomous driving software inside its vehicles.

Those processors could potentially be used to power robotics systems as well, creating a shared technology foundation between the two companies.

The spinout is also part of a broader pattern emerging at Rivian, which has begun launching new technology ventures alongside its core automotive business.

In 2025 the company also created another startup focused on electric mobility platforms for small cargo vehicles and e-bikes.

Together, these efforts suggest that Rivian is positioning itself not only as a vehicle manufacturer but as a broader developer of robotics and AI technologies.

For Mind Robotics, the next challenge will be proving that AI-powered robots can deliver tangible productivity gains on real factory floors.

Zoox Expands Robotaxi Testing to Phoenix and Dallas

Zoox is expanding testing of its autonomous driving system to Phoenix and Dallas while preparing to deploy its purpose-built robotaxi and integrate its service with the Uber platform.

By Rachel Whitman | Edited by Kseniia Klichova Published:
Zoox’s purpose-built robotaxi is designed for autonomous ride-hailing, featuring a bidirectional vehicle layout and face-to-face seating for passengers. Photo: Zoox

Amazon-owned autonomous vehicle company Zoox is expanding its robotaxi testing program to Phoenix, Arizona, and Dallas, Texas, as the company continues building toward commercial deployment of its purpose-built autonomous vehicles.

The expansion will introduce Zoox’s autonomous driving technology into two additional urban environments while also supporting the launch of new operational infrastructure, including fleet depots and a new operations center in Scottsdale, Arizona.

With these additions, Zoox now operates testing fleets across ten major U.S. markets, reflecting a broader effort by autonomous vehicle developers to gather real-world data across diverse driving conditions.

Testing in New Environments

The first phase of Zoox’s rollout in Phoenix and Dallas will involve a small number of retrofitted SUVs used for mapping and early testing.

These vehicles will initially be driven manually as engineers map city streets and gather environmental data. Autonomous testing will follow, with safety drivers remaining behind the wheel to intervene if necessary.

Once the company completes this phase, Zoox plans to deploy its purpose-built robotaxi vehicles in both cities.

Each location presents unique testing conditions. Phoenix offers an opportunity to evaluate sensor performance and vehicle durability in extreme heat and dusty environments, particularly on high-speed roads common in the region.

Dallas, meanwhile, provides a complex road network and more variable weather patterns, helping engineers refine how the autonomous system handles diverse driving scenarios.

A Partnership with Uber

At the same time, Zoox is expanding its distribution strategy through a new partnership with Uber.

Under a multi-year agreement, Zoox robotaxis will be integrated into Uber’s ride-hailing platform, allowing users to request autonomous rides through the Uber app in selected cities.

The first integration is expected to begin in Las Vegas later this year, followed by Los Angeles in 2027.

Zoox will continue offering rides through its own mobile application as well, effectively operating on both its proprietary platform and Uber’s global network.

The partnership reflects Uber’s strategy of collaborating with autonomous vehicle developers rather than building its own driverless technology.

Uber previously ran an in-house autonomous vehicle program but sold the division after one of its test vehicles was involved in a fatal crash in 2018. Since then, the company has shifted toward forming partnerships with technology developers.

Building the Infrastructure for Autonomous Fleets

Supporting Zoox’s growing robotaxi program is a network of facilities known as Fusion Centers.

The company is opening a third such facility in Scottsdale, Arizona, joining existing centers in Las Vegas and the San Francisco Bay Area.

These facilities function as operational command centers, coordinating autonomous fleets through teleoperations, mission control, and rider support systems.

Fusion Centers allow human operators to assist vehicles in complex scenarios, manage fleet operations, and provide customer service for passengers.

Since launching its early robotaxi service in Las Vegas and testing programs in San Francisco, Zoox says its vehicles have completed more than one million autonomous miles and transported over 300,000 passengers.

The company’s robotaxi design differs from traditional vehicles. The fully autonomous platform has no steering wheel or pedals; its bidirectional cabin features face-to-face seating intended to encourage social interaction among riders.

The Growing Robotaxi Race

Zoox’s expansion highlights the intensifying competition among companies seeking to deploy autonomous ride-hailing services.

Developers such as Waymo, Cruise, and several emerging startups are all testing driverless vehicles across multiple U.S. cities, racing to demonstrate safe and scalable operations.

For Zoox, the strategy combines purpose-built vehicles, extensive real-world testing, and partnerships with major mobility platforms.

As autonomous driving technology moves from pilot programs toward commercial deployment, cities like Phoenix and Dallas are becoming critical testing grounds for the next phase of driverless transportation.
