Tesla Expands Robotaxi Service to Dallas and Houston

Tesla has launched its driverless robotaxi service in Dallas and Houston, extending a program that began in Austin and marking the company’s most significant autonomous ride-hailing expansion to date.

By Daniel Krauss | Edited by Kseniia Klichova
A driverless electric vehicle operating as part of an autonomous ride-hailing service on urban streets. Photo: Tesla Robotaxi

Tesla has expanded its robotaxi service to Dallas and Houston, the company announced Saturday, extending a program that launched in Austin, Texas, last year. The rollout uses Model Y SUVs operating without a human driver or safety monitor in the front seat, a configuration Tesla has been working toward since first deploying the service in Austin with onboard monitors and restricted operating zones.

The expansion is the most geographically significant step yet in Tesla’s autonomous ride-hailing strategy, adding two of the largest metro areas in the U.S. to a service that also operates in parts of the San Francisco Bay Area.

Operational Details Remain Limited

Tesla announced the launches through its official robotaxi account on X, posting videos of vehicles operating in both cities alongside map images outlining service boundaries. The company did not disclose fleet size, pricing, or availability for general riders. CEO Elon Musk reposted the announcement without adding further detail.

The absence of operational specifics is consistent with how Tesla has managed the rollout to date – expanding the geographic footprint while disclosing limited data on performance, incident rates, or the regulatory approvals underpinning each new market.

The Competitive Context

Tesla’s expansion comes as the robotaxi sector broadly regains momentum. Alphabet’s Waymo has been scaling paid commercial operations in San Francisco, Los Angeles, and Phoenix, with further expansion underway. Amazon’s Zoox is also accelerating deployment of its purpose-built autonomous vehicle platform.

Tesla’s approach differs structurally from its competitors. Waymo and Zoox have relied on vehicles designed or heavily modified for autonomous operation, with extensive sensor arrays including lidar. Tesla uses a camera-based system derived from its Full Self-Driving software, applied to its existing production vehicles. That approach lowers hardware costs and allows rapid fleet scaling, but has drawn scrutiny over its safety validation methodology compared to lidar-dependent systems with longer commercial track records.

Stakes for Tesla’s Broader Strategy

Autonomous vehicles have become central to how Tesla justifies its valuation. Much of the company’s $1.3 trillion market capitalization is tied to expectations that its FSD software and robotaxi service will generate substantial recurring revenue. Musk had previously predicted that Tesla robotaxis would be operating widely across multiple U.S. metro areas by the end of 2025 – a target the company missed.

The Dallas and Houston launches represent tangible progress against that timeline, but the scale of the current deployments relative to the stated ambition remains unclear without fleet size data. How quickly Tesla can move from limited-zone launches to citywide commercial availability will be the metric that matters most for the company’s autonomous transportation thesis.

Agibot Deploys Humanoid Robots in Live Electronics Manufacturing, Eyes 100-Unit Expansion

Agibot has deployed its G2 humanoid robots at a Shanghai electronics manufacturer, reporting throughput of up to 310 units per hour and a success rate above 99%, with plans to scale to 100 robots by Q3 2026.

By Laura Bennett | Edited by Kseniia Klichova
Humanoid robots operating on an electronics manufacturing line, handling precision loading and unloading tasks at automated testing stations. Photo: AGIBOT

Agibot has deployed its G2 humanoid robots in an active production environment at Longcheer Technology, a Shanghai-based consumer electronics manufacturer. The rollout marks one of the more concrete demonstrations of humanoid robots operating within a live industrial workflow, moving the technology beyond controlled pilots into continuous factory-floor use.

The deployment comes months after Agibot announced the production of its 10,000th humanoid robot in March, a milestone the company described as a turning point in the industrialization of embodied AI.

What the Robots Are Doing

G2 units are stationed at multimedia-integrated testing stations, where they perform precision loading and unloading of devices into testing fixtures. The task demands millimeter-level placement accuracy, consistent cycle timing, and the ability to sort finished and defective units without interruption.

Agibot reports throughput of up to 310 units per hour, with individual cycle times of approximately 19 to 20 seconds per operation and a success rate exceeding 99% in continuous use. Production line integration was completed within 36 hours, and each shift produces approximately 3,000 units. The system has logged over 140 hours of cumulative continuous operation, with downtime losses below 4%.
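The reported figures can be cross-checked with simple arithmetic. The sketch below is a back-of-envelope illustration only; the mapping of one operation to one unit, and the use of the peak rate for the whole shift, are assumptions not stated by Agibot:

```python
# Back-of-envelope check on Agibot's reported G2 figures.
# Assumption (not stated in the announcement): one operation handles one unit.

CYCLE_SECONDS = 19.5          # midpoint of the reported 19-20 s per operation
REPORTED_PEAK_PER_HOUR = 310  # reported peak throughput, units/hour
UNITS_PER_SHIFT = 3000        # reported output per shift

# A single station at ~19.5 s per operation completes ~185 operations per
# hour, so the 310-unit peak presumably aggregates parallel stations or
# multi-unit cycles.
ops_per_hour_per_station = 3600 / CYCLE_SECONDS
print(round(ops_per_hour_per_station))  # ~185

# At the reported peak rate, a 3,000-unit shift runs just under 10 hours.
implied_shift_hours = UNITS_PER_SHIFT / REPORTED_PEAK_PER_HOUR
print(round(implied_shift_hours, 1))  # ~9.7
```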

The robots use multi-modal sensing – combining visual perception and spatial awareness – to identify objects and execute task sequences without custom tooling. The platform supports mixed-model production, meaning it can handle different device configurations on the same line, reducing changeover time.

The Underlying Architecture

Agibot describes its approach as a full-stack ecosystem for embodied intelligence, integrating robot hardware, AI models, and large-scale data infrastructure into a single system designed for continuous learning. Rather than executing fixed instructions, the robots are built to adapt to task and environment variations over time through a combination of simulation-based validation, reinforcement learning, and on-device inference.

“This project shows that embodied AI is no longer experimental. It is a practical, production-ready capability that can operate reliably in real industrial environments and deliver measurable economic value,” said Maoqing Yao, Partner, Senior Vice President, and President of the Embodied Business Unit at Agibot.

Scale and Next Steps

Agibot plans to expand the deployment to 100 robots by Q3 2026 and has identified automotive, semiconductors, and energy as the next target sectors. The company is also developing Genie Sim 3.0, a simulation platform designed to accelerate training of new robot behaviors before physical deployment.

The broader implication of the Longcheer deployment is not the throughput figures alone, but what they suggest about deployment speed. A 36-hour integration timeline and no custom tooling requirement lower the barrier for manufacturers evaluating humanoid robots against conventional fixed automation. Whether those numbers hold across more varied factory environments – with different layouts, device types, and production rhythms – will determine how transferable the model is at scale.


Toyota Unveils CUE7, a Lighter Basketball Robot Built on Hybrid AI Control

Toyota has introduced CUE7, the latest iteration of its basketball-shooting robot, featuring a significantly lighter frame, an inverted two-wheel base, and a hybrid control system combining reinforcement learning with model predictive control.

By Daniel Krauss | Edited by Kseniia Klichova

Toyota unveiled CUE7, the seventh generation of its basketball-shooting robot platform, on April 12. The system marks the most significant technical upgrade in the CUE series to date, with a major weight reduction, a new mobility architecture, and a hybrid AI control system that combines reinforcement learning with model predictive control.

The robot was developed by Toyota’s Frontier Research Center and signals the company’s continued investment in embodied AI research outside its traditional automotive domain.

What Changed in CUE7

The most immediate change is physical. CUE7 weighs 74 kg, down from 120 kg in its predecessor – a reduction of nearly 40%. The wheeled base has been redesigned around an inverted two-wheel structure, replacing the earlier fixed-platform approach and giving the robot greater dynamic stability during motion.

The control architecture is also new. Rather than relying on a single AI method, CUE7 uses a hybrid system that combines reinforcement learning – where the robot improves through repeated trial and feedback – with model predictive control, which uses forward simulation to plan and execute precise movements in real time. The result is a platform capable of more dynamic, fluid motion than earlier versions of the robot.
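The division of labor described here — a learned policy for broad behavior, model predictive control for precise execution — can be illustrated with a toy one-dimensional example. This is a minimal sketch under invented dynamics, not Toyota's controller: a stand-in "policy" proposes a target, and a short-horizon MPC step searches candidate accelerations against a forward-simulated model.

```python
# Toy illustration of hybrid control: a learned policy proposes a goal,
# and model predictive control (MPC) refines the motion via forward
# simulation. 1-D double-integrator dynamics; all numbers are invented.

DT = 0.1          # control timestep, seconds
HORIZON = 10      # MPC lookahead steps
CANDIDATES = [a / 10 for a in range(-20, 21)]  # accelerations in [-2, 2]

def policy_propose_target(ball_distance: float) -> float:
    # Stand-in for a learned policy: move to the observed ball distance.
    return ball_distance

def simulate(pos, vel, accel, steps):
    # Forward-simulate constant acceleration for `steps` timesteps.
    for _ in range(steps):
        vel += accel * DT
        pos += vel * DT
    return pos, vel

def mpc_step(pos, vel, target):
    # Pick the acceleration whose simulated endpoint best trades off
    # distance-to-target against terminal speed (for a gentle stop).
    def cost(a):
        p, v = simulate(pos, vel, a, HORIZON)
        return (p - target) ** 2 + 0.1 * v ** 2
    return min(CANDIDATES, key=cost)

pos, vel = 0.0, 0.0
target = policy_propose_target(ball_distance=3.0)
for _ in range(100):  # receding-horizon loop: replan every step
    a = mpc_step(pos, vel, target)
    vel += a * DT
    pos += vel * DT

print(round(pos, 2))  # settles near the 3.0 m target
```

The design point the toy captures is that the planner never commits to a full trajectory: it replans from the current state at every step, which is what lets MPC absorb disturbances the learned policy did not anticipate.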

CUE7 uses vision systems to identify the basket, estimate distance, and calculate shot trajectory. Its upper body makes deliberate postural adjustments to align the release angle before executing the shot with calibrated force.
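The trajectory step reduces to standard projectile kinematics. The sketch below is illustrative only — the distance, relative height, and launch angle are assumed values, not Toyota's — and solves for the release speed that carries the ball from the release point to the rim:

```python
import math

# Release speed needed to hit a target at horizontal distance d and
# relative height h, launched at angle theta (ideal projectile model,
# no air drag). All parameter values are assumptions, not Toyota's.

G = 9.81  # gravity, m/s^2

def release_speed(d: float, h: float, theta_deg: float) -> float:
    theta = math.radians(theta_deg)
    # From y = x*tan(theta) - g*x^2 / (2*v^2*cos^2(theta)), solved for v.
    denom = 2 * math.cos(theta) ** 2 * (d * math.tan(theta) - h)
    if denom <= 0:
        raise ValueError("launch angle too flat to reach the target height")
    return math.sqrt(G * d * d / denom)

# Free-throw-like geometry: 4.6 m to the rim, rim 1.0 m above release.
v = release_speed(d=4.6, h=1.0, theta_deg=50.0)
print(round(v, 2))  # ~7.5 m/s
```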

A Platform Built Over Years

The CUE project began as an internal employee initiative before becoming a dedicated research program. CUE3 set a Guinness World Record in 2019 by completing 2,020 consecutive free throws. CUE6 extended the platform’s range, completing a 24.55-meter shot during a record attempt.

Each iteration has expanded the robot’s operational scope. Early versions were stationary shooters. Later models introduced mobility, ball retrieval, and dribbling. CUE7 advances the underlying control and sensing systems rather than adding new physical tasks, consolidating the platform’s technical foundation.

The Broader Purpose

Toyota uses the CUE series as a testbed for capabilities with direct relevance to general robotics: vision-based target acquisition, real-time trajectory planning, precise force control, and repeatable physical execution under variable conditions. Basketball provides a structured environment in which each of these capabilities can be isolated, measured, and improved.

The platform reflects a wider industry pattern in which automakers are applying their manufacturing and systems engineering expertise to humanoid and semi-humanoid robotics. Toyota has not announced commercial applications for CUE7, and the robot remains a research demonstration. The hybrid control architecture, however, represents a technical approach with potential applicability beyond sport – particularly in industrial and service environments where consistent, adaptive physical performance is required.


SoftBank Robotics America and Matternet Partner to Scale Autonomous Drone Delivery

SoftBank Robotics America and Matternet have signed a strategic partnership to accelerate autonomous drone delivery deployments across the U.S., targeting healthcare and other industries where speed and reliability are critical.

By Rachel Whitman | Edited by Kseniia Klichova
An autonomous delivery drone operating over an urban environment as part of a commercial logistics network. Photo: Matternet

SoftBank Robotics America has signed a strategic partnership with Matternet, a drone delivery company, to accelerate the deployment of autonomous aerial last-mile delivery in the U.S. and other key markets. The deal combines SoftBank Robotics America’s role as a physical AI integrator with Matternet’s FAA-certified drone platform, targeting enterprise operators in healthcare, commerce, and industrial logistics.

Last-mile delivery continues to face structural pressure from labor shortages, rising costs, and urban congestion. Autonomous aerial delivery is emerging as a cost-competitive alternative to traditional ground-based methods, particularly at scale.

What Each Company Brings

Matternet has spent more than a decade building commercial drone delivery infrastructure. The company is the first in the industry to achieve both FAA Type Certification and Production Certification, and its technology has enabled tens of thousands of commercial flights in urban and suburban environments across the U.S. and Europe. Its M2 drone and software platform are already deployed through partnerships with UPS and Ameriflight.

SoftBank Robotics America operates as an integrator – its role is to take proven autonomous technologies and embed them into real-world operational environments at scale. The company works across senior living, hospitality, aviation, facilities management, and commercial cleaning, and has built a track record of translating robotics pilots into production deployments.

Brady Watkins, President and GM of SoftBank Robotics America, said:

“The challenge is not the technology, but rather operationalizing the technology such that it produces consistent measurable outcomes.”

Healthcare as the Initial Focus

The partnership’s initial emphasis is on healthcare, where delivery speed and reliability directly affect patient outcomes. Medical supplies, lab samples, and pharmaceuticals represent a natural fit for autonomous aerial delivery – time-sensitive, high-value, and moving between fixed points such as hospitals, labs, and pharmacies.

Katya Akudovich, Vice President of New Ventures at SoftBank Robotics America, said:

“By combining Matternet’s technology with our global commercialization capability and experience, we are creating a powerful partnership to bring the benefits of autonomous drone delivery into day-to-day operations for vertical markets such as healthcare where speed and reliability are mission critical.”

Scaling the Infrastructure

Andreas Raptopoulos, founder and CEO of Matternet, framed the partnership as part of a broader shift toward autonomous logistics networks. He said:

“Our partnership with SoftBank Robotics America will accelerate deployment of our technology and help build the autonomous delivery infrastructure for healthcare, commerce, and industry.”

The partnership does not introduce new drone hardware. Instead, it focuses on the integration layer – the processes, support structures, and operational frameworks needed to move autonomous drone delivery from isolated pilots to consistent, large-scale networks. That focus on operationalization rather than invention reflects where the autonomous delivery industry is broadly: the technology is sufficiently mature, but deployment at enterprise scale remains the central challenge. The companies did not disclose financial terms of the agreement.


Accenture Invests in General Robotics to Build a Unified AI Layer for Industrial Robots

Accenture Ventures has invested in General Robotics, whose GRID platform connects robots from multiple manufacturers under a single AI intelligence layer, targeting scaled automation in factories and warehouses.

By Daniel Krauss | Edited by Kseniia Klichova
Industrial robots operating on a factory floor managed by a unified AI orchestration platform. Photo: Accenture

Accenture has invested in General Robotics, a startup building a unified AI intelligence platform for industrial robots, through its Accenture Ventures arm. The two companies will also partner to help manufacturers, logistics operators, and other asset-intensive industries deploy autonomous robotic systems at scale. Financial terms were not disclosed.

The deal reflects a wider strategic push by Accenture to move beyond software consulting and into the physical infrastructure of AI-driven automation.

The Problem GRID Is Designed to Solve

Most factories operate robots from multiple manufacturers, each running its own software stack, programming language, and integration requirements. Scaling automation across a multi-vendor fleet is expensive and slow, and the cost has historically limited full deployment to only the largest industrial operators.

General Robotics addresses this with GRID, a platform that sits above the hardware layer and connects robots from more than 40 manufacturers – including FANUC, Flexiv, Ghost Robotics, and Galaxea – under a single orchestration framework. Rather than programming each machine individually, GRID offers modular, reusable AI skills deployable across different hardware through cloud-based orchestration, simulation-based training, and full data sovereignty for enterprise customers.
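The "skills above the hardware layer" idea can be sketched as a thin abstraction: each vendor adapter exposes the same minimal interface, and a reusable skill is written once against that interface. This is an illustrative pattern, not GRID's actual API — the class and method names below are invented.

```python
# Illustrative sketch of a hardware-agnostic skill layer (not GRID's API):
# vendor adapters implement one small interface, and skills are written
# once against it rather than per manufacturer.

from abc import ABC, abstractmethod

class RobotAdapter(ABC):
    """Minimal interface every vendor adapter implements."""

    @abstractmethod
    def move_to(self, pose: tuple) -> None: ...

    @abstractmethod
    def grip(self, closed: bool) -> None: ...

class VendorAArm(RobotAdapter):
    def __init__(self):
        self.log = []
    def move_to(self, pose):
        self.log.append(("move", pose))   # would call vendor A's SDK here
    def grip(self, closed):
        self.log.append(("grip", closed))

class VendorBArm(RobotAdapter):
    def __init__(self):
        self.log = []
    def move_to(self, pose):
        self.log.append(("move", pose))   # would call vendor B's SDK here
    def grip(self, closed):
        self.log.append(("grip", closed))

def pick_and_place(robot: RobotAdapter, src, dst):
    """A reusable 'skill': identical across any adapter."""
    robot.move_to(src)
    robot.grip(True)
    robot.move_to(dst)
    robot.grip(False)

# The same skill runs unchanged against both vendors' adapters.
for arm in (VendorAArm(), VendorBArm()):
    pick_and_place(arm, src=(0.1, 0.2, 0.3), dst=(0.4, 0.5, 0.3))
    print(len(arm.log))  # 4 actions logged per run
```

The commercial claim rests on this seam: once the adapter exists, every skill built on the interface is immediately portable to a new manufacturer's hardware.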

“While robotics hardware and AI models advance at a rapid pace, real-world impact is constrained by the lack of a unified intelligence infrastructure,” said Ashish Kapoor, CEO and co-founder of General Robotics. Kapoor previously served as general manager of autonomous systems and robotics research at Microsoft, where he created AirSim, a widely used open-source simulator for training autonomous vehicles and drones.

Accenture’s Physical AI Strategy

The investment extends an infrastructure position Accenture has been building for over a year. The company launched its Physical AI Orchestrator in October 2025, a system that uses NVIDIA Omniverse libraries and the NVIDIA Mega Blueprint to coordinate robotic and autonomous systems in industrial settings. GRID integrates NVIDIA Isaac Sim, allowing manufacturers to train robotic AI skills in digital twins before deploying them on physical hardware – a capability that aligns directly with Accenture’s existing toolchain.

Where Accenture’s Physical AI Orchestrator handles coordination at the facility level, GRID handles robot-level AI – the skills, perception, and decision-making that individual machines need to perform complex tasks autonomously. Together, the two layers form a more complete stack for enterprise robotics deployment.

Prior investments in Sanctuary AI and a partnership with Schaeffler for industrial humanoid robots in automotive manufacturing point to a consistent thesis: Accenture is positioning itself as the primary integrator for physical AI at the enterprise level.

Scale and Market Context

“Piloting robotic systems takes too long, is expensive, and often not scalable and repeatable across a network of facilities,” said Prasad Satyavolu, Accenture’s global lead for manufacturing and operations. The stated goal of the partnership is to compress that deployment cycle by delivering an enterprise-grade robotics intelligence and orchestration layer that clients can apply across multiple facilities.

The physical AI market is projected to grow from roughly $1.5 billion in 2026 to more than $15 billion by 2032. A Deloitte survey found that 58% of global business leaders are already using some form of physical AI, though scaled deployment remains concentrated in automotive, electronics, and logistics. General Robotics remains an early-stage company without publicly reported revenue figures, and the broader challenge – persuading manufacturers to adopt an independent orchestration layer over proprietary vendor platforms – will require demonstrated performance on working factory floors, not just in simulation.


Grab Introduces Carri Robot to Speed Up Food Delivery in Southeast Asia

Grab unveiled an AI-powered delivery robot called Carri at its annual GrabX 2026 event, designed to reduce the time drivers spend navigating malls and office buildings to collect orders.

By Laura Bennett | Edited by Kseniia Klichova
A delivery robot navigating an indoor commercial space to collect food orders for handoff to a human driver. Photo: Grab X

Grab unveiled an AI-powered delivery robot called Carri at GrabX 2026, the company’s annual product event held in Jakarta this month. The announcement is part of a broader push by the Singapore-based super app to embed physical automation into a platform that has long depended entirely on gig workers.

Anthony Tan, Grab’s CEO and co-founder, said the company is building an Intelligence Layer – AI infrastructure fueled by real-world, real-time signals – that sits underneath every feature and innovation in the app. That layer now extends into hardware.

Carri and the Indoor Delivery Problem

Tan said delivery partners currently lose around 10% of their earning time navigating large malls or waiting for customers to come down from office buildings. Carri is designed to absorb that idle time by handling restaurant retrieval and handoff, freeing drivers to stay on the road.

The robot is built for both indoor and outdoor environments, equipped with lidar sensors and cameras to navigate crowds and avoid obstacles. It features secure storage compartments that open only for the specific user assigned to a given order.

Carri is still in the development and testing phase, and the pricing or cost model for deployment has not yet been determined. Tan framed the move as a natural extension of the platform’s AI capabilities into the physical world. “We are moving into hardware to improve the messy physical parts of the job that software alone cannot fix,” he said.

13 New Features Across Three User Groups

GrabX 2026 introduced 13 AI-powered features designed around three core pillars: local life, effortless travel, and business empowerment.

For consumers, Grab introduced Group Ride for shared fares, GrabMore for multi-merchant orders under a single delivery fee, and a Grab AI Assistant that handles food, shopping, and bookings. GrabMaps and a Cash Loan product round out the consumer-facing additions.

For travelers, Grab unveiled GrabStays for hotel bookings, Discover by Grab for AI-curated dining recommendations, and GrabPay for Travel to enable cross-border QR payments across Southeast Asia.

For merchants and drivers, Grab announced a Virtual Store Manager using CCTV hardware for AI-powered monitoring, a Cloud Printer to automate order handling, and Tap to Pay to turn smartphones into contactless payment terminals. A Driver AI Assistant provides hands-free route and earnings guidance.

Monetization and the Road Ahead

Tan said Grab is also planning to extend its intelligence layer into autonomous vehicles and CCTV cameras, signaling that hardware will become a structural component of the business rather than a peripheral experiment.

On the revenue side, merchant hardware tools including cloud printers and virtual store management are set to move from free trials to a subscription model. A recent Barclays analysis estimated that widespread use of robots and drones in food delivery could reduce per-order costs to as little as $1 – a threshold that, if reached, would reshape the unit economics of every major delivery platform in the region.


Irrigation Robot Maps Water Needs Tree by Tree, Challenging Farm Automation Norms

A field robot that maps soil moisture at the level of individual trees could reshape irrigation practices, reducing water use and improving crop health.

By Rachel Whitman | Edited by Kseniia Klichova
A mobile field robot scans soil conditions in orchards, generating detailed maps that guide precise irrigation at the level of individual trees. Photo: UCR

A mobile irrigation robot developed by researchers at the University of California, Riverside is challenging one of agriculture’s most persistent assumptions: that crops in the same field require the same amount of water.

By mapping soil moisture at the level of individual trees, the system reveals significant variation even between neighboring plants, suggesting that conventional irrigation methods may be systematically inefficient.

The findings point to a broader shift in agricultural robotics, where mobile sensing systems are replacing static infrastructure to deliver more granular, data-driven decisions.

From Field Averages to Tree-Level Precision

Traditional irrigation relies on fixed sensors and uniform watering schedules, operating on the assumption that conditions are relatively consistent across a field. The robot developed at UCR takes a different approach, scanning soil conditions continuously as it moves through orchards.

In field trials across citrus groves in California, the system detected sharp differences in water availability between adjacent trees, despite identical irrigation inputs. These variations were linked to differences in soil composition, where finer soils retained water more effectively than sandier patches.

The robot measures electrical conductivity in the soil – a proxy for moisture – and combines those readings with calibration data from a limited number of ground sensors. The result is a detailed moisture map that identifies both under-watered and over-watered areas.
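The calibration step the researchers describe — anchoring mobile conductivity readings to a handful of fixed moisture sensors — amounts to fitting a simple mapping. The sketch below is a toy linear model with synthetic numbers; the real UCR pipeline is not public, and a linear relationship is an assumption:

```python
import numpy as np

# Toy version of the calibration described in the article: map the robot's
# soil electrical-conductivity (EC) readings to moisture using a few
# co-located ground-truth sensors. Synthetic data throughout.

rng = np.random.default_rng(0)

# Pretend ground truth: moisture = 0.04 * EC + 0.05, plus sensor noise.
ec_at_sensors = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # robot EC readings
moisture_truth = 0.04 * ec_at_sensors + 0.05
moisture_meas = moisture_truth + rng.normal(0, 0.005, size=5)

# Least-squares fit from only five calibration points.
slope, intercept = np.polyfit(ec_at_sensors, moisture_meas, deg=1)

# Apply the fitted map to a dense EC transect to get a moisture map.
ec_transect = np.linspace(1.0, 11.0, 50)
moisture_map = slope * ec_transect + intercept

print(round(slope, 3))  # recovers a slope close to the true 0.04
```

The point of the sketch is the sample efficiency: a dense moisture map is derived from continuous EC scanning plus a handful of fixed sensors, which matches the article's observation that deployment may not require dense sensor networks.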

This level of resolution allows irrigation to be adjusted at a much finer scale, turning what has traditionally been a field-wide estimate into a localized decision.

Reducing Waste and Managing Risk

The implications extend beyond water conservation. Overwatering can damage crops by depriving roots of oxygen and increasing susceptibility to disease, while also washing fertilizers deeper into the soil, where they can no longer be absorbed.

By identifying these imbalances, the system enables growers to maintain soil moisture within a narrower, optimal range. In testing, the model achieved high accuracy with relatively few calibration points, suggesting that widespread deployment may not require dense sensor networks.

This efficiency is significant in an industry where the cost of installing and maintaining sensors can limit adoption of precision agriculture technologies.

The approach also aligns with broader pressures facing agriculture, particularly in water-constrained regions. As drought conditions intensify, growers are increasingly forced to either reduce production or find ways to use water more efficiently.

Robotics Expands Beyond Automation

Unlike many agricultural robots focused on harvesting or crop monitoring, this system highlights a different role for robotics: acting as a mobile data layer that enhances decision-making rather than directly performing physical tasks.

The platform used in the study is capable of autonomous navigation, although it was manually operated during trials. Future versions are expected to operate independently, covering larger areas and integrating more closely with irrigation systems.

Several challenges remain before commercial deployment, including adapting the system to different crops, soil types, and environmental conditions. The relationship between surface measurements and deeper soil moisture also requires further refinement.

The development reflects a broader trend in robotics toward combining mobility with sensing and AI-driven analysis. By moving through environments rather than relying on fixed points, robots can capture variability that static systems miss.

In agriculture, where small differences in soil conditions can have large impacts on yield and resource use, that shift may prove particularly consequential.

If validated at scale, tree-level irrigation mapping could redefine how farms manage water – not as a uniform input, but as a variable resource tailored to each plant.


Siemens, NVIDIA and Humanoid Test Factory-Ready Humanoid Robot in Live Production

Siemens, NVIDIA and Humanoid have tested a humanoid robot in a live factory environment, signaling progress toward industrial-scale physical AI deployment.

By Daniel Krauss | Edited by Kseniia Klichova
The HMND 01 Alpha humanoid robot operates inside a Siemens factory, performing autonomous logistics tasks as part of a physical AI deployment. Photo: Siemens

Siemens, NVIDIA and UK-based Humanoid have jointly deployed a humanoid robot inside a live manufacturing environment, marking one of the clearest signals yet that physical AI is moving beyond controlled demonstrations and into production settings.

The companies confirmed that Humanoid’s HMND 01 Alpha robot has been tested at a Siemens electronics factory in Erlangen, where it performed autonomous logistics tasks as part of ongoing operations. The deployment is part of a broader effort to build fully AI-driven, adaptive manufacturing systems.

While humanoid robots have been widely showcased in labs and pilot programs, this test stands out for meeting defined industrial performance thresholds in a real facility.

From Demonstration to Measurable Output

In the Erlangen deployment, the HMND 01 Alpha was assigned tote-handling tasks – picking, transporting, and placing containers within the factory workflow. According to the companies, the robot achieved throughput of around 60 operations per hour, maintained uptime beyond a full shift, and delivered pick-and-place success rates exceeding 90%.

These metrics place the system closer to practical utility than many earlier humanoid demonstrations, which have often focused on mobility or isolated manipulation tasks rather than sustained operational performance.

The robot’s design reflects this shift. Instead of a purely bipedal system, the HMND 01 uses a wheeled base combined with dual-arm manipulation, prioritizing stability and efficiency over human-like locomotion. This hybrid approach suggests that early industrial humanoids may diverge from human form where it improves performance.

The Stack Behind Physical AI

The deployment underscores the importance of integration across multiple layers of the robotics stack. While the robot itself executes tasks, its performance depends on a combination of simulation, AI models, and industrial control systems.

NVIDIA provides the underlying AI infrastructure, including edge computing hardware and simulation tools used to train and optimize the robot’s behavior before deployment. This “simulation-first” approach has significantly reduced development timelines, allowing the system to move from design to operational testing in months rather than years.

Siemens, meanwhile, contributes the industrial backbone through its Xcelerator platform, which connects the robot to factory systems, enabling real-time coordination with equipment, workflows, and human operators. Without this level of integration, even advanced robots would remain isolated within the production environment.

Together, these components form what the companies describe as a full-stack approach to physical AI – combining perception, reasoning, and execution within a unified operational framework.

A Path to Adaptive Manufacturing

The broader goal of the collaboration is to create factories that can adapt dynamically to changing conditions, rather than relying on fixed automation systems. In this model, robots are not programmed for single tasks but can be reassigned as production needs evolve.

This flexibility addresses a longstanding limitation in industrial automation, where reconfiguring production lines can be costly and time-consuming. By contrast, AI-driven systems can adjust workflows through software, reducing the need for physical reengineering.

The deployment also reflects a response to labor shortages and increasing operational complexity in manufacturing. Humanoid robots, particularly those capable of working in human-designed environments, are positioned as a way to augment existing workforces rather than replace them outright.

The Erlangen test does not yet represent large-scale adoption, but it demonstrates that humanoid robots can meet the performance and reliability thresholds required for real industrial tasks.

More broadly, it highlights a shift in how robotics is being deployed: not as standalone machines, but as part of integrated systems that combine AI, simulation, and industrial infrastructure.

As physical AI continues to mature, the question is less whether humanoid robots can operate in factories, and more how quickly these systems can scale across production networks.

Skild AI Acquires Zebra Robotics Unit to Build Unified Warehouse Automation Layer

Skild AI has acquired Zebra Technologies’ robotics automation business, aiming to unify fragmented warehouse systems under a single AI-driven control layer.

By Laura Bennett | Edited by Kseniia Klichova
Skild AI is combining its general-purpose robotics model with Zebra’s orchestration platform to coordinate diverse robot fleets across warehouse operations. Photo: Skild AI

Skild AI has acquired the robotics automation business of Zebra Technologies, a move that signals a shift toward unified control systems for warehouse robotics rather than isolated deployments.

The deal includes Zebra’s Symmetry Fulfillment platform, a system designed to coordinate fleets of robots and human workers in logistics environments. By combining this orchestration layer with Skild AI’s general-purpose robotics model, the company is aiming to address one of the most persistent challenges in automation: fragmentation across hardware, software, and tasks.

The acquisition positions Skild AI to move beyond model development into full-stack deployment, where AI systems not only control individual robots but manage entire warehouse operations.

From Task-Specific Automation to Generalized Control

Warehouse robotics has traditionally been built around specialized systems, with different robots programmed for picking, transport, or inspection. These systems often operate independently, requiring significant integration effort and limiting flexibility.

Skild AI’s approach centers on what it calls an “omnibodied” model, designed to operate across different robot types without being tailored to a specific form factor. In principle, this allows the same AI system to control humanoid robots, mobile platforms, and robotic arms without retraining for each configuration.

The addition of Zebra’s orchestration software extends this capability from individual robots to coordinated fleets. The Symmetry platform enables real-time task allocation, workflow management, and human-robot interaction, providing the infrastructure needed to deploy heterogeneous systems in live environments.

Together, the two technologies suggest a shift from programming robots individually to managing automation as a unified system.

Orchestrating Mixed Fleets at Scale

The combined platform is intended to support a wide range of robotic systems within a single warehouse. This includes autonomous mobile robots for material transport, robotic arms for packing, and potentially humanoid systems for more complex manipulation tasks.

Such an approach reflects the operational reality of modern logistics, where no single robot type can handle all tasks efficiently. Instead, performance depends on coordination between different systems and their integration with human workers.

By embedding AI at the orchestration level, Skild AI is attempting to create a layer that can dynamically assign tasks, optimize workflows, and adapt to changing conditions without requiring extensive reprogramming.
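The orchestration layer described above can be illustrated with a toy dispatcher. The sketch below is purely illustrative: the robot names, skill sets, and greedy first-fit policy are assumptions for this example, not Skild AI's or Zebra's actual scheduler, which would also weigh factors such as travel time, battery state, and workflow priority.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    skills: set          # task types this body can perform, e.g. {"pack"}
    busy: bool = False

def assign(tasks, fleet):
    """Greedy first-fit dispatch: give each task to the first idle robot
    that has the required skill. Unassignable tasks are simply skipped."""
    plan = {}
    for task in tasks:
        for robot in fleet:
            if not robot.busy and task in robot.skills:
                plan[task] = robot.name
                robot.busy = True
                break
    return plan
```

The point of an "omnibodied" control layer is visible even in this toy: the dispatcher reasons about skills, not body types, so the same interface can front an arm, a mobile platform, or a humanoid.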

This model also creates a feedback loop: data collected from deployments can be used to improve the underlying AI system, potentially increasing performance across all environments where it is deployed.

A Push Toward End-to-End Automation

The acquisition highlights a broader industry trend toward end-to-end automation platforms. Rather than selling individual robots or software components, companies are increasingly positioning themselves as providers of complete operational systems.

This shift is driven in part by the limitations of current approaches. Many warehouses still require significant manual configuration to integrate different automation tools, and retrofitting facilities to accommodate specific robots can be costly and disruptive.

Skild AI’s strategy suggests an alternative path, where existing warehouses are adapted through software and orchestration rather than physical redesign. By combining a general-purpose AI model with a proven coordination platform, the company aims to reduce the complexity of deploying automation at scale.

The approach also aligns with efforts by companies such as Nvidia to build infrastructure for physical AI, where simulation, data, and control systems are integrated into cohesive platforms.

The success of this strategy will depend on whether a single AI layer can reliably manage diverse robotic systems in complex, real-world environments. While the concept of “any robot, any task” remains ambitious, the integration of orchestration and intelligence represents a step toward more flexible and scalable automation.

As logistics operators seek to increase efficiency without overhauling existing infrastructure, the ability to coordinate mixed fleets of robots may become a defining feature of next-generation warehouse systems.


Humanoid Robot Chasing Wild Boars in Warsaw Highlights Real-World Deployment Shift

A viral humanoid robot chasing wild boars in Warsaw has drawn attention to the rapid global spread of Chinese robotics hardware.

By Daniel Krauss | Edited by Kseniia Klichova

A humanoid robot chasing wild boars through a parking lot in Warsaw is not an obvious signal of industry change. But the viral footage, widely shared across social media, offers a glimpse into a deeper shift in the global robotics landscape.

The robot, known locally as “Edward”, is built on hardware from Unitree Robotics and adapted by a Polish team at MERA Robotics. While the scene itself borders on spectacle, the underlying model – combining Chinese manufacturing with local software customization – is becoming an increasingly common pathway for deploying humanoid systems outside their country of origin.

From Viral Moment to Deployment Model

Edward’s popularity stems from its unexpected public appearances, including the now widely circulated incident in which it pursued wild boars in an urban setting. But beyond the novelty, the robot represents a practical approach to deploying humanoid technology.

Rather than developing systems entirely in-house, MERA Robotics has integrated Chinese-built hardware with its own operating software, tailoring the platform for local use cases. This hybrid model allows smaller companies to bypass the high costs and long timelines associated with building complete humanoid systems from scratch.

According to MERA co-founder Radoslaw Grzelaczyk, this approach reflects a broader trend. After studying robotics commercialization efforts in China, his team concluded that Chinese manufacturers offer a combination of availability, performance, and pricing that is difficult to match elsewhere.

The result is a growing ecosystem in which hardware is sourced globally, while software and applications are developed locally.

China’s Cost Advantage Extends Abroad

The Warsaw example highlights a structural advantage that Chinese robotics companies have begun to establish. Firms such as Unitree are scaling production and reducing costs at a pace that is enabling international adoption, even in markets traditionally dominated by Western technology providers.

Grzelaczyk estimates that China may be up to two years ahead of other regions in humanoid robotics development, particularly in terms of commercialization. This lead is not only technological but also economic, as lower-cost systems make experimentation and deployment more accessible.

This dynamic is already shaping global partnerships. European firms are increasingly importing humanoid robots and adapting them for regional markets, rather than attempting to compete directly on hardware manufacturing.

MERA Robotics, for example, plans to import around 100 humanoid units in the near term, using them as a foundation for locally developed applications.

Early Use Cases Remain Unclear

Despite growing visibility, the practical role of humanoid robots in everyday environments remains uncertain. Edward’s viral moment illustrates both the potential and the ambiguity of current deployments.

On one hand, the robot demonstrates mobility, autonomy, and the ability to operate in unstructured outdoor environments. On the other, the task itself – chasing animals in a parking lot – underscores how far the technology still is from clearly defined, scalable applications.

This gap between capability and use case is a recurring theme in the humanoid robotics sector. While hardware performance continues to improve, identifying consistent, economically viable roles for these systems remains an open challenge.

At the same time, public demonstrations and viral content are playing an increasing role in shaping perception and interest. Visibility, even in unconventional scenarios, may help accelerate experimentation and adoption.

The Warsaw incident may be remembered less for the robot’s actions and more for what it represents: a globalizing robotics industry where hardware, software, and applications are increasingly decoupled.

As Chinese manufacturers expand their reach and local developers build on top of their platforms, humanoid robots are beginning to move from controlled demonstrations into everyday environments – even if their purpose is still evolving.


Boston Dynamics Integrates Google Gemini into Spot for Industrial Inspection

Boston Dynamics has integrated Google’s Gemini robotics model into its Spot platform, enhancing reasoning and inspection capabilities in industrial environments.

By Rachel Whitman | Edited by Kseniia Klichova
Boston Dynamics’ Spot robot now uses Google Gemini-powered AI to analyze industrial environments, improving inspection accuracy and enabling higher-level reasoning. Photo: Boston Dynamics

Boston Dynamics has integrated a new generation of AI models from Google into its industrial inspection platform, marking a step toward more autonomous and context-aware robotics in real-world environments.

The update brings Google’s Gemini and Gemini Robotics-ER 1.6 models into Boston Dynamics’ Orbit AIVI-Learning system, which powers inspection workflows for robots such as Spot. The integration reflects a broader shift in robotics toward combining physical systems with advanced reasoning models capable of interpreting complex environments and making decisions in real time.

The rollout is already live for existing AIVI-Learning customers, with the company positioning the upgrade as a foundational improvement in how robots understand and monitor industrial sites.

From Detection to Interpretation

Industrial inspection has traditionally relied on rule-based systems that identify predefined objects or anomalies. The integration of Gemini introduces a different approach, where robots can analyze scenes more holistically and reason about what they observe.

Using the updated system, Spot can perform tasks such as reading gauges, assessing fluid levels, counting materials, and identifying safety hazards like spills or debris. These capabilities extend beyond simple detection, requiring the robot to interpret visual signals and determine their operational significance.

This shift is particularly important in environments where conditions are dynamic and difficult to model in advance. Rather than relying on static rules, the system can adapt to new scenarios, enabling broader deployment across facilities with varying layouts and equipment.

The addition of “transparent reasoning” features also allows operators to review how the system arrives at its conclusions, offering greater visibility into AI-driven decisions – a requirement that is becoming increasingly important in industrial settings.

Continuous Learning in Live Environments

A defining feature of the updated platform is its ability to improve over time through continuous data collection and model updates. The system operates as a cloud-connected service, allowing performance improvements to be deployed without interrupting operations.

This “zero-downtime” update model reflects a shift toward treating robotics systems as evolving software platforms rather than static hardware installations. As new data is collected from deployed robots, the models can be refined to better understand specific environments and use cases.

The approach, however, also introduces new considerations around data sharing. Customers using AIVI-Learning are required to share operational data with Boston Dynamics to enable ongoing model training, highlighting the growing role of data as a core component of robotics performance.

Toward Site-Wide Intelligence

Boston Dynamics frames the integration as a move toward “site-wide intelligence”, where robots contribute to a unified understanding of industrial operations. By combining visual inspection data with higher-level reasoning, the system aims to provide insights across safety, maintenance, and logistics.

This aligns with a broader industry trend toward physical AI systems that integrate perception, reasoning, and action. Companies such as Nvidia have emphasized similar approaches, focusing on the convergence of simulation, AI models, and robotics hardware.

In practical terms, the upgraded system enables Spot to handle more complex inspection workflows, from monitoring equipment health to tracking material movement. The ability to interpret gauges and other analog instruments is particularly relevant in industries where digital integration remains incomplete.

The integration of Gemini into Boston Dynamics’ inspection platform highlights how quickly robotics is evolving from task-specific automation to more generalized, intelligent systems. By embedding reasoning capabilities directly into deployed robots, companies are beginning to close the gap between perception and decision-making.

The remaining challenge lies in scaling these systems across diverse environments while maintaining reliability and trust. As robots take on more responsibility in industrial settings, their ability to explain and justify decisions may become as important as their technical performance.


Google Advances Embodied AI with Gemini Robotics-ER Model

Google has introduced a new AI model that improves how robots understand, plan, and act in real-world environments, marking progress in embodied reasoning.

By Daniel Krauss | Edited by Kseniia Klichova
Google’s Gemini Robotics ER model enables robots to interpret environments, plan actions, and complete tasks with improved spatial awareness and reasoning. Photo: Google

Google has introduced a new AI model designed to improve how robots understand and operate in real-world environments, targeting one of the most persistent limitations in robotics: the ability to reason beyond predefined instructions.

The model, Gemini Robotics-ER 1.6, focuses on what researchers describe as embodied reasoning – the capacity for machines to interpret visual inputs, plan sequences of actions, and determine when a task has been successfully completed. The update reflects a broader shift in robotics from systems that execute commands to those that can make context-aware decisions in dynamic settings.

The model is being made available to developers through Google’s AI tooling ecosystem, positioning it as part of a growing effort to standardize software layers for physical AI.

Moving from Perception to Reasoning

Robotics systems have historically relied on separate modules for perception, planning, and control, often requiring extensive engineering to connect them. Gemini Robotics-ER 1.6 attempts to unify these functions, allowing robots to process visual information and translate it directly into action.

The model improves spatial reasoning, enabling robots to identify objects, understand their relationships, and break tasks into smaller steps. It can also track objects across multiple viewpoints, combining inputs from different cameras to build a more complete understanding of an environment.

This multi-view capability is particularly relevant in real-world settings, where occlusion, clutter, and changing conditions can limit the effectiveness of single-camera systems. By integrating multiple perspectives, robots can maintain situational awareness even when parts of a scene are temporarily hidden.

Another key advancement is success detection. The model allows robots to evaluate whether a task has been completed correctly, reducing reliance on external validation or rigid programming. This is a critical requirement for autonomous operation, particularly in environments where tasks may need to be repeated or adjusted in real time.
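A control loop built around such a success check might take the shape sketched below. This is a hypothetical illustration, not Google's API: execute and check stand in for the robot's action policy and the model's task-completion judgment, and the retry policy is an assumption.

```python
def run_with_success_check(execute, check, max_attempts=3):
    """Run a task, ask the success detector whether it worked, and retry
    a bounded number of times before escalating to an operator."""
    for attempt in range(1, max_attempts + 1):
        result = execute()          # perform the physical action
        if check(result):           # the model judges task completion
            return attempt          # how many tries the task took
    raise RuntimeError("task not completed after retries")
```

The bounded retry is the practical consequence of success detection: without a self-assessment signal, a robot must either assume success or wait for external validation after every action.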

Interpreting the Physical World

One of the more practical capabilities introduced in the model is the ability to read instruments such as gauges, meters, and digital displays. This function is particularly relevant for industrial and inspection applications, where robots must interpret physical indicators rather than purely digital data.

In collaboration with Boston Dynamics, the system has been applied to robots like Spot, which are used for facility monitoring. The model can analyze visual inputs, identify key components such as needles or numerical readouts, and calculate values with a high degree of accuracy.
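For an analog dial, the final step described above, turning a detected needle into a number, reduces to linear interpolation between the dial's end marks. The function and its parameters are illustrative assumptions, not Boston Dynamics' or Google's implementation.

```python
def read_gauge(needle_deg, min_deg, max_deg, min_val, max_val):
    """Map a detected needle angle onto the dial's value range.

    Assumes a vision model has already localized the needle and the
    zero / full-scale marks; a nonlinear dial would need a calibration
    curve instead of this straight-line mapping.
    """
    frac = (needle_deg - min_deg) / (max_deg - min_deg)
    frac = max(0.0, min(1.0, frac))  # clamp readings to the dial face
    return min_val + frac * (max_val - min_val)
```

A gauge sweeping from 45° to 315° with a 0–100 scale, for example, would read one third of full scale when the needle sits at 135°.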

Reported improvements in instrument reading performance suggest a significant step forward. Accuracy has increased from earlier levels of around 23% to over 90% in some scenarios, indicating that robots are becoming more capable of handling tasks that require precise interpretation of real-world signals.

The model also incorporates safety-aware reasoning, allowing robots to identify potential hazards and avoid unsafe interactions. This reflects an increasing emphasis on aligning robotic behavior with physical constraints, particularly as systems move into environments shared with humans.

Building a Software Layer for Physical AI

The release of Gemini Robotics-ER 1.6 highlights a broader trend toward treating robotics as a software problem as much as a hardware one. As companies race to develop humanoid and autonomous systems, the ability to generalize across tasks and environments is becoming a key differentiator.

Efforts by companies such as Nvidia and others have focused on simulation and training infrastructure, while Google’s approach emphasizes reasoning and decision-making at runtime. Together, these developments point toward a layered architecture for physical AI, where perception, reasoning, and control are increasingly integrated.

The remaining challenge is translating these capabilities into reliable real-world performance at scale. While models like Gemini Robotics-ER 1.6 demonstrate significant progress in controlled evaluations, deployment in complex environments will require further advances in robustness, data integration, and system design.

Google’s latest model suggests that robotics is entering a phase where intelligence is defined less by isolated capabilities and more by the ability to connect perception, reasoning, and action. As embodied AI systems become more capable of interpreting and responding to the physical world, the boundary between digital intelligence and physical execution continues to narrow.

The extent to which this translates into widespread adoption will depend on how quickly these systems can move from experimental demonstrations to dependable tools in industry and beyond.


Unitree Brings $4,000 Humanoid Robot to Global Buyers via AliExpress

Unitree is bringing its lowest-cost humanoid robot to global markets via AliExpress, signaling a shift toward early consumer adoption of robotics.

By Laura Bennett | Edited by Kseniia Klichova
Unitree’s R1 humanoid robot, designed for dynamic movement and lower-cost production, marks a step toward broader global access to humanoid machines. Photo: Unitree

Chinese robotics firm Unitree Robotics is preparing to launch its most affordable humanoid robot globally, a move that could test whether the category is beginning to transition from industrial experimentation to early consumer markets.

The company plans to debut its R1 humanoid robot through AliExpress, targeting customers in North America, Europe, Japan, and Singapore. With a starting price of around $4,000 in China, the R1 is among the lowest-cost humanoid robots introduced to date, positioning it closer to consumer electronics than traditional industrial machinery.

The rollout comes as Unitree accelerates production and expands internationally, following a year in which it shipped more than 5,500 humanoid robots – far exceeding most global competitors.

Lower Prices Meet Global Distribution

The R1 reflects a broader push to reduce the cost of humanoid robotics while expanding access through global distribution platforms. By launching on AliExpress, Unitree is bypassing traditional enterprise sales channels and testing direct-to-market demand.

The robot stands just over 1.2 meters tall and is designed for dynamic movement, including running, recovering from falls, and performing coordinated motions. Marketed as “sport-ready”, it highlights Unitree’s focus on mobility and mechanical performance rather than immediate utility in structured work environments.

The pricing strategy marks a significant departure from earlier humanoid systems, which have typically been priced in the tens of thousands of dollars or higher. Even companies such as Tesla have suggested that future humanoid robots could cost around $20,000, placing Unitree’s offering well below that threshold.

The question is not only whether such pricing is sustainable, but whether it will translate into meaningful adoption beyond research labs and demonstration use cases.

Scaling Production Ahead of Demand

Unitree’s global expansion is closely tied to its manufacturing scale. The company has set a target of shipping between 10,000 and 20,000 robots in 2026, building on its current position as one of the highest-volume producers of humanoid systems.

According to industry estimates, competitors such as Figure AI and Agility Robotics have shipped only a few hundred units each, underscoring the gap between Chinese and U.S. production capacity.

Market research firm TrendForce expects Unitree to account for a substantial share of global humanoid output in the near term, reflecting both aggressive scaling and a focus on cost reduction.

At the same time, the company is preparing for a potential IPO in Shanghai, aiming to raise capital to expand manufacturing and research. The R1’s international debut may therefore serve a dual purpose: generating revenue while demonstrating global demand to investors.

From Demonstration to Early Adoption

The launch also highlights a shift in how humanoid robots are being positioned. Rather than targeting a single industrial application, the R1 appears designed as a general-purpose platform that can showcase capabilities and attract a broader user base.

Unitree has previously gained visibility through high-profile demonstrations, including coordinated performances by its robots on national television. The move into global e-commerce suggests a transition from spectacle to early commercialization, even if practical use cases remain limited.

For now, most humanoid robots are still used in research, education, and controlled environments. The introduction of a lower-cost model does not immediately resolve challenges around autonomy, reliability, or real-world utility.

However, it may begin to reshape expectations. If consumers and small businesses can access humanoid robots at a fraction of previous costs, the market could shift from a handful of experimental deployments to a larger base of exploratory use.

Unitree’s R1 launch represents one of the clearest attempts to test that transition. By combining lower pricing with global distribution, the company is effectively probing whether humanoid robotics can move beyond early adopters and into a broader commercial category.

The outcome will depend less on technical capability alone and more on whether users find meaningful ways to integrate these systems into everyday environments. For an industry still searching for its first large-scale application, that question remains open.


AGIBOT Launches Genie Sim 3.0 to Power Embodied AI Development

AGIBOT introduced Genie Sim 3.0, a unified platform combining simulation, data generation, and benchmarking to accelerate embodied AI development.

By Rachel Whitman

AGIBOT has introduced Genie Sim 3.0, a new platform designed to unify simulation, data generation, and benchmarking for embodied artificial intelligence. The release reflects a growing industry push to address one of robotics’ biggest constraints – the lack of scalable, high-quality training data and standardized evaluation.

While advances in AI models have driven rapid progress in robotics, real-world deployment remains limited by expensive data collection, fragmented testing environments, and inconsistent performance metrics. Genie Sim 3.0 aims to consolidate these elements into a single development infrastructure, reducing the gap between research and deployment.

The platform combines environment creation, simulation, training, and evaluation into a continuous pipeline. Instead of building each component separately, developers can now iterate within a unified system designed specifically for embodied AI systems.

From Simulation to Scalable Data

A central feature of Genie Sim 3.0 is its ability to generate interactive 3D environments from text or image inputs, using a spatial world model. This allows developers to create training scenarios in minutes rather than hours, significantly lowering the cost and complexity of robotics development.

The system produces synchronized multimodal outputs – including visual, depth, and LiDAR data – closely aligned with real-world robot perception. This is critical for improving transfer from simulation to physical environments, a longstanding challenge in robotics.

By automating environment creation and scaling data generation, AGIBOT is effectively turning simulation into a primary source of training data, rather than a supplementary tool. This shift mirrors broader trends in AI, where synthetic data is increasingly used to overcome real-world limitations.

Standardizing Evaluation and Closing the Sim-to-Real Gap

Beyond data generation, Genie Sim 3.0 introduces a structured benchmarking framework designed to evaluate core robotic capabilities. These include instruction following, spatial reasoning, manipulation skills, robustness under environmental changes, and sim-to-real transfer performance.

This standardized approach addresses a key issue in robotics – the lack of consistent metrics across models and systems. By defining common evaluation tasks, the platform enables more reliable comparison and faster iteration.

The system also integrates reinforcement learning pipelines, allowing models to be trained and tested within the same environment. High-frequency physics simulation combined with parallel processing enables faster convergence and more efficient experimentation.

Taken together, these capabilities create a closed-loop system where robots can learn, adapt, and be evaluated continuously within simulation before deployment.

Genie Sim 3.0 reflects a broader shift toward infrastructure-driven robotics development. As embodied AI moves from research into real-world applications, platforms that unify data, training, and evaluation are becoming essential.

By reducing engineering overhead and accelerating iteration cycles, AGIBOT is positioning simulation not just as a tool, but as the foundation for scaling the next generation of intelligent machines.


Humanoid and Quadruped Robot Shipments Set to Hit 810,000 Units by 2030

Global shipments of humanoid and quadruped robots are projected to reach 810,000 units by 2030, as enterprise adoption replaces early experimentation.

By Daniel Krauss
Humanoid and quadruped robots are scaling rapidly, with global shipments projected to reach 810,000 units by 2030 as enterprise adoption accelerates. Photo: Unitree Robotics / X

The global market for humanoid and quadruped robots is entering a decisive growth phase, with shipments projected to reach 810,000 units by 2030, according to new industry forecasts by Smart Analytics Global (SAG). The shift reflects a broader transition from early-stage experimentation to real-world deployment across logistics, manufacturing, and service industries.

Recent data reported by AIstify shows the pace of expansion is already accelerating. Global shipments reached nearly 53,000 units in 2025, representing a 250% year-over-year increase, while total market revenue approached $1 billion. By the end of the decade, the market is expected to scale to $8 billion, supported by sustained double-digit growth.
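The growth rates implied by those figures can be checked directly. The arithmetic below uses only the numbers quoted in this article, with the compounding period taken as the five years from 2025 to 2030.

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

unit_growth = cagr(53_000, 810_000, 5)   # roughly 0.73, i.e. ~73% per year
revenue_growth = cagr(1.0, 8.0, 5)       # roughly 0.52, i.e. ~52% per year
```

Notably, unit shipments compound faster than revenue in this projection, implying a falling average selling price as volumes scale.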

The defining change is not just technological progress, but demand. After years of testing and pilot programs, companies are now integrating robots directly into operational workflows where labor shortages, safety requirements, and efficiency pressures are most acute.

Enterprise Adoption Becomes the Primary Growth Driver

The next phase of growth will be driven primarily by enterprise adoption rather than experimentation. Early deployments focused on validation and proof-of-concept, but that cycle is now reaching its limits.

“The robotics industry delivered strong growth in 2025, but the real test lies ahead,” said Yiwen Wu, Lead Research Advisor at Smart Analytics Global. “Enterprise adoption will be the key. Only vendors that can scale real-world deployments will define the next phase of the industry.”

Quadruped robots are currently leading in real-world use cases, particularly in inspection, security, and industrial monitoring. Their ability to navigate uneven terrain and operate in hazardous environments has made them easier to commercialize at scale.

Humanoid robots, by contrast, remain earlier in deployment but are attracting significantly more investment and policy support. Their long-term potential lies in operating within human-designed environments, from warehouses and retail to healthcare and household applications.

This creates a dual-track market: quadrupeds drive immediate adoption, while humanoids dominate long-term strategic positioning.

China Dominates Hardware While Global Competition Intensifies

The geographic distribution of the market reveals a clear imbalance. Chinese companies accounted for approximately 85% of global shipments in 2025, with China itself absorbing more than 60% of total demand.

Companies such as Unitree Robotics, Agibot, DOBOT, and Galbot are scaling production rapidly, leveraging manufacturing efficiency to capture early market share. Unitree alone held a leading position across both segments, with a particularly dominant share in quadruped robots.

At the same time, Western companies are maintaining an advantage in software, AI models, and advanced research. Firms like Boston Dynamics, Tesla, and Amazon are focusing on autonomy, perception systems, and large-scale AI integration.

This divergence is shaping a fragmented but complementary global landscape, where leadership is split across hardware manufacturing, software intelligence, and regulatory frameworks. South Korea is increasing investment in robotics, while Europe continues to specialize in safety, certification, and high-value industrial applications.

Looking ahead, analysts expect consolidation pressure to increase as the market matures. Vendors that expanded production ahead of proven demand may face challenges, while others with strong deployment pipelines could emerge as dominant players.

The result is a market approaching a critical inflection point. Robotics is no longer defined by technical capability alone – it is increasingly shaped by scalability, economics, and the ability to operate reliably in the real world.