Unitree Brings $4,000 Humanoid Robot to Global Buyers via AliExpress

Unitree is bringing its lowest-cost humanoid robot to global markets via AliExpress, signaling a shift toward early consumer adoption of robotics.

By Laura Bennett | Edited by Kseniia Klichova
Unitree’s R1 humanoid robot, designed for dynamic movement and lower-cost production, marks a step toward broader global access to humanoid machines. Photo: Unitree

Chinese robotics firm Unitree Robotics is preparing to launch its most affordable humanoid robot globally, a move that could test whether the category is beginning to transition from industrial experimentation to early consumer markets.

The company plans to debut its R1 humanoid robot through AliExpress, targeting customers in North America, Europe, Japan, and Singapore. With a starting price of around $4,000 in China, the R1 is among the lowest-cost humanoid robots introduced to date, positioning it closer to consumer electronics than traditional industrial machinery.

The rollout comes as Unitree accelerates production and expands internationally, following a year in which it shipped more than 5,500 humanoid robots – far exceeding most global competitors.

Lower Prices Meet Global Distribution

The R1 reflects a broader push to reduce the cost of humanoid robotics while expanding access through global distribution platforms. By launching on AliExpress, Unitree is bypassing traditional enterprise sales channels and testing direct-to-market demand.

The robot stands just over 1.2 meters tall and is designed for dynamic movement, including running, recovering from falls, and performing coordinated motions. Marketed as “sport-ready”, the R1 reflects Unitree’s focus on mobility and mechanical performance rather than immediate utility in structured work environments.

The pricing strategy marks a significant departure from earlier humanoid systems, which have typically been priced in the tens of thousands of dollars or higher. Even companies such as Tesla have suggested that future humanoid robots could cost around $20,000, placing Unitree’s offering well below that threshold.

The question is not only whether such pricing is sustainable, but whether it will translate into meaningful adoption beyond research labs and demonstration use cases.

Scaling Production Ahead of Demand

Unitree’s global expansion is closely tied to its manufacturing scale. The company has set a target of shipping between 10,000 and 20,000 robots in 2026, building on its current position as one of the highest-volume producers of humanoid systems.

According to industry estimates, competitors such as Figure AI and Agility Robotics have shipped only a few hundred units each, underscoring the gap between Chinese and U.S. production capacity.

Market research firm TrendForce expects Unitree to account for a substantial share of global humanoid output in the near term, reflecting both aggressive scaling and a focus on cost reduction.

At the same time, the company is preparing for a potential IPO in Shanghai, aiming to raise capital to expand manufacturing and research. The R1’s international debut may therefore serve a dual purpose: generating revenue while demonstrating global demand to investors.

From Demonstration to Early Adoption

The launch also highlights a shift in how humanoid robots are being positioned. Rather than targeting a single industrial application, the R1 appears designed as a general-purpose platform that can showcase capabilities and attract a broader user base.

Unitree has previously gained visibility through high-profile demonstrations, including coordinated performances by its robots on national television. The move into global e-commerce suggests a transition from spectacle to early commercialization, even if practical use cases remain limited.

For now, most humanoid robots are still used in research, education, and controlled environments. The introduction of a lower-cost model does not immediately resolve challenges around autonomy, reliability, or real-world utility.

However, it may begin to reshape expectations. If consumers and small businesses can access humanoid robots at a fraction of previous costs, the market could shift from a handful of experimental deployments to a larger base of exploratory use.

Unitree’s R1 launch represents one of the clearest attempts to test that transition. By combining lower pricing with global distribution, the company is effectively probing whether humanoid robotics can move beyond early adopters and into a broader commercial category.

The outcome will depend less on technical capability alone and more on whether users find meaningful ways to integrate these systems into everyday environments. For an industry still searching for its first large-scale application, that question remains open.


Toyota Unveils CUE7, a Lighter Basketball Robot Built on Hybrid AI Control

Toyota has introduced CUE7, the latest iteration of its basketball-shooting robot, featuring a significantly lighter frame, an inverted two-wheel base, and a hybrid control system combining reinforcement learning with model predictive control.

By Daniel Krauss | Edited by Kseniia Klichova

Toyota has unveiled CUE7, the seventh generation of its basketball-shooting robot platform, on April 12. The system marks the most significant technical upgrade in the CUE series to date, with reductions in weight, a new mobility architecture, and a hybrid AI control system that combines reinforcement learning with model predictive control.

The robot was developed by Toyota’s Frontier Research Center and signals the company’s continued investment in embodied AI research outside its traditional automotive domain.

What Changed in CUE7

The most immediate change is physical. CUE7 weighs 74 kg, down from 120 kg in its predecessor – a reduction of nearly 40%. The wheeled base has been redesigned around an inverted two-wheel structure, replacing the earlier fixed-platform approach and giving the robot greater dynamic stability during motion.

The control architecture is also new. Rather than relying on a single AI method, CUE7 uses a hybrid system that combines reinforcement learning – where the robot improves through repeated trial and feedback – with model predictive control, which uses forward simulation to plan and execute precise movements in real time. The result is a platform capable of more dynamic, fluid motion than earlier versions of the robot.
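Toyota has not published CUE7’s control code, so the structure can only be illustrated in the abstract. The sketch below shows one common way the two methods are combined: a learned policy proposes an action, and a short-horizon predictive search refines it against a simple forward model. Every function, the 1D point-mass dynamics, and the cost terms here are invented for illustration and are not Toyota’s implementation.

```python
# Illustrative hybrid control: a learned policy proposes an action,
# and a short-horizon model predictive search refines it against a
# forward model. All dynamics and names here are hypothetical.

def forward_model(pos, vel, accel, dt=0.05, steps=10):
    """Simulate a 1D point mass under constant acceleration."""
    for _ in range(steps):
        vel += accel * dt
        pos += vel * dt
    return pos, vel

def learned_policy(pos, vel, target):
    """Stand-in for an RL policy: a crude proportional guess."""
    return 2.0 * (target - pos) - 1.0 * vel

def hybrid_step(pos, vel, target, search_width=1.0, n_candidates=21):
    """Refine the policy's proposal with a model predictive search."""
    proposal = learned_policy(pos, vel, target)
    best_accel, best_cost = proposal, float("inf")
    for i in range(n_candidates):
        # Candidate accelerations centered on the policy's proposal.
        a = proposal + search_width * (2 * i / (n_candidates - 1) - 1)
        end_pos, end_vel = forward_model(pos, vel, a)
        # Penalize distance to target, residual velocity, and effort.
        cost = (end_pos - target) ** 2 + 0.1 * end_vel ** 2 + 0.01 * a ** 2
        if cost < best_cost:
            best_accel, best_cost = a, cost
    return best_accel
```

The division of labor mirrors the article’s description: the learned component supplies fast, experience-based proposals, while the predictive search enforces consistency with a physical model at each step.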

CUE7 uses vision systems to identify the basket, estimate distance, and calculate shot trajectory. Its upper body makes deliberate postural adjustments to align the release angle before executing the shot with calibrated force.
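The article does not detail the trajectory math, but the core calculation is standard ballistics: given the estimated distance to the hoop and a chosen release angle, solve for the launch speed. The sketch below ignores drag and spin, and the default hoop height is the regulation 3.05 m; the function name and parameters are illustrative, not Toyota’s API.

```python
import math

def release_speed(distance, release_height, hoop_height=3.05,
                  angle_deg=50.0, g=9.81):
    """Launch speed needed to hit the hoop, ignoring drag and spin.

    Solves the projectile equations for speed given a chosen release
    angle: the ball must cover `distance` horizontally while rising
    from `release_height` to `hoop_height`.
    """
    theta = math.radians(angle_deg)
    rise = hoop_height - release_height
    denom = 2 * math.cos(theta) ** 2 * (distance * math.tan(theta) - rise)
    if denom <= 0:
        raise ValueError("release angle too flat to reach the hoop")
    return math.sqrt(g * distance ** 2 / denom)
```

In practice, the vision system’s distance estimate feeds a calculation of this kind, and the arm’s force calibration turns the required speed into motor commands.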

A Platform Built Over Years

The CUE project began as an internal employee initiative before becoming a dedicated research program. CUE3 set a Guinness World Record in 2019 by completing 2,020 consecutive free throws. CUE6 extended the platform’s range, completing a 24.55-meter shot during a record attempt.

Each iteration has expanded the robot’s operational scope. Early versions were stationary shooters. Later models introduced mobility, ball retrieval, and dribbling. CUE7 advances the underlying control and sensing systems rather than adding new physical tasks, consolidating the platform’s technical foundation.

The Broader Purpose

Toyota uses the CUE series as a testbed for capabilities with direct relevance to general robotics: vision-based target acquisition, real-time trajectory planning, precise force control, and repeatable physical execution under variable conditions. Basketball provides a structured environment in which each of these capabilities can be isolated, measured, and improved.

The platform reflects a wider industry pattern in which automakers are applying their manufacturing and systems engineering expertise to humanoid and semi-humanoid robotics. Toyota has not announced commercial applications for CUE7, and the robot remains a research demonstration. The hybrid control architecture, however, represents a technical approach with potential applicability beyond sport – particularly in industrial and service environments where consistent, adaptive physical performance is required.


SoftBank Robotics America and Matternet Partner to Scale Autonomous Drone Delivery

SoftBank Robotics America and Matternet have signed a strategic partnership to accelerate autonomous drone delivery deployments across the U.S., targeting healthcare and other industries where speed and reliability are critical.

By Rachel Whitman | Edited by Kseniia Klichova
An autonomous delivery drone operating over an urban environment as part of a commercial logistics network. Photo: Matternet

SoftBank Robotics America has signed a strategic partnership with Matternet, a drone delivery company, to accelerate the deployment of autonomous aerial last-mile delivery in the U.S. and other key markets. The deal combines SoftBank Robotics America’s role as a physical AI integrator with Matternet’s FAA-certified drone platform, targeting enterprise operators in healthcare, commerce, and industrial logistics.

Last-mile delivery continues to face structural pressure from labor shortages, rising costs, and urban congestion. Autonomous aerial delivery is emerging as a cost-competitive alternative to traditional ground-based methods, particularly at scale.

What Each Company Brings

Matternet has spent more than a decade building commercial drone delivery infrastructure. The company is the first in the industry to achieve both FAA Type Certification and Production Certification, and its technology has enabled tens of thousands of commercial flights in urban and suburban environments across the U.S. and Europe. Its M2 drone and software platform are already deployed through partnerships with UPS and Ameriflight.

SoftBank Robotics America operates as an integrator – its role is to take proven autonomous technologies and embed them into real-world operational environments at scale. The company works across senior living, hospitality, aviation, facilities management, and commercial cleaning, and has built a track record of translating robotics pilots into production deployments.

Brady Watkins, President and GM of SoftBank Robotics America, said:

“The challenge is not the technology, but rather operationalizing the technology such that it produces consistent measurable outcomes.”

Healthcare as the Initial Focus

The partnership’s initial emphasis is on healthcare, where delivery speed and reliability directly affect patient outcomes. Medical supplies, lab samples, and pharmaceuticals represent a natural fit for autonomous aerial delivery – time-sensitive, high-value, and moving between fixed points such as hospitals, labs, and pharmacies.

Katya Akudovich, Vice President of New Ventures at SoftBank Robotics America, said:

“By combining Matternet’s technology with our global commercialization capability and experience, we are creating a powerful partnership to bring the benefits of autonomous drone delivery into day-to-day operations for vertical markets such as healthcare where speed and reliability are mission critical.”

Scaling the Infrastructure

Andreas Raptopoulos, founder and CEO of Matternet, framed the partnership as part of a broader shift toward autonomous logistics networks. He said:

“Our partnership with SoftBank Robotics America will accelerate deployment of our technology and help build the autonomous delivery infrastructure for healthcare, commerce, and industry.”

The partnership does not introduce new drone hardware. Instead, it focuses on the integration layer – the processes, support structures, and operational frameworks needed to move autonomous drone delivery from isolated pilots to consistent, large-scale networks. That focus on operationalization rather than invention reflects where the autonomous delivery industry is broadly: the technology is sufficiently mature, but deployment at enterprise scale remains the central challenge. The companies did not disclose financial terms of the agreement.


Accenture Invests in General Robotics to Build a Unified AI Layer for Industrial Robots

Accenture Ventures has invested in General Robotics, whose GRID platform connects robots from multiple manufacturers under a single AI intelligence layer, targeting scaled automation in factories and warehouses.

By Daniel Krauss | Edited by Kseniia Klichova
Industrial robots operating on a factory floor managed by a unified AI orchestration platform. Photo: Accenture

Accenture has invested in General Robotics, a startup building a unified AI intelligence platform for industrial robots, through its Accenture Ventures arm. The two companies will also partner to help manufacturers, logistics operators, and other asset-intensive industries deploy autonomous robotic systems at scale. Financial terms were not disclosed.

The deal reflects a wider strategic push by Accenture to move beyond software consulting and into the physical infrastructure of AI-driven automation.

The Problem GRID Is Designed to Solve

Most factories operate robots from multiple manufacturers, each running its own software stack, programming language, and integration requirements. Scaling automation across a multi-vendor fleet is expensive and slow, and the cost has historically limited full deployment to only the largest industrial operators.

General Robotics addresses this with GRID, a platform that sits above the hardware layer and connects robots from more than 40 manufacturers – including FANUC, Flexiv, Ghost Robotics, and Galaxea – under a single orchestration framework. Rather than programming each machine individually, GRID offers modular, reusable AI skills deployable across different hardware through cloud-based orchestration, simulation-based training, and full data sovereignty for enterprise customers.
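GRID’s actual interfaces are not public, but the pattern it describes – one skill definition running unchanged across vendor hardware – is a classic adapter layer. The sketch below is a minimal, entirely hypothetical illustration of that pattern; none of the class or method names come from General Robotics, and the vendor adapters only log calls rather than drive real robots.

```python
# Hypothetical sketch of a vendor-neutral skill layer of the kind the
# article describes. Every class and method name here is invented for
# illustration; the adapters log calls instead of moving hardware.
from abc import ABC, abstractmethod

class RobotAdapter(ABC):
    """Vendor-specific driver hidden behind one common interface."""

    @abstractmethod
    def move_to(self, x: float, y: float, z: float) -> None: ...

    @abstractmethod
    def grip(self, closed: bool) -> None: ...

class FanucAdapter(RobotAdapter):
    def __init__(self):
        self.log = []
    def move_to(self, x, y, z):
        self.log.append(("fanuc_move", x, y, z))
    def grip(self, closed):
        self.log.append(("fanuc_grip", closed))

class FlexivAdapter(RobotAdapter):
    def __init__(self):
        self.log = []
    def move_to(self, x, y, z):
        self.log.append(("flexiv_move", x, y, z))
    def grip(self, closed):
        self.log.append(("flexiv_grip", closed))

def pick_and_place(robot: RobotAdapter, src, dst):
    """One reusable 'skill' that runs unchanged on any adapter."""
    robot.move_to(*src)
    robot.grip(True)
    robot.move_to(*dst)
    robot.grip(False)
```

The economics follow from the structure: each new vendor requires one adapter, after which every existing skill works on that hardware, rather than each skill being reprogrammed per machine.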

“While robotics hardware and AI models advance at a rapid pace, real-world impact is constrained by the lack of a unified intelligence infrastructure,” said Ashish Kapoor, CEO and co-founder of General Robotics. Kapoor previously served as general manager of autonomous systems and robotics research at Microsoft, where he created AirSim, a widely used open-source simulator for training autonomous vehicles and drones.

Accenture’s Physical AI Strategy

The investment extends an infrastructure position Accenture has been building for over a year. The company launched its Physical AI Orchestrator in October 2025, a system that uses NVIDIA Omniverse libraries and the NVIDIA Mega Blueprint to coordinate robotic and autonomous systems in industrial settings. GRID integrates NVIDIA Isaac Sim, allowing manufacturers to train robotic AI skills in digital twins before deploying them on physical hardware – a capability that aligns directly with Accenture’s existing toolchain.

Where Accenture’s Physical AI Orchestrator handles coordination at the facility level, GRID handles robot-level AI – the skills, perception, and decision-making that individual machines need to perform complex tasks autonomously. Together, the two layers form a more complete stack for enterprise robotics deployment.

Prior investments in Sanctuary AI and a partnership with Schaeffler for industrial humanoid robots in automotive manufacturing point to a consistent thesis: Accenture is positioning itself as the primary integrator for physical AI at the enterprise level.

Scale and Market Context

“Piloting robotic systems takes too long, is expensive, and often not scalable and repeatable across a network of facilities,” said Prasad Satyavolu, Accenture’s global lead for manufacturing and operations. The stated goal of the partnership is to compress that deployment cycle by delivering an enterprise-grade robotics intelligence and orchestration layer that clients can apply across multiple facilities.

The physical AI market is projected to grow from roughly $1.5 billion in 2026 to more than $15 billion by 2032. A Deloitte survey found that 58% of global business leaders are already using some form of physical AI, though scaled deployment remains concentrated in automotive, electronics, and logistics. General Robotics remains an early-stage company without publicly reported revenue figures, and the broader challenge – persuading manufacturers to adopt an independent orchestration layer over proprietary vendor platforms – will require demonstrated performance on working factory floors, not just in simulation.


Grab Introduces Carri Robot to Speed Up Food Delivery in Southeast Asia

Grab unveiled an AI-powered delivery robot called Carri at its annual GrabX 2026 event, designed to reduce the time drivers spend navigating malls and office buildings to collect orders.

By Laura Bennett | Edited by Kseniia Klichova
A delivery robot navigating an indoor commercial space to collect food orders for handoff to a human driver. Photo: Grab X

Grab unveiled an AI-powered delivery robot called Carri at GrabX 2026, the company’s annual product event held in Jakarta this month. The announcement is part of a broader push by the Singapore-based super app to embed physical automation into a platform that has long depended entirely on gig workers.

Anthony Tan, Grab’s CEO and co-founder, said the company is building an Intelligence Layer – AI infrastructure fueled by real-world, real-time signals – that sits underneath every feature and innovation in the app. That layer now extends into hardware.

Carri and the Indoor Delivery Problem

Tan said delivery partners currently lose around 10% of their earning time navigating large malls or waiting for customers to come down from office buildings. Carri is designed to absorb that idle time by handling restaurant retrieval and handoff, freeing drivers to stay on the road.

The robot is built for both indoor and outdoor environments, equipped with LIDAR sensors and cameras to navigate crowds and avoid obstacles. It features secure storage compartments that open only for the specific user assigned to a given order.

Carri is still in the development and testing phase, and the pricing or cost model for deployment has not yet been determined. Tan framed the move as a natural extension of the platform’s AI capabilities into the physical world. “We are moving into hardware to improve the messy physical parts of the job that software alone cannot fix,” he said.

13 New Features Across Three User Groups

GrabX 2026 introduced 13 AI-powered features designed around three core pillars: local life, effortless travel, and business empowerment.

For consumers, Grab introduced Group Ride for shared fares, GrabMore for multi-merchant orders under a single delivery fee, and a Grab AI Assistant that handles food, shopping, and bookings. GrabMaps and a Cash Loan product round out the consumer-facing additions.

For travelers, Grab unveiled GrabStays for hotel bookings, Discover by Grab for AI-curated dining recommendations, and GrabPay for Travel to enable cross-border QR payments across Southeast Asia.

For merchants and drivers, Grab announced a Virtual Store Manager using CCTV hardware for AI-powered monitoring, a Cloud Printer to automate order handling, and Tap to Pay to turn smartphones into contactless payment terminals. A Driver AI Assistant provides hands-free route and earnings guidance.

Monetization and the Road Ahead

Tan said Grab is also planning to extend its intelligence layer into autonomous vehicles and CCTV cameras, signaling that hardware will become a structural component of the business rather than a peripheral experiment.

On the revenue side, merchant hardware tools including cloud printers and virtual store management are set to move from free trials to a subscription model. A recent Barclays analysis estimated that widespread use of robots and drones in food delivery could reduce per-order costs to as little as $1 – a threshold that, if reached, would reshape the unit economics of every major delivery platform in the region.


Irrigation Robot Maps Water Needs Tree by Tree, Challenging Farm Automation Norms

A field robot that maps soil moisture at the level of individual trees could reshape irrigation practices, reducing water use and improving crop health.

By Rachel Whitman | Edited by Kseniia Klichova
A mobile field robot scans soil conditions in orchards, generating detailed maps that guide precise irrigation at the level of individual trees. Photo: UCR

A mobile irrigation robot developed by researchers at the University of California, Riverside is challenging one of agriculture’s most persistent assumptions: that crops in the same field require the same amount of water.

By mapping soil moisture at the level of individual trees, the system reveals significant variation even between neighboring plants, suggesting that conventional irrigation methods may be systematically inefficient.

The findings point to a broader shift in agricultural robotics, where mobile sensing systems are replacing static infrastructure to deliver more granular, data-driven decisions.

From Field Averages to Tree-Level Precision

Traditional irrigation relies on fixed sensors and uniform watering schedules, operating on the assumption that conditions are relatively consistent across a field. The robot developed at UCR takes a different approach, scanning soil conditions continuously as it moves through orchards.

In field trials across citrus groves in California, the system detected sharp differences in water availability between adjacent trees, despite identical irrigation inputs. These variations were linked to differences in soil composition, where finer soils retained water more effectively than sandier patches.

The robot measures electrical conductivity in the soil – a proxy for moisture – and combines those readings with calibration data from a limited number of ground sensors. The result is a detailed moisture map that identifies both under-watered and over-watered areas.
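The researchers have not published their calibration model, but the step described – anchoring many conductivity readings to a few ground-truth sensors – can be sketched with a simple linear fit. The function names and the assumption of a linear conductivity-to-moisture relationship are illustrative, not UCR’s actual method.

```python
# Illustrative calibration of conductivity readings against a handful
# of ground-truth moisture sensors, as the article describes. A plain
# linear least-squares fit stands in for whatever model UCR uses.

def fit_linear(ec_readings, moisture_truth):
    """Least-squares fit: moisture ~ slope * conductivity + intercept."""
    n = len(ec_readings)
    mean_x = sum(ec_readings) / n
    mean_y = sum(moisture_truth) / n
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(ec_readings, moisture_truth))
    sxx = sum((x - mean_x) ** 2 for x in ec_readings)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

def moisture_map(ec_grid, slope, intercept):
    """Convert a grid of conductivity scans into estimated moisture."""
    return [[slope * ec + intercept for ec in row] for row in ec_grid]
```

The appeal of this structure is exactly what the article notes: a few calibration points fix the mapping, and the robot’s dense conductivity scan then yields moisture estimates everywhere it drives.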

This level of resolution allows irrigation to be adjusted at a much finer scale, turning what has traditionally been a field-wide estimate into a localized decision.

Reducing Waste and Managing Risk

The implications extend beyond water conservation. Overwatering can damage crops by depriving roots of oxygen and increasing susceptibility to disease, while also washing fertilizers deeper into the soil, where they can no longer be absorbed.

By identifying these imbalances, the system enables growers to maintain soil moisture within a narrower, optimal range. In testing, the model achieved high accuracy with relatively few calibration points, suggesting that widespread deployment may not require dense sensor networks.

This efficiency is significant in an industry where the cost of installing and maintaining sensors can limit adoption of precision agriculture technologies.

The approach also aligns with broader pressures facing agriculture, particularly in water-constrained regions. As drought conditions intensify, growers are increasingly forced to either reduce production or find ways to use water more efficiently.

Robotics Expands Beyond Automation

Unlike many agricultural robots focused on harvesting or crop monitoring, this system highlights a different role for robotics: acting as a mobile data layer that enhances decision-making rather than directly performing physical tasks.

The platform used in the study is capable of autonomous navigation, although it was manually operated during trials. Future versions are expected to operate independently, covering larger areas and integrating more closely with irrigation systems.

Several challenges remain before commercial deployment, including adapting the system to different crops, soil types, and environmental conditions. The relationship between surface measurements and deeper soil moisture also requires further refinement.

The development reflects a broader trend in robotics toward combining mobility with sensing and AI-driven analysis. By moving through environments rather than relying on fixed points, robots can capture variability that static systems miss.

In agriculture, where small differences in soil conditions can have large impacts on yield and resource use, that shift may prove particularly consequential.

If validated at scale, tree-level irrigation mapping could redefine how farms manage water – not as a uniform input, but as a variable resource tailored to each plant.
