Toyota Deploys Agility Robotics’ Digit Humanoids in Canadian Factories

Toyota Motor Manufacturing Canada is deploying Agility Robotics’ Digit humanoid robots after a successful pilot, expanding their use in logistics and manufacturing tasks.

By Daniel Krauss | Edited by Kseniia Klichova

Toyota is expanding its use of humanoid robots in manufacturing, deploying Digit robots from Agility Robotics at its Canadian production facilities following a year-long pilot. The move signals growing confidence among major automakers that humanoid robots are ready to assist with real industrial tasks.

Toyota Motor Manufacturing Canada, the automaker’s largest production operation outside Japan, plans to initially deploy Digit robots to handle repetitive logistics tasks such as loading and unloading containers from automated transport systems. The company operates major assembly plants in Cambridge and Woodstock, Ontario, producing vehicles at significant scale and employing thousands of workers.

The deployment represents one of the clearest examples yet of humanoid robots moving from pilot programs into sustained manufacturing roles.

Moving Beyond Pilot Programs

Toyota’s evaluation of Digit involved multiple phases, including technical validation and onsite trials using three robots. Following the pilot’s success, the company plans to introduce at least seven additional robots, with the potential for further expansion if operational benefits continue.

The initial focus is on material handling tasks within production and logistics workflows. These activities are repetitive and physically demanding, making them ideal candidates for automation.

Humanoid robots offer a distinct advantage in such environments because they can operate within spaces designed for human workers. Unlike traditional industrial robots, which often require specialized infrastructure, humanoids can integrate into existing workflows with minimal facility modification.

Toyota executives emphasized that the deployment is intended to improve both operational efficiency and employee working conditions. Automating routine tasks allows human workers to focus on higher-value activities, while reducing physical strain associated with repetitive manual labor.

A Growing Commercial Footprint for Humanoid Robots

The partnership between Toyota and Agility Robotics reflects a broader trend across the manufacturing and logistics sectors. Major companies are increasingly testing humanoid robots as part of long-term automation strategies.

Digit has already been deployed commercially by logistics provider GXO Logistics, and pilots or deployments are underway at companies including Amazon and automotive and industrial supplier Schaeffler.

These deployments mark a shift from demonstration-driven development toward production-oriented integration. Rather than focusing on technical showcases, robotics companies are prioritizing reliability, safety, and compatibility with existing industrial systems.

Agility Robotics has also developed a cloud-based fleet management platform, allowing companies to monitor and coordinate robot operations at scale. This infrastructure enables humanoids to function as part of integrated production systems rather than isolated machines.

The Strategic Role of Humanoids in Manufacturing

Labor shortages and workforce demographics are accelerating interest in humanoid robotics. Manufacturing facilities often face challenges filling physically demanding roles, particularly in logistics and material handling.

Humanoid robots provide a flexible solution. Because they are designed with human-like form factors, they can operate in environments originally built for human workers without requiring major redesign.

This flexibility differentiates humanoids from traditional industrial automation systems, which typically require customized installations.

Agility Robotics CEO Peggy Johnson said the company is working to develop humanoids capable of safely operating alongside human workers. Cooperative safety is a critical requirement for large-scale deployment, ensuring robots can function reliably in shared workspaces.

The automotive industry has become a focal point for humanoid robot deployment. Automakers including BMW, Mercedes-Benz, and Hyundai Motor Company have also begun testing or deploying humanoid robots in manufacturing environments.

A Turning Point for Physical AI Deployment

Toyota’s decision to move from pilot testing to operational deployment reflects a key inflection point for humanoid robotics. The central question facing the industry has been whether humanoids can deliver reliable performance under real production conditions.

Deployments like this suggest that humanoid robots are beginning to meet those requirements, at least for certain classes of tasks.

The transition from experimental demonstrations to production deployment represents a fundamental shift in robotics commercialization. Rather than proving what robots can do in controlled environments, companies are now demonstrating what robots can do consistently in real-world industrial operations.

As physical AI systems continue to improve, humanoid robots may become a standard component of manufacturing infrastructure. Toyota’s deployment of Digit signals that the industry is entering a new phase – one where humanoid robots are no longer experimental technology, but emerging tools of industrial production.


Google Advances Embodied AI with Gemini Robotics ER Model

Google has introduced a new AI model that improves how robots understand, plan, and act in real-world environments, marking progress in embodied reasoning.

By Daniel Krauss | Edited by Kseniia Klichova
Google’s Gemini Robotics ER model enables robots to interpret environments, plan actions, and complete tasks with improved spatial awareness and reasoning. Photo: Google

Google has introduced a new AI model designed to improve how robots understand and operate in real-world environments, targeting one of the most persistent limitations in robotics: the ability to reason beyond predefined instructions.

The model, Gemini Robotics-ER 1.6, focuses on what researchers describe as embodied reasoning – the capacity for machines to interpret visual inputs, plan sequences of actions, and determine when a task has been successfully completed. The update reflects a broader shift in robotics from systems that execute commands to those that can make context-aware decisions in dynamic settings.

The model is being made available to developers through Google’s AI tooling ecosystem, positioning it as part of a growing effort to standardize software layers for physical AI.

Moving from Perception to Reasoning

Robotics systems have historically relied on separate modules for perception, planning, and control, often requiring extensive engineering to connect them. Gemini Robotics-ER 1.6 attempts to unify these functions, allowing robots to process visual information and translate it directly into action.

The model improves spatial reasoning, enabling robots to identify objects, understand their relationships, and break tasks into smaller steps. It can also track objects across multiple viewpoints, combining inputs from different cameras to build a more complete understanding of an environment.

This multi-view capability is particularly relevant in real-world settings, where occlusion, clutter, and changing conditions can limit the effectiveness of single-camera systems. By integrating multiple perspectives, robots can maintain situational awareness even when parts of a scene are temporarily hidden.

Another key advancement is success detection. The model allows robots to evaluate whether a task has been completed correctly, reducing reliance on external validation or rigid programming. This is a critical requirement for autonomous operation, particularly in environments where tasks may need to be repeated or adjusted in real time.
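Planning plus success detection amounts to a closed perceive-plan-act-verify loop: the robot decomposes a task, acts, then checks the resulting world state rather than merely confirming that actions ran. A minimal sketch of that loop follows; the function names, task strings, and world representation are hypothetical stand-ins, not Google's API.

```python
# Illustrative plan-act-verify loop for an embodied agent.
# Every function here is a hypothetical stand-in, not a real robotics API.
from dataclasses import dataclass

@dataclass
class Step:
    action: str   # e.g. "grasp", "place"
    target: str   # object the action applies to

def plan(task: str) -> list[Step]:
    """Break a high-level task into smaller steps (stand-in planner)."""
    if task == "clear the table":
        return [Step("grasp", "cup"), Step("place", "cup")]
    return []

def execute(step: Step, world: dict) -> None:
    """Pretend to act on the world (stand-in controller)."""
    if step.action == "place":
        world[step.target] = "in_bin"

def succeeded(task: str, world: dict) -> bool:
    """Success detection: inspect the world state, not just that actions ran."""
    return all(loc == "in_bin" for loc in world.values())

world = {"cup": "on_table"}
for step in plan("clear the table"):
    execute(step, world)
print("task complete" if succeeded("clear the table", world) else "replan")
# -> task complete
```

The point of the sketch is the final check: because `succeeded` reads the world state, a failed grasp would trigger a replan instead of silently passing, which is the behavior the article describes as critical for autonomous operation.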

Interpreting the Physical World

One of the more practical capabilities introduced in the model is the ability to read instruments such as gauges, meters, and digital displays. This function is particularly relevant for industrial and inspection applications, where robots must interpret physical indicators rather than purely digital data.

In collaboration with Boston Dynamics, the system has been applied to robots like Spot, which are used for facility monitoring. The model can analyze visual inputs, identify key components such as needles or numerical readouts, and calculate values with a high degree of accuracy.

Reported improvements in instrument reading performance suggest a significant step forward. Accuracy has increased from earlier levels of around 23% to over 90% in some scenarios, indicating that robots are becoming more capable of handling tasks that require precise interpretation of real-world signals.
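Taking the quoted figures at face value, the jump from 23% to 90% accuracy is easier to appreciate as an error-rate reduction, from 77% of readings wrong to 10%, roughly an eightfold improvement:

```python
# Error-rate comparison for the instrument-reading accuracies quoted above.
acc_before, acc_after = 0.23, 0.90
err_before, err_after = 1 - acc_before, 1 - acc_after

print(f"Error rate: {err_before:.0%} -> {err_after:.0%}")
print(f"Reduction factor: {err_before / err_after:.1f}x")  # roughly 7.7x
```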

The model also incorporates safety-aware reasoning, allowing robots to identify potential hazards and avoid unsafe interactions. This reflects an increasing emphasis on aligning robotic behavior with physical constraints, particularly as systems move into environments shared with humans.

Building a Software Layer for Physical AI

The release of Gemini Robotics-ER 1.6 highlights a broader trend toward treating robotics as a software problem as much as a hardware one. As companies race to develop humanoid and autonomous systems, the ability to generalize across tasks and environments is becoming a key differentiator.

Efforts by companies such as Nvidia and others have focused on simulation and training infrastructure, while Google’s approach emphasizes reasoning and decision-making at runtime. Together, these developments point toward a layered architecture for physical AI, where perception, reasoning, and control are increasingly integrated.

The remaining challenge is translating these capabilities into reliable real-world performance at scale. While models like Gemini Robotics-ER 1.6 demonstrate significant progress in controlled evaluations, deployment in complex environments will require further advances in robustness, data integration, and system design.

Google’s latest model suggests that robotics is entering a phase where intelligence is defined less by isolated capabilities and more by the ability to connect perception, reasoning, and action. As embodied AI systems become more capable of interpreting and responding to the physical world, the boundary between digital intelligence and physical execution continues to narrow.

The extent to which this translates into widespread adoption will depend on how quickly these systems can move from experimental demonstrations to dependable tools in industry and beyond.


Unitree Brings $4,000 Humanoid Robot to Global Buyers via AliExpress

Unitree is bringing its lowest-cost humanoid robot to global markets via AliExpress, signaling a shift toward early consumer adoption of robotics.

By Laura Bennett | Edited by Kseniia Klichova
Unitree’s R1 humanoid robot, designed for dynamic movement and lower-cost production, marks a step toward broader global access to humanoid machines. Photo: Unitree

Chinese robotics firm Unitree Robotics is preparing to launch its most affordable humanoid robot globally, a move that could test whether the category is beginning to transition from industrial experimentation to early consumer markets.

The company plans to debut its R1 humanoid robot through AliExpress, targeting customers in North America, Europe, Japan, and Singapore. With a starting price of around $4,000 in China, the R1 is among the lowest-cost humanoid robots introduced to date, positioning it closer to consumer electronics than traditional industrial machinery.

The rollout comes as Unitree accelerates production and expands internationally, following a year in which it shipped more than 5,500 humanoid robots – far exceeding most global competitors.

Lower Prices Meet Global Distribution

The R1 reflects a broader push to reduce the cost of humanoid robotics while expanding access through global distribution platforms. By launching on AliExpress, Unitree is bypassing traditional enterprise sales channels and testing direct-to-market demand.

The robot stands just over 1.2 meters tall and is designed for dynamic movement, including running, recovering from falls, and performing coordinated motions. Marketed as “sport-ready”, it highlights Unitree’s focus on mobility and mechanical performance rather than immediate utility in structured work environments.

The pricing strategy marks a significant departure from earlier humanoid systems, which have typically been priced in the tens of thousands of dollars or higher. Even companies such as Tesla have suggested that future humanoid robots could cost around $20,000, placing Unitree’s offering well below that threshold.

The question is not only whether such pricing is sustainable, but whether it will translate into meaningful adoption beyond research labs and demonstration use cases.

Scaling Production Ahead of Demand

Unitree’s global expansion is closely tied to its manufacturing scale. The company has set a target of shipping between 10,000 and 20,000 robots in 2026, building on its current position as one of the highest-volume producers of humanoid systems.

According to industry estimates, competitors such as Figure AI and Agility Robotics have shipped only a few hundred units each, underscoring the gap between Chinese and U.S. production capacity.

Market research firm TrendForce expects Unitree to account for a substantial share of global humanoid output in the near term, reflecting both aggressive scaling and a focus on cost reduction.

At the same time, the company is preparing for a potential IPO in Shanghai, aiming to raise capital to expand manufacturing and research. The R1’s international debut may therefore serve a dual purpose: generating revenue while demonstrating global demand to investors.

From Demonstration to Early Adoption

The launch also highlights a shift in how humanoid robots are being positioned. Rather than targeting a single industrial application, the R1 appears designed as a general-purpose platform that can showcase capabilities and attract a broader user base.

Unitree has previously gained visibility through high-profile demonstrations, including coordinated performances by its robots on national television. The move into global e-commerce suggests a transition from spectacle to early commercialization, even if practical use cases remain limited.

For now, most humanoid robots are still used in research, education, and controlled environments. The introduction of a lower-cost model does not immediately resolve challenges around autonomy, reliability, or real-world utility.

However, it may begin to reshape expectations. If consumers and small businesses can access humanoid robots at a fraction of previous costs, the market could shift from a handful of experimental deployments to a larger base of exploratory use.

Unitree’s R1 launch represents one of the clearest attempts to test that transition. By combining lower pricing with global distribution, the company is effectively probing whether humanoid robotics can move beyond early adopters and into a broader commercial category.

The outcome will depend less on technical capability alone and more on whether users find meaningful ways to integrate these systems into everyday environments. For an industry still searching for its first large-scale application, that question remains open.


AGIBOT Launches Genie Sim 3.0 to Power Embodied AI Development

AGIBOT introduced Genie Sim 3.0, a unified platform combining simulation, data generation, and benchmarking to accelerate embodied AI development.

By Rachel Whitman

AGIBOT has introduced Genie Sim 3.0, a new platform designed to unify simulation, data generation, and benchmarking for embodied artificial intelligence. The release reflects a growing industry push to address one of robotics’ biggest constraints – the lack of scalable, high-quality training data and standardized evaluation.

While advances in AI models have driven rapid progress in robotics, real-world deployment remains limited by expensive data collection, fragmented testing environments, and inconsistent performance metrics. Genie Sim 3.0 aims to consolidate these elements into a single development infrastructure, reducing the gap between research and deployment.

The platform combines environment creation, simulation, training, and evaluation into a continuous pipeline. Instead of building each component separately, developers can now iterate within a unified system designed specifically for embodied AI systems.

From Simulation to Scalable Data

A central feature of Genie Sim 3.0 is its ability to generate interactive 3D environments from text or image inputs, using a spatial world model. This allows developers to create training scenarios in minutes rather than hours, significantly lowering the cost and complexity of robotics development.

The system produces synchronized multimodal outputs – including visual, depth, and LiDAR data – closely aligned with real-world robot perception. This is critical for improving transfer from simulation to physical environments, a longstanding challenge in robotics.
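One way to picture "synchronized multimodal outputs" is a single timestamped record that bundles every modality from one simulation tick, so downstream training never has to resynchronize sensors. A minimal sketch follows; the field names and frame layout are illustrative assumptions, not Genie Sim's actual schema.

```python
# Illustrative container for one synchronized simulation frame.
# Field names are assumptions, not AGIBOT's actual data format.
from dataclasses import dataclass

@dataclass
class SimFrame:
    timestamp_ns: int    # one clock shared by all sensors in this frame
    rgb: list            # H x W x 3 image (placeholder list here)
    depth: list          # per-pixel depth values, metres
    lidar_points: list   # (x, y, z) points in the robot frame

# Two frames ~33 ms apart, i.e. a 30 Hz stream.
frames = [
    SimFrame(timestamp_ns=0, rgb=[], depth=[],
             lidar_points=[(1.0, 0.0, 0.2)]),
    SimFrame(timestamp_ns=33_000_000, rgb=[], depth=[], lidar_points=[]),
]

# Because vision, depth, and LiDAR share one timestamp per frame, a
# training pipeline can align modalities without per-sensor calibration.
print(len(frames), frames[0].timestamp_ns)  # -> 2 0
```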

By automating environment creation and scaling data generation, AGIBOT is effectively turning simulation into a primary source of training data, rather than a supplementary tool. This shift mirrors broader trends in AI, where synthetic data is increasingly used to overcome real-world limitations.

Standardizing Evaluation and Closing the Sim-to-Real Gap

Beyond data generation, Genie Sim 3.0 introduces a structured benchmarking framework designed to evaluate core robotic capabilities. These include instruction following, spatial reasoning, manipulation skills, robustness under environmental changes, and sim-to-real transfer performance.

This standardized approach addresses a key issue in robotics – the lack of consistent metrics across models and systems. By defining common evaluation tasks, the platform enables more reliable comparison and faster iteration.

The system also integrates reinforcement learning pipelines, allowing models to be trained and tested within the same environment. High-frequency physics simulation combined with parallel processing enables faster convergence and more efficient experimentation.

Taken together, these capabilities create a closed-loop system where robots can learn, adapt, and be evaluated continuously within simulation before deployment.

Genie Sim 3.0 reflects a broader shift toward infrastructure-driven robotics development. As embodied AI moves from research into real-world applications, platforms that unify data, training, and evaluation are becoming essential.

By reducing engineering overhead and accelerating iteration cycles, AGIBOT is positioning simulation not just as a tool, but as the foundation for scaling the next generation of intelligent machines.


Humanoid and Quadruped Robot Shipments Set to Hit 810,000 Units by 2030

Global shipments of humanoid and quadruped robots are projected to reach 810,000 units by 2030, as enterprise adoption replaces early experimentation.

By Daniel Krauss
Humanoid and quadruped robots are scaling rapidly, with global shipments projected to reach 810,000 units by 2030 as enterprise adoption accelerates. Photo: Unitree Robotics / X

The global market for humanoid and quadruped robots is entering a decisive growth phase, with shipments projected to reach 810,000 units by 2030, according to new industry forecasts from Smart Analytics Global (SAG). The shift reflects a broader transition from early-stage experimentation to real-world deployment across logistics, manufacturing, and service industries.

Recent data shows the pace of expansion is already accelerating, reports AIstify. Global shipments reached nearly 53,000 units in 2025, representing a 250% year-over-year increase, while total market revenue approached $1 billion. By the end of the decade, the market is expected to scale to $8 billion, supported by sustained double-digit growth.
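A quick back-of-envelope check, using only the figures quoted above, shows what these projections imply as compound annual growth rates over the five years from 2025 to 2030:

```python
# Implied compound annual growth rate (CAGR) from the article's figures.
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

units_2025, units_2030 = 53_000, 810_000   # shipments quoted above
revenue_2025, revenue_2030 = 1e9, 8e9      # market revenue, USD

print(f"Implied unit CAGR:    {cagr(units_2025, units_2030, 5):.1%}")
print(f"Implied revenue CAGR: {cagr(revenue_2025, revenue_2030, 5):.1%}")
```

The unit figures work out to roughly 72% growth per year and the revenue figures to roughly 52%, so "sustained double-digit growth" is, if anything, an understatement of what the forecast assumes.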

The defining change is not just technological progress, but demand. After years of testing and pilot programs, companies are now integrating robots directly into operational workflows where labor shortages, safety requirements, and efficiency pressures are most acute.

Enterprise Adoption Becomes the Primary Growth Driver

The next phase of growth will be driven primarily by enterprise adoption rather than experimentation. Early deployments focused on validation and proof-of-concept, but that cycle is now reaching its limits.

“The robotics industry delivered strong growth in 2025, but the real test lies ahead,” said Yiwen Wu, Lead Research Advisor at Smart Analytics Global. “Enterprise adoption will be the key. Only vendors that can scale real-world deployments will define the next phase of the industry.”

Quadruped robots are currently leading in real-world use cases, particularly in inspection, security, and industrial monitoring. Their ability to navigate uneven terrain and operate in hazardous environments has made them easier to commercialize at scale.

Humanoid robots, by contrast, remain earlier in deployment but are attracting significantly more investment and policy support. Their long-term potential lies in operating within human-designed environments, from warehouses and retail to healthcare and household applications.

This creates a dual-track market: quadrupeds driving immediate adoption, while humanoids dominate long-term strategic positioning.

China Dominates Hardware While Global Competition Intensifies

The geographic distribution of the market reveals a clear imbalance. Chinese companies accounted for approximately 85% of global shipments in 2025, with China itself absorbing more than 60% of total demand.

Companies such as Unitree Robotics, Agibot, DOBOT, and Galbot are scaling production rapidly, leveraging manufacturing efficiency to capture early market share. Unitree alone held a leading position across both segments, with a particularly dominant share in quadruped robots.

At the same time, Western companies are maintaining an advantage in software, AI models, and advanced research. Firms like Boston Dynamics, Tesla, and Amazon are focusing on autonomy, perception systems, and large-scale AI integration.

This divergence is shaping a fragmented but complementary global landscape, where leadership is split across hardware manufacturing, software intelligence, and regulatory frameworks. South Korea is increasing investment in robotics, while Europe continues to specialize in safety, certification, and high-value industrial applications.

Looking ahead, analysts expect consolidation pressure to increase as the market matures. Vendors that expanded production ahead of proven demand may face challenges, while others with strong deployment pipelines could emerge as dominant players.

The result is a market approaching a critical inflection point. Robotics is no longer defined by technical capability alone – it is increasingly shaped by scalability, economics, and the ability to operate reliably in the real world.

Humanoid Robots Are Being Trained by Gig Workers Filming Life at Home

Gig workers across more than 50 countries are recording household tasks to train humanoid robots, revealing a new data economy behind physical AI.

By Rachel Whitman | Edited by Kseniia Klichova
Gig workers are recording everyday household tasks to generate training data for humanoid robots, creating a new global labor layer behind physical AI systems. Photo: Kseniia Klichova / RobotsBeat

The development of humanoid robots is increasingly dependent not just on hardware breakthroughs or AI models, but on a growing global workforce capturing the physical world on camera. Across more than 50 countries, gig workers are now filming themselves performing everyday household tasks to generate training data for robots that are still years away from widespread deployment.

The model, led by startups such as Micro1, reflects a broader shift in how physical AI systems are built. Just as large language models relied on vast corpora of text scraped and labeled at scale, humanoid robots require detailed recordings of human interaction with objects in real-world environments. The difference is that this data must be created, not collected – and it is being produced inside people’s homes.

Building the Data Layer for Physical AI

Humanoid robots face a fundamentally different challenge from software-based AI systems: they must operate in unstructured, unpredictable environments. Tasks such as folding laundry, loading dishwashers, or organizing shelves involve subtle variations that are difficult to simulate or script.

To address this, companies are assembling large datasets of human activity, capturing how people manipulate objects in real settings. Workers are paid to record themselves performing routine tasks, often wearing cameras that track hand movements, object interactions, and spatial context.

The resulting footage forms the foundation for training robot perception and control systems. Companies such as Scale AI have already accumulated tens of thousands of hours of such material, while platforms like DoorDash have begun experimenting with allowing gig workers to contribute training data alongside their primary work.

This emerging pipeline suggests that physical AI will depend on a new category of data infrastructure – one that extends beyond digital content into the physical behaviors of human workers.

A Familiar Economic Structure in a New Domain

The economics of this system closely resemble earlier phases of the AI industry. Workers contributing data are typically paid hourly rates that are competitive within their local economies but represent a small fraction of the value generated downstream.

Participants receive no ownership over the data they produce and no share in the long-term value of the models trained on it. As humanoid robotics companies attract billions in investment, the gap between capital allocation and labor compensation is becoming more pronounced.

This structure mirrors the development of computer vision and natural language processing systems, where data labeling and annotation were outsourced globally. The key difference is that physical AI requires more invasive forms of data collection, capturing not just digital inputs but lived environments.

The result is a new layer of the gig economy, one that sits beneath the visible robotics industry and provides the raw material for its progress.

Privacy Risks Move Into the Home

Unlike earlier data pipelines, which largely relied on public or platform-generated content, the data used to train humanoid robots is often recorded in private spaces. Videos include kitchen layouts, household items, and other details that collectively form a detailed map of domestic life.

This raises questions about data ownership, consent, and long-term storage. Workers may have limited visibility into how their recordings are used, whether they are anonymized, or how long they are retained. The implications extend beyond individual privacy to broader concerns about the creation of large-scale visual datasets of private environments.

Researchers in human-centered computing have emphasized the need for clearer disclosure and safeguards, but industry practices remain inconsistent. As the volume of collected data grows, so too does the potential risk associated with breaches, misuse, or secondary applications.

The reliance on gig workers to generate training data underscores a central reality of humanoid robotics: progress depends not only on engineering advances, but on access to large-scale, real-world human behavior.

This data-centric approach may accelerate development, but it also introduces new questions about labor, ownership, and privacy. As physical AI moves closer to commercial deployment, the systems being built will increasingly reflect not just technological innovation, but the global infrastructure of work that supports them.
