Humanoid and Quadruped Robot Shipments Set to Hit 810,000 Units by 2030

Global shipments of humanoid and quadruped robots are projected to reach 810,000 units by 2030, as enterprise adoption replaces early experimentation.

By Daniel Krauss
Humanoid and quadruped robots are scaling rapidly, with global shipments projected to reach 810,000 units by 2030 as enterprise adoption accelerates. Photo: Unitree Robotics / X

The global market for humanoid and quadruped robots is entering a decisive growth phase, with shipments projected to reach 810,000 units by 2030, according to new industry forecasts from Smart Analytics Global (SAG). The shift reflects a broader transition from early-stage experimentation to real-world deployment across logistics, manufacturing, and service industries.

Recent data shows the pace of expansion is already accelerating, reports AIstify. Global shipments reached nearly 53,000 units in 2025, representing a 250% year-over-year increase, while total market revenue approached $1 billion. By the end of the decade, the market is expected to scale to $8 billion, supported by sustained double-digit growth.
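
Taken at face value, those figures imply very steep compound growth. A quick back-of-the-envelope check, using only the numbers cited above, makes the implied annual growth rates explicit:

```python
# Implied compound annual growth rates (CAGR) from the cited forecast.
# The 2025 baseline figures are approximate, as reported above.

units_2025, units_2030 = 53_000, 810_000      # global shipments
revenue_2025, revenue_2030 = 1e9, 8e9         # market revenue, USD
years = 5

unit_cagr = (units_2030 / units_2025) ** (1 / years) - 1
revenue_cagr = (revenue_2030 / revenue_2025) ** (1 / years) - 1

print(f"implied unit CAGR:    {unit_cagr:.1%}")     # ~72.5% per year
print(f"implied revenue CAGR: {revenue_cagr:.1%}")  # ~51.6% per year
```

In other words, shipments would need to grow roughly 72% per year, and revenue roughly 52% per year, for the 2030 targets to hold.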

The defining change is not just technological progress, but demand. After years of testing and pilot programs, companies are now integrating robots directly into operational workflows where labor shortages, safety requirements, and efficiency pressures are most acute.

Enterprise Adoption Becomes the Primary Growth Driver

The next phase of growth will be driven primarily by enterprise adoption rather than experimentation. Early deployments focused on validation and proof-of-concept, but that cycle is now reaching its limits.

“The robotics industry delivered strong growth in 2025, but the real test lies ahead,” said Yiwen Wu, Lead Research Advisor at Smart Analytics Global. “Enterprise adoption will be the key. Only vendors that can scale real-world deployments will define the next phase of the industry.”

Quadruped robots are currently leading in real-world use cases, particularly in inspection, security, and industrial monitoring. Their ability to navigate uneven terrain and operate in hazardous environments has made them easier to commercialize at scale.

Humanoid robots, by contrast, remain earlier in deployment but are attracting significantly more investment and policy support. Their long-term potential lies in operating within human-designed environments, from warehouses and retail to healthcare and household applications.

This creates a dual-track market: quadrupeds drive immediate adoption, while humanoids dominate long-term strategic positioning.

China Dominates Hardware While Global Competition Intensifies

The geographic distribution of the market reveals a clear imbalance. Chinese companies accounted for approximately 85% of global shipments in 2025, with China itself absorbing more than 60% of total demand.

Companies such as Unitree Robotics, Agibot, DOBOT, and Galbot are scaling production rapidly, leveraging manufacturing efficiency to capture early market share. Unitree alone held a leading position across both segments, with a particularly dominant share in quadruped robots.

At the same time, Western companies are maintaining an advantage in software, AI models, and advanced research. Firms like Boston Dynamics, Tesla, and Amazon are focusing on autonomy, perception systems, and large-scale AI integration.

This divergence is shaping a fragmented but complementary global landscape, where leadership is split across hardware manufacturing, software intelligence, and regulatory frameworks. South Korea is increasing investment in robotics, while Europe continues to specialize in safety, certification, and high-value industrial applications.

Looking ahead, analysts expect consolidation pressure to increase as the market matures. Vendors that expanded production ahead of proven demand may face challenges, while others with strong deployment pipelines could emerge as dominant players.

The result is a market approaching a critical inflection point. Robotics is no longer defined by technical capability alone – it is increasingly shaped by scalability, economics, and the ability to operate reliably in the real world.

AGIBOT Launches Genie Sim 3.0 to Power Embodied AI Development

AGIBOT introduced Genie Sim 3.0, a unified platform combining simulation, data generation, and benchmarking to accelerate embodied AI development.

By Rachel Whitman

AGIBOT has introduced Genie Sim 3.0, a new platform designed to unify simulation, data generation, and benchmarking for embodied artificial intelligence. The release reflects a growing industry push to address one of robotics’ biggest constraints – the lack of scalable, high-quality training data and standardized evaluation.

While advances in AI models have driven rapid progress in robotics, real-world deployment remains limited by expensive data collection, fragmented testing environments, and inconsistent performance metrics. Genie Sim 3.0 aims to consolidate these elements into a single development infrastructure, reducing the gap between research and deployment.

The platform combines environment creation, simulation, training, and evaluation into a continuous pipeline. Instead of building each component separately, developers can now iterate within a single system designed specifically for embodied AI.

From Simulation to Scalable Data

A central feature of Genie Sim 3.0 is its ability to generate interactive 3D environments from text or image inputs, using a spatial world model. This allows developers to create training scenarios in minutes rather than hours, significantly lowering the cost and complexity of robotics development.

The system produces synchronized multimodal outputs – including visual, depth, and LiDAR data – closely aligned with real-world robot perception. This is critical for improving transfer from simulation to physical environments, a longstanding challenge in robotics.
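
AGIBOT has not published API details in this announcement, so the following is only a hypothetical sketch of what a text-to-environment pipeline with synchronized multimodal output might look like; every name in it is an illustrative placeholder, not Genie Sim's actual interface:

```python
# Hypothetical sketch of a text-to-environment simulation pipeline.
# All class and method names are illustrative placeholders.
from dataclasses import dataclass
import numpy as np

@dataclass
class SensorFrame:
    rgb: np.ndarray     # camera image
    depth: np.ndarray   # per-pixel depth, aligned with rgb
    lidar: np.ndarray   # point cloud sharing the same timestamp

class SimEnv:
    """Stand-in for an interactive 3D scene generated from a text prompt."""

    def __init__(self, prompt: str):
        self.prompt = prompt  # a real system would build the scene here

    def step(self, action) -> SensorFrame:
        # A real simulator would advance physics and render the scene;
        # dummy arrays here just show the synchronized output structure.
        return SensorFrame(
            rgb=np.zeros((480, 640, 3), dtype=np.uint8),
            depth=np.zeros((480, 640), dtype=np.float32),
            lidar=np.zeros((1024, 3), dtype=np.float32),
        )

env = SimEnv("a cluttered kitchen shelf with mixed household objects")
frame = env.step(action=None)  # rgb, depth and LiDAR arrive as one bundle
```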

By automating environment creation and scaling data generation, AGIBOT is effectively turning simulation into a primary source of training data, rather than a supplementary tool. This shift mirrors broader trends in AI, where synthetic data is increasingly used to overcome real-world limitations.

Standardizing Evaluation and Closing the Sim-to-Real Gap

Beyond data generation, Genie Sim 3.0 introduces a structured benchmarking framework designed to evaluate core robotic capabilities. These include instruction following, spatial reasoning, manipulation skills, robustness under environmental changes, and sim-to-real transfer performance.

This standardized approach addresses a key issue in robotics – the lack of consistent metrics across models and systems. By defining common evaluation tasks, the platform enables more reliable comparison and faster iteration.

The system also integrates reinforcement learning pipelines, allowing models to be trained and tested within the same environment. High-frequency physics simulation combined with parallel processing enables faster convergence and more efficient experimentation.
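
The announcement does not specify the benchmark's interface, but a standardized harness of the kind described generally takes this shape; the task names below mirror the capabilities listed above, and everything else is an assumption:

```python
# Illustrative sketch of a standardized evaluation harness. Task names
# mirror the capabilities described above; the interface is an assumption.
import random

TASKS = [
    "instruction_following",
    "spatial_reasoning",
    "manipulation",
    "robustness_to_perturbation",
    "sim_to_real_transfer",
]

def run_episode(policy, task: str) -> int:
    # Placeholder rollout: a real harness would execute the policy in
    # simulation and score the outcome. A random result keeps this runnable.
    return random.randint(0, 1)

def evaluate(policy, task: str, episodes: int = 100) -> float:
    """Success rate of a policy on one benchmark task."""
    return sum(run_episode(policy, task) for _ in range(episodes)) / episodes

# Identical tasks and metrics for every model make results comparable:
scores = {task: evaluate(policy=None, task=task) for task in TASKS}
print(scores)
```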

Taken together, these capabilities create a closed-loop system where robots can learn, adapt, and be evaluated continuously within simulation before deployment.

Genie Sim 3.0 reflects a broader shift toward infrastructure-driven robotics development. As embodied AI moves from research into real-world applications, platforms that unify data, training, and evaluation are becoming essential.

By reducing engineering overhead and accelerating iteration cycles, AGIBOT is positioning simulation not just as a tool, but as the foundation for scaling the next generation of intelligent machines.

Humanoid Robots Are Being Trained by Gig Workers Filming Life at Home

Gig workers across more than 50 countries are recording household tasks to train humanoid robots, revealing a new data economy behind physical AI.

By Rachel Whitman | Edited by Kseniia Klichova
Gig workers are recording everyday household tasks to generate training data for humanoid robots, creating a new global labor layer behind physical AI systems. Photo: Kseniia Klichova / RobotsBeat

The development of humanoid robots is increasingly dependent not just on hardware breakthroughs or AI models, but on a growing global workforce capturing the physical world on camera. Across more than 50 countries, gig workers are now filming themselves performing everyday household tasks to generate training data for robots that are still years away from widespread deployment.

The model, led by startups such as Micro1, reflects a broader shift in how physical AI systems are built. Just as large language models relied on vast corpora of text scraped and labeled at scale, humanoid robots require detailed recordings of human interaction with objects in real-world environments. The difference is that this data must be created, not collected – and it is being produced inside people’s homes.

Building the Data Layer for Physical AI

Humanoid robots face a fundamentally different challenge from software-based AI systems: they must operate in unstructured, unpredictable environments. Tasks such as folding laundry, loading dishwashers, or organizing shelves involve subtle variations that are difficult to simulate or script.

To address this, companies are assembling large datasets of human activity, capturing how people manipulate objects in real settings. Workers are paid to record themselves performing routine tasks, often wearing cameras that track hand movements, object interactions, and spatial context.
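
None of these companies publish their internal formats, but a single recorded episode plausibly bundles video, hand tracking, and object metadata along the lines of this illustrative schema (all field names are invented):

```python
# Illustrative schema for one recorded household-task episode. Field names
# are invented for this sketch, not any company's actual data format.
from dataclasses import dataclass, field

@dataclass
class HandPose:
    timestamp_ms: int
    joints_xyz: list[tuple[float, float, float]]  # tracked hand landmarks

@dataclass
class TaskEpisode:
    worker_id: str                  # pseudonymous contributor ID
    task: str                       # e.g. "folding laundry"
    video_uri: str                  # first-person footage
    hand_track: list[HandPose] = field(default_factory=list)
    objects_touched: list[str] = field(default_factory=list)
    consent_scope: str = "model_training_only"  # what the worker agreed to

episode = TaskEpisode(
    worker_id="w_4821",
    task="loading a dishwasher",
    video_uri="s3://example-bucket/episodes/ep_001.mp4",
)
```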

The resulting footage forms the foundation for training robot perception and control systems. Companies such as Scale AI have already accumulated tens of thousands of hours of such material, while platforms like DoorDash have begun experimenting with letting gig workers contribute training data alongside their primary work.

This emerging pipeline suggests that physical AI will depend on a new category of data infrastructure – one that extends beyond digital content into the physical behaviors of human workers.

A Familiar Economic Structure in a New Domain

The economics of this system closely resemble earlier phases of the AI industry. Workers contributing data are typically paid hourly rates that are competitive within their local economies but represent a small fraction of the value generated downstream.

Participants receive no ownership over the data they produce and no share in the long-term value of the models trained on it. As humanoid robotics companies attract billions in investment, the gap between capital allocation and labor compensation is becoming more pronounced.

This structure mirrors the development of computer vision and natural language processing systems, where data labeling and annotation were outsourced globally. The key difference is that physical AI requires more invasive forms of data collection, capturing not just digital inputs but lived environments.

The result is a new layer of the gig economy, one that sits beneath the visible robotics industry and provides the raw material for its progress.

Privacy Risks Move Into the Home

Unlike earlier data pipelines, which largely relied on public or platform-generated content, the data used to train humanoid robots is often recorded in private spaces. Videos include kitchen layouts, household items, and other details that collectively form a detailed map of domestic life.

This raises questions about data ownership, consent, and long-term storage. Workers may have limited visibility into how their recordings are used, whether they are anonymized, or how long they are retained. The implications extend beyond individual privacy to broader concerns about the creation of large-scale visual datasets of private environments.

Researchers in human-centered computing have emphasized the need for clearer disclosure and safeguards, but industry practices remain inconsistent. As the volume of collected data grows, so too does the potential risk associated with breaches, misuse, or secondary applications.

The reliance on gig workers to generate training data underscores a central reality of humanoid robotics: progress depends not only on engineering advances, but on access to large-scale, real-world human behavior.

This data-centric approach may accelerate development, but it also introduces new questions about labor, ownership, and privacy. As physical AI moves closer to commercial deployment, the systems being built will increasingly reflect not just technological innovation, but the global infrastructure of work that supports them.

New Robotic Skin Brings Human-Like Touch Closer to Machines

Researchers have developed a flexible sensor that allows robots to detect gentle touch with high precision, marking a step toward safer human-machine interaction.

By Laura Bennett | Edited by Kseniia Klichova
A new flexible sensor system allows robotic hands to detect and respond to light pressure, enabling safer handling of fragile objects and more natural human-machine interaction. Photo: Kseniia Klichova / RobotsBeat

Robots have made rapid progress in vision and motion, but touch has remained a persistent limitation. Without reliable tactile feedback, even advanced systems struggle to handle fragile objects or safely interact with humans. A new class of flexible sensors developed by researchers at Penn State suggests that gap may be narrowing.

The team has created a lightweight “robotic skin” capable of detecting extremely small pressure changes while maintaining durability under repeated use. The development reflects a broader push in robotics to move beyond perception and mobility toward physical intelligence – systems that can interpret and respond to the physical world with greater nuance.

Turning Pressure into Real-Time Control

At the core of the system is a small, flexible sensor built around graphene aerogel, a porous material that converts mechanical pressure into electrical signals. The structure allows the sensor to respond quickly to light touch while remaining stable under heavier loads, addressing a common tradeoff between sensitivity and durability.

Each sensor can register contact in just over 100 milliseconds and recover shortly after, enabling near real-time feedback. When arranged in arrays, these sensors generate pressure maps that function similarly to human skin, allowing robots to interpret how force is distributed across their surface.

This capability shifts tactile sensing from passive measurement to active control. In demonstrations, robotic hands equipped with the sensors adjusted grip strength dynamically, preventing damage to delicate objects such as soft food items. The system effectively translates touch into immediate motor responses, closing a loop that has historically been difficult to achieve in robotics.
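
In control terms, the loop is simple: read the array's pressure map each cycle and back off the grip before a fragile object is crushed. The sketch below shows that logic with invented thresholds and a made-up hardware interface:

```python
# Simplified sketch of the tactile feedback loop described above.
# Thresholds and the taxel-array interface are invented for illustration.
import numpy as np

FRAGILE_LIMIT_KPA = 12.0   # illustrative damage threshold
RELAX_STEP_N = 0.05        # how much grip force to release per cycle

def adjust_grip(pressure_map: np.ndarray, grip_force_n: float) -> float:
    """One ~100 ms control cycle: relax if peak pressure exceeds the limit."""
    if float(pressure_map.max()) > FRAGILE_LIMIT_KPA:
        grip_force_n = max(0.0, grip_force_n - RELAX_STEP_N)
    return grip_force_n

# Example cycle: a 4x4 taxel array reports one localized hot spot.
taxels = np.full((4, 4), 5.0)
taxels[2, 1] = 14.8
force = adjust_grip(taxels, grip_force_n=1.20)
print(f"grip force: {force:.2f} N")  # eased to 1.15 N before damage occurs
```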

From Grasping to Perception

Beyond simple force control, the sensor system introduces a new layer of perception. By analyzing pressure patterns, robots can begin to distinguish between different materials and objects based on how they respond to touch.

In experimental tests, researchers trained a lightweight model to classify food items using tactile data alone. After repeated training cycles, the system achieved accuracy above 99%, suggesting that touch-based recognition could complement or, in some cases, substitute for visual input.
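
The study's exact model and features are not detailed here, but touch-only classification typically reduces to fitting a small classifier on flattened pressure maps. A minimal sketch on synthetic stand-in data:

```python
# Minimal sketch of touch-only classification: flatten pressure maps into
# feature vectors and fit a lightweight classifier. The data is synthetic
# and scikit-learn is a stand-in; the study's actual model isn't specified.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 200 simulated grasps of two "foods" with different stiffness signatures,
# each grasp a flattened 4x4 pressure map (16 features).
soft = rng.normal(3.0, 0.5, size=(100, 16))
firm = rng.normal(6.0, 0.5, size=(100, 16))
X = np.vstack([soft, firm])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"touch-only accuracy: {clf.score(X_te, y_te):.2%}")
```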

This has implications for environments where vision is unreliable, such as cluttered industrial settings or domestic spaces with variable lighting. It also aligns with a growing interest in multimodal AI systems that combine vision, language, and physical interaction.

The same sensing approach has also been applied to wearable devices, where it can track pulse signals and joint movement with consistent accuracy. This points to potential crossover applications in healthcare, prosthetics, and rehabilitation.

Expanding the Role of Tactile Intelligence

The development highlights a broader shift in robotics toward integrating sensing, control, and learning into unified systems. While vision-based AI has dominated recent advances, tactile intelligence is emerging as a critical component for real-world deployment.

Companies such as Tesla and Nvidia have emphasized the importance of physical interaction in next-generation AI systems, particularly in humanoid robotics and automation. However, progress in touch sensing has lagged behind advances in perception and planning.

The Penn State research suggests that scalable, low-cost tactile systems may begin to close that gap. The sensors can also detect pressure changes in non-robotic contexts, such as monitoring swelling in battery systems – an early indicator of potential failure in electric vehicles.

Despite the progress, the technology remains in an early stage. Challenges include miniaturization, long-term reliability, and integration with existing robotic platforms. Researchers are also exploring ways to expand the sensing capabilities to include temperature and stretch, bringing the system closer to the complexity of human skin.

The ability to sense and respond to gentle touch is likely to be a defining feature of next-generation robots, particularly as they move into homes, healthcare settings, and collaborative workplaces. While the current system is still experimental, it illustrates how advances in materials science and AI are converging to address one of robotics’ most persistent limitations.

If scaled successfully, tactile sensing could shift robots from rigid, pre-programmed machines to adaptive systems capable of interacting with the physical world in a more human-like way.

BMW Rebuilds Munich Plant Around AI Brain and 2,000 Robots

BMW has overhauled its Munich plant with an AI-driven production system and thousands of robots, signaling a shift toward software-defined manufacturing for electric vehicles.

By Daniel Krauss | Edited by Kseniia Klichova
BMW’s Munich plant integrates an AI-driven control system with thousands of robots, marking a shift toward fully digitalized and flexible EV manufacturing. Photo: BMW

BMW has completed a €650 million transformation of its Munich factory, embedding artificial intelligence and robotics at the core of production as it prepares to manufacture its next generation of electric vehicles. The overhaul signals a broader shift in automotive manufacturing, where software systems are beginning to orchestrate not only design and engineering, but the physical assembly process itself.

At the center of the upgrade is what BMW describes as an “AI brain” – a centralized system that coordinates production lines, logistics, and quality control across the plant. The system is being deployed as part of the company’s broader iFactory strategy, which aims to standardize digitalized manufacturing across its global operations.

The Munich site, which will begin producing the Neue Klasse i3 sedan in August 2026, is expected to scale to around 1,000 vehicles per day, placing it among the highest-output EV facilities in Europe.

A Software Layer for Physical Production

BMW’s approach reflects a growing convergence between industrial automation and AI-driven orchestration. Rather than treating robotics as isolated systems, the company has integrated approximately 2,000 robotic arms and a fleet of autonomous logistics machines into a unified control architecture.

The AI system manages workflows in real time, from coordinating robotic assembly tasks to directing material movement across the factory floor. Around 200 mobile robots handle internal logistics, transporting components from incoming shipments to production lines. These machines are expected to perform up to 17,000 transport operations per day by 2027, effectively taking over what BMW describes as the “last mile” of factory logistics.

A key feature of the system is its use of digital twins, allowing the factory to simulate and test production scenarios before they are executed. This enables rapid adjustments to workflows, reducing downtime and allowing the plant to respond more quickly to changes in demand or product configuration.
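
The appeal of a digital twin is that a reconfiguration can be rejected in software before it ever reaches the floor. A toy version of that gate, with all figures invented for illustration:

```python
# Toy sketch of digital-twin gating: simulate a candidate line
# reconfiguration and apply it only if projected throughput holds.
# All cycle times and targets below are invented for illustration.

TARGET_VEHICLES_PER_DAY = 1_000

def simulated_throughput(cycle_times_s: list[float]) -> float:
    """A serial line's output is set by its slowest station."""
    seconds_per_day = 2 * 8 * 3600          # assume two 8-hour shifts
    return seconds_per_day / max(cycle_times_s)

current = [52.0, 55.0, 58.0, 54.0]          # per-station cycle times (s)
proposed = [52.0, 55.0, 51.0, 54.0]         # candidate rebalancing

if simulated_throughput(proposed) >= TARGET_VEHICLES_PER_DAY:
    print("apply reconfiguration to the physical line")   # ~1047 vehicles/day
else:
    print("reject: simulated throughput misses the target")
```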

While similar concepts have been tested elsewhere, including at facilities developed by Hyundai, BMW’s implementation stands out for its scale and integration into a high-volume production environment.

Flexibility Becomes a Competitive Requirement

The redesigned Munich plant is built to accommodate a wide range of vehicle variants on a single production line, reflecting the increasing variability of the EV market. According to BMW, production sequences can be reconfigured in as little as six days, compared to weeks or months in conventional factories.

This level of flexibility is intended to allow production to “follow the market”, adapting to shifts in demand, regulatory requirements, or supply chain constraints. It also reduces the need for dedicated production lines for individual models, a structure that has historically limited responsiveness in automotive manufacturing.

The shift aligns with a broader industry move toward modular platforms and software-defined vehicles, where differentiation occurs more through software and configuration than through fundamentally different hardware architectures.

Human Workers Remain in the Loop

Despite the scale of automation, BMW maintains that human workers will continue to play a central role in the factory. Tasks such as installing interiors, wiring, and final assembly will still be carried out by people, supported by robotic systems designed to reduce physical strain and improve precision.

AI is also being applied to quality control. Robotic inspection systems capture and analyze large volumes of visual data to identify defects early in the production process. In some cases, robots can autonomously correct issues, reducing the need for rework at later stages and improving overall throughput.

The company has emphasized that the introduction of AI and robotics is intended to augment, rather than replace, human labor, positioning workers as operators and supervisors within increasingly automated environments.

BMW’s Munich transformation highlights a broader shift in industrial strategy, where competitiveness is increasingly defined by the ability to integrate software, robotics, and data into a cohesive production system. As automakers transition to electric vehicles and face greater market volatility, factories are becoming less like static assembly lines and more like adaptive, software-controlled systems.

The success of this approach will depend not only on technological execution but on whether such highly automated systems can deliver consistent gains in efficiency and quality at scale. For now, BMW’s investment offers one of the clearest examples of how physical AI is beginning to reshape large-scale manufacturing.

DNA Robots Advance Toward Targeted Drug Delivery and Virus Detection

Researchers are developing DNA-based nanorobots capable of delivering drugs and targeting viruses, though the technology remains in early experimental stages.

By Laura Bennett | Edited by Kseniia Klichova
Microscopic robots built from DNA structures are being engineered to navigate the human body, signaling a new frontier in precision medicine and molecular-scale robotics. Photo: Kseniia Klichova / RobotsBeat

The idea of robots operating inside the human body has long been associated with science fiction. But recent advances in DNA-based nanotechnology are beginning to translate that vision into early-stage experimental systems, where programmable molecular machines can move, sense, and interact with biological environments.

Researchers are now designing DNA “robots” capable of delivering drugs directly to diseased cells and identifying viral threats within the bloodstream. While these systems remain far from clinical deployment, they represent a shift in how robotics is defined – extending from mechanical systems into the molecular domain.

Reimagining Robotics at the Molecular Scale

Unlike conventional robots built from metal, electronics, and actuators, DNA robots are constructed from strands of nucleic acids that can be folded, connected, and programmed into functional structures. Using techniques inspired by origami, scientists can create rigid joints, flexible linkages, and dynamic components that mimic mechanical systems at a nanoscale.

This approach adapts established principles from traditional robotics – including rigid-body motion and compliant mechanisms – into a biochemical context. The result is a new class of machines that operate not through motors or gears, but through chemical interactions and structural transformations.

Controlling these systems presents a fundamental challenge. At the molecular level, motion is dominated by random thermal fluctuations, known as Brownian motion, which can disrupt precise behavior. To address this, researchers rely on biochemical programming methods such as DNA strand displacement, where specific sequences act as triggers to initiate movement or change configuration.
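
Researchers usually reason about such circuits at the level of sequence "domains" rather than raw chemistry. The toy model below captures only that domain-matching logic, not real branch-migration kinetics:

```python
# Toy, domain-level model of strand displacement as a programmable trigger.
# Real systems involve binding kinetics and reverse-complement pairing;
# this sketch captures only the matching logic.

def displaces(invader: tuple[str, str], gate: dict) -> bool:
    """An invader opens the gate only if it carries both the exposed
    toehold domain and the incumbent strand's branch domain."""
    return (invader[0] == gate["toehold"]
            and invader[1] == gate["branch_domain"])

gate = {"toehold": "t1", "branch_domain": "d2"}   # guards a drug payload
print(displaces(("t1", "d2"), gate))  # True  -> payload released
print(displaces(("t3", "d2"), gate))  # False -> wrong toehold, gate inert
```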

External signals, including light, magnetic fields, and electric fields, can also be used to guide these nanorobots, providing an additional layer of control in otherwise unpredictable environments.

Medical Applications Remain Experimental

The most immediate interest in DNA robotics lies in medicine, where the ability to operate at cellular or even molecular resolution could enable highly targeted interventions. In experimental settings, DNA robots have been designed to locate specific cell types, release therapeutic payloads, and potentially capture or neutralize viruses.

Such systems could function as “nano-surgeons”, delivering drugs with far greater precision than conventional treatments and reducing side effects associated with systemic therapies. Researchers are also exploring their potential to detect and bind to viral particles, including pathogens similar to SARS-CoV-2, the virus that causes COVID-19, as a step toward autonomous diagnostic or therapeutic platforms.

Beyond medicine, DNA robots may also serve as tools for nanoscale manufacturing. By positioning molecules and nanoparticles with sub-nanometer precision, they could enable new forms of computing and materials engineering that are difficult to achieve with existing fabrication techniques.

However, most current systems remain proof-of-concept demonstrations. They typically operate in controlled laboratory conditions and lack the robustness required for real-world biological environments.

From Proof of Concept to Scalable Systems

The transition from experimental prototypes to practical applications presents several challenges. In addition to environmental unpredictability, researchers face limitations in modeling and design. There is currently no comprehensive database of DNA mechanical properties, and simulation tools for predicting nanorobot behavior remain underdeveloped.

Scaling these systems will likely require advances across multiple domains, including bio-manufacturing, materials science, and artificial intelligence. Proposed approaches include the development of standardized DNA component libraries and the use of AI-driven design tools to optimize structures and predict performance.

The broader implication is that robotics may increasingly extend beyond traditional hardware into programmable biological systems. DNA robots, if successfully scaled, could redefine automation at the smallest possible level – enabling machines that operate not in factories or warehouses, but within cells and molecules themselves.

For now, the technology remains in its formative stage. But its trajectory suggests that the next phase of robotics innovation may be less about building larger, more capable machines, and more about engineering systems that can function where conventional robots cannot reach.
