Google Advances Embodied AI with Gemini Robotics ER Model

Google has introduced a new AI model that improves how robots understand, plan, and act in real-world environments, marking progress in embodied reasoning.

By Daniel Krauss | Edited by Kseniia Klichova
Google’s Gemini Robotics ER model enables robots to interpret environments, plan actions, and complete tasks with improved spatial awareness and reasoning. Photo: Google

Google has introduced a new AI model designed to improve how robots understand and operate in real-world environments, targeting one of the most persistent limitations in robotics: the ability to reason beyond predefined instructions.

The model, Gemini Robotics-ER 1.6, focuses on what researchers describe as embodied reasoning – the capacity for machines to interpret visual inputs, plan sequences of actions, and determine when a task has been successfully completed. The update reflects a broader shift in robotics from systems that execute commands to those that can make context-aware decisions in dynamic settings.

The model is being made available to developers through Google’s AI tooling ecosystem, positioning it as part of a growing effort to standardize software layers for physical AI.
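For developers, access looks much like any other Gemini API call. The sketch below, using Google's google-genai Python SDK, issues a simple spatial-grounding query of the kind described in the sections that follow; the model identifier and the normalized-coordinate JSON convention are assumptions carried over from earlier Gemini Robotics-ER previews, not confirmed details of this release.

```python
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

with open("workbench.jpg", "rb") as f:
    frame = types.Part.from_bytes(data=f.read(), mime_type="image/jpeg")

response = client.models.generate_content(
    model="gemini-robotics-er-1.6",  # assumed identifier; check Google's model list
    contents=[
        frame,
        "Point to every graspable object on the bench. Reply as JSON: "
        '[{"label": "<name>", "point": [y, x]}] with coordinates '
        "normalized to 0-1000.",
    ],
)
print(response.text)
```

Requesting a fixed JSON schema is what lets a downstream planner or controller consume the answer directly, without parsing free-form text.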

Moving from Perception to Reasoning

Robotics systems have historically relied on separate modules for perception, planning, and control, often requiring extensive engineering to connect them. Gemini Robotics-ER 1.6 attempts to unify these functions, allowing robots to process visual information and translate it directly into action.

The model improves spatial reasoning, enabling robots to identify objects, understand their relationships, and break tasks into smaller steps. It can also track objects across multiple viewpoints, combining inputs from different cameras to build a more complete understanding of an environment.

This multi-view capability is particularly relevant in real-world settings, where occlusion, clutter, and changing conditions can limit the effectiveness of single-camera systems. By integrating multiple perspectives, robots can maintain situational awareness even when parts of a scene are temporarily hidden.

Another key advancement is success detection. The model allows robots to evaluate whether a task has been completed correctly, reducing reliance on external validation or rigid programming. This is a critical requirement for autonomous operation, particularly in environments where tasks may need to be repeated or adjusted in real time.
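Success detection fits the same interface: the model is given frames from before and after an action and asked for a structured verdict. A minimal sketch, again with an assumed model identifier:

```python
from google import genai
from google.genai import types

client = genai.Client()

def load_frame(path: str) -> types.Part:
    with open(path, "rb") as f:
        return types.Part.from_bytes(data=f.read(), mime_type="image/jpeg")

response = client.models.generate_content(
    model="gemini-robotics-er-1.6",  # assumed identifier
    contents=[
        load_frame("before.jpg"),
        load_frame("after.jpg"),
        'The task was: "place the red mug on the tray". Comparing the two '
        "frames, was it completed? Reply as JSON: "
        '{"done": true/false, "reason": "<one sentence>"}',
    ],
)
print(response.text)
```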

Interpreting the Physical World

One of the more practical capabilities introduced in the model is the ability to read instruments such as gauges, meters, and digital displays. This function is particularly relevant for industrial and inspection applications, where robots must interpret physical indicators rather than purely digital data.

In collaboration with Boston Dynamics, the system has been applied to robots like Spot, which are used for facility monitoring. The model can analyze visual inputs, identify key components such as needles or numerical readouts, and calculate values with a high degree of accuracy.
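Gauge reading reduces to the same structured-output pattern; a hypothetical request might look like this:

```python
from google import genai
from google.genai import types

client = genai.Client()

with open("pressure_gauge.jpg", "rb") as f:
    gauge = types.Part.from_bytes(data=f.read(), mime_type="image/jpeg")

response = client.models.generate_content(
    model="gemini-robotics-er-1.6",  # assumed identifier
    contents=[
        gauge,
        "Read the analog gauge in this image. Reply as JSON: "
        '{"value": <number>, "unit": "<unit>", "needle_angle_deg": <number>}',
    ],
)
print(response.text)
```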

Reported improvements in instrument reading performance suggest a significant step forward. Accuracy has increased from earlier levels of around 23% to over 90% in some scenarios, indicating that robots are becoming more capable of handling tasks that require precise interpretation of real-world signals.

The model also incorporates safety-aware reasoning, allowing robots to identify potential hazards and avoid unsafe interactions. This reflects an increasing emphasis on aligning robotic behavior with physical constraints, particularly as systems move into environments shared with humans.

Building a Software Layer for Physical AI

The release of Gemini Robotics-ER 1.6 highlights a broader trend toward treating robotics as a software problem as much as a hardware one. As companies race to develop humanoid and autonomous systems, the ability to generalize across tasks and environments is becoming a key differentiator.

Efforts by companies such as Nvidia and others have focused on simulation and training infrastructure, while Google’s approach emphasizes reasoning and decision-making at runtime. Together, these developments point toward a layered architecture for physical AI, where perception, reasoning, and control are increasingly integrated.

The remaining challenge is translating these capabilities into reliable real-world performance at scale. While models like Gemini Robotics-ER 1.6 demonstrate significant progress in controlled evaluations, deployment in complex environments will require further advances in robustness, data integration, and system design.

Google’s latest model suggests that robotics is entering a phase where intelligence is defined less by isolated capabilities and more by the ability to connect perception, reasoning, and action. As embodied AI systems become more capable of interpreting and responding to the physical world, the boundary between digital intelligence and physical execution continues to narrow.

The extent to which this translates into widespread adoption will depend on how quickly these systems can move from experimental demonstrations to dependable tools in industry and beyond.


Unitree Brings $4,000 Humanoid Robot to Global Buyers via AliExpress

Unitree is bringing its lowest-cost humanoid robot to global markets via AliExpress, signaling a shift toward early consumer adoption of robotics.

By Laura Bennett | Edited by Kseniia Klichova
Unitree’s R1 humanoid robot, designed for dynamic movement and lower-cost production, marks a step toward broader global access to humanoid machines. Photo: Unitree

Chinese robotics firm Unitree Robotics is preparing to launch its most affordable humanoid robot globally, a move that could test whether the category is beginning to transition from industrial experimentation to early consumer markets.

The company plans to debut its R1 humanoid robot through AliExpress, targeting customers in North America, Europe, Japan, and Singapore. With a starting price of around $4,000 in China, the R1 is among the lowest-cost humanoid robots introduced to date, positioning it closer to consumer electronics than traditional industrial machinery.

The rollout comes as Unitree accelerates production and expands internationally, following a year in which it shipped more than 5,500 humanoid robots – far exceeding most global competitors.

Lower Prices Meet Global Distribution

The R1 reflects a broader push to reduce the cost of humanoid robotics while expanding access through global distribution platforms. By launching on AliExpress, Unitree is bypassing traditional enterprise sales channels and testing direct-to-market demand.

The robot stands just over 1.2 meters tall and is designed for dynamic movement, including running, recovering from falls, and performing coordinated motions. Marketed as “sport-ready”, it highlights Unitree’s focus on mobility and mechanical performance rather than immediate utility in structured work environments.

The pricing strategy marks a significant departure from earlier humanoid systems, which have typically been priced in the tens of thousands of dollars or higher. Even companies such as Tesla have suggested that future humanoid robots could cost around $20,000, placing Unitree’s offering well below that threshold.

The question is not only whether such pricing is sustainable, but whether it will translate into meaningful adoption beyond research labs and demonstration use cases.

Scaling Production Ahead of Demand

Unitree’s global expansion is closely tied to its manufacturing scale. The company has set a target of shipping between 10,000 and 20,000 robots in 2026, building on its current position as one of the highest-volume producers of humanoid systems.

According to industry estimates, competitors such as Figure AI and Agility Robotics have shipped only a few hundred units each, underscoring the gap between Chinese and U.S. production capacity.

Market research firm TrendForce expects Unitree to account for a substantial share of global humanoid output in the near term, reflecting both aggressive scaling and a focus on cost reduction.

At the same time, the company is preparing for a potential IPO in Shanghai, aiming to raise capital to expand manufacturing and research. The R1’s international debut may therefore serve a dual purpose: generating revenue while demonstrating global demand to investors.

From Demonstration to Early Adoption

The launch also highlights a shift in how humanoid robots are being positioned. Rather than targeting a single industrial application, the R1 appears designed as a general-purpose platform that can showcase capabilities and attract a broader user base.

Unitree has previously gained visibility through high-profile demonstrations, including coordinated performances by its robots on national television. The move into global e-commerce suggests a transition from spectacle to early commercialization, even if practical use cases remain limited.

For now, most humanoid robots are still used in research, education, and controlled environments. The introduction of a lower-cost model does not immediately resolve challenges around autonomy, reliability, or real-world utility.

However, it may begin to reshape expectations. If consumers and small businesses can access humanoid robots at a fraction of previous costs, the market could shift from a handful of experimental deployments to a larger base of exploratory use.

Unitree’s R1 launch represents one of the clearest attempts to test that transition. By combining lower pricing with global distribution, the company is effectively probing whether humanoid robotics can move beyond early adopters and into a broader commercial category.

The outcome will depend less on technical capability alone and more on whether users find meaningful ways to integrate these systems into everyday environments. For an industry still searching for its first large-scale application, that question remains open.


AGIBOT Launches Genie Sim 3.0 to Power Embodied AI Development

AGIBOT introduced Genie Sim 3.0, a unified platform combining simulation, data generation, and benchmarking to accelerate embodied AI development.

By Rachel Whitman

AGIBOT has introduced Genie Sim 3.0, a new platform designed to unify simulation, data generation, and benchmarking for embodied artificial intelligence. The release reflects a growing industry push to address one of robotics’ biggest constraints – the lack of scalable, high-quality training data and standardized evaluation.

While advances in AI models have driven rapid progress in robotics, real-world deployment remains limited by expensive data collection, fragmented testing environments, and inconsistent performance metrics. Genie Sim 3.0 aims to consolidate these elements into a single development infrastructure, reducing the gap between research and deployment.

The platform combines environment creation, simulation, training, and evaluation into a continuous pipeline. Instead of building each component separately, developers can now iterate within a unified system designed specifically for embodied AI.

From Simulation to Scalable Data

A central feature of Genie Sim 3.0 is its ability to generate interactive 3D environments from text or image inputs, using a spatial world model. This allows developers to create training scenarios in minutes rather than hours, significantly lowering the cost and complexity of robotics development.

The system produces synchronized multimodal outputs – including visual, depth, and LiDAR data – closely aligned with real-world robot perception. This is critical for improving transfer from simulation to physical environments, a longstanding challenge in robotics.
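AGIBOT has not published a schema for these outputs, so the container below is purely illustrative. The point it captures is that every modality in a sample must refer to the same instant and share known geometry:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SimFrame:
    """One time-synchronized sample from a simulated robot's sensors
    (hypothetical layout, not Genie Sim's actual format)."""
    timestamp: float         # seconds since episode start
    rgb: np.ndarray          # (H, W, 3) uint8 camera image
    depth: np.ndarray        # (H, W) float32 depth in meters, pixel-aligned with rgb
    lidar: np.ndarray        # (N, 3) float32 point cloud in the robot frame
    camera_pose: np.ndarray  # (4, 4) float32 camera-to-world transform

def check_alignment(frame: SimFrame) -> None:
    # Misaligned modalities are a classic source of sim-to-real failure:
    # a depth map that lags the RGB frame teaches the policy wrong geometry.
    assert frame.rgb.shape[:2] == frame.depth.shape
    assert frame.lidar.ndim == 2 and frame.lidar.shape[1] == 3
    assert frame.camera_pose.shape == (4, 4)
```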

By automating environment creation and scaling data generation, AGIBOT is effectively turning simulation into a primary source of training data, rather than a supplementary tool. This shift mirrors broader trends in AI, where synthetic data is increasingly used to overcome real-world limitations.

Standardizing Evaluation and Closing the Sim-to-Real Gap

Beyond data generation, Genie Sim 3.0 introduces a structured benchmarking framework designed to evaluate core robotic capabilities. These include instruction following, spatial reasoning, manipulation skills, robustness under environmental changes, and sim-to-real transfer performance.

This standardized approach addresses a key issue in robotics – the lack of consistent metrics across models and systems. By defining common evaluation tasks, the platform enables more reliable comparison and faster iteration.

The system also integrates reinforcement learning pipelines, allowing models to be trained and tested within the same environment. High-frequency physics simulation combined with parallel processing enables faster convergence and more efficient experimentation.
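Genie Sim's internals are not public, but the pattern described here, many physics instances stepped in parallel beneath a single learner, is the same one used throughout open-source RL tooling. A minimal stand-in using Gymnasium's vectorized environments:

```python
import gymnasium as gym

# Eight physics instances stepped in lockstep: the learner collects eight
# transitions per step of wall-clock time instead of one.
envs = gym.vector.SyncVectorEnv(
    [lambda: gym.make("Pendulum-v1") for _ in range(8)]
)

obs, info = envs.reset(seed=0)
for _ in range(1_000):
    actions = envs.action_space.sample()  # random policy as a placeholder
    obs, rewards, terminations, truncations, infos = envs.step(actions)
    # A real pipeline would buffer (obs, actions, rewards, ...) here and
    # periodically update the policy; vector envs auto-reset finished episodes.
envs.close()
```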

Taken together, these capabilities create a closed-loop system where robots can learn, adapt, and be evaluated continuously within simulation before deployment.

Genie Sim 3.0 reflects a broader shift toward infrastructure-driven robotics development. As embodied AI moves from research into real-world applications, platforms that unify data, training, and evaluation are becoming essential.

By reducing engineering overhead and accelerating iteration cycles, AGIBOT is positioning simulation not just as a tool, but as the foundation for scaling the next generation of intelligent machines.


Humanoid and Quadruped Robot Shipments Set to Hit 810,000 Units by 2030

Global shipments of humanoid and quadruped robots are projected to reach 810,000 units by 2030, as enterprise adoption replaces early experimentation.

By Daniel Krauss
Humanoid and quadruped robots are scaling rapidly, with global shipments projected to reach 810,000 units by 2030 as enterprise adoption accelerates. Photo: Unitree Robotics / X

The global market for humanoid and quadruped robots is entering a decisive growth phase, with shipments projected to reach 810,000 units by 2030, according to new industry forecasts from Smart Analytics Global (SAG). The shift reflects a broader transition from early-stage experimentation to real-world deployment across logistics, manufacturing, and service industries.

Recent data shows the pace of expansion is already accelerating, reports AIstify. Global shipments reached nearly 53,000 units in 2025, representing a 250% year-over-year increase, while total market revenue approached $1 billion. By the end of the decade, the market is expected to scale to $8 billion, supported by sustained double-digit growth.
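A quick back-of-the-envelope check, using only the figures cited above, shows what those endpoints imply:

```python
# Implied compound annual growth, 2025 -> 2030, from the cited figures.
units_cagr = (810_000 / 53_000) ** (1 / 5) - 1   # 53k units -> 810k units
revenue_cagr = (8.0 / 1.0) ** (1 / 5) - 1        # ~$1B -> $8B
print(f"units:   {units_cagr:.0%}/yr")           # ~73%/yr
print(f"revenue: {revenue_cagr:.0%}/yr")         # ~52%/yr
```

"Sustained double-digit growth" is, if anything, conservative phrasing: the cited endpoints require unit shipments to compound at roughly 70% a year and revenue at roughly 50%.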

The defining change is not just technological progress, but demand. After years of testing and pilot programs, companies are now integrating robots directly into operational workflows where labor shortages, safety requirements, and efficiency pressures are most acute.

Enterprise Adoption Becomes the Primary Growth Driver

The next phase of growth will be driven primarily by enterprise adoption rather than experimentation. Early deployments focused on validation and proof-of-concept, but that cycle is now reaching its limits.

“The robotics industry delivered strong growth in 2025, but the real test lies ahead,” said Yiwen Wu, Lead Research Advisor at Smart Analytics Global. “Enterprise adoption will be the key. Only vendors that can scale real-world deployments will define the next phase of the industry.”

Quadruped robots are currently leading in real-world use cases, particularly in inspection, security, and industrial monitoring. Their ability to navigate uneven terrain and operate in hazardous environments has made them easier to commercialize at scale.

Humanoid robots, by contrast, remain earlier in deployment but are attracting significantly more investment and policy support. Their long-term potential lies in operating within human-designed environments, from warehouses and retail to healthcare and household applications.

This creates a dual-track market: quadrupeds drive immediate adoption, while humanoids dominate long-term strategic positioning.

China Dominates Hardware While Global Competition Intensifies

The geographic distribution of the market reveals a clear imbalance. Chinese companies accounted for approximately 85% of global shipments in 2025, with China itself absorbing more than 60% of total demand.

Companies such as Unitree Robotics, Agibot, DOBOT, and Galbot are scaling production rapidly, leveraging manufacturing efficiency to capture early market share. Unitree alone held a leading position across both segments, with a particularly dominant share in quadruped robots.

At the same time, Western companies are maintaining an advantage in software, AI models, and advanced research. Firms like Boston Dynamics, Tesla, and Amazon are focusing on autonomy, perception systems, and large-scale AI integration.

This divergence is shaping a fragmented but complementary global landscape, where leadership is split across hardware manufacturing, software intelligence, and regulatory frameworks. South Korea is increasing investment in robotics, while Europe continues to specialize in safety, certification, and high-value industrial applications.

Looking ahead, analysts expect consolidation pressure to increase as the market matures. Vendors that expanded production ahead of proven demand may face challenges, while others with strong deployment pipelines could emerge as dominant players.

The result is a market approaching a critical inflection point. Robotics is no longer defined by technical capability alone – it is increasingly shaped by scalability, economics, and the ability to operate reliably in the real world.

Humanoid Robots Are Being Trained by Gig Workers Filming Life at Home

Gig workers across more than 50 countries are recording household tasks to train humanoid robots, revealing a new data economy behind physical AI.

By Rachel Whitman | Edited by Kseniia Klichova
Gig workers are recording everyday household tasks to generate training data for humanoid robots, creating a new global labor layer behind physical AI systems. Photo: Kseniia Klichova / RobotsBeat

The development of humanoid robots is increasingly dependent not just on hardware breakthroughs or AI models, but on a growing global workforce capturing the physical world on camera. Across more than 50 countries, gig workers are now filming themselves performing everyday household tasks to generate training data for robots that are still years away from widespread deployment.

The approach, led by startups such as Micro1, reflects a broader shift in how physical AI systems are built. Just as large language models relied on vast corpora of text scraped and labeled at scale, humanoid robots require detailed recordings of human interaction with objects in real-world environments. The difference is that this data must be created, not collected – and it is being produced inside people’s homes.

Building the Data Layer for Physical AI

Humanoid robots face a fundamentally different challenge from software-based AI systems: they must operate in unstructured, unpredictable environments. Tasks such as folding laundry, loading dishwashers, or organizing shelves involve subtle variations that are difficult to simulate or script.

To address this, companies are assembling large datasets of human activity, capturing how people manipulate objects in real settings. Workers are paid to record themselves performing routine tasks, often wearing cameras that track hand movements, object interactions, and spatial context.

The resulting footage forms the foundation for training robot perception and control systems. Companies such as Scale AI have already accumulated tens of thousands of hours of such material, while platforms like DoorDash have begun experimenting with allowing gig workers to contribute training data alongside their primary work.

This emerging pipeline suggests that physical AI will depend on a new category of data infrastructure – one that extends beyond digital content into the physical behaviors of human workers.

A Familiar Economic Structure in a New Domain

The economics of this system closely resemble earlier phases of the AI industry. Workers contributing data are typically paid hourly rates that are competitive within their local economies but represent a small fraction of the value generated downstream.

Participants receive no ownership over the data they produce and no share in the long-term value of the models trained on it. As humanoid robotics companies attract billions in investment, the gap between capital allocation and labor compensation is becoming more pronounced.

This structure mirrors the development of computer vision and natural language processing systems, where data labeling and annotation were outsourced globally. The key difference is that physical AI requires more invasive forms of data collection, capturing not just digital inputs but lived environments.

The result is a new layer of the gig economy, one that sits beneath the visible robotics industry and provides the raw material for its progress.

Privacy Risks Move Into the Home

Unlike earlier data pipelines, which largely relied on public or platform-generated content, the data used to train humanoid robots is often recorded in private spaces. Videos include kitchen layouts, household items, and other details that collectively form a detailed map of domestic life.

This raises questions about data ownership, consent, and long-term storage. Workers may have limited visibility into how their recordings are used, whether they are anonymized, or how long they are retained. The implications extend beyond individual privacy to broader concerns about the creation of large-scale visual datasets of private environments.

Researchers in human-centered computing have emphasized the need for clearer disclosure and safeguards, but industry practices remain inconsistent. As the volume of collected data grows, so too does the potential risk associated with breaches, misuse, or secondary applications.

The reliance on gig workers to generate training data underscores a central reality of humanoid robotics: progress depends not only on engineering advances, but on access to large-scale, real-world human behavior.

This data-centric approach may accelerate development, but it also introduces new questions about labor, ownership, and privacy. As physical AI moves closer to commercial deployment, the systems being built will increasingly reflect not just technological innovation, but the global infrastructure of work that supports them.


New Robotic Skin Brings Human-Like Touch Closer to Machines

Researchers have developed a flexible sensor that allows robots to detect gentle touch with high precision, marking a step toward safer human-machine interaction.

By Laura Bennett | Edited by Kseniia Klichova
A new flexible sensor system allows robotic hands to detect and respond to light pressure, enabling safer handling of fragile objects and more natural human-machine interaction. Photo: Kseniia Klichova / RobotsBeat

Robots have made rapid progress in vision and motion, but touch has remained a persistent limitation. Without reliable tactile feedback, even advanced systems struggle to handle fragile objects or safely interact with humans. A new class of flexible sensors developed by researchers at Penn State suggests that gap may be narrowing.

The team has created a lightweight “robotic skin” capable of detecting extremely small pressure changes while maintaining durability under repeated use. The development reflects a broader push in robotics to move beyond perception and mobility toward physical intelligence – systems that can interpret and respond to the physical world with greater nuance.

Turning Pressure into Real-Time Control

At the core of the system is a small, flexible sensor built around graphene aerogel, a porous material that converts mechanical pressure into electrical signals. The structure allows the sensor to respond quickly to light touch while remaining stable under heavier loads, addressing a common tradeoff between sensitivity and durability.

Each sensor can register contact in just over 100 milliseconds and recover shortly after, enabling near real-time feedback. When arranged in arrays, these sensors generate pressure maps that function similarly to human skin, allowing robots to interpret how force is distributed across their surface.

This capability shifts tactile sensing from passive measurement to active control. In demonstrations, robotic hands equipped with the sensors adjusted grip strength dynamically, preventing damage to delicate objects such as soft food items. The system effectively translates touch into immediate motor responses, closing a loop that has historically been difficult to achieve in robotics.
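The control side can be simple once the sensing is trustworthy. The toy proportional loop below, with a simulated sensor standing in for the Penn State hardware, sketches how a pressure map can drive grip adjustment:

```python
import numpy as np

def grip_step(pressure_map: np.ndarray, target_force: float,
              grip_cmd: float, gain: float = 0.05) -> float:
    """One proportional step: tighten while total contact force is below
    target, ease off once it overshoots (protecting soft objects)."""
    error = target_force - pressure_map.sum()
    return float(np.clip(grip_cmd + gain * error, 0.0, 1.0))

def simulated_tactile(grip: float) -> np.ndarray:
    # Toy sensor model: an 8x8 map whose total force grows with grip closure.
    return np.full((8, 8), 5.0 * grip / 64.0)

grip = 0.0
for _ in range(100):
    grip = grip_step(simulated_tactile(grip), target_force=2.0, grip_cmd=grip)
print(f"settled grip command: {grip:.2f}")  # converges near 0.40
```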

From Grasping to Perception

Beyond simple force control, the sensor system introduces a new layer of perception. By analyzing pressure patterns, robots can begin to distinguish between different materials and objects based on how they respond to touch.

In experimental tests, researchers trained a lightweight model to classify food items using tactile data alone. After repeated training cycles, the system achieved accuracy above 99%, suggesting that touch-based recognition could complement or, in some cases, substitute for visual input.
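Mechanically, the recognition step is ordinary supervised learning: each touch yields a pressure map that can be flattened into a feature vector. A self-contained sketch on synthetic data (not the study’s dataset or model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def touches(base_pressure: float, n: int) -> np.ndarray:
    """n synthetic touches, each a flattened 8x8 pressure map."""
    return base_pressure + 0.1 * rng.standard_normal((n, 64))

# Firm items press back harder and more evenly than soft ones.
X = np.vstack([touches(0.3, 200), touches(0.8, 200)])
y = np.array([0] * 200 + [1] * 200)  # 0 = soft, 1 = firm

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.1%}")
```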

This has implications for environments where vision is unreliable, such as cluttered industrial settings or domestic spaces with variable lighting. It also aligns with a growing interest in multimodal AI systems that combine vision, language, and physical interaction.

The same sensing approach has also been applied to wearable devices, where it can track pulse signals and joint movement with consistent accuracy. This points to potential crossover applications in healthcare, prosthetics, and rehabilitation.

Expanding the Role of Tactile Intelligence

The development highlights a broader shift in robotics toward integrating sensing, control, and learning into unified systems. While vision-based AI has dominated recent advances, tactile intelligence is emerging as a critical component for real-world deployment.

Companies such as Tesla and Nvidia have emphasized the importance of physical interaction in next-generation AI systems, particularly in humanoid robotics and automation. However, progress in touch sensing has lagged behind advances in perception and planning.

The Penn State research suggests that scalable, low-cost tactile systems may begin to close that gap. The sensors can also detect pressure changes in non-robotic contexts, such as monitoring swelling in battery systems – an early indicator of potential failure in electric vehicles.

Despite the progress, the technology remains in an early stage. Challenges include miniaturization, long-term reliability, and integration with existing robotic platforms. Researchers are also exploring ways to expand the sensing capabilities to include temperature and stretch, bringing the system closer to the complexity of human skin.

The ability to sense and respond to gentle touch is likely to be a defining feature of next-generation robots, particularly as they move into homes, healthcare settings, and collaborative workplaces. While the current system is still experimental, it illustrates how advances in materials science and AI are converging to address one of robotics’ most persistent limitations.

If scaled successfully, tactile sensing could shift robots from rigid, pre-programmed machines to adaptive systems capable of interacting with the physical world in a more human-like way.
