New Robotic Skin Brings Human-Like Touch Closer to Machines

Researchers have developed a flexible sensor that allows robots to detect gentle touch with high precision, marking a step toward safer human-machine interaction.

By Laura Bennett | Edited by Kseniia Klichova
A new flexible sensor system allows robotic hands to detect and respond to light pressure, enabling safer handling of fragile objects and more natural human-machine interaction. Photo: Kseniia Klichova / RobotsBeat

Robots have made rapid progress in vision and motion, but touch has remained a persistent limitation. Without reliable tactile feedback, even advanced systems struggle to handle fragile objects or safely interact with humans. A new class of flexible sensors developed by researchers at Penn State suggests that gap may be narrowing.

The team has created a lightweight “robotic skin” capable of detecting extremely small pressure changes while maintaining durability under repeated use. The development reflects a broader push in robotics to move beyond perception and mobility toward physical intelligence – systems that can interpret and respond to the physical world with greater nuance.

Turning Pressure into Real-Time Control

At the core of the system is a small, flexible sensor built around graphene aerogel, a porous material that converts mechanical pressure into electrical signals. The structure allows the sensor to respond quickly to light touch while remaining stable under heavier loads, addressing a common tradeoff between sensitivity and durability.

Each sensor can register contact in just over 100 milliseconds and recover shortly after, enabling near real-time feedback. When arranged in arrays, these sensors generate pressure maps that function similarly to human skin, allowing robots to interpret how force is distributed across their surface.

This capability shifts tactile sensing from passive measurement to active control. In demonstrations, robotic hands equipped with the sensors adjusted grip strength dynamically, preventing damage to delicate objects such as soft food items. The system effectively translates touch into immediate motor responses, closing a loop that has historically been difficult to achieve in robotics.
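For readers curious what such a closed loop might look like, the sketch below outlines one control step: read a pressure map from the sensor array, compare peak contact pressure to a target, and nudge the commanded grip force. This is an illustrative toy, not the Penn State implementation; all function names, thresholds, and gains are hypothetical.

```python
# Illustrative sketch of a tactile grip-control loop: read a pressure map
# from a sensor array and adjust grip force toward a target contact pressure.
# All names, thresholds, and gains are hypothetical, not from the published system.

def adjust_grip(pressure_map, grip_force, target=0.5, gain=0.1, max_force=2.0):
    """One control step: compare peak contact pressure to a target and
    nudge the commanded grip force proportionally."""
    peak = max(max(row) for row in pressure_map)
    error = target - peak                        # positive: gripping too lightly
    new_force = grip_force + gain * error
    return min(max(new_force, 0.0), max_force)   # clamp to actuator limits

# Example: a 3x3 pressure map showing light contact in one corner.
pressure_map = [
    [0.0, 0.0, 0.0],
    [0.0, 0.2, 0.1],
    [0.0, 0.1, 0.3],
]
force = adjust_grip(pressure_map, grip_force=1.0)
```

Run at each sensor update (here, roughly every 100 milliseconds), a loop of this shape is what lets a hand tighten on a slipping object or ease off a fragile one without explicit per-object programming.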

From Grasping to Perception

Beyond simple force control, the sensor system introduces a new layer of perception. By analyzing pressure patterns, robots can begin to distinguish between different materials and objects based on how they respond to touch.

In experimental tests, researchers trained a lightweight model to classify food items using tactile data alone. After repeated training cycles, the system achieved accuracy above 99%, suggesting that touch-based recognition could complement or, in some cases, substitute for visual input.

This has implications for environments where vision is unreliable, such as cluttered industrial settings or domestic spaces with variable lighting. It also aligns with a growing interest in multimodal AI systems that combine vision, language, and physical interaction.

The same sensing approach has also been applied to wearable devices, where it can track pulse signals and joint movement with consistent accuracy. This points to potential crossover applications in healthcare, prosthetics, and rehabilitation.

Expanding the Role of Tactile Intelligence

The development highlights a broader shift in robotics toward integrating sensing, control, and learning into unified systems. While vision-based AI has dominated recent advances, tactile intelligence is emerging as a critical component for real-world deployment.

Companies such as Tesla and Nvidia have emphasized the importance of physical interaction in next-generation AI systems, particularly in humanoid robotics and automation. However, progress in touch sensing has lagged behind advances in perception and planning.

The Penn State research suggests that scalable, low-cost tactile systems may begin to close that gap. The sensors can also detect pressure changes in non-robotic contexts, such as monitoring swelling in battery systems – an early indicator of potential failure in electric vehicles.

Despite the progress, the technology remains in an early stage. Challenges include miniaturization, long-term reliability, and integration with existing robotic platforms. Researchers are also exploring ways to expand the sensing capabilities to include temperature and stretch, bringing the system closer to the complexity of human skin.

The ability to sense and respond to gentle touch is likely to be a defining feature of next-generation robots, particularly as they move into homes, healthcare settings, and collaborative workplaces. While the current system is still experimental, it illustrates how advances in materials science and AI are converging to address one of robotics’ most persistent limitations.

If scaled successfully, tactile sensing could shift robots from rigid, pre-programmed machines to adaptive systems capable of interacting with the physical world in a more human-like way.

Artificial Intelligence (AI), News, Robots & Robotics, Science & Tech

Humanoid Robots Are Being Trained by Gig Workers Filming Life at Home

Gig workers across more than 50 countries are recording household tasks to train humanoid robots, revealing a new data economy behind physical AI.

By Rachel Whitman | Edited by Kseniia Klichova
Gig workers are recording everyday household tasks to generate training data for humanoid robots, creating a new global labor layer behind physical AI systems. Photo: Kseniia Klichova / RobotsBeat

The development of humanoid robots is increasingly dependent not just on hardware breakthroughs or AI models, but on a growing global workforce capturing the physical world on camera. Across more than 50 countries, gig workers are now filming themselves performing everyday household tasks to generate training data for robots that are still years away from widespread deployment.

The model, led by startups such as Micro1, reflects a broader shift in how physical AI systems are built. Just as large language models relied on vast corpora of text scraped and labeled at scale, humanoid robots require detailed recordings of human interaction with objects in real-world environments. The difference is that this data must be created, not collected – and it is being produced inside people’s homes.

Building the Data Layer for Physical AI

Humanoid robots face a fundamentally different challenge from software-based AI systems: they must operate in unstructured, unpredictable environments. Tasks such as folding laundry, loading dishwashers, or organizing shelves involve subtle variations that are difficult to simulate or script.

To address this, companies are assembling large datasets of human activity, capturing how people manipulate objects in real settings. Workers are paid to record themselves performing routine tasks, often wearing cameras that track hand movements, object interactions, and spatial context.

The resulting footage forms the foundation for training robot perception and control systems. Companies such as Scale AI have already accumulated tens of thousands of hours of such material, while platforms like DoorDash have begun experimenting with allowing gig workers to contribute training data alongside their primary work.

This emerging pipeline suggests that physical AI will depend on a new category of data infrastructure – one that extends beyond digital content into the physical behaviors of human workers.

A Familiar Economic Structure in a New Domain

The economics of this system closely resemble earlier phases of the AI industry. Workers contributing data are typically paid hourly rates that are competitive within their local economies but represent a small fraction of the value generated downstream.

Participants receive no ownership over the data they produce and no share in the long-term value of the models trained on it. As humanoid robotics companies attract billions in investment, the gap between capital allocation and labor compensation is becoming more pronounced.

This structure mirrors the development of computer vision and natural language processing systems, where data labeling and annotation were outsourced globally. The key difference is that physical AI requires more invasive forms of data collection, capturing not just digital inputs but lived environments.

The result is a new layer of the gig economy, one that sits beneath the visible robotics industry and provides the raw material for its progress.

Privacy Risks Move Into the Home

Unlike earlier data pipelines, which largely relied on public or platform-generated content, the data used to train humanoid robots is often recorded in private spaces. Videos include kitchen layouts, household items, and other details that collectively form a detailed map of domestic life.

This raises questions about data ownership, consent, and long-term storage. Workers may have limited visibility into how their recordings are used, whether they are anonymized, or how long they are retained. The implications extend beyond individual privacy to broader concerns about the creation of large-scale visual datasets of private environments.

Researchers in human-centered computing have emphasized the need for clearer disclosure and safeguards, but industry practices remain inconsistent. As the volume of collected data grows, so too does the potential risk associated with breaches, misuse, or secondary applications.

The reliance on gig workers to generate training data underscores a central reality of humanoid robotics: progress depends not only on engineering advances, but on access to large-scale, real-world human behavior.

This data-centric approach may accelerate development, but it also introduces new questions about labor, ownership, and privacy. As physical AI moves closer to commercial deployment, the systems being built will increasingly reflect not just technological innovation, but the global infrastructure of work that supports them.

Artificial Intelligence (AI), News, Robots & Robotics, Science & Tech

BMW Rebuilds Munich Plant Around AI Brain and 2,000 Robots

BMW has overhauled its Munich plant with an AI-driven production system and thousands of robots, signaling a shift toward software-defined manufacturing for electric vehicles.

By Daniel Krauss | Edited by Kseniia Klichova
BMW’s Munich plant integrates an AI-driven control system with thousands of robots, marking a shift toward fully digitalized and flexible EV manufacturing. Photo: BMW

BMW has completed a €650 million transformation of its Munich factory, embedding artificial intelligence and robotics at the core of production as it prepares to manufacture its next generation of electric vehicles. The overhaul signals a broader shift in automotive manufacturing, where software systems are beginning to orchestrate not only design and engineering, but the physical assembly process itself.

At the center of the upgrade is what BMW describes as an “AI brain” – a centralized system that coordinates production lines, logistics, and quality control across the plant. The system is being deployed as part of the company’s broader iFactory strategy, which aims to standardize digitalized manufacturing across its global operations.

The Munich site, which will begin producing the Neue Klasse i3 sedan in August 2026, is expected to scale to around 1,000 vehicles per day, placing it among the highest-output EV facilities in Europe.

A Software Layer for Physical Production

BMW’s approach reflects a growing convergence between industrial automation and AI-driven orchestration. Rather than treating robotics as isolated systems, the company has integrated approximately 2,000 robotic arms and a fleet of autonomous logistics machines into a unified control architecture.

The AI system manages workflows in real time, from coordinating robotic assembly tasks to directing material movement across the factory floor. Around 200 mobile robots handle internal logistics, transporting components from incoming shipments to production lines. These machines are expected to perform up to 17,000 transport operations per day by 2027, effectively taking over what BMW describes as the “last mile” of factory logistics.

A key feature of the system is its use of digital twins, allowing the factory to simulate and test production scenarios before they are executed. This enables rapid adjustments to workflows, reducing downtime and allowing the plant to respond more quickly to changes in demand or product configuration.

While similar concepts have been tested elsewhere, including at facilities developed by Hyundai, BMW’s implementation stands out for its scale and integration into a high-volume production environment.

Flexibility Becomes a Competitive Requirement

The redesigned Munich plant is built to accommodate a wide range of vehicle variants on a single production line, reflecting the increasing variability of the EV market. According to BMW, production sequences can be reconfigured in as little as six days, compared to weeks or months in conventional factories.

This level of flexibility is intended to allow production to “follow the market”, adapting to shifts in demand, regulatory requirements, or supply chain constraints. It also reduces the need for dedicated production lines for individual models, a structure that has historically limited responsiveness in automotive manufacturing.

The shift aligns with a broader industry move toward modular platforms and software-defined vehicles, where differentiation occurs more through software and configuration than through fundamentally different hardware architectures.

Human Workers Remain in the Loop

Despite the scale of automation, BMW maintains that human workers will continue to play a central role in the factory. Tasks such as installing interiors, wiring, and final assembly will still be carried out by people, supported by robotic systems designed to reduce physical strain and improve precision.

AI is also being applied to quality control. Robotic inspection systems capture and analyze large volumes of visual data to identify defects early in the production process. In some cases, robots can autonomously correct issues, reducing the need for rework at later stages and improving overall throughput.

The company has emphasized that the introduction of AI and robotics is intended to augment, rather than replace, human labor, positioning workers as operators and supervisors within increasingly automated environments.

BMW’s Munich transformation highlights a broader shift in industrial strategy, where competitiveness is increasingly defined by the ability to integrate software, robotics, and data into a cohesive production system. As automakers transition to electric vehicles and face greater market volatility, factories are becoming less like static assembly lines and more like adaptive, software-controlled systems.

The success of this approach will depend not only on technological execution but on whether such highly automated systems can deliver consistent gains in efficiency and quality at scale. For now, BMW’s investment offers one of the clearest examples of how physical AI is beginning to reshape large-scale manufacturing.

DNA Robots Advance Toward Targeted Drug Delivery and Virus Detection

Researchers are developing DNA-based nanorobots capable of delivering drugs and targeting viruses, though the technology remains in early experimental stages.

By Laura Bennett | Edited by Kseniia Klichova
Microscopic robots built from DNA structures are being engineered to navigate the human body, signaling a new frontier in precision medicine and molecular-scale robotics. Photo: Kseniia Klichova / RobotsBeat

The idea of robots operating inside the human body has long been associated with science fiction. But recent advances in DNA-based nanotechnology are beginning to translate that vision into early-stage experimental systems, where programmable molecular machines can move, sense, and interact with biological environments.

Researchers are now designing DNA “robots” capable of delivering drugs directly to diseased cells and identifying viral threats within the bloodstream. While these systems remain far from clinical deployment, they represent a shift in how robotics is defined – extending from mechanical systems into the molecular domain.

Reimagining Robotics at the Molecular Scale

Unlike conventional robots built from metal, electronics, and actuators, DNA robots are constructed from strands of nucleic acids that can be folded, connected, and programmed into functional structures. Using techniques inspired by origami, scientists can create rigid joints, flexible linkages, and dynamic components that mimic mechanical systems at the nanoscale.

This approach adapts established principles from traditional robotics – including rigid-body motion and compliant mechanisms – into a biochemical context. The result is a new class of machines that operate not through motors or gears, but through chemical interactions and structural transformations.

Controlling these systems presents a fundamental challenge. At the molecular level, motion is dominated by random thermal fluctuations, known as Brownian motion, which can disrupt precise behavior. To address this, researchers rely on biochemical programming methods such as DNA strand displacement, where specific sequences act as triggers to initiate movement or change configuration.
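The logic of strand displacement can be sketched as a toy model: a double-stranded complex exposes a short single-stranded "toehold", and an invader strand that binds the toehold and matches more of the template displaces the incumbent. The code below illustrates only this sequence logic; it is not a chemical simulation, and all sequences are invented for the example.

```python
# Toy model of toehold-mediated DNA strand displacement. A double-stranded
# complex exposes a short single-stranded "toehold"; an invader strand that
# binds the toehold and matches more of the template displaces the incumbent.
# A simplified illustration of the trigger logic, not a chemical simulation.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Watson-Crick complement of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in strand)

def displaces(template, incumbent, invader):
    """The invader wins if it is fully complementary to the template and
    covers more of it than the incumbent (toehold binding + branch migration)."""
    full = complement(template)
    return invader == full and len(incumbent) < len(invader)

# The template exposes a 3-base toehold (TCA) not covered by the incumbent.
template  = "ATGCGTTCA"
incumbent = complement("ATGCGT")       # binds only the first six bases
invader   = complement("ATGCGTTCA")    # binds the toehold too -> displaces
triggered = displaces(template, incumbent, invader)
```

In real systems this displacement event is what converts the arrival of a specific DNA sequence into a physical change of configuration, which is why strand displacement serves as a programmable trigger for motion.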

External signals, including light, magnetic fields, and electric fields, can also be used to guide these nanorobots, providing an additional layer of control in otherwise unpredictable environments.

Medical Applications Remain Experimental

The most immediate interest in DNA robotics lies in medicine, where the ability to operate at cellular or even molecular resolution could enable highly targeted interventions. In experimental settings, DNA robots have been designed to locate specific cell types, release therapeutic payloads, and potentially capture or neutralize viruses.

Such systems could function as “nano-surgeons”, delivering drugs with far greater precision than conventional treatments and reducing the side effects associated with systemic therapies. Researchers are also exploring their potential to detect and bind to viral particles, including pathogens such as SARS-CoV-2, the virus that causes COVID-19, as a step toward autonomous diagnostic or therapeutic platforms.

Beyond medicine, DNA robots may also serve as tools for nanoscale manufacturing. By positioning molecules and nanoparticles with sub-nanometer precision, they could enable new forms of computing and materials engineering that are difficult to achieve with existing fabrication techniques.

However, most current systems remain proof-of-concept demonstrations. They typically operate in controlled laboratory conditions and lack the robustness required for real-world biological environments.

From Proof of Concept to Scalable Systems

The transition from experimental prototypes to practical applications presents several challenges. In addition to environmental unpredictability, researchers face limitations in modeling and design. There is currently no comprehensive database of DNA mechanical properties, and simulation tools for predicting nanorobot behavior remain underdeveloped.

Scaling these systems will likely require advances across multiple domains, including bio-manufacturing, materials science, and artificial intelligence. Proposed approaches include the development of standardized DNA component libraries and the use of AI-driven design tools to optimize structures and predict performance.

The broader implication is that robotics may increasingly extend beyond traditional hardware into programmable biological systems. DNA robots, if successfully scaled, could redefine automation at the smallest possible level – enabling machines that operate not in factories or warehouses, but within cells and molecules themselves.

For now, the technology remains in its formative stage. But its trajectory suggests that the next phase of robotics innovation may be less about building larger, more capable machines, and more about engineering systems that can function where conventional robots cannot reach.

News, Robots & Robotics, Science & Tech

LG CNS Expands Physical AI Strategy Through Silicon Valley Partnerships

LG CNS has partnered with Silicon Valley robotics startups to strengthen its physical AI capabilities, combining robot foundation models with new hardware platforms.

By Rachel Whitman | Edited by Kseniia Klichova
LG CNS is expanding its physical AI capabilities through partnerships with Silicon Valley startups, combining robot foundation models with new humanoid hardware platforms. Photo: LG CNS

LG CNS is deepening its push into physical AI through a set of partnerships with Silicon Valley robotics startups, signaling a shift from enterprise software toward integrated AI and robotics systems. The move reflects a broader trend among large technology firms seeking to secure both the software intelligence and hardware platforms required for real-world automation.

The South Korean company announced that it has partnered with U.S.-based startups Config and Dexmate following its Open Innovation Summit held in Silicon Valley on March 19. The initiative is part of an ongoing effort to identify early-stage technologies that can be incorporated into LG CNS’s enterprise-focused AI and automation offerings.

Combining Robot Foundation Models with Hardware

At the center of the partnerships is Config, a startup focused on robot foundation models, a category of AI systems designed to generalize across tasks in physical environments. The company’s technology enables robots to learn from human motion data, translating real-world demonstrations into structured training inputs for robotic systems.

LG CNS plans to integrate Config’s models into its robotics stack to improve precision in dual-arm manipulation, a capability widely seen as critical for industrial automation and service robotics. Unlike traditional robotic programming, which relies on predefined instructions, these models aim to allow robots to adapt to variable environments with less manual configuration.

The partnership with Dexmate, meanwhile, extends LG CNS’s reach into hardware. Dexmate develops humanoid robots equipped with dual arms and wheel-based mobility, offering an alternative to bipedal locomotion that can simplify stability and deployment in structured environments.

LG CNS had previously invested in Dexmate, and the expanded partnership suggests a longer-term strategy of aligning software capabilities with specific hardware platforms rather than remaining hardware-agnostic.

Expanding the Definition of Physical AI

The company’s approach reflects an evolving definition of physical AI, where progress depends on the interaction between machine learning models and mechanical systems rather than advances in either domain alone. By working with both a model developer and a hardware manufacturer, LG CNS is positioning itself within a growing ecosystem that spans perception, control, and actuation.

This mirrors broader industry developments led by companies such as Nvidia, which has promoted integrated frameworks combining simulation, AI training, and robotics deployment. The emphasis on full-stack systems is becoming increasingly important as robotics moves from controlled demonstrations to operational environments.

LG CNS’s expansion into wheel-based humanoid systems also suggests a pragmatic approach to deployment. While bipedal robots remain a long-term goal for many developers, hybrid designs that prioritize stability and efficiency are gaining traction in logistics, manufacturing, and service applications.

Open Innovation as a Scaling Strategy

The partnerships were announced as part of LG CNS’s broader open innovation program, which seeks to identify and collaborate with startups at an early stage. This model allows large enterprises to access emerging technologies without building all capabilities in-house, while giving startups a pathway to commercial deployment.

For LG CNS, the strategy appears aimed at accelerating its transition from enterprise IT services into a provider of AI-driven automation infrastructure. By combining internal capabilities with external innovation, the company is attempting to build a flexible ecosystem that can adapt as both AI models and robotics hardware continue to evolve.

The challenge, as with much of the physical AI sector, lies in translating technical capability into scalable, real-world use cases. While partnerships can accelerate development, widespread deployment will depend on whether these integrated systems can deliver consistent performance in complex environments.

Artificial Intelligence (AI), Business & Markets, News, Startups & Venture

Unitree Files for IPO as Humanoid Robot Market Enters New Phase

Unitree Robotics has filed for a Shanghai IPO after becoming the world’s largest humanoid robot seller, signaling a shift from experimentation to early commercialization.

By Laura Bennett | Edited by Kseniia Klichova
Unitree’s humanoid robots, once showcased in staged demonstrations, are now entering early commercial deployment as the company prepares for a public listing in Shanghai. Photo: Unitree

The planned public listing of Unitree Robotics marks a turning point for the humanoid robotics sector, which has long been defined by prototypes, research funding, and speculative timelines. By moving toward an initial public offering, the Hangzhou-based company is positioning itself as one of the first large-scale tests of whether humanoid robots can sustain a viable commercial market.

Unitree filed to list on the Shanghai Stock Exchange on March 20, seeking to raise 4.2 billion yuan, or about $610 million, to expand manufacturing and research. The company’s trajectory, from viral demonstrations to profitability within a year, places it at the center of a broader shift in how robotics companies are financed and evaluated.

Profitability Arrives Ahead of Mass Adoption

Unlike many peers, Unitree enters the public markets with profitability already established. The company reported an adjusted net profit of 600 million yuan in 2025, a sharp increase from its first profitable year in 2024. Revenue rose to 1.71 billion yuan from 392 million yuan the previous year, reflecting both volume growth and expanding product adoption.

This distinguishes Unitree from earlier entrants such as UBTech Robotics, which has remained unprofitable despite going public. The contrast highlights a widening gap between companies still operating in development mode and those beginning to scale production.

Even so, the market remains early. More than 100 humanoid robotics companies currently operate in China, according to Counterpoint Research, with consolidation expected as capital markets begin to impose stricter performance expectations. Unitree’s IPO is likely to serve as an early signal of which business models can sustain investor confidence.

From Quadrupeds to Humanoids

Unitree’s growth has been driven in part by a transition from quadruped robots to humanoid systems. The company shipped more than 30,000 quadrupeds between 2022 and 2025, establishing a hardware and supply chain base before scaling humanoid production.

In 2025, it sold 5,500 humanoid robots, which accounted for over half of its core revenue, up from less than 2% two years earlier. The majority of these units were sold to research institutions and educational users, indicating that widespread enterprise deployment remains limited.

The shift reflects a broader industry pattern, in which quadruped platforms have served as an intermediate step toward more complex humanoid systems. These earlier products provide revenue, operational data, and manufacturing experience that can be transferred into humanoid development.

Falling Prices and Vertical Integration

One of the more notable signals in Unitree’s prospectus is the rapid decline in pricing. The average price of its humanoid robots fell from roughly 593,400 yuan in 2023 to 167,600 yuan in 2025 – a drop of more than 70% – bringing systems closer to a range that could support broader adoption.

At the same time, gross margins improved to nearly 60%, suggesting that cost reductions are being driven by manufacturing efficiencies rather than discounting alone. Unitree attributes this to its strategy of developing and producing key components in-house, reducing reliance on external suppliers.

This combination of falling prices and improving margins remains rare in the humanoid robotics sector, where most companies are still managing high costs and limited production volumes.

However, external dependencies remain. Like many robotics developers, Unitree relies on computing platforms and chips from Nvidia for core processing capabilities, leaving part of its supply chain exposed to geopolitical and trade uncertainties.

A Market Signal for Physical AI

Unitree’s IPO arrives amid intensifying global competition in humanoid robotics. In the United States, Elon Musk has said that Tesla plans to begin retail sales of its Optimus robots by 2027, framing humanoids as a future mass-market product.

At the same time, the concept of “physical AI” – systems that combine machine learning with real-world interaction – is gaining traction across the industry. Unitree’s robots were featured alongside other platforms at a recent conference led by Jensen Huang, underscoring growing alignment between hardware manufacturers and AI infrastructure providers.

Despite this momentum, near-term demand remains concentrated in research, education, and controlled industrial environments. Unitree’s own projections, which include plans to produce tens of thousands of humanoids annually within five years, suggest confidence in scaling, but not necessarily immediate mass adoption.

The company’s public listing will therefore function as more than a financing event. It will offer one of the first measurable indicators of whether investors view humanoid robotics as an emerging industrial category or as a longer-term technological bet.

Business & Markets, News, Robots & Robotics