Northwestern Engineers Build Self-Reconfiguring Modular Robots

Researchers at Northwestern University have developed modular “legged metamachines” that can flip, jump, and continue operating even after being split into pieces, offering a new approach to resilient robotics.

By Daniel Krauss | Edited by Kseniia Klichova

Engineers at Northwestern University have developed a new type of modular robot capable of continuing to move and operate even after being physically separated into pieces, a design approach that could change how robots are built for unpredictable environments.

The machines, described by researchers as “legged metamachines”, are composed of self-contained robotic modules that can connect, detach, and reorganize themselves while maintaining mobility. Each module includes its own electronics, power source, and motor, allowing it to function as an independent robotic unit.

When combined into larger structures, the modules behave like limbs of a larger robot, capable of performing complex movements such as jumping, flipping, and traversing uneven terrain.

The project explores how robotic systems might achieve a level of resilience that traditional designs lack.

Robots Built from Autonomous Modules

Unlike conventional robots built around a single centralized body, metamachines are constructed from multiple independent units that snap together like building blocks.

Each module resembles a small mechanical limb composed of two elongated segments connected through a central spherical joint. Inside that spherical core are the essential components required for operation, including circuitry, battery power, and a motor.

Individually, these modules are capable of rolling or jumping across the ground. But when combined into multi-limbed configurations, they form coordinated robotic structures capable of far more complex movement.

Researchers describe this architecture as similar to a robot made from smaller robots, where each piece contributes to the overall motion while retaining its own sensing and control systems.

AI Designed the Robot’s Body

The unusual appearance and motion of the metamachines emerged from an AI-driven design process rather than conventional engineering.

The research team used an evolutionary algorithm that simulated a process similar to natural selection. Digital robot designs were generated, tested in simulation, and iteratively modified through virtual “mutations” until high-performing configurations emerged.
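
The loop described here follows a standard evolutionary-search pattern. The sketch below is a minimal Python illustration of that pattern, not the team's actual pipeline; the mutation operator and the simulated-distance fitness score are placeholder assumptions.

```python
import random

def evolve_designs(seed_design, mutate, fitness, generations=100, population=50):
    """Minimal evolutionary search: mutate candidate designs, keep the fittest.

    mutate() perturbs a design (say, limb count or joint placement) and
    fitness() scores it, e.g. distance traveled in a physics simulation.
    Both are placeholders, not the study's actual operators.
    """
    pool = [seed_design]
    for _ in range(generations):
        # Create variants of the current designs through random "mutations".
        offspring = [mutate(random.choice(pool)) for _ in range(population)]
        # Score every candidate and keep only the top performers.
        pool = sorted(pool + offspring, key=fitness, reverse=True)[:population]
    return pool[0]  # the best design found
```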

Because the algorithm explored design possibilities unconstrained by traditional engineering intuition, it produced unusual structures whose movements resemble those of animals.

Some configurations undulate across terrain like seals, while others bound like small mammals or leap using spring-like dynamics.

According to the researchers, these AI-evolved designs allowed the robots to move effectively across a variety of surfaces.

Surviving Damage and Reassembling

Perhaps the most distinctive feature of the metamachines is their ability to continue functioning after severe physical damage.

In traditional robots, losing a limb or structural component often renders the entire system unusable. In modular metamachines, however, damage simply alters the configuration of the system.

If a component is severed, the remaining modules immediately adjust their movement pattern and continue traveling with fewer limbs.

Meanwhile, detached modules do not become inert debris. Each segment remains an autonomous robot capable of sensing its environment and moving independently.

Researchers observed detached modules crawling or rolling across the terrain, potentially allowing them to reconnect with the rest of the system.
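
One way to picture this resilience in control terms is a gait selector that re-derives the movement pattern from whichever modules still respond, rather than assuming a fixed body plan. The sketch below is a speculative illustration only; the ping() health check and the gait table are invented for the example, not the Northwestern controller.

```python
def active_modules(modules):
    """Return the modules that still respond; severed ones simply drop out."""
    return [m for m in modules if m.ping()]  # ping() is a hypothetical health check

def select_gait(modules):
    """Pick a movement pattern based on how many limbs remain (illustrative table)."""
    gaits = {1: "roll", 2: "inchworm", 3: "tripod_hop", 4: "bound", 5: "gallop"}
    n = len(active_modules(modules))
    return gaits.get(n, "roll")  # a lone module falls back to rolling
```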

The team described the result as a form of functional resilience that resembles biological organisms capable of regenerating or adapting after injury.

Testing Robots in the Real World

To validate the concept outside of simulation, the researchers built physical prototypes composed of three, four, and five modules.

The robots were tested outdoors across uneven terrain, including sand, soil, and forest floor environments.

During these experiments, the metamachines demonstrated the ability to flip themselves upright when overturned and to traverse obstacles without external control adjustments.

The experiments were designed to test whether AI-evolved designs developed in computer simulations could function effectively in real-world environments.

According to the researchers, the robots performed these movements immediately after assembly without requiring manual calibration.

What Modular Robots Could Enable

The research highlights a potential direction for robotics focused on adaptability and resilience rather than rigid precision.

Modular machines capable of self-repair and reconfiguration could be particularly useful in environments where human intervention is difficult or impossible.

Possible applications include planetary exploration, disaster response, and infrastructure inspection, where robots may encounter unpredictable terrain or physical damage.

By distributing intelligence and mobility across many small units rather than relying on a single central structure, such systems may continue operating even when individual components fail.

As robotics researchers continue exploring new approaches to embodied intelligence, designs inspired by biological resilience may play an increasingly important role in machines intended to operate beyond controlled laboratory environments.

News, Robots & Robotics, Science & Tech

Beeple Installs Robot Dogs with Musk and Zuckerberg Heads at Berlin’s Neue Nationalgalerie

American digital artist Beeple has installed a group of robot dogs fitted with hyper-realistic silicone heads modeled after Elon Musk, Mark Zuckerberg, Jeff Bezos, Andy Warhol, and Pablo Picasso at Berlin’s Neue Nationalgalerie, where they roam freely and print AI-transformed images of their surroundings.

By Laura Bennett | Edited by Kseniia Klichova

American digital artist Beeple, whose legal name is Mike Winkelmann, has opened an interactive installation at Berlin’s Neue Nationalgalerie featuring robot dogs fitted with hyper-realistic silicone heads modeled after some of the most recognizable figures in technology and cultural history. The dogs roam freely through the museum, carrying the likenesses of Elon Musk, Mark Zuckerberg, Jeff Bezos, Andy Warhol, and Pablo Picasso – as well as a head modeled after Beeple himself.

The work, titled “Regular Animals”, was first shown at Art Basel Miami Beach in 2025 and is now on extended display in Berlin.

How the Installation Works

Each robot dog is equipped with integrated cameras that continuously capture images of its surroundings as it moves through the gallery. Those images are processed by AI and periodically printed, with the output filtered through the personality or aesthetic worldview of the figure each dog represents. The Picasso dog produces images in a Cubist style; the Warhol dog renders its surroundings as pop art. The technology billionaire dogs reinterpret their surroundings through AI models conditioned on each figure’s public identity and worldview.
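
Functionally, each dog appears to run the same capture, transform, and print loop, differing only in the persona that conditions the image model. Below is a schematic sketch of such a loop, with every name and API invented for illustration.

```python
def patrol_step(dog, camera, image_model, printer):
    """One cycle of the installation's loop (every API here is hypothetical)."""
    frame = camera.capture()                                  # continuous scene capture
    styled = image_model.transform(frame, style=dog.persona)  # persona-conditioned AI restyling
    if dog.should_print():                                    # printing is periodic, not per-frame
        printer.output(styled)                                # the deliberately crude delivery
```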

The printing mechanism is deliberately unglamorous: the dogs occasionally stop and produce the images in a manner the artist and press have described as defecating. The choice of delivery method is part of the work’s visual language.

The Commentary Behind the Hardware

Beeple has been direct about the installation’s intent. “In the past, our view of the world was shaped in part by how artists saw the world,” he told the Associated Press. “How Picasso painted changed how we saw the world, how Warhol talked about consumerism, pop culture, that changed how he saw those things.”

The figures who shape perception now, he argues, are not artists but technology executives controlling algorithmic platforms that determine what billions of people see and do not see. “That’s an immense amount of power that I don’t think we’ve fully understood, especially because when they want to make a change, they don’t need to lobby the U.N. They don’t need to get something through Congress or the EU – they just wake up and change these algorithms.”

By placing those figures’ faces on quadruped robots – hardware associated with surveillance, industrial automation, and military research – the installation draws a connection between algorithmic power and physical AI systems that is rarely made this explicitly in a public cultural setting.

Lisa Botti, the exhibition’s curator, said artificial intelligence is among the phenomena most significantly affecting daily life and that museums are the appropriate spaces for society to examine such shifts. Beeple, according to Christie’s, is the third most expensive living artist to sell at auction, after David Hockney and Jeff Koons.

Artificial Intelligence (AI), News, Robots & Robotics

LG Electronics and Nvidia in Talks on Robotics, AI Data Centers, and Mobility

LG Electronics has confirmed it is in discussions with Nvidia on potential cooperation spanning robotics, AI data center infrastructure, and mobility technologies, as both companies deepen their positions in physical AI.

By Daniel Krauss | Edited by Kseniia Klichova
At CES 2026, LG introduced AI-powered living spaces spanning from homes to vehicles. Photo: LG Electronics

LG Electronics has confirmed it is in discussions with Nvidia on potential cooperation across three areas: robotics development, AI data center infrastructure, and future mobility applications. No formal agreement has been announced. The talks follow a visit by Madison Huang, a senior Nvidia executive focused on physical AI platforms, to LG Electronics and other major South Korean companies.

The confirmation positions LG as an active participant in the physical AI ecosystem at a moment when global hardware manufacturers are moving to align with leading AI platform providers across industrial, data infrastructure, and autonomous systems markets.

What Is Being Discussed

The three areas under discussion reflect distinct but converging priorities. In robotics, LG has been building out a service and commercial robotics business through its subsidiary LG Electronics Business Solutions, with cleaning, delivery, and guide robots already deployed in hotels, hospitals, and commercial buildings. A partnership with Nvidia’s physical AI stack – which includes Isaac simulation, Jetson edge compute modules, and Omniverse digital twin infrastructure – would give LG access to the training and deployment tools that are becoming standard across the humanoid and service robotics sector.

On data centers, Nvidia’s accelerated computing infrastructure is the dominant platform for AI model training and inference at scale. LG’s interest in AI data center cooperation reflects a broader shift among large electronics manufacturers toward providing AI-optimized infrastructure solutions to enterprise customers, rather than competing purely on consumer devices.

The mobility component aligns with LG’s existing investments in vehicle components and smart home-to-vehicle connectivity systems, areas where Nvidia’s DRIVE platform for autonomous vehicle computing has established significant market presence.

Strategic Context

The discussions gained public attention through the reported visit of Madison Huang to South Korean companies, a trip that signals Nvidia’s active effort to deepen physical AI partnerships in a country with significant electronics manufacturing capability and a growing robotics industry. South Korea’s government has identified robotics and AI as national strategic priorities, and companies including Samsung, Hyundai, and LG are all expanding their positions in the sector.

LG and Nvidia have not disclosed a timeline for reaching a formal agreement, the financial terms under discussion, or which specific product or platform areas any agreement would initially cover. The talks remain at an exploratory stage, though the combination of LG’s manufacturing scale and Nvidia’s AI infrastructure position would, if formalized, add a significant hardware partner to Nvidia’s physical AI ecosystem.

Kinetix AI Unveils KAI Humanoid with 115 Degrees of Freedom and 18,000-Sensor Tactile Skin

Shenzhen-based Kinetix AI has unveiled KAI, a full-sized humanoid robot with 115 degrees of freedom, a 36-DoF dexterous hand, and a full-body tactile skin system with 18,000 sensors, targeting service and home assistance applications at a sub-$40,000 price point.

By Rachel Whitman | Edited by Kseniia Klichova

Shenzhen-based startup Kinetix AI held its GIFTED press conference on April 26 to unveil KAI, a full-sized humanoid robot designed for service and home assistance applications. The company, also operating as Kai Robotics, was founded by veterans of the original R&D team behind the XPENG Iron humanoid. KAI is targeting a sub-$40,000 price point and mass production in late 2026.

The platform’s specifications are ambitious across hardware, sensing, and AI architecture – though the gap between laboratory demonstration and reliable real-world deployment remains the central challenge the company will need to close before commercial scale is achievable.

115 Degrees of Freedom

The most distinctive technical claim is KAI’s 115 degrees of freedom across the full body – a figure substantially higher than the 20 to 45 DoF typical of most contemporary humanoid platforms. The articulation range includes shoulder movement, torso flexion to 75 degrees, and neck rotation across a 65-degree range, giving the robot a range of motion intended to closely approximate human flexibility.

The hands are the most mechanically complex element. Each features 36 DoF – 22 active and 14 passive joints. The passive joints act as mechanical buffers, allowing the hand to conform to objects and absorb impact forces without requiring immediate computational response. The company describes this as a safety feature for domestic use, where contact with objects and people is frequent and unpredictable.

Tactile Sensing and Battery Safety

KAI is covered in a synthetic tactile skin containing 18,000 sensing points capable of detecting forces as light as 0.1 newtons. The system enables what Kinetix AI calls haptic-aware manipulation – the ability to modulate grip force and contact behavior based on real-time pressure feedback across the robot’s surface.
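
In control terms, haptic-aware manipulation amounts to closing a feedback loop between the skin's pressure readings and the commanded grip force. A minimal sketch of one such loop follows, taking the stated 0.1-newton sensing floor as given; the function names and the proportional gain are illustrative assumptions, not Kinetix AI's implementation.

```python
SENSE_FLOOR_N = 0.1  # lightest detectable contact force, per the company's spec

def regulate_grip(target_force_n, read_skin_forces, set_motor_force, gain=0.2):
    """One proportional-control step: nudge grip toward a target contact force."""
    readings = [f for f in read_skin_forces() if f >= SENSE_FLOOR_N]
    measured = max(readings, default=0.0)  # peak contact pressure on the hand
    error = target_force_n - measured
    set_motor_force(gain * error)          # ease off if squeezing too hard
```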

Power is supplied by a 1.7 kWh semi-solid-state battery, a chemistry choice that reduces thermal runaway risk compared to conventional lithium-ion packs. The selection mirrors a broader trend among Chinese humanoid manufacturers, including XPENG Robotics, toward safer battery architectures for robots operating in proximity to people.

Data Strategy and AI Architecture

KAI’s intelligence layer is built around what Kinetix AI calls the KAI World Model, a closed-loop architecture comprising Base, Action, and Evaluation modules. The system is designed to predict environmental changes and assess the safety of candidate movement trajectories before executing them – a simulation-before-action approach that parallels techniques used across physical AI development more broadly.
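
That closed loop can be read as: propose candidate trajectories, roll each one forward through a learned world model, and execute only those predicted to be safe. The sketch below expresses that reading; the module names echo the announcement, but everything else is assumed.

```python
def act(state, propose, world_model, is_safe, score):
    """Simulation-before-action: vet candidate moves before committing to one.

    propose() generates candidate trajectories (the Action module's role),
    world_model.rollout() predicts the resulting state (Base), and
    is_safe()/score() stand in for the Evaluation module. All are placeholders.
    """
    candidates = propose(state)
    safe = [t for t in candidates if is_safe(world_model.rollout(state, t))]
    if not safe:
        return None  # nothing passes the safety check; hold position
    return max(safe, key=lambda t: score(world_model.rollout(state, t)))
```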

To address the data scarcity problem that constrains most humanoid AI training, Kinetix AI developed the KAI Halo, a lightweight head-mounted device worn by human operators during normal daily routines. The device captures first-person video, body pose, and environmental point cloud data, generating training data from natural human behavior rather than structured motion-capture sessions. The company argues that this approach produces a more diverse and naturalistic dataset than traditional capture methods.
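
A training record from such a device would presumably bundle synchronized video, pose, and geometry for each time-step. The schema below is purely hypothetical, intended only to make the data shape concrete.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class HaloSample:
    """One time-step from a head-mounted recorder (invented schema, not KAI Halo's)."""
    timestamp_s: float
    rgb_frame: np.ndarray    # first-person video frame, H x W x 3
    body_pose: np.ndarray    # wearer joint positions, e.g. (num_joints, 3)
    point_cloud: np.ndarray  # environmental depth points, (num_points, 3)
```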

Market Positioning and the Reliability Question

KAI is positioned as a general-purpose helper for retail, concierge, and home assistance roles rather than heavy industrial applications. The sub-$40,000 target price is designed to be competitive within a segment where most platforms remain either significantly more expensive or more narrowly capable.

The architecture’s complexity – 115 DoF, 18,000 sensors, semi-solid-state batteries – introduces significant engineering challenges in maintaining system reliability outside laboratory conditions. XPENG’s own robotics leadership has publicly identified hardware reliability, including signal disconnection and mechanical failure rates, as a primary bottleneck for the industry. Whether KAI’s high-DoF design can sustain stable performance in the unstructured environments it targets will determine whether the platform reaches commercial deployment on its stated timeline.

News, Robots & Robotics, Startups & Venture

Humanoid Robots Close In on Human 100-Metre Sprint Record as Locomotion Advances Accelerate

Unitree’s H1 robot has recorded a peak sprint speed of 10.1 meters per second on an athletics track, approaching Usain Bolt’s average race speed of 10.44 meters per second, as Chinese manufacturers push bipedal locomotion toward the limits of human athletic performance.

By Daniel Krauss | Edited by Kseniia Klichova
A bipedal humanoid robot sprinting on an athletics track during a speed test, with a velocity measurement device recording peak speed. Photo: Unitree

Humanoid robots are converging rapidly on the limits of human athletic performance in sprinting. Unitree Robotics recently released footage of its H1 robot reaching a peak speed of 10.1 meters per second on an athletics track – a figure that approaches the average race speed of 10.44 meters per second Usain Bolt maintained during his 9.58-second 100-metre world record. Unitree’s CEO Wang Xingxing has publicly predicted that Chinese humanoid robots will break the 10-second barrier in the 100-metre dash by mid-2026.

The sprint developments follow the Beijing half-marathon in April, where Honor’s humanoid robot Lightning completed the 13.1-mile course in 50 minutes and 26 seconds – nearly seven minutes faster than the standing human world record. Taken together, the results mark a rapid compression of the performance gap between human and robotic bipedal locomotion across both endurance and speed dimensions.

What the Speed Numbers Represent

Unitree’s 10.1 m/s figure was recorded as the H1 passed a speed-measuring device during a track test, with the company noting possible measurement error in the video. The robot weighs approximately 62 kilograms with a combined leg length of 80 centimeters – proportions comparable to an average adult human. MirrorMe Tech, a startup linked to Zhejiang University, has separately demonstrated a humanoid named Bolt reaching 10 meters per second on a treadmill, with an explicit design goal of approaching or exceeding the biological limits of human motion.

At 10 seconds flat for a 100-metre sprint, a humanoid robot would place within range of elite Olympic competition. The current humanoid robot 100-metre record, set at the 2025 World Humanoid Robot Games, stands at 21.50 seconds – a figure that illustrates how quickly the performance envelope is shifting.
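
The arithmetic behind these comparisons is straightforward, and the short calculation below reproduces the figures cited in this article. Note that a peak of 10.1 m/s does not yet imply the 10.0 m/s average needed to break ten seconds.

```python
bolt_avg = 100 / 9.58           # ≈ 10.44 m/s: Bolt's average over his 100 m world record
robot_record_avg = 100 / 21.50  # ≈ 4.65 m/s: pace of the current humanoid 100 m record
sub_10_avg = 100 / 10.0         # = 10.0 m/s: average speed required to break 10 seconds

# A 10.1 m/s peak still falls short of a 10.0 m/s average held over the full distance.
print(f"{bolt_avg:.2f} {robot_record_avg:.2f} {sub_10_avg:.1f}")
```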

Engineering Progress, Not Scientific Breakthrough

Researchers with deep experience in bipedal robotics caution against overstating what the speed records demonstrate. Alan Fern, a computer science professor at Oregon State University who helped develop the Cassie bipedal robot, said the basic principles of robot locomotion are not new. What changed in the past year, he argued, was engineering quality and investment volume – faster machines that hold together longer, rather than a fundamental advance in how robots learn to move.

“What changed this year was good old-fashioned engineering and investment,” Fern said. “Last year’s robots were slower, and many broke. This year’s machines were fast and held together. That is not nothing, but it is not a breakthrough either.” Yanran Ding, a robotics professor at the University of Michigan, identified heat management as the more significant engineering achievement behind sustained high-speed operation.

Jonathan Hurst, whose company Agility Robotics builds the Digit warehouse humanoid, drew a sharp distinction between track performance and operational readiness: a robot that can run a premapped course is still far from one that can navigate safely among people in a warehouse, and that is the gap the industry is working to close. “It’s like looking at the first cars and being like, ‘It doesn’t fly,’” Hurst said. “It’s a pretty high bar.”

Why Speed Benchmarks Still Matter

The investment in locomotion speed serves purposes beyond athletic competition. High-speed bipedal movement requires tight integration across perception, actuation, and learned control policies – the same control stack that governs how a robot navigates dynamic environments, responds to unexpected disturbances, and maintains stability under load. Progress at the performance extremes tends to transfer into improved reliability at the operational middle.

For Chinese manufacturers, publicly demonstrated speed records also carry strategic value in a sector where national competition is explicit. With more than 150 humanoid robot companies active in China and government support tied to performance milestones, speed benchmarks function as both technical validation and competitive positioning.

News, Robots & Robotics

AtkinsRéalis and Oxford Robotics Institute Partner to Deploy Autonomous Robots in Nuclear Sites

AtkinsRéalis and the University of Oxford’s Oxford Robotics Institute have formed a partnership to commercialize autonomous inspection and manipulation robots for nuclear decommissioning and energy sector applications, building on deployments already active at Sellafield.

By Laura Bennett | Edited by Kseniia Klichova
An autonomous mobile robot conducting inspection and radiation mapping in a hazardous industrial facility, operating without direct human presence in the environment. Photo: AtkinsRéalis

AtkinsRéalis, the engineering and project management firm, has formed a partnership with the University of Oxford’s Oxford Robotics Institute to accelerate the deployment of autonomous robots in nuclear and wider energy sector environments. The collaboration formalizes and scales a body of work already active in the UK, where ORI-developed systems have been integrated into AtkinsRéalis platforms for autonomous navigation, mapping, and radiation hotspot detection at nuclear sites including Sellafield.

The partnership’s initial focus is on converting those proven UK deployments into commercial products available to international customers. Systems currently operating as mobile inspection vehicles and manipulation platforms will be refined in ORI’s laboratory infrastructure before transitioning into field-ready applications through AtkinsRéalis’ nuclear engineering capabilities.

Why Nuclear Is a Demanding Test Environment

Nuclear decommissioning and inspection represent one of the most constrained deployment contexts in industrial robotics. Human access is limited by radiation exposure limits, physical endurance, and safety protocols that restrict time on-site. Autonomous robots that can navigate, map, and detect radiation anomalies without continuous human presence directly address those constraints, extending operational capability into areas and durations that human crews cannot sustain.
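
As a simplified picture of the mapping task, a robot that accumulates dose-rate readings at surveyed positions could flag anomalies with a simple statistical rule, as in the sketch below. The threshold rule and data layout are assumptions for illustration, not the deployed ORI and AtkinsRéalis system.

```python
import statistics

def find_hotspots(survey, sigma=3.0):
    """Flag surveyed positions whose dose rate is anomalously high.

    survey maps (x, y) grid positions to dose-rate readings (e.g. µSv/h).
    A reading more than sigma standard deviations above the site mean is
    flagged as a hotspot: a deliberately simple anomaly rule.
    """
    rates = list(survey.values())
    mean, stdev = statistics.mean(rates), statistics.pstdev(rates)
    return {pos: r for pos, r in survey.items() if r > mean + sigma * stdev}
```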

Reducing personnel exposure to hazardous conditions is the core operational driver. Beyond safety, autonomous systems can potentially accelerate decommissioning work that would otherwise be paced by human radiation limits – a meaningful economic consideration given the multi-decade timescales and substantial costs associated with nuclear site closure programs.

The partners described the work as part of the emerging field of physical AI – the coupling of simulation, AI-enabled perception, decision-making, and real-world validation to enable reliable autonomous operation in safety-critical environments.

AtkinsRéalis’ Broader Robotics Ecosystem

The ORI partnership extends an ecosystem AtkinsRéalis has been assembling across robotics and AI over the past year. The company has a proposed trial of remote robot operation with Sellafield Ltd, an extended partnership with Canadian robotics manufacturer Kinova, and an active collaboration with Nvidia on simulation and autonomy tools. Together, these alliances position AtkinsRéalis as an integrator across the physical AI stack for nuclear applications – from simulation and perception to manipulation hardware and regulatory compliance.

The deal gives AtkinsRéalis deeper access to ORI’s academic research and specialist testing infrastructure in perception, navigation, manipulation, and digital twin development. First public demonstrations of related technology in the UK are expected in the coming months as trials with nuclear site operators progress.

“This partnership allows us to rapidly move autonomous robotics from research to operational deployment on nuclear power plants around the world,” said Sam Stephens, head of digital for AtkinsRéalis’ nuclear division.

The longer-term objective is a validated suite of autonomous inspection and manipulation platforms deployable across decommissioning, operations, and monitoring tasks at nuclear sites internationally – a market where regulatory requirements, site-specific complexity, and safety standards create high barriers to entry but also durable demand for proven systems.

Business & Markets, News, Robots & Robotics, Science & Tech