German Researchers Develop AI Robot System to Recycle Smart Textiles

Researchers at Osnabrück University of Applied Sciences are building an AI-powered robotic system to identify and sort smart textiles, aiming to make e-textile recycling scalable and sustainable.

By Laura Bennett | Edited by Kseniia Klichova
An AI-powered robotic system uses multispectral imaging and 3D sensors to identify smart textiles and embedded electronics on sorting lines. Photo: Osnabrück University of Applied Sciences

Researchers at Osnabrück University of Applied Sciences in Germany have launched a two-year initiative to develop an AI-powered robotic system capable of identifying and sorting smart textiles for recycling. The project addresses a growing sustainability challenge as garments embedded with electronics become more common in consumer wearables, industrial uniforms, and automotive applications.

The initiative, known as ReSiST-AR, is backed by regional development funding and aims to automate the detection and separation of e-textiles from conventional clothing streams. As smart fabrics integrate sensors, wiring, and electronic modules, traditional textile recycling systems struggle to process them safely and efficiently.

Without automated sorting, many of these garments risk ending up in landfills or being shipped abroad for low-cost manual processing.

Teaching Robots to Recognize Soft, Complex Materials

Unlike rigid materials such as metals or plastics, textiles present unique challenges for robotics systems. Fabrics are flexible, irregularly shaped, and often tangled or layered when placed on conveyor belts. Smart textiles add further complexity by embedding electronic components that may be hidden within seams or woven into fibers.

The research team is developing a robotic platform equipped with multispectral cameras and 3D sensors capable of scanning garments in mixed piles. AI-based material classification algorithms analyze the captured data to distinguish between fabric types and detect embedded electronics.

The goal is to enable robots to identify smart garments regardless of how they are positioned or folded. This requires machine learning models capable of interpreting varied visual and structural cues in real time.
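
The classification step can be illustrated with a minimal sketch: label every pixel of a multispectral frame with a fabric class, including an "electronics" class for embedded components, then flag garments that contain any such pixels. This is not the project's pipeline – the band count, class list, training data, and the use of a scikit-learn random forest are all placeholder assumptions.

```python
# Minimal sketch, not the project's actual pipeline: per-pixel spectra from
# a multispectral camera are classified into fabric types, with an extra
# "electronics" class for embedded components. Band count, class names, and
# training data are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_BANDS = 8  # assumed number of spectral bands
CLASSES = ["cotton", "polyester", "wool", "electronics"]

# Stand-in training data: one reflectance vector per labeled pixel.
rng = np.random.default_rng(0)
X_train = rng.random((400, N_BANDS))
y_train = rng.integers(0, len(CLASSES), 400)
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

def classify_frame(frame: np.ndarray) -> np.ndarray:
    """Label every pixel of an (H, W, N_BANDS) multispectral frame."""
    h, w, _ = frame.shape
    return clf.predict(frame.reshape(-1, N_BANDS)).reshape(h, w)

frame = rng.random((64, 64, N_BANDS))  # stand-in camera frame
label_map = classify_frame(frame)
# Garments containing embedded electronics are routed to a separate line.
has_electronics = bool((label_map == CLASSES.index("electronics")).any())
print("route to e-textile line:", has_electronics)
```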

Automation engineering researcher Steffen Greiser, who leads the project, noted that manual textile sorting is labor-intensive and often outsourced internationally, raising both environmental and ethical concerns. By automating the process, the team hopes to create regional recycling loops that reduce transportation and improve sustainability.

Designing Smart Textiles for Future Recyclability

Beyond sorting, the project also examines how smart textiles can be designed to simplify recycling. A separate research team is analyzing different integration methods, including sewn-in electronics, embroidery, and welded components, to determine how sensors and circuits can remain durable during use while being easier to remove at end of life.

This design-for-recycling approach reflects a broader shift in sustainable manufacturing, where product architecture is increasingly shaped by lifecycle considerations.

Guidelines developed through the project could help manufacturers create smart textiles that balance performance, user requirements, and recyclability. By embedding sustainability principles into product design, the initiative aims to prevent future waste streams from becoming unmanageable.

Robotics Expands into Circular Economy Applications

The ReSiST-AR project highlights how robotics and AI are moving into environmental and circular economy applications. Automated waste sorting has traditionally focused on rigid materials such as plastics and metals. Smart textiles introduce new technical demands that require advanced sensing and AI interpretation.

By combining robotics with multispectral imaging and AI-driven classification, researchers are building systems capable of operating in complex, variable environments where traditional automation struggles.

The project also involves collaboration with regional robotics and textile companies, allowing researchers to test prototypes in real industrial settings. These partnerships aim to accelerate commercialization and ensure that the technology can integrate into existing recycling infrastructure.

As wearable electronics and connected garments continue to proliferate, scalable recycling solutions will become increasingly necessary. The Osnabrück initiative offers an early example of how physical AI systems can support sustainability goals by automating complex sorting tasks that were previously dependent on manual labor.


Zoomlion Debuts Robot Ops Platform at Hannover Messe 2026, Targeting Industrial AI Deployment

At Hannover Messe 2026, Zoomlion has made the global debut of Robot Ops, an embodied intelligence operating system designed to standardize and accelerate robot deployment across industrial, construction, and logistics applications.

By Laura Bennett | Edited by Kseniia Klichova
A wheeled humanoid robot and logistics mobile robot collaborating on a sorting task during a live demonstration at an industrial technology exhibition. Photo: Zoomlion

Zoomlion Heavy Industry Science and Technology, the Hong Kong-listed Chinese industrial machinery manufacturer, made the global debut of Robot Ops at Hannover Messe 2026 this week. The platform is a full-stack embodied intelligence operating system designed to standardize the development and deployment of robots across industrial, logistics, construction machinery, and autonomous driving applications.

The launch positions Zoomlion – better known for cranes, concrete equipment, and agricultural machinery – as an active participant in the industrial AI software layer, not just a hardware manufacturer. The company is exhibiting at the show alongside Amazon Web Services and is participating in the China Pavilion’s Invest in China launch ceremony.

What Robot Ops Does

Robot Ops is built around an engineering concept the company describes as “Data, Software, and Agents,” integrating three operational disciplines – DevOps, DataOps, and AgentOps – into a unified platform. The system covers the full lifecycle of robot deployment: data collection, model training, simulation verification, application development, and ongoing deployment maintenance.

The platform comprises four modules covering basic development tools, imitation learning, reinforcement learning, and task orchestration. Zoomlion says the system improves closed-loop iteration efficiency by more than 50% and is designed to lower the technical barrier for organizations building and deploying robotic systems at scale.
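
Zoomlion has not published Robot Ops code, but the closed loop it describes can be sketched as a sequence of lifecycle stages whose monitoring output seeds the next data-collection pass. Every function below is an illustrative stub; only the stage names come from the announcement.

```python
# Illustrative stubs only -- Zoomlion has not published Robot Ops code.
# Stage names follow the lifecycle described in the announcement.
from typing import Callable

Stage = Callable[[dict], dict]

def collect_data(ctx: dict) -> dict:
    ctx["dataset"] = "episodes-v1"             # teleoperation / field recordings
    return ctx

def train_model(ctx: dict) -> dict:
    ctx["model"] = f"policy@{ctx['dataset']}"  # imitation or reinforcement learning
    return ctx

def verify_in_sim(ctx: dict) -> dict:
    ctx["sim_pass"] = True                     # gate deployment on simulation results
    return ctx

def deploy_and_maintain(ctx: dict) -> dict:
    ctx["deployed"] = ctx["sim_pass"]
    ctx["new_edge_cases"] = 12                 # field failures feed the next iteration
    return ctx

PIPELINE: list[Stage] = [collect_data, train_model, verify_in_sim, deploy_and_maintain]

def run_iteration(ctx: dict) -> dict:
    """One pass of the closed loop; its outputs seed the next pass."""
    for stage in PIPELINE:
        ctx = stage(ctx)
    return ctx

print(run_iteration({}))
```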

The platform directly targets four challenges that have slowed industrial robotics adoption: high technical barriers to entry, difficulty migrating robot behaviors across different scenarios, data bottlenecks in training pipelines, and the absence of structured lifecycle management tools.

Live Demonstration at Hannover

At the show, Zoomlion is running live multi-robot demonstrations under Robot Ops scheduling. A wheeled humanoid robot and a logistics mobile robot collaborate on a logistics-sorting scenario, with the platform managing algorithm coordination, task orchestration, and on-site execution across both systems simultaneously. The company’s first-generation mass-produced humanoid robot, the Z1, is also on display performing dynamic motion-control demonstrations.

The multi-robot setup is designed to demonstrate Robot Ops’ capacity to coordinate heterogeneous robot types – different hardware, different task profiles – within a single orchestration layer, which is the core engineering claim the platform is built around.
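
What an orchestration layer like that does can be sketched in a few lines: route each step of one job to whichever robot advertises the required capability. The classes and methods below are invented for illustration and are not Robot Ops APIs.

```python
# Hypothetical sketch of heterogeneous-fleet orchestration; none of these
# names correspond to published Robot Ops interfaces.
from abc import ABC, abstractmethod

class Robot(ABC):
    capabilities: set[str] = set()

    @abstractmethod
    def execute(self, step: str, item: str) -> None: ...

class WheeledHumanoid(Robot):
    capabilities = {"pick", "place"}
    def execute(self, step, item):
        print(f"humanoid: {step} {item}")

class LogisticsAMR(Robot):
    capabilities = {"transport"}
    def execute(self, step, item):
        print(f"AMR: {step} {item}")

def orchestrate(job: list[tuple[str, str]], fleet: list[Robot]) -> None:
    """Assign each (step, item) pair to the first robot able to perform it."""
    for step, item in job:
        robot = next(r for r in fleet if step in r.capabilities)
        robot.execute(step, item)

# A sorting scenario: pick a parcel, move it, place it at the target chute.
orchestrate(
    [("pick", "parcel-7"), ("transport", "parcel-7"), ("place", "parcel-7")],
    [WheeledHumanoid(), LogisticsAMR()],
)
```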

Broader Industrial AI Context

Zoomlion is also presenting its Industry 5.0 intelligent manufacturing solutions at the show, including its Smart Industrial City initiative, which integrates digital twins, intelligent scheduling, industrial AI, and end-to-end logistics automation into manufacturing operations.

The Robot Ops debut reflects a pattern visible across Hannover Messe 2026 more broadly: established industrial companies using the event to announce software and AI platforms that sit above their existing hardware operations, targeting the orchestration and deployment layer rather than competing purely on robot specifications. For Zoomlion, whose core business is heavy construction equipment, the move into embodied intelligence software represents a deliberate effort to participate in the higher-margin, faster-growing segment of the industrial automation stack.


Agibot Declares 2026 “Deployment Year One”, Unveils Five Robot Platforms and Open AI Architecture

Agibot used its annual partner conference to declare 2026 the first year of large-scale commercial deployment for embodied AI, unveiling five new robotic platforms, eight AI models, and an open-source development architecture called AIMA.

By Rachel Whitman | Edited by Kseniia Klichova
A lineup of humanoid and wheeled robotic platforms displayed at an embodied AI product conference. Photo: Agibot

Agibot used its annual partner conference, APC 2026, to declare 2026 as “Deployment Year One” for embodied AI – the point at which the industry transitions from validating robot capabilities to generating measurable productivity at scale. The company unveiled five new robotic platforms, eight AI models, and a full-stack open development architecture called AIMA, framing the announcements as the infrastructure layer for its next phase of commercialization.

The declaration is grounded in operational data. Agibot rolled out its 10,000th humanoid robot in March 2026, and its humanoid robot revenue grew more than 22-fold in 2025 to become the company’s largest revenue stream, according to figures the company has previously reported.

Five Platforms, One Unified Architecture

Agibot positioned itself at APC 2026 as the only company offering a full-series lineup spanning humanoids, wheeled platforms, and multi-form robots across different sizes and deployment scenarios. The five new platforms are built on what the company calls a “One Robotic Body with Three Intelligences” framework, integrating motion intelligence, interaction intelligence, and operation intelligence into a unified system.

The architecture is designed to address a core limitation of earlier robotic deployments: systems optimized for a single task or environment. By coupling perception, decision-making, and physical execution within one hardware and software stack, Agibot argues that robots can generalize across complex real-world environments rather than operating within narrowly defined parameters.

Seven Commercial Solutions

To support faster enterprise adoption, Agibot introduced seven standardized productivity solutions targeting specific industrial scenarios: loading and unloading, industrial handling, logistics sorting, guidance and retail assistance, retail service stations, security patrol, and industrial and commercial cleaning. Each solution bundles hardware, AI models, and data infrastructure into a repeatable deployment package, reducing the integration complexity that has historically extended robotics pilot timelines.

“The industry is moving from proving what robots can do, to proving what value they can consistently deliver at scale,” said Edward Deng, Founder, Chairman, and CEO of Agibot.

AIMA: An Open Architecture for Embodied AI

The most structurally significant announcement was the launch of AIMA – AI Machine Architecture – described as the first complete open technology system for embodied intelligence. Built on a “1+3+X” design, AIMA consists of a unified robot operating system called Link-U OS, three development platforms covering motion creation, interaction design, and task development, and an extensible layer supporting third-party applications and the AGIBOT Embodied Agent Framework.
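
A minimal sketch of that "1+3+X" layering helps fix the idea; every interface below is invented for illustration, since Agibot has not published AIMA's APIs.

```python
# Conceptual sketch of "1+3+X"; all interfaces here are hypothetical.

class LinkUOS:
    """The "1": unified robot operating system (name from the announcement)."""
    def dispatch(self, command: str) -> None:
        print("OS executing:", command)

# The "3": the development platforms named in the announcement.
PLATFORMS = {"motion", "interaction", "task"}

# The "X": an extensible registry for third-party applications.
_registry: dict[str, "App"] = {}

class App:
    def __init__(self, name: str, platform: str):
        if platform not in PLATFORMS:
            raise ValueError("apps build on one of the three platforms")
        self.name, self.platform = name, platform
        _registry[name] = self

    def run(self, os: LinkUOS) -> None:
        os.dispatch(f"{self.platform}:{self.name}")

App("shelf-picking", "task").run(LinkUOS())
```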

The architecture provides an end-to-end toolchain from low-level system control to high-level application development, and Agibot intends to continue open-sourcing components to attract developers and partners. The company plans to invest more than 2 billion yuan over five years to expand the ecosystem, targeting partnerships with universities, industry operators, and a large-scale developer community.

A Three-Stage Industry Roadmap

Agibot also presented a long-term framework for the embodied AI industry structured around three development curves. The current phase, running through 2026, covers foundational development and early adoption. The period from 2026 to 2030 is characterized as a deployment growth stage, during which robot productivity is expected to approach human levels and scenario-based deployment to scale significantly. From 2030 onward, the company projects a qualitative leap in generalization capability, collective intelligence, and robots beginning to surpass human productivity in selected domains.

The roadmap is a strategic positioning exercise as much as a technical forecast. With more than 150 humanoid robot manufacturers active in China alone, the company that can credibly claim platform and ecosystem leadership – rather than competing on individual robot specifications – is likely to capture a disproportionate share of the emerging enterprise market.


NVIDIA and Partners Demonstrate Production-Ready AI Manufacturing Systems at Hannover Messe 2026

NVIDIA and more than a dozen industrial partners used Hannover Messe 2026 to demonstrate AI-driven manufacturing systems already operating in live production environments, from humanoid robots in electronics factories to vision AI agents on automotive assembly lines.

By Daniel Krauss | Edited by Kseniia Klichova
Robotic systems and AI-driven automation equipment operating on a factory floor during a major industrial technology exhibition. Photo: HANNOVER MESSE

NVIDIA used Hannover Messe 2026 to present a coordinated set of industrial AI deployments across robotics, simulation, vision AI, and edge computing – alongside partners spanning Siemens, Microsoft, ABB, Dassault Systèmes, and a range of specialized software and hardware firms. The common thread across the demonstrations was an emphasis on systems already operating in production rather than technology previews, with several partners presenting quantified outcomes from live deployments.

The show ran April 20-24 in Hannover, Germany, and served as a staging ground for NVIDIA’s physical AI ecosystem, built around its Omniverse, Isaac, Jetson, and IGX compute platforms.

Humanoid Robots in Live Production

The most concrete robotics demonstration came from Humanoid, whose HMND 01 wheeled humanoid has completed autonomous logistics operations at a Siemens electronics factory in Erlangen, Germany – described as a first proof of concept within a live production environment. The robot runs NVIDIA’s Jetson Thor edge AI module for on-device compute and was developed using Isaac Sim and Isaac Lab for simulation and reinforcement learning.

A simulation-first development approach compressed what typically takes up to two years of hardware development down to seven months, according to Humanoid.
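
The simulation-first pattern itself is standard: train and evaluate a policy against a simulated environment's reset/step loop before any hardware exists. The sketch below shows that loop using the generic Gymnasium API, with a stock environment standing in for Isaac Lab's robot-specific ones.

```python
# Generic sim-first training loop shown with the Gymnasium API; Isaac Lab's
# robot environments follow the same reset/step contract at much larger scale.
import gymnasium as gym

env = gym.make("CartPole-v1")  # stand-in for a robot manipulation environment
obs, info = env.reset(seed=0)

total_reward = 0.0
for _ in range(200):
    action = env.action_space.sample()  # a trained policy would act here
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        obs, info = env.reset()

print("episode return under a random policy:", total_reward)
```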

A second notable humanoid deployment involves Hexagon Robotics, whose AEON robot is preparing for assembly operations at a BMW plant in Leipzig – one of the first humanoid deployments in a German production environment. The system was developed using NVIDIA’s Physical AI Data Factory Blueprint and IGX Thor for industrial-grade edge compute with functional safety certification.

SCHUNK’s GROW automation cell demonstrated a standardized, deployable form of physical AI for small and medium-sized manufacturers. The system uses NVIDIA Omniverse and Isaac simulation to train and validate robot behavior before deployment, with Wandelbots’ NOVA platform managing continuous refinement on the shop floor and EY designing the operating model for European SME rollout.

Vision AI Agents on the Factory Floor

Several partners demonstrated vision AI systems built on NVIDIA’s Metropolis and Cosmos platforms, targeting quality control, safety monitoring, and operational intelligence.

Invisible AI launched its Vision Execution System at the show, an agent-based platform that captures and analyzes every production cycle in real time using the NVIDIA Metropolis VSS Blueprint and Cosmos Reason 2 models. The system is already deployed at major automotive manufacturers including Toyota.

Tulip Interfaces showcased Factory Playback, which synchronizes machine telemetry, operator workflows, quality events, and video into a searchable operational timeline. Terex, an industrial equipment manufacturer operating more than 40 plants, uses the platform and is projected to achieve a 3% yield increase and 10% reduction in rework.

Fogsphere demonstrated vision AI deployment in high-risk industrial environments, with Saipem using the platform to detect and respond in real time to safety and environmental events on energy infrastructure.

Sovereign AI Infrastructure

Underlying many of the deployments on display is the Industrial AI Cloud, built in Germany by Deutsche Telekom on NVIDIA infrastructure and designed as a sovereign AI platform for European industry. The facility provides a secure foundation for running AI workloads – from factory-scale digital twins to software-defined robotics – under European data governance requirements.

ABB, Dassault Systèmes, Kongsberg Digital, Microsoft, and Siemens each demonstrated digital twin capabilities built on NVIDIA Omniverse libraries, with applications ranging from real-time asset performance analysis to stress-testing factory configurations before physical changes are made.

QNX expanded its collaboration with NVIDIA to cover safety-critical edge AI, with QNX OS for Safety 8.0 now integrated on NVIDIA IGX Thor alongside the NVIDIA Halos safety stack – a combination targeting robotics, medical, and industrial applications where functional safety certification is a deployment requirement.

NEURA Robotics and AWS Partner to Scale Physical AI Training and Deploy Robots in Amazon Fulfillment Centers

NEURA Robotics and Amazon Web Services have announced a strategic collaboration to train, validate, and deploy cognitive robots at scale, with Amazon exploring deployment of NEURA systems in select fulfillment centers as a real-world data source for Physical AI development.

By Laura Bennett | Edited by Kseniia Klichova
A cognitive humanoid robot operating in a warehouse fulfillment environment alongside human workers. Photo: NEURA Robotics

NEURA Robotics, the German cognitive robotics company, and Amazon Web Services have announced a strategic collaboration to accelerate the development and global deployment of physical AI systems. AWS will serve as the primary cloud provider for NEURA’s Neuraverse platform, handling AI training, real-world data processing, and shared intelligence across robot fleets. Amazon will separately explore deploying NEURA robotic systems in select fulfillment centers, providing production-environment data to accelerate the development of new robotic capabilities in logistics and warehouse operations.

The partnership addresses what both companies describe as the central constraint on physical AI progress: the data gap. Unlike large language models trained on trillions of internet-sourced data points, robotic AI systems have access to a fraction of that volume, and the data they need can only be generated through real-world operation.

Three Areas of Collaboration

The agreement spans cloud infrastructure, AI development, and real-world validation. AWS will provide the computational backbone for the Neuraverse, NEURA’s platform for training and sharing robotic intelligence across fleets. NEURA Gym – a purpose-built training environment where robots practice complex tasks in controlled settings alongside high-fidelity simulation – will integrate with Amazon SageMaker to accelerate joint training pipelines across NEURA and partner use cases.
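
Neither company has published integration details, but a training job on SageMaker is typically launched along the following lines with the SageMaker Python SDK. The container image, S3 paths, and IAM role here are placeholders, and the NEURA Gym export format is assumed.

```python
# Hypothetical sketch of a NEURA Gym -> SageMaker training launch; the image,
# paths, and role are placeholders, not published integration details.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

estimator = Estimator(
    image_uri="<account>.dkr.ecr.eu-central-1.amazonaws.com/neura-gym-train:latest",
    role="arn:aws:iam::<account>:role/SageMakerTrainingRole",
    instance_count=4,                  # distributed training across GPU nodes
    instance_type="ml.p4d.24xlarge",
    hyperparameters={"task": "bin-picking", "epochs": "50"},
    sagemaker_session=session,
)

# Train on episodes exported from the NEURA Gym environment (placeholder path);
# the resulting model artifact lands in S3 for fleet-wide distribution.
estimator.fit({"training": "s3://<bucket>/neura-gym/episodes/"})
```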

The real-world validation component is the most strategically significant element. Amazon’s fulfillment centers represent one of the most operationally demanding and data-rich environments available for robotic deployment – high throughput, variable product mix, and continuous operation at global scale. Each deployment generates the kind of sensor data, task variety, and edge-case exposure that controlled training environments cannot replicate.

“Physical AI will only reach its full potential if intelligence can be trained, validated, and continuously improved in the real world,” said David Reger, CEO and founder of NEURA Robotics. “With AWS, we gain the infrastructure to scale the Neuraverse globally. With Amazon, we have the opportunity to bring Physical AI into one of the most advanced operational environments in the world.”

The Data Infrastructure Problem

The collaboration is built around a structural challenge that applies across the robotics sector. Simulation can approximate physical environments but cannot fully replicate the variability of real-world conditions – surface irregularities, lighting changes, unexpected object configurations, and human interaction patterns. Continuous feedback loops between simulation and real-world deployment are required to close that gap over time.

AWS’s role is to make those loops faster and more scalable. By running the Neuraverse on cloud infrastructure with global reach, NEURA can distribute trained intelligence across its entire robot fleet in near real time, so improvements derived from one deployment environment propagate across all systems.
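
A simplified sketch of that propagation loop: each robot periodically checks a central model store and hot-swaps newer policy weights. The registry, versioning scheme, and polling cadence are all assumed for illustration.

```python
# Simplified fleet-update loop; the registry and versioning are hypothetical.
import time

class ModelRegistry:
    """Stand-in for a cloud model store (e.g., an S3-backed registry)."""
    def __init__(self):
        self.version, self.weights = 1, b"policy-v1"

    def latest(self) -> tuple[int, bytes]:
        return self.version, self.weights

def fleet_update_loop(registry: ModelRegistry, poll_seconds: float) -> None:
    local_version = 0
    for _ in range(3):  # bounded for the example; a robot would loop indefinitely
        version, weights = registry.latest()
        if version > local_version:
            local_version = version
            print(f"robot: loaded policy v{version} ({len(weights)} bytes)")
        time.sleep(poll_seconds)

fleet_update_loop(ModelRegistry(), poll_seconds=0.1)
```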

Ecosystem and Scale

The AWS partnership extends a network NEURA has been assembling across cloud, semiconductors, and industrial deployment. The company’s existing partners include Kawasaki – one of the ten largest robotics companies globally – alongside Schaeffler, Bosch, and Qualcomm Technologies. The stated goal is to enable millions of cognitive robots by 2030.

NEURA has not disclosed the financial terms of the AWS agreement, the timeline for Amazon fulfillment center deployments, or the specific robotic systems under consideration for those pilots. The fulfillment center component is framed as exploratory, meaning commercial deployment at scale remains contingent on performance outcomes from the initial trials.


DEEPX and Hyundai Motor Group Robotics LAB Partner to Build On-Device AI Chips for Robots

DEEPX and Hyundai Motor Group’s Robotics LAB have announced a strategic collaboration to co-develop an ultra-low-power AI computing platform capable of running large-scale generative AI models on-device within robotic systems.

By Rachel Whitman | Edited by Kseniia Klichova
An advanced AI semiconductor chip designed for low-power on-device inference in robotic and autonomous systems. Photo: DEEPX

DEEPX, a South Korean AI semiconductor company specializing in ultra-low-power inference chips, has announced a strategic partnership with Hyundai Motor Group’s Robotics LAB to co-develop a next-generation AI computing platform for robotic systems. The two organizations have been working together on low-power edge AI technology for robotics over the past three years, and the new agreement formalizes that collaboration into a joint architecture program.

The partnership targets a specific technical problem: running large-scale generative AI models in real time on robotic hardware, without relying on cloud connectivity or data center-level power consumption.

The Technical Challenge

Modern robotics AI is increasingly built around Vision-Language-Action and Vision-Language Model architectures – systems that allow robots to interpret camera input, process natural language instructions, and make autonomous decisions in real time. These models are computationally intensive and have historically required significant power and connectivity to run, which limits their deployment in mobile, battery-powered, or field-based robotic systems.

The collaboration will focus on four areas: ultra-low-power AI semiconductor architecture, AI computing hardware systems for robotics, a physical AI software stack, and robotics application AI libraries. The goal is a cohesive computing platform that can support VLA and VLM models at the edge – on the robot itself – rather than offloading inference to external infrastructure.

At the center of the technical effort is DEEPX’s DX-M2, a next-generation chip the company describes as a Physical GenAI semiconductor, designed specifically to run large-scale AI models in ultra-low-power environments for robotics, autonomous mobile systems, and industrial automation applications.
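
To make the deployment model concrete, the sketch below shows the kind of perception-to-action loop such a chip would run entirely on-device. The model interface is a stub invented for the example – it is not DEEPX's SDK – and a real deployment would run a quantized VLA model on the NPU.

```python
# Illustrative on-device VLA loop; the model class is a stub, not DEEPX's SDK.
import numpy as np

class TinyVLAStub:
    """Placeholder for a quantized vision-language-action model."""
    ACTIONS = ("move_forward", "turn_left", "grasp", "stop")

    def predict(self, frame: np.ndarray, instruction: str) -> str:
        # A real model fuses image and text features; this stub just hashes inputs.
        idx = (int(frame.sum()) + len(instruction)) % len(self.ACTIONS)
        return self.ACTIONS[idx]

model = TinyVLAStub()
instruction = "pick up the red bin"
for step in range(3):
    frame = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)  # camera frame
    action = model.predict(frame, instruction)  # inference never leaves the device
    print(f"step {step}: {action}")
```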

Why On-Device Inference Matters

The shift toward on-device AI computation in robotics has direct implications for deployment viability. Robots operating in warehouses, factories, or outdoor environments cannot always maintain low-latency cloud connections, and the power budgets of mobile platforms place hard limits on the compute hardware they can carry. A chip capable of running generative AI models locally removes both constraints.

“The AI industry is rapidly shifting from data center-centric models to a Physical AI era,” said Lokwon Kim, CEO of DEEPX. “Ultra-low-power computing capable of running AI in real-world systems will become the core infrastructure.”

Hyundai Motor Group’s Robotics LAB frames the partnership as part of a broader strategy to build a proprietary technology ecosystem for robots that operate alongside people. “In the era of Physical AI, robots are becoming the closest point of contact between AI technology and people,” said Dong Jin Hyun, Vice President and Head of Robotics LAB at Hyundai Motor Group.

Market Context

The physical AI semiconductor market is projected to reach approximately $123 billion by 2030, with robotics and humanoid systems identified as the primary demand drivers. The segment is attracting attention from both established chipmakers and specialized startups, as the requirement for on-device AI inference in physical systems creates demand that general-purpose data center chips are not optimized to meet.

DEEPX and Hyundai have not disclosed a product timeline or the specific robotic platforms the DX-M2 is intended to power. The partnership agreement covers joint architecture development, suggesting the platform is still in the design phase rather than approaching commercial deployment.
