NVIDIA Unveils New AI Robotics Stack to Accelerate the Rise of General-Purpose Robots

NVIDIA is expanding its Isaac robotics platform with new AI models, simulation tools, and data pipelines designed to help developers build general-purpose robots and deploy them at scale.

By Laura Bennett | Edited by Kseniia Klichova
Simulation environments created with NVIDIA Isaac enable robots to train on thousands of scenarios simultaneously before deployment in the real world. Photo: NVIDIA

NVIDIA is expanding its robotics software ecosystem with new AI models, simulation frameworks, and development tools aimed at accelerating the creation of general-purpose robots capable of operating in real-world environments.

The updates, announced around the time of NVIDIA's GTC developer conference, reflect a broader shift in robotics toward systems that combine general intelligence with specialized task skills. Rather than building machines designed for a single function, developers are increasingly working toward what NVIDIA describes as "generalist-specialist" robots – machines that can understand instructions, learn new behaviors, and adapt those skills to specific jobs.

At the center of this effort is the NVIDIA Isaac platform, a robotics development stack that integrates simulation, data generation, AI model training, and deployment tools into a unified workflow designed to move robots from experimentation to production more quickly.

From Data Bottlenecks to Synthetic Training

One of the biggest challenges in robotics development has traditionally been data.

Unlike large language models, which can train on vast amounts of text from the internet, robots require detailed examples of physical interactions – how to grasp objects, move through environments, or respond to unexpected conditions. Collecting that data in the real world is slow, expensive, and often dangerous.

NVIDIA’s strategy relies heavily on simulation to address this bottleneck. Its Isaac Sim platform allows developers to recreate physical environments digitally, combining real sensor data with simulated scenarios to generate massive training datasets.

These synthetic environments can reproduce edge cases that would be difficult or risky to capture in the real world, such as rare accidents, unusual object configurations, or extreme environmental conditions.
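The core trick behind this kind of synthetic data generation is domain randomization: perturbing scene parameters so a model trained on the variants generalizes to conditions never captured on a real robot. The sketch below is a minimal illustration of the idea only; the parameter names are invented for this example and are not an Isaac Sim API.

```python
import random

def randomize_scene(base_scene, n_variants=3, seed=0):
    """Domain randomization in miniature: jitter lighting, friction,
    and object pose so each variant exposes the learner to a slightly
    different world. Parameter names are illustrative, not a real API."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n_variants):
        scene = dict(base_scene)
        scene["light_intensity"] *= rng.uniform(0.5, 1.5)  # dim / bright
        scene["friction"] = rng.uniform(0.2, 1.0)          # slippery / grippy
        scene["object_x"] += rng.uniform(-0.1, 0.1)        # shifted placement
        variants.append(scene)
    return variants
```

In production pipelines the same principle is applied to thousands of parameters – textures, camera noise, sensor latency – which is what lets simulation cover the rare accidents and extreme conditions described above.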

According to industry estimates cited by NVIDIA, synthetic data currently accounts for roughly one-fifth of training data used in edge AI systems. By the end of the decade, that share could exceed 90 percent as simulation-based training becomes the dominant approach.

Training Robot Brains in Virtual Worlds

Once data is generated, robots must learn how to act on it.

NVIDIA’s Isaac platform includes robot “brain” models known as vision-language-action systems, which combine perception, reasoning, and control. One example is the company’s GR00T family of models, which developers can adapt and train for specific robotic tasks.

These systems allow robots to interpret visual input, understand natural language instructions, and translate them into physical actions. A robot trained with such models could theoretically learn tasks ranging from folding laundry to navigating hospital corridors or assembling industrial components.
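Conceptually, a vision-language-action model maps a perceptual observation plus a natural-language instruction to an action command. The toy stand-in below caricatures that interface with keyword rules purely to make the data flow concrete; real VLA models such as GR00T do this with a learned network, and every name here is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    target: str

def vla_policy(image_summary: str, instruction: str) -> Action:
    """Illustrative stand-in for a vision-language-action model:
    fuse a perception summary and an instruction into one action.
    Keyword rules replace what is, in reality, a trained network."""
    instruction = instruction.lower()
    if "fold" in instruction:
        return Action("fold", target=image_summary)
    if "navigate" in instruction or "go to" in instruction:
        return Action("navigate", target=instruction.split("to")[-1].strip())
    return Action("wait", target="none")
```

The point of the interface is that the same policy signature – observation and instruction in, action out – covers laundry folding and corridor navigation alike, which is what makes these models "generalist."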

Training these skills directly on physical robots would be prohibitively slow. Instead, developers use Isaac Lab, a large-scale simulation training environment that allows robots to practice thousands of scenarios simultaneously.

In these virtual worlds, robots can run millions of experiments – learning from successes and failures in parallel – compressing what would normally take years of physical testing into days or weeks of simulation.
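The speedup comes from stepping every simulated environment in lockstep, so one batched update advances thousands of rollouts at once. The sketch below shows the vectorized pattern with NumPy on a trivial "reach the goal" task; it is a minimal illustration of batched stepping, not Isaac Lab's actual API.

```python
import numpy as np

def run_parallel_episodes(n_envs=1024, horizon=50, seed=0):
    """Toy vectorized simulation: each environment tracks a scalar
    distance to goal, and one array operation per timestep advances
    all n_envs rollouts simultaneously."""
    rng = np.random.default_rng(seed)
    dist = rng.uniform(1.0, 2.0, size=n_envs)   # per-env distance to goal
    done = np.zeros(n_envs, dtype=bool)
    for _ in range(horizon):
        step = rng.normal(loc=0.05, scale=0.02, size=n_envs)  # noisy progress
        dist = np.where(done, dist, dist - step)  # finished envs stay frozen
        done |= dist <= 0.0                       # mark envs that reached goal
    return done.mean()  # fraction of episodes that succeeded

success_rate = run_parallel_episodes()
```

Scaling `n_envs` from a thousand to hundreds of thousands is a GPU memory question rather than a wall-clock one, which is how days of simulation substitute for years of physical trial and error.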

Bridging the Gap Between Simulation and Reality

While simulation has become a central tool in robotics development, transferring those skills into the real world remains a critical hurdle.

To address this, NVIDIA integrates multiple physics engines within its simulation environment so that virtual worlds behave realistically. These engines simulate gravity, collisions, and object dynamics, enabling robots to learn behaviors that translate more reliably when deployed on physical machines.

The company also supports both software-in-the-loop and hardware-in-the-loop testing, allowing developers to evaluate robot policies both in simulated environments and on real computing hardware before deployment.

Once trained, robots can run their models on NVIDIA’s Jetson edge computing platforms, which provide the processing power required for real-time perception, mapping, and decision-making.

This edge computing layer enables robots to process sensor data locally while maintaining the ability to update or retrain models in the cloud.

Toward the Generalist Robot Era

The long-term goal of these systems is to enable robots that can learn continuously rather than relying on fixed task programming.

NVIDIA’s emerging research frameworks aim to standardize how robots represent body structure, motion, and behavior, allowing developers to transfer skills between different machines without rebuilding software from scratch.

This approach could make it easier to train robots that can operate across different environments and industries, from warehouses and factories to hospitals and homes.

The shift reflects a broader trend across robotics: as AI models grow more capable, the challenge is no longer just building better machines, but creating development pipelines that allow robots to learn faster, adapt more easily, and operate safely outside controlled laboratory settings.

If those pipelines succeed, the result could be a new generation of robots that are not only specialized tools but adaptable physical AI systems capable of working across a wide range of real-world tasks.


LG Targets Robotics Supply Chain with In-House Actuator Strategy

LG Electronics is expanding its robotics ambitions by developing and manufacturing robot actuators in-house, positioning itself as a key supplier in the global robotics ecosystem.

By Daniel Krauss | Edited by Kseniia Klichova
Industrial robotic actuator system used in robotic arms and humanoid robots, representing the core motion components LG plans to develop in-house. Photo: LG Electronics

LG Electronics is accelerating its entry into the robotics industry with a strategy that focuses not only on building robots but on manufacturing the critical components that make them move.

At its annual shareholder meeting in Seoul, CEO Ryu Jae-cheol outlined the company’s roadmap for the coming decade, placing robotics at the top of five key growth sectors that also include AI data centers, cooling systems, smart factories, and AI-enabled homes.

The announcement signals a shift in LG’s broader strategy. Rather than simply expanding its consumer electronics portfolio, the company is positioning itself as a supplier of core infrastructure for the emerging robotics economy.

Central to that strategy is a focus on robot actuators – the components responsible for generating movement and precision in robotic systems.

Competing Where the Real Value Is

Actuators function as the mechanical “muscles” of a robot, translating electrical signals into controlled motion. They determine how precisely a robot can move, how much weight it can lift, and how efficiently it operates.

In many robotic systems, actuators account for a large portion of total manufacturing costs. Because of their technical complexity, they also represent one of the most strategically valuable parts of the robotics supply chain.
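The "muscle" analogy above can be made concrete: inside an actuator's drive electronics, a control loop continuously converts a position command into motor torque. The sketch below shows the classic proportional-derivative (PD) version of that loop on a simple rotating joint; the gains and inertia values are arbitrary illustration numbers, not figures from any LG product.

```python
def pd_torque(target, position, velocity, kp=40.0, kd=2.0):
    """PD control: torque proportional to position error, damped by
    velocity. This is the standard low-level loop that turns an
    electrical command into controlled motion."""
    return kp * (target - position) - kd * velocity

def simulate(target=1.0, steps=2000, dt=0.001, inertia=0.1):
    """Integrate a single joint under PD control for 2 seconds."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        tau = pd_torque(target, pos, vel)
        vel += (tau / inertia) * dt   # torque -> angular acceleration
        pos += vel * dt
    return pos                        # converges toward the target angle
```

How stiff, fast, and smooth this loop can be made – under load, heat, and wear – is much of what separates commodity motors from the high-value actuators the article describes.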

LG plans to begin designing and producing these components internally, with mass production expected to start within the year.

The move allows the company to enter a part of the robotics industry where barriers to entry are high but long-term margins can be attractive. While the market for finished robots is becoming increasingly crowded, the market for advanced robotic components remains relatively concentrated.

Industry forecasts suggest the global actuator market could reach about $23 billion by 2030, making it a potentially significant revenue stream for companies capable of producing high-performance systems at scale.

Building a Robotics Ecosystem

LG’s robotics ambitions extend beyond component manufacturing.

The company has been actively forming partnerships across the robotics sector to strengthen its position within the global ecosystem. One example is its investment in Chinese humanoid robotics developer AgiBot, with whom LG executives have discussed potential collaboration.

These partnerships are intended to provide both technological insight and potential customers for LG’s actuator systems.

By supplying core components to multiple robotics manufacturers, LG could gain influence across the industry without relying solely on its own robot platforms.

This approach mirrors strategies used by companies in other technology sectors, where component suppliers often capture significant value within complex hardware ecosystems.

From Industrial Components to Consumer Robots

While LG’s actuator initiative targets industrial robotics suppliers, the company also continues to develop its own robotic systems.

LG has already deployed service robots in hospitality and logistics environments, including machines designed for delivery and facility operations.

The next stage of development will likely involve consumer-facing robotics.

The company is currently testing its CLOi home robot, a platform intended to assist with tasks in smart home environments. Commercial availability could follow after 2026, depending on technological readiness and market demand.

For LG, the strategy appears to follow a clear sequence: first secure a position within the industrial robotics supply chain, then expand into service robotics and eventually consumer markets.

Robots as Infrastructure

The company’s pivot toward robotics reflects a broader industry trend in which AI is increasingly tied to physical systems.

As artificial intelligence moves from software into machines that interact with the physical world – factory robots, autonomous vehicles, service robots – the importance of hardware components such as actuators, sensors, and edge computing platforms is rising.

LG’s bet is that the real value in robotics may lie less in the visible machines and more in the foundational technologies that power them.

By developing those technologies internally, the company aims to position itself as a central supplier within the emerging physical AI economy.


Renault Deploys ‘Calvin-40’ Robots in EV Factory as Automakers Expand Industrial Automation

Renault has begun deploying Calvin-40 robots in its Douai electric vehicle plant, using AI-trained machines to handle repetitive assembly tasks and accelerate EV production.

By Laura Bennett | Edited by Kseniia Klichova
Calvin-40 industrial robots operating on Renault’s electric vehicle assembly line in Douai, France, where the company is expanding automation to accelerate EV production. Photo: Wandercraft

Renault has begun deploying a new generation of factory robots at its electric vehicle manufacturing complex in northern France, marking another step in the automotive industry’s shift toward AI-assisted production.

The robots, known as Calvin-40, have recently appeared on the Renault assembly line in Douai, where they perform repetitive tasks such as placing tires onto conveyor systems that feed the Renault 5 electric vehicle production line.

While only a handful of units are currently operating, Renault plans to dramatically expand their presence. Over the next 18 months, the company intends to deploy as many as 350 robots across its ElectriCity production network.

The initiative reflects a broader push by automakers to increase automation as EV production scales globally.

Automating Repetitive Work on the Assembly Line

The Calvin-40 machines were developed by French robotics company Wandercraft and are designed specifically for industrial environments.

Unlike traditional industrial robots that operate within fixed cages or workstations, these machines are built to move within factory spaces and interact with storage racks, conveyor systems, and other equipment.

Each robot stands on two legs and uses articulated arms capable of lifting loads of up to 40 kilograms. The robots retrieve parts such as tires or body components from storage bins and place them into the assembly flow.

A camera system mounted on the robot helps it track objects and monitor its work. Visual indicators on the machine provide operators with real-time status information.

According to Renault, tasks now performed by Calvin-40 robots previously required workers to repeat the same lifting and positioning motions hundreds of times per shift.

By shifting these operations to machines, the company hopes to reduce worker fatigue while maintaining a steady pace of production.

Training Robots for Manufacturing Speed

Developing robots capable of operating reliably in a factory environment required extensive AI training.

While the physical design of the Calvin-40 robot was completed relatively quickly, engineers spent several months refining the system’s software so it could operate at the speed required on an automotive production line.

Training involved teaching the robots how to recognize parts, retrieve components from storage racks, and place them accurately onto moving conveyor systems.

Even with these improvements, Renault says the robots are not yet fast enough to operate in every stage of final vehicle assembly. Some sections of the production line still require speeds beyond what the current generation of robots can achieve.

For now, the machines are focused on specific repetitive tasks where automation can deliver immediate productivity gains.

A Practical Approach to Factory Robotics

Renault’s approach differs from the strategy adopted by some technology companies that showcase humanoid robots in demonstration environments.

Instead of building futuristic prototypes first, Renault has focused on introducing practical automation directly into its factories.

The company has also invested financially in Wandercraft to adapt the robots for automotive manufacturing requirements.

Renault executives say the long-term goal is to accelerate vehicle production while reducing manufacturing costs.

The company aims to cut the time required to build a car by roughly one-third and reduce production expenses by about 20 percent within the next five years.

Automation technologies like the Calvin-40 robots will play a key role in reaching those targets.

Automation and the Future of EV Production

As electric vehicle demand grows, automakers are under increasing pressure to scale manufacturing capacity efficiently.

Factories must manage complex supply chains, new battery technologies, and increasingly competitive production costs.

Industrial robots have long played a role in automotive manufacturing, but newer AI-driven systems are beginning to extend automation into tasks that previously required human flexibility.

Renault’s deployment of the Calvin-40 robots highlights how manufacturers are experimenting with new forms of automation that combine mechanical capability with AI-driven perception and control.

While the robots currently handle relatively narrow tasks, their growing presence on factory floors signals how automation is gradually expanding deeper into the production process.

For automakers racing to scale EV production, that evolution could reshape how cars are built in the years ahead.


Elon Musk Unveils ‘Terafab’ to Build AI Chips for Cars, Robots, and Space Systems

Elon Musk has unveiled Terafab, a massive semiconductor facility planned near Austin, Texas, designed to produce custom AI chips for Tesla vehicles, Optimus robots, SpaceX satellites, and xAI models.

By Rachel Whitman | Edited by Kseniia Klichova
Concept rendering of a large-scale semiconductor fabrication facility designed to produce specialized AI chips for robotics, autonomous vehicles, and space-based computing systems. Photo: Kseniia Klichova / RobotsBeat

Elon Musk has announced plans for a massive semiconductor manufacturing project aimed at producing specialized AI chips for Tesla vehicles, humanoid robots, and space systems.

The facility, called Terafab, will be built near Austin, Texas, and jointly developed by Tesla and SpaceX. According to Musk, the plant will focus on producing custom processors designed specifically for artificial intelligence workloads used in autonomous driving, robotics, and satellite-based computing.

The project reflects a growing shift in the AI industry toward vertically integrated hardware ecosystems, where companies design their own chips to support increasingly complex AI systems.

Terafab’s planned production capacity could reach one terawatt of computing power annually, an enormous scale intended to support the expanding compute requirements of Musk’s companies.

Building Chips for Physical AI

The chips produced at Terafab are expected to serve multiple applications across Musk’s technology portfolio.

One category will focus on edge and inference computing, powering real-time AI decision-making in Tesla vehicles, robotaxis, and the company’s humanoid robot platform, Optimus. These chips are optimized for running trained AI models directly on devices where latency and energy efficiency are critical.

A second category will target high-performance AI training, supporting xAI models and large-scale data processing for SpaceX satellite systems.

As robotics and autonomous systems become more sophisticated, the demand for specialized AI processors has surged. Standard GPUs designed for cloud computing are often inefficient for real-time robotic systems that must process sensor data, control motors, and make split-second decisions.

By designing chips tailored to these tasks, companies can dramatically improve performance and energy efficiency.

A Compute Backbone for Musk’s Ecosystem

Terafab is part of a broader strategy to integrate hardware and AI development across Musk’s companies.

Tesla’s autonomous driving systems rely heavily on AI models that process visual and sensor data from vehicles. Meanwhile, Tesla’s Optimus humanoid robot is expected to require similar computing capabilities for perception, navigation, and manipulation tasks.

SpaceX also has growing computing demands. Musk has suggested that future satellite networks could support orbital data centers, enabling AI processing in space for applications ranging from communications to scientific analysis.

Under the Terafab plan, these systems would share a common computing architecture built around custom chips produced at the facility.

The project also reflects Musk’s increasing emphasis on AI as the central technology linking his companies. Tesla, SpaceX, and xAI are all developing systems that rely on large-scale machine learning models operating in physical environments.

A Massive Manufacturing Bet

The proposed facility represents one of the largest semiconductor manufacturing ambitions outside traditional chipmaking giants.

Initial construction is expected to begin in Texas before expanding production capacity over time. Musk has suggested that the long-term vision could extend beyond terrestrial computing infrastructure.

In addition to ground-based production targets of 100 to 200 gigawatts of computing power, the broader concept includes eventually supporting space-based computing systems capable of delivering up to a terawatt of processing power.

Such scale would dramatically increase the computational capacity available for training AI models and operating distributed intelligent systems across vehicles, robots, and satellites.

The Hardware Race Behind AI

The Terafab announcement underscores a broader industry trend: the race to build specialized computing infrastructure for artificial intelligence.

As AI expands beyond software into physical systems – autonomous vehicles, humanoid robots, and industrial machines – companies are increasingly designing hardware optimized for these workloads.

For Musk, the strategy aims to create a tightly integrated ecosystem where chips, software, and machines evolve together.

If successful, Terafab could become a central piece of infrastructure powering Tesla’s robots, SpaceX’s satellites, and the next generation of AI-driven machines operating both on Earth and in orbit.

HD Hyundai Begins Testing Humanoid Welding Robots for Shipyard Automation

HD Hyundai has launched trials of humanoid robots designed for welding in shipyards, aiming to improve safety and efficiency by applying AI trained on the techniques of skilled welders.

By Laura Bennett | Edited by Kseniia Klichova
A large container vessel under construction at an HD Hyundai Heavy Industries shipyard in Ulsan, South Korea. Photo: HD Hyundai

HD Hyundai is moving forward with tests of humanoid robots designed specifically for shipyard welding, marking one of the most ambitious attempts yet to introduce humanoid machines into heavy industrial production.

The South Korean industrial group announced that several of its affiliates have partnered with U.S.-based robotics firm Persona AI to develop and test AI-powered humanoid robots capable of performing welding operations during ship construction.

The project brings together HD Korea Shipbuilding & Offshore Engineering, HD Hyundai Robotics, and Persona AI under a joint development agreement focused on creating robots capable of operating in demanding shipyard environments.

The effort reflects a broader push across heavy industry to apply artificial intelligence and robotics to tasks traditionally performed by highly skilled human workers.

Training Robots on Skilled Welder Expertise

At the center of the project is the challenge of translating human welding expertise into robotic systems.

HD Korea Shipbuilding & Offshore Engineering plans to train AI models using data gathered from experienced welders working in its shipyards. These datasets capture the techniques, motion patterns, and process conditions required for high-quality welding in ship construction.

The goal is to create a robotic system capable of replicating the precision and adaptability of human welders while operating continuously in industrial conditions.

HD Hyundai Robotics will be responsible for integrating the robotic systems and developing technology for monitoring welding quality and controlling the process during operation.

The humanoid platform itself is being developed by Persona AI, whose role includes designing a bipedal robot capable of navigating shipyard environments where narrow walkways, scaffolding, and uneven surfaces can complicate mobility.

Building Robots for Shipyard Conditions

Shipbuilding presents a particularly difficult environment for automation.

Unlike factory assembly lines with predictable layouts, shipyards involve large structures, confined spaces, and constantly changing work environments as vessels move through different construction stages.

Robots designed for these environments must combine multiple capabilities: stable locomotion, environmental perception, precise manipulation, and the ability to operate safely alongside human workers.

The welding robots being developed by HD Hyundai are expected to integrate these functions, allowing them to move between work areas and perform complex welding tasks without requiring fixed automation setups.

Initial prototypes have already undergone early technical evaluations and were judged capable enough to move into expanded testing.

Toward the Smart Shipyard

For HD Hyundai, the project is part of a broader strategy to modernize shipbuilding operations.

Shipyards face growing pressure to improve productivity while addressing worker safety concerns and labor shortages in skilled trades. Welding, in particular, involves physically demanding work often performed in hazardous environments.

Humanoid robots could potentially take on the most dangerous or repetitive tasks while human workers focus on supervision, planning, and specialized operations.

The company describes the welding humanoid as a foundation for future “smart shipyards”, where robotics and AI systems support complex construction processes.

Although the robots remain in the development stage, their eventual deployment could signal a shift in how large-scale industrial infrastructure is built.

As robotics systems become more capable of navigating complex environments and performing skilled manual work, industries such as shipbuilding may increasingly adopt humanoid machines to augment human labor on some of their most demanding tasks.


Over 300 Humanoid Robots to Compete in Beijing Half Marathon as Robotics Industry Tests Real-World Mobility

More than 300 humanoid robots from universities and companies across China will compete in Beijing’s upcoming android half marathon, highlighting rapid advances in robot locomotion and autonomy.

By Daniel Krauss | Edited by Kseniia Klichova
Humanoid robots prepare for endurance testing ahead of Beijing’s humanoid half marathon, where hundreds of robots will compete alongside human runners. Photo: Beijing Humanoid Robot Innovation Center

More than 300 humanoid robots are expected to line up alongside human runners in Beijing next month for what organizers say will be the largest robotics endurance race ever held.

The 2026 Beijing E-Town Half Marathon and Humanoid Robot Half Marathon, scheduled for April 19, will feature robots developed by companies, universities, and research institutions from across China. Organizers say 76 teams representing 13 provincial-level regions have registered for the event.

The race marks a major expansion compared with the inaugural competition last year and reflects how quickly humanoid robotics development is accelerating in the country.

According to officials from the Beijing Bureau of Economy and Information Technology, the number of participating teams has increased nearly fivefold since the first race, while the diversity of participants now spans corporate labs, academic institutions, and training programs.

From Robotics Demo to National Testbed

The event will feature 26 robot brands and more than 300 humanoid machines attempting to complete the half marathon course.

While the competition has a public spectacle element, organizers frame it primarily as a technical proving ground for robotics systems.

Running long distances forces developers to address multiple engineering challenges simultaneously: locomotion stability, energy efficiency, heat management, mechanical durability, and motion control.

Unlike short demonstrations in laboratories or trade shows, endurance races expose weaknesses that may only appear after sustained operation.

For robotics developers, these competitions can reveal how well machines perform under continuous real-world stress.

Autonomy Takes Center Stage

One of the most significant shifts in this year’s event is the growing emphasis on autonomous navigation.

Organizers say roughly 38 percent of participating teams will deploy robots capable of navigating the course independently using onboard perception systems and mapping technology.

Last year, many robots relied on human assistance, remote control, or pacing guidance to stay on track.

Autonomous navigation introduces a much higher level of complexity. Robots must perceive their environment, interpret terrain changes, plan routes, and make real-time adjustments to maintain balance and speed.

These capabilities are central to the broader goal of deploying humanoid robots outside controlled environments.

A Rapidly Growing Ecosystem

University participation has surged as well. Twenty universities will enter robots into the race – ten times as many as in the inaugural event.

This growth highlights the increasingly close relationship between academic robotics research and industrial development.

Humanoid robotics in China has seen significant momentum over the past year, fueled by advances in mechanical design, control systems, and AI-based motion planning.

The marathon event provides a rare opportunity to evaluate these technologies in a shared, competitive setting.

Beyond the Finish Line

While humanoid robots remain far from matching elite human athletes in endurance running, events like Beijing’s android marathon serve a different purpose.

They are designed to stress-test robotic systems under conditions that resemble real-world deployment – continuous movement, changing terrain, and unpredictable conditions.

The lessons learned from such competitions can influence future applications ranging from logistics and industrial work to search-and-rescue operations and infrastructure inspection.

As humanoid robots move from laboratory prototypes toward practical machines, endurance tests like the Beijing race may increasingly become benchmarks for measuring progress in embodied AI.
