Wayve Raises Up to $1.5 Billion to Scale Plug-and-Play Robotaxi Software

U.K.-based Wayve has secured up to $1.5 billion in new funding to expand its AI-driven autonomous driving software, with robotaxi deployments planned in London and integration into consumer vehicles by 2027.

By Rachel Whitman | Edited by Kseniia Klichova
Wayve’s autonomous driving software is designed to operate across multiple vehicle platforms, enabling robotaxis and consumer vehicles without requiring proprietary hardware systems. Photo: Wayve

Wayve, a London-based autonomous driving startup, has raised up to $1.5 billion in new funding to accelerate deployment of its AI-driven vehicle software, marking a shift toward licensing-based autonomy as robotaxis prepare to launch on public roads. The round values the company at $8.6 billion and positions it to expand globally as robotaxi services using its technology enter commercial operation this year.

The funding includes $1.2 billion from investors such as SoftBank, Microsoft, Nvidia, Uber, Mercedes-Benz, Nissan, and Stellantis, with Uber committing an additional $300 million contingent on performance milestones. The capital injection brings Wayve’s total funding to more than $3 billion since its founding in 2017, underscoring growing investor confidence in software-centric approaches to autonomous driving.

Robotaxis powered by Wayve’s system are expected to begin operating in London in partnership with Uber later this year, while Nissan plans to integrate the company’s technology into consumer vehicles starting in 2027.

A Software-First Approach to Autonomy

Wayve’s strategy differs fundamentally from competitors such as Waymo and Tesla, which have invested heavily in vertically integrated models involving proprietary vehicles or company-owned fleets. Instead, Wayve focuses on building a generalized AI driving system that can be integrated into vehicles produced by automakers or operated by third-party fleet providers.

The company’s software is designed as a plug-and-play platform capable of functioning across different hardware configurations, including vehicles equipped with cameras alone or those using lidar, radar, and other sensors. This flexibility allows automakers and mobility providers to deploy autonomous functionality without redesigning vehicles around proprietary sensor stacks.
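Wayve has not published its integration interfaces, so the following is only an illustration of the sensor-agnostic pattern described above; every class and function name is hypothetical. The point is that the driving model consumes a normalized observation, while a thin per-vehicle adapter hides whether the data came from cameras alone or from a lidar-and-radar stack.

```python
import time
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional

@dataclass
class SceneObservation:
    """Normalized snapshot of the environment, independent of sensor hardware."""
    images: list
    point_cloud: Optional[list]  # None on camera-only vehicles
    timestamp: float

class SensorSuite(ABC):
    """Per-vehicle adapter, implemented once per hardware configuration (hypothetical)."""
    @abstractmethod
    def read(self) -> SceneObservation: ...

class CameraOnlySuite(SensorSuite):
    def read(self) -> SceneObservation:
        # Dummy frame standing in for real camera capture.
        return SceneObservation(images=[[0.0] * 4], point_cloud=None,
                                timestamp=time.time())

class LidarRadarSuite(SensorSuite):
    def read(self) -> SceneObservation:
        # Dummy data standing in for fused camera, lidar, and radar returns.
        return SceneObservation(images=[[0.0] * 4],
                                point_cloud=[(1.0, 2.0, 0.5)],
                                timestamp=time.time())

def drive_step(policy, sensors: SensorSuite) -> str:
    # The driving model only ever sees the normalized observation, so one
    # policy can serve every vehicle platform that provides an adapter.
    return policy(sensors.read())

policy = lambda obs: "steer_straight"  # placeholder for the learned driving model
for suite in (CameraOnlySuite(), LidarRadarSuite()):
    print(drive_step(policy, suite))
```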

Wayve co-founder and CEO Alex Kendall has described the licensing model as the most scalable path forward, allowing the company to focus on developing the AI software layer rather than managing vehicle manufacturing or fleet operations. This approach also reduces the capital intensity associated with building and maintaining autonomous fleets.

The model mirrors the broader shift in robotics and physical AI toward software-defined systems, where intelligence becomes the primary value layer and hardware serves as a deployment platform.

Commercial Deployment Signals Industry Transition

The new funding arrives as autonomous driving moves from pilot programs into early commercialization. Waymo, currently the leading operator of commercial robotaxis, has expanded its services to additional U.S. cities and continues to scale operations globally. Other companies, including Waabi and Tesla, are also pursuing robotaxi deployment, though at different stages of readiness.

Wayve’s entry into commercial service in London represents a key milestone, particularly because its business model depends on adoption by external partners rather than direct fleet ownership. Success will depend on whether its AI can operate reliably across different vehicle platforms and environments without extensive customization.

The company has already conducted testing across multiple international locations, including Germany, Japan, and the United States, reflecting its ambition to build a globally deployable autonomous driving system.

Partnerships with automakers such as Nissan, Mercedes-Benz, and Stellantis also extend Wayve’s reach beyond ride-hailing into consumer vehicles. This dual deployment model, spanning robotaxis and privately owned cars, could significantly expand the total addressable market for autonomous driving software.

Autonomous Driving Becomes a Software Platform Market

Wayve’s funding highlights a broader shift in autonomous driving toward platform-based business models, where companies compete to provide the AI systems that enable autonomy rather than owning the physical vehicles themselves.

This approach resembles developments in other robotics sectors, where software platforms increasingly define system capabilities and scalability. By separating the intelligence layer from the hardware, companies can deploy across diverse vehicle types and geographies without replicating infrastructure.

The outcome of this transition remains uncertain. Vertically integrated operators such as Waymo retain advantages in controlling system performance and deployment environments, while licensing-focused companies like Wayve aim to scale faster by leveraging existing automotive manufacturing and mobility networks.

If Wayve’s model succeeds, it could reshape the economics of autonomous driving, transforming autonomy from a capital-intensive infrastructure business into a software-driven platform integrated across the global automotive industry.

Agibot Declares 2026 “Deployment Year One”, Unveils Five Robot Platforms and Open AI Architecture

Agibot used its annual partner conference to declare 2026 the first year of large-scale commercial deployment for embodied AI, unveiling five new robotic platforms, eight AI models, and an open-source development architecture called AIMA.

By Rachel Whitman | Edited by Kseniia Klichova
A lineup of humanoid and wheeled robotic platforms displayed at an embodied AI product conference. Photo: Agibot

Agibot used its annual partner conference, APC 2026, to declare 2026 as “Deployment Year One” for embodied AI – the point at which the industry transitions from validating robot capabilities to generating measurable productivity at scale. The company unveiled five new robotic platforms, eight AI models, and a full-stack open development architecture called AIMA, framing the announcements as the infrastructure layer for its next phase of commercialization.

The declaration is grounded in operational data. Agibot rolled out its 10,000th humanoid robot in March 2026, and its humanoid revenue grew more than 22-fold in 2025 to become the company’s largest revenue stream, according to figures the company has previously reported.

Five Platforms, One Unified Architecture

Agibot positioned itself at APC 2026 as the only company offering a full-series lineup spanning humanoids, wheeled platforms, and multi-form robots across different sizes and deployment scenarios. The five new platforms are built on what the company calls a “One Robotic Body with Three Intelligences” framework, integrating motion intelligence, interaction intelligence, and operation intelligence into a unified system.

The architecture is designed to address a core limitation of earlier robotic deployments: systems optimized for a single task or environment. By coupling perception, decision-making, and physical execution within one hardware and software stack, Agibot argues that robots can generalize across complex real-world environments rather than operating within narrowly defined parameters.
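Agibot has not released AIMA’s internals, but the coupling it describes is a variant of the classic sense-decide-act loop. The sketch below, with entirely hypothetical stand-in functions, shows why the coupling matters: the decision layer always sees live perception, and motion always reflects the latest decision.

```python
import random

def perceive() -> dict:
    """Interaction intelligence: stand-in for cameras, microphones, force sensors."""
    return {"obstacle_distance_m": random.uniform(0.1, 5.0)}

def decide(observation: dict) -> str:
    """Operation intelligence: task-level decision from the fused observation."""
    return "stop" if observation["obstacle_distance_m"] < 0.5 else "advance"

def act(command: str) -> None:
    """Motion intelligence: stand-in for joint and wheel controllers."""
    print(f"executing: {command}")

# Running all three inside one tight loop is what the coupling buys: the robot
# reacts to what it currently perceives, not to a stale, precomputed task plan.
for _ in range(5):
    act(decide(perceive()))
```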

Seven Commercial Solutions

To support faster enterprise adoption, Agibot introduced seven standardized productivity solutions targeting specific industrial scenarios: loading and unloading, industrial handling, logistics sorting, guidance and retail assistance, retail service stations, security patrol, and industrial and commercial cleaning. Each solution bundles hardware, AI models, and data infrastructure into a repeatable deployment package, reducing the integration complexity that has historically extended robotics pilot timelines.

“The industry is moving from proving what robots can do, to proving what value they can consistently deliver at scale,” said Edward Deng, Founder, Chairman, and CEO of Agibot.

AIMA: An Open Architecture for Embodied AI

The most structurally significant announcement was the launch of AIMA – AI Machine Architecture – described as the first complete open technology system for embodied intelligence. Built on a “1+3+X” design, AIMA consists of a unified robot operating system called Link-U OS, three development platforms covering motion creation, interaction design, and task development, and an extensible layer supporting third-party applications and the AGIBOT Embodied Agent Framework.

The architecture provides an end-to-end toolchain from low-level system control to high-level application development, and Agibot intends to continue open-sourcing components to attract developers and partners. The company plans to invest more than 2 billion yuan over five years to expand the ecosystem, targeting partnerships with universities, industry operators, and a large-scale developer community.

A Three-Stage Industry Roadmap

Agibot also presented a long-term framework for the embodied AI industry structured around three development curves. The current phase, running through 2026, covers foundational development and early adoption. The period from 2026 to 2030 is characterized as a deployment growth stage, during which robot productivity is expected to approach human levels and scenario-based deployment to scale significantly. From 2030 onward, the company projects a qualitative leap in generalization capability and collective intelligence, with robots beginning to surpass human productivity in selected domains.

The roadmap is a strategic positioning exercise as much as a technical forecast. With more than 150 humanoid robot manufacturers active in China alone, the company that can credibly claim platform and ecosystem leadership – rather than competing on individual robot specifications – is likely to capture a disproportionate share of the emerging enterprise market.


NVIDIA and Partners Demonstrate Production-Ready AI Manufacturing Systems at Hannover Messe 2026

NVIDIA and more than a dozen industrial partners used Hannover Messe 2026 to demonstrate AI-driven manufacturing systems already operating in live production environments, from humanoid robots in electronics factories to vision AI agents on automotive assembly lines.

By Daniel Krauss | Edited by Kseniia Klichova
Robotic systems and AI-driven automation equipment operating on a factory floor during a major industrial technology exhibition. Photo: HANNOVER MESSE

NVIDIA used Hannover Messe 2026 to present a coordinated set of industrial AI deployments across robotics, simulation, vision AI, and edge computing – alongside partners spanning Siemens, Microsoft, ABB, Dassault Systèmes, and a range of specialized software and hardware firms. The common thread across the demonstrations was an emphasis on systems already operating in production rather than technology previews, with several partners presenting quantified outcomes from live deployments.

The show ran April 20-24 in Hannover, Germany, and served as a staging ground for NVIDIA’s physical AI ecosystem, built around its Omniverse, Isaac, Jetson, and IGX compute platforms.

Humanoid Robots in Live Production

The most concrete robotics demonstration came from Humanoid, whose HMND 01 wheeled humanoid has completed autonomous logistics operations at a Siemens electronics factory in Erlangen, Germany – described as a first proof of concept within a live production environment. The robot runs NVIDIA’s Jetson Thor edge AI module for on-device compute and was developed using Isaac Sim and Isaac Lab for simulation and reinforcement learning.

A simulation-first development approach compressed what typically takes up to two years of hardware development down to seven months, according to Humanoid.
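Humanoid’s pipeline runs on Isaac Sim and Isaac Lab, whose APIs are not shown here; as a stand-in, the rollout loop at the heart of any simulation-first reinforcement learning workflow can be sketched with the open gymnasium API. The environment and the random action choice are placeholders for a physics-accurate robot simulation and a learned controller.

```python
import gymnasium as gym

# Placeholder environment; in a robotics pipeline this would be a
# physics-accurate simulation of the robot and its workspace.
env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

for step in range(1_000):
    action = env.action_space.sample()  # placeholder for a learned policy
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()  # start a fresh episode after failure/success

env.close()
```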

A second notable humanoid deployment involves Hexagon Robotics, whose AEON robot is preparing for assembly operations at a BMW plant in Leipzig – one of the first humanoid deployments in a German production environment. The system was developed using NVIDIA’s Physical AI Data Factory Blueprint and IGX Thor for industrial-grade edge compute with functional safety certification.

SCHUNK’s GROW automation cell demonstrated a standardized, deployable form of physical AI for small and medium-sized manufacturers. The system uses NVIDIA Omniverse and Isaac simulation to train and validate robot behavior before deployment, with Wandelbots’ NOVA platform managing continuous refinement on the shop floor and EY designing the operating model for European SME rollout.

Vision AI Agents on the Factory Floor

Several partners demonstrated vision AI systems built on NVIDIA’s Metropolis and Cosmos platforms, targeting quality control, safety monitoring, and operational intelligence.

Invisible AI launched its Vision Execution System at the show, an agent-based platform that captures and analyzes every production cycle in real time using the NVIDIA Metropolis VSS Blueprint and Cosmos Reason 2 models. The system is already deployed at major automotive manufacturers including Toyota.

Tulip Interfaces showcased Factory Playback, which synchronizes machine telemetry, operator workflows, quality events, and video into a searchable operational timeline. Terex, an industrial equipment manufacturer operating more than 40 plants, uses the platform and is projected to achieve a 3% yield increase and a 10% reduction in rework.
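Tulip has not published Factory Playback’s internals; merging independently timestamped event streams into one searchable timeline is, however, a standard pattern, and a minimal sketch with hypothetical stream names and payloads looks like this:

```python
import heapq

# Each source emits (timestamp_seconds, source, payload) tuples, already
# sorted by time; the sources and payloads here are hypothetical.
machine_telemetry = [(10.0, "plc", "spindle_load=72%"), (12.5, "plc", "cycle_end")]
operator_events = [(11.2, "operator", "work step 3 confirmed")]
quality_events = [(12.6, "quality", "visual check: pass")]

# heapq.merge lazily interleaves the pre-sorted streams into one ordered timeline.
timeline = list(heapq.merge(machine_telemetry, operator_events, quality_events))

def window(events, start, end):
    """Query the timeline, e.g. to pull every event around a quality incident."""
    return [e for e in events if start <= e[0] <= end]

for ts, source, payload in window(timeline, 11.0, 13.0):
    print(f"{ts:6.1f}s  [{source}] {payload}")
```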

Fogsphere demonstrated vision AI deployment in high-risk industrial environments, with Saipem using the platform to detect and respond in real time to safety and environmental events on energy infrastructure.

Sovereign AI Infrastructure

Underlying many of the deployments on display is the Industrial AI Cloud, built in Germany by Deutsche Telekom on NVIDIA infrastructure and designed as a sovereign AI platform for European industry. The facility provides a secure foundation for running AI workloads – from factory-scale digital twins to software-defined robotics – under European data governance requirements.

ABB, Dassault Systèmes, Kongsberg Digital, Microsoft, and Siemens each demonstrated digital twin capabilities built on NVIDIA Omniverse libraries, with applications ranging from real-time asset performance analysis to stress-testing factory configurations before physical changes are made.

QNX expanded its collaboration with NVIDIA to cover safety-critical edge AI, with QNX OS for Safety 8.0 now integrated on NVIDIA IGX Thor alongside the NVIDIA Halos safety stack – a combination targeting robotics, medical, and industrial applications where functional safety certification is a deployment requirement.

NEURA Robotics and AWS Partner to Scale Physical AI Training and Deploy Robots in Amazon Fulfillment Centers

NEURA Robotics and Amazon Web Services have announced a strategic collaboration to train, validate, and deploy cognitive robots at scale, with Amazon exploring deployment of NEURA systems in select fulfillment centers as a real-world data source for Physical AI development.

By Laura Bennett | Edited by Kseniia Klichova
A cognitive humanoid robot operating in a warehouse fulfillment environment alongside human workers. Photo: NEURA Robotics

NEURA Robotics, the German cognitive robotics company, and Amazon Web Services have announced a strategic collaboration to accelerate the development and global deployment of physical AI systems. AWS will serve as the primary cloud provider for NEURA’s Neuraverse platform, handling AI training, real-world data processing, and shared intelligence across robot fleets. Amazon will separately explore deploying NEURA robotic systems in select fulfillment centers, providing production-environment data to accelerate the development of new robotic capabilities in logistics and warehouse operations.

The partnership addresses what both companies describe as the central constraint on physical AI progress: the data gap. Unlike large language models trained on trillions of internet-sourced data points, robotic AI systems have access to a fraction of that volume, and the data they need can only be generated through real-world operation.

Three Areas of Collaboration

The agreement spans cloud infrastructure, AI development, and real-world validation. AWS will provide the computational backbone for the Neuraverse, NEURA’s platform for training and sharing robotic intelligence across fleets. NEURA Gym – a purpose-built training environment where robots practice complex tasks in controlled settings alongside high-fidelity simulation – will integrate with Amazon SageMaker to accelerate joint training pipelines across NEURA and partner use cases.
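Neither company has detailed the integration, but launching a managed training job through the SageMaker Python SDK typically looks like the sketch below; the training script, IAM role, data location, and hyperparameters are all placeholders.

```python
from sagemaker.pytorch import PyTorch

# Managed training job: SageMaker provisions the instances, runs the training
# script, and writes model artifacts back to S3. All names are placeholders.
estimator = PyTorch(
    entry_point="train_manipulation_policy.py",  # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder
    instance_count=4,
    instance_type="ml.p4d.24xlarge",
    framework_version="2.3",
    py_version="py311",
    hyperparameters={"epochs": 10, "batch-size": 256},
)

# Training data, e.g. logged simulation episodes and robot trajectories,
# is read from S3; the bucket and prefix are hypothetical.
estimator.fit({"train": "s3://example-bucket/neura-gym-episodes/"})
```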

The real-world validation component is the most strategically significant element. Amazon’s fulfillment centers represent one of the most operationally demanding and data-rich environments available for robotic deployment – high throughput, variable product mix, and continuous operation at global scale. Each deployment generates the kind of sensor data, task variety, and edge-case exposure that controlled training environments cannot replicate.

“Physical AI will only reach its full potential if intelligence can be trained, validated, and continuously improved in the real world,” said David Reger, CEO and founder of NEURA Robotics. “With AWS, we gain the infrastructure to scale the Neuraverse globally. With Amazon, we have the opportunity to bring Physical AI into one of the most advanced operational environments in the world.”

The Data Infrastructure Problem

The collaboration is built around a structural challenge that applies across the robotics sector. Simulation can approximate physical environments but cannot fully replicate the variability of real-world conditions – surface irregularities, lighting changes, unexpected object configurations, and human interaction patterns. Continuous feedback loops between simulation and real-world deployment are required to close that gap over time.

AWS’s role is to make those loops faster and more scalable. By running the Neuraverse on cloud infrastructure with global reach, NEURA can distribute trained intelligence across its entire robot fleet in near real time, so improvements derived from one deployment environment propagate across all systems.
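NEURA has not described its distribution mechanism; conceptually, fleet-wide propagation reduces to a versioned publish-and-sync loop, sketched here with entirely hypothetical classes:

```python
class ModelRegistry:
    """Cloud-side store of published policy versions (hypothetical)."""
    def __init__(self):
        self._versions = {}
        self._latest = 0

    def publish(self, weights) -> int:
        self._latest += 1
        self._versions[self._latest] = weights
        return self._latest

    def latest(self):
        return self._latest, self._versions.get(self._latest)

class Robot:
    """Edge-side agent that polls for newer policies between tasks."""
    def __init__(self, name: str, registry: ModelRegistry):
        self.name, self.registry, self.version = name, registry, 0

    def sync(self) -> None:
        version, weights = self.registry.latest()
        if version > self.version:
            self.version = version  # hot-swap the policy at a safe point
            print(f"{self.name}: updated to policy v{version}")

registry = ModelRegistry()
fleet = [Robot(f"robot-{i}", registry) for i in range(3)]

# An improvement learned at one site is published once...
registry.publish(weights={"layer0": [0.1, 0.2]})
# ...and every robot picks it up on its next sync, so a fix derived from one
# deployment environment reaches the whole fleet.
for robot in fleet:
    robot.sync()
```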

Ecosystem and Scale

The AWS partnership extends a network NEURA has been assembling across cloud, semiconductors, and industrial deployment. The company’s existing partners include Kawasaki – one of the ten largest robotics companies globally – alongside Schaeffler, Bosch, and Qualcomm Technologies. The stated goal is to enable millions of cognitive robots by 2030.

NEURA has not disclosed the financial terms of the AWS agreement, the timeline for Amazon fulfillment center deployments, or the specific robotic systems under consideration for those pilots. The fulfillment center component is framed as exploratory, meaning commercial deployment at scale remains contingent on performance outcomes from the initial trials.

Business & Markets, News, Robots & Robotics, Science & Tech

DEEPX and Hyundai Motor Group Robotics LAB Partner to Build On-Device AI Chips for Robots

DEEPX and Hyundai Motor Group’s Robotics LAB have announced a strategic collaboration to co-develop an ultra-low-power AI computing platform capable of running large-scale generative AI models on-device within robotic systems.

By Rachel Whitman | Edited by Kseniia Klichova
An advanced AI semiconductor chip designed for low-power on-device inference in robotic and autonomous systems. Photo: DEEPX

DEEPX, a South Korean AI semiconductor company specializing in ultra-low-power inference chips, has announced a strategic partnership with Hyundai Motor Group’s Robotics LAB to co-develop a next-generation AI computing platform for robotic systems. The two organizations have been working together on low-power edge AI technology for robotics over the past three years, and the new agreement formalizes that collaboration into a joint architecture program.

The partnership targets a specific technical problem: running large-scale generative AI models in real time on robotic hardware, without relying on cloud connectivity or data center-level power consumption.

The Technical Challenge

Modern robotics AI is increasingly built around Vision-Language-Action (VLA) and Vision-Language Model (VLM) architectures – systems that allow robots to interpret camera input, process natural language instructions, and make autonomous decisions in real time. These models are computationally intensive and have historically required significant power and connectivity to run, which limits their deployment in mobile, battery-powered, or field-based robotic systems.

The collaboration will focus on four areas: ultra-low-power AI semiconductor architecture, AI computing hardware systems for robotics, a physical AI software stack, and robotics application AI libraries. The goal is a cohesive computing platform that can support VLA and VLM models at the edge – on the robot itself – rather than offloading inference to external infrastructure.

At the center of the technical effort is DEEPX’s DX-M2, a next-generation chip the company describes as a Physical GenAI semiconductor, designed specifically to run large-scale AI models in ultra-low-power environments for robotics, autonomous mobile systems, and industrial automation applications.

Why On-Device Inference Matters

The shift toward on-device AI computation in robotics has direct implications for deployment viability. Robots operating in warehouses, factories, or outdoor environments cannot always maintain low-latency cloud connections, and the power budgets of mobile platforms place hard limits on the compute hardware they can carry. A chip capable of running generative AI models locally removes both constraints.
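DEEPX’s DX-M2 toolchain is not public; to make the constraint concrete, here is the general shape of an on-device inference loop for a quantized model, using the open ONNX Runtime rather than any DEEPX SDK. The model file and tensor shapes are placeholders.

```python
import numpy as np
import onnxruntime as ort

# Load a quantized model exported to ONNX; "vlm_int8.onnx" is a placeholder.
# On a dedicated accelerator the pattern is the same, with the CPU provider
# swapped for the vendor's execution provider.
session = ort.InferenceSession("vlm_int8.onnx",
                               providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

def control_loop(camera_frames):
    for frame in camera_frames:
        # All inference happens on the robot: no network round-trip, so
        # latency is bounded by local compute rather than connectivity.
        outputs = session.run(None, {input_name: frame})
        yield outputs[0]

# Dummy frames standing in for the camera pipeline.
frames = (np.zeros((1, 3, 224, 224), dtype=np.float32) for _ in range(3))
for action in control_loop(frames):
    print("output shape:", action.shape)
```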

“The AI industry is rapidly shifting from data center-centric models to a Physical AI era,” said Lokwon Kim, CEO of DEEPX. “Ultra-low-power computing capable of running AI in real-world systems will become the core infrastructure.”

Hyundai Motor Group’s Robotics LAB frames the partnership as part of a broader strategy to build a proprietary technology ecosystem for robots that operate alongside people. “In the era of Physical AI, robots are becoming the closest point of contact between AI technology and people,” said Dong Jin Hyun, Vice President and Head of Robotics LAB at Hyundai Motor Group.

Market Context

The physical AI semiconductor market is projected to reach approximately $123 billion by 2030, with robotics and humanoid systems identified as the primary demand drivers. The segment is attracting attention from both established chipmakers and specialized startups, as the requirement for on-device AI inference in physical systems creates demand that general-purpose data center chips are not optimized to meet.

DEEPX and Hyundai have not disclosed a product timeline or the specific robotic platforms the DX-M2 is intended to power. The partnership agreement covers joint architecture development, suggesting the platform is still in the design phase rather than approaching commercial deployment.


Hannover Messe 2026 Marks Shift from AI Pilots to Industrial Deployment at Scale

More than 3,000 exhibitors at Hannover Messe 2026 showcased physical, generative, and agentic AI systems moving from demonstration into production, with Siemens, Schneider Electric, NVIDIA, and Microsoft presenting deployments already delivering measurable results on factory floors.

By Laura Bennett | Edited by Kseniia Klichova
Industrial robots and AI systems operating on an automated factory floor at a major manufacturing technology exhibition. Photo: HANNOVER MESSE

Hannover Messe 2026 opened this week with more than 3,000 exhibitors, and the central message from the world’s largest industrial technology trade show was consistent: AI systems are no longer in pilot phases. The conversation at this year’s event focused on measurable outcomes – throughput figures, cost reductions, and hours of autonomous operation – rather than capability demonstrations.

The convergence of physical AI, agentic software, and industrial robotics dominated the exhibition floor, with major technology firms presenting deployments already operating in live production environments.

Siemens, NVIDIA, and Humanoid on the Factory Floor

The most concrete demonstration of physical AI in manufacturing came from a collaboration between Siemens, NVIDIA, and robotics firm Humanoid. The HMND 01, a wheeled humanoid robot built on NVIDIA’s physical AI stack, has moved beyond testing and is performing autonomous logistics tasks at Siemens’ electronics factory in Erlangen, Germany.

At the show, the robot’s performance was framed around a specific operational metric: 60 tote moves per hour, handling the picking and placing of containers for human operators. The system integrates with Siemens’ Xcelerator portfolio, using a simulation-first training approach that allows skills developed in digital environments to transfer directly into production settings with real-time edge inference.

“Factories of the future demand robots that can perceive, reason and adapt autonomously alongside human workers,” said Deepu Talla, Vice President of Robotics and Edge AI at NVIDIA. “This deployment paves the way for humanoid robots meeting real production targets on a live factory floor.”

Schneider Electric and Microsoft Cut Engineering Time by Half

While Siemens focused on physical automation, Schneider Electric used the event to demonstrate agentic AI in engineering workflows. Its Industrial Copilot, built on Microsoft Azure AI, is already in production with customers including h2e POWER, an Indian green hydrogen supplier.

The h2e POWER deployment reported 6,000 hours of stable autonomous operation, a 10% reduction in the levelized cost of hydrogen, and an estimated €500,000 in savings. Schneider cited engineering time reductions of up to 50% for early adopters of the platform.

“This open architecture means we can redeploy intelligence across our entire installed base across multiple locations, without the lock-in that has constrained industrial innovation for decades,” said Siddharth Mayur, founder of h2e POWER.

Infrastructure as the Binding Constraint

Several exhibitors addressed the gap between AI capability and deployment readiness, arguing that computational infrastructure – not the AI itself – remains the primary barrier for most manufacturers.

Schneider and Dell presented a full-lifecycle AI deployment framework covering operational technology groundwork, digital twin planning via AVEVA and NVIDIA Omniverse, and modular prefabricated data centers for rapid scaling. Schneider also demonstrated its EcoStruxure Automation Expert running on AWS cloud infrastructure, using Amazon EC2 for virtualized control and AWS IoT Greengrass at the edge to enable consistent AI-driven automation across distributed sites.
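Schneider has not published its deployment configuration; as an illustration of the Greengrass side of such a setup, pushing a hypothetical control component to a fleet of edge gateways with the standard boto3 client looks roughly like this:

```python
import boto3

# Deploy a hypothetical control component to edge gateways registered in an
# AWS IoT thing group, so every site runs the same automation stack.
gg = boto3.client("greengrassv2", region_name="eu-central-1")

response = gg.create_deployment(
    targetArn="arn:aws:iot:eu-central-1:123456789012:thinggroup/plant-gateways",  # placeholder
    deploymentName="automation-stack-v1",
    components={
        "com.example.ControlAgent": {"componentVersion": "1.0.0"},  # hypothetical
    },
)
print("deployment id:", response["deploymentId"])
```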

Bernd Wagner, chief strategy officer of Schwarz Digits, the IT division of Schwarz Group, told attendees: “Robust IT infrastructures are now the foundation of global competitiveness.” The emphasis on data sovereignty alongside computational efficiency reflects a concern that is particularly acute for European manufacturers operating under stricter data governance frameworks than their U.S. and Chinese counterparts.

What Hannover Messe 2026 Signals

The cumulative picture from this year’s event is that the industrial AI stack – physical robots, agentic software, digital twins, and edge infrastructure – is maturing simultaneously across multiple layers. Deployments that would have been described as pilots two years ago are now being presented with operational data and return-on-investment figures. The open question is how quickly the economics of these systems reach manufacturers outside the large enterprise tier, where integration costs and infrastructure requirements remain prohibitive for most operators.
