Trener Robotics Raises $32 Million to Build Robot-Agnostic Skills Platform

Trener Robotics has raised $32 million in Series A funding to expand Acteris, its robot-agnostic AI skills platform. The company aims to replace traditional robot programming with conversational, production-ready automation across industrial environments.

By Daniel Krauss | Edited by Kseniia Klichova
Trener Robotics’ Acteris platform enables industrial robots from multiple brands to be programmed through natural language, signaling a shift from procedural coding to AI-driven skills deployment. Photo: Trener Robotics

Industrial robotics has long been constrained by rigid programming models that limit machines to narrowly defined, repetitive tasks. Last week, Trener Robotics announced a $32 million Series A round aimed at changing that equation with a robot-agnostic AI skills platform designed to make industrial automation more adaptable and accessible.

The funding, co-led by Engine Ventures and IAG Capital Partners, brings the company’s total capital raised to more than $38 million. Strategic investors including Cadence and Geodesic Capital, through Nikon’s NFocus Fund, also participated.

Replacing Procedural Code with Skills

Founded in 2024 and headquartered in San Francisco and Trondheim, Norway, Trener Robotics is building Acteris, a control layer that abstracts away vendor-specific robot programming. Instead of writing custom code for each robotic arm, operators can describe tasks in natural language. The platform translates conversational input into executable automation sequences.
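Acteris's internals are not public, but the pattern the article describes, a vendor-neutral skills library sitting between a language front end and brand-specific robot controllers, can be sketched in broad strokes. All names and structures below are illustrative assumptions, not Trener's actual API:

```python
# Hypothetical sketch of a robot-agnostic skills layer. Acteris's real
# design is proprietary; every name here is an invented stand-in.
from dataclasses import dataclass


@dataclass
class Skill:
    """A reusable, vendor-neutral unit of robot behavior."""
    name: str
    params: dict


class VendorAdapter:
    """Translates generic skills into one brand's command dialect."""
    def execute(self, skill: Skill) -> str:
        raise NotImplementedError


class URAdapter(VendorAdapter):
    """A real adapter would emit URScript; this just formats a stub."""
    def execute(self, skill: Skill) -> str:
        return f"URScript: {skill.name}({skill.params})"


def plan_from_text(task: str) -> list[Skill]:
    # Stand-in for the language model that maps a conversational task
    # description to a sequence of reusable skills.
    if "pick" in task and "place" in task:
        return [Skill("locate_part", {"method": "vision"}),
                Skill("pick", {"grip_force": 20}),
                Skill("place", {"target": "tray"})]
    return []


adapter = URAdapter()
commands = [adapter.execute(s)
            for s in plan_from_text("pick the bracket and place it in the tray")]
```

The point of the pattern is that the skill sequence is reusable across brands: swapping `URAdapter` for an ABB or FANUC adapter would change the emitted commands, not the plan.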

“For decades, industrial robotics has been limited by dynamic complexity, confining millions of robotic arms to repetitive, single-purpose tasks in highly controlled environments,” said Dr. Asad Tirmizi, co-founder and CEO of Trener Robotics. He argues that replacing procedural programming with a reusable skills library enables robots to operate more like adaptable teammates than fixed-function tools.

Acteris is trained on visual, haptic, language, and action data, allowing it to adapt to variable parts and semi-structured production settings. The company says the system supports part identification through machine vision, motion optimization that responds to environmental changes, collision avoidance, and real-time production monitoring dashboards.

Crucially, the platform integrates with existing hardware. In 2025, Trener worked with more than 15 systems integrators across Europe and the U.S., integrating robot brands including ABB, Universal Robots, and FANUC.

A Control Layer for Physical AI

The broader significance lies in where automation is heading. Industrial robotics adoption is expanding beyond high-volume, low-variability production into high-mix environments that demand flexibility. According to Mordor Intelligence, the market for flexible automation is growing at a 14.3 percent compound annual growth rate, driven by labor shortages and cost pressures.

Historically, reprogramming a robot cell for a new product run required specialized engineers and downtime that eroded return on investment. By abstracting control into a generalized AI skills layer, Trener is positioning Acteris as infrastructure rather than an application.

Investors describe the company as building an intelligence layer for physical automation. Reed Sturtevant, general partner at Engine Ventures, said the firm saw an opportunity to address one of automation’s structural bottlenecks. Dennis Sacha, partner at IAG Capital Partners, characterized industrial automation as being at an inflection point, particularly for small and mid-sized manufacturers seeking scalable AI capabilities.

The Competitive Context

Trener is entering a crowded but evolving landscape. Traditional robot manufacturers have introduced proprietary low-code interfaces, while research-focused generalist AI robotics platforms promise broad adaptability but often lack production validation.

Acteris attempts to bridge that gap. The company emphasizes that its system runs on equipment manufacturers already own, differentiating itself from research-first generalist systems that may not yet be shop-floor proven.

The central question for the sector is whether AI-driven abstraction layers can meaningfully reduce integration complexity while maintaining safety, reliability, and deterministic performance. If successful, such platforms could lower the barrier to entry for automation and expand robotics deployment beyond specialized factories into mainstream manufacturing.

With fresh capital earmarked for research and development, talent acquisition, and partner expansion, Trener Robotics is betting that the future of industrial automation will not be defined by individual robot brands, but by the intelligence layer that orchestrates them.


Surgeon in London Removes Prostate via Robot 1,500 Miles Away in Gibraltar

A surgeon in London remotely controlled a robotic system to remove a prostate from a patient in Gibraltar, demonstrating how teleoperated robotics could expand access to specialized surgical care.

By Laura Bennett
A robotic surgical system performs a prostate removal in Gibraltar while the surgeon operates remotely from London, demonstrating how robotics and high-speed networks enable long-distance medical procedures. Photo: MicroPort / LinkedIn

A surgeon in London has successfully removed a patient’s prostate using a robotic system located roughly 1,500 miles away in Gibraltar, highlighting the growing potential of long-distance robotic surgery, reports The Guardian.

The procedure involved a 62-year-old patient, Paul Buxton, who underwent a robotic prostatectomy at St Bernard’s Hospital in Gibraltar while the operation was conducted remotely from London’s Harley Street district. The surgery was performed by Prof. Prokar Dasgupta, a leading urologist and head of the robotic centre of excellence at The London Clinic.

Using a specialized surgical console, Dasgupta controlled the Toumai Robotic System developed by Shanghai-based MicroPort MedBot. The robot, equipped with four articulated arms and a high-definition 3D camera, executed the delicate procedure inside the operating theatre while transmitting real-time visual feedback to the surgeon.

A Milestone in Remote Robotic Surgery

The operation relied on a high-speed fibre optic connection linking London and Gibraltar, supported by a backup 5G network to ensure continuity in case of connectivity issues. According to the medical team, the system maintained a delay of only 60 milliseconds between the surgeon’s commands and the robot’s movements.

“We operated on an NHS patient in Gibraltar from the London Clinic 2,400km away using a robot with a 3D HD camera with four arms,” Dasgupta said. “The robot is completely controlled from a console using high-speed lines with a time delay of only 0.06 seconds.”
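The quoted 60 ms figure can be sanity-checked against propagation alone. Assuming a typical group refractive index of about 1.47 for silica fiber (an assumption, not a figure from the report) and the 2,400 km route Dasgupta cites:

```python
# Back-of-the-envelope check on the quoted 60 ms latency.
# The fiber index and route length are assumed/quoted values.
C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
FIBER_INDEX = 1.47           # typical group index of silica fiber (assumed)
distance_km = 2_400          # London-Gibraltar route length per the quote

fiber_speed = C_VACUUM_KM_S / FIBER_INDEX        # ~204,000 km/s in fiber
one_way_ms = distance_km / fiber_speed * 1_000   # ~12 ms one way
round_trip_ms = 2 * one_way_ms                   # ~24 ms round trip
print(round(one_way_ms, 1), round(round_trip_ms, 1))
```

On these assumptions, raw fiber propagation accounts for roughly 24 ms of the round trip; the remainder of the 60 ms budget would be consumed by video encoding, control processing, and network equipment along the path.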

Medical staff were present in the operating room in Gibraltar throughout the procedure, prepared to intervene if the connection failed or complications arose. The surgery was completed successfully, and Buxton reported feeling “fantastic” within days of the operation.

For the patient, who has lived in Gibraltar for four decades, the alternative would likely have involved travelling to the United Kingdom for treatment and spending weeks waiting for surgery.

“If I hadn’t gone for the telesurgery in Gibraltar, I would have had to fly to London and go on the NHS waiting list,” Buxton said, adding that taking part in the procedure felt like being involved in “medical history.”

Rapid Growth of the Toumai Surgical Robot

The Toumai system used in the operation is part of a rapidly expanding global surgical robotics platform. According to its developer, Toumai has surpassed 200 commercial orders worldwide across nearly 50 countries and regions, with close to 130 systems already installed in hospitals.

Adoption has accelerated quickly, doubling from just over 100 orders in late 2025 to more than 200 within a few months. Growth has been particularly strong in emerging healthcare markets such as India and Brazil, while hospitals in developed markets including Spain and Australia are also expanding deployments.

The platform has supported thousands of procedures across urology, thoracic surgery, general surgery, gynecology, and head and neck operations. Nearly 800 remote robotic surgeries have already been performed in more than 20 countries, all reportedly completed successfully.

Expanding Access to Specialist Care

Remote robotic surgery has long been viewed as a way to connect expert surgeons with patients in regions where specialized care is difficult to access. Instead of transporting patients long distances, the technology allows experienced doctors to perform procedures remotely while local medical teams provide on-site support.

Dasgupta said the potential humanitarian impact could be significant, particularly for patients in remote locations or smaller healthcare systems that lack highly specialized surgeons. “I think it is very, very exciting,” he said. “The humanitarian benefit is going to be significant.”

The success of the Gibraltar procedure comes as hospitals and technology providers worldwide continue experimenting with telesurgery, aided by improvements in robotics, fiber-optic networks, and low-latency communication systems. Dasgupta is expected to repeat the remote procedure soon while broadcasting it to thousands of surgeons attending a major European urology conference.

While remote surgery still requires rigorous safeguards and reliable connectivity, the milestone operation suggests that robotics may increasingly enable expert medical care to reach patients regardless of distance.

MWC 2026: HONOR Wins More than 70 Awards for AI Devices and Robotics

HONOR captured more than 70 media awards at Mobile World Congress 2026, highlighting its push into AI-powered devices, robotics concepts, and next-generation foldables.

By Daniel Krauss
HONOR showcases its latest AI-powered devices and robotics concepts at Mobile World Congress 2026, where the company received more than 70 industry awards for innovation. Photo: HONOR

At Mobile World Congress 2026 in Barcelona, Chinese technology company HONOR received more than 70 media and industry awards for its latest devices and AI initiatives, underscoring the company’s growing focus on artificial intelligence, robotics-inspired hardware, and next-generation foldable smartphones.

The recognition highlighted HONOR’s broader strategy around “Augmented Human Intelligence,” a vision aimed at integrating AI capabilities deeply into consumer electronics.

The company showcased several new products and concepts at the event, including the Magic V6 foldable smartphone, the MagicPad 4 tablet, the MagicBook Pro 14 laptop, and experimental AI-driven hardware concepts such as its “Robot Phone.”

Robotics Concepts and Embodied AI

One of the most discussed demonstrations at the event was HONOR’s Robot Phone, a concept device designed to showcase how AI-powered hardware might physically interact with users in future devices.

The prototype combines robotic-style movement with AI-powered imaging and sensing capabilities. According to reports from major media outlets including Bloomberg and Reuters, the device represents an early experiment in embodied AI, where computing systems combine perception, reasoning, and physical interaction.

A video demonstration released during the event showed how the device can visually recognize objects and move to capture photos automatically. Coverage from Bloomberg, Reuters, and CNBC highlighted the concept as part of a wider shift among technology companies exploring physical AI devices.

Technology publications also discussed the device’s design approach. Engadget described it as resembling a compact personal robot integrated into a smartphone form factor, while GadgetMatch suggested it offered a preview of devices that behave more like intelligent assistants than traditional electronics.

Magic V6 Highlights Foldable Innovation

Alongside its experimental concepts, HONOR also received strong recognition for its flagship Magic V6 foldable smartphone, which reviewers widely described as one of the most advanced foldable devices introduced at the event.

Technology outlets including TechRadar, Android Authority, and Trusted Reviews praised the device for its thin design, durability improvements, and advanced battery technology.

The smartphone incorporates a silicon-carbon battery, a newer battery chemistry that improves energy density while allowing thinner device designs. According to reports from Stuff and GSMArena, the device also runs on Qualcomm’s Snapdragon 8 Elite Gen 5 processor.

HONOR’s battery innovation also received the Best Disruptive Device Innovation award at the Global Mobile Awards, presented during MWC. The winners were announced by the GSMA during the event’s annual ceremony, according to the official MWC GLOMO awards announcement.

Expanding AI Device Ecosystem

Beyond smartphones, HONOR used the Barcelona event to expand its broader AI device ecosystem.

The MagicPad 4 tablet and MagicBook Pro 14 laptop were presented as productivity-focused devices designed to integrate tightly with AI services and cross-device workflows. Reviewers highlighted the tablet’s lightweight design and performance improvements, while the laptop emphasized AI-assisted computing features.

Several outlets, including TechRadar and TechAdvisor, placed HONOR devices on their “Best of MWC 2026” lists, reflecting strong reception from technology journalists covering the show.

As smartphone makers increasingly compete on AI capabilities, HONOR’s presentation at MWC suggests the company is positioning itself not just as a smartphone manufacturer but as a broader AI hardware platform provider, experimenting with robotics-inspired designs and intelligent device ecosystems.


Texas Instruments and NVIDIA Partner to Accelerate Physical AI and Robotics

Texas Instruments and NVIDIA are expanding their collaboration to accelerate robots and other physical AI systems by combining advanced sensing, power electronics, and AI computing platforms.

By Daniel Krauss
Texas Instruments and NVIDIA are combining sensing, power electronics, and AI computing platforms to accelerate the development of robots and other physical AI systems. Photo: Texas Instruments

Texas Instruments and NVIDIA have expanded their collaboration to accelerate the development of robots and other machines powered by physical AI. The initiative brings together Texas Instruments’ sensing and power technologies with NVIDIA’s AI computing platforms to support the next generation of autonomous systems operating in the physical world.

The partnership reflects a broader industry shift as artificial intelligence moves beyond data centers and digital applications into machines capable of sensing, reasoning, and acting in real environments. From industrial robots and autonomous vehicles to intelligent infrastructure, physical AI systems depend on tight integration between sensors, control electronics, and high-performance AI processors.

Bridging Sensing Hardware With AI Computing

Physical AI systems rely on a continuous loop of perception, decision-making, and action. Texas Instruments supplies many of the semiconductor components that allow machines to capture real-world signals, manage energy, and control motion with high precision.

Through the expanded collaboration, TI’s sensing, power management, and real-time control technologies will work alongside NVIDIA’s AI computing platforms used in robotics and autonomy.

“Our collaboration with NVIDIA will help engineers accelerate the development of autonomous machines by combining TI’s expertise in sensing and power with NVIDIA’s AI computing platforms,” said Amichai Ron, senior vice president of embedded processing at Texas Instruments.

By linking sensing hardware with high-performance computing, the companies aim to simplify the architecture required to build intelligent robots and autonomous machines that must operate safely in unpredictable environments.

Building Infrastructure for the Physical AI Era

The collaboration also underscores the growing importance of technology infrastructure designed specifically for machines interacting with the real world. Physical AI systems require hardware and software capable of interpreting sensor data, navigating complex environments, and executing precise mechanical actions in real time.

NVIDIA has been investing heavily in platforms that support robotics and autonomy, positioning physical AI as a major growth area across industries.

“Physical AI will enable a new generation of intelligent machines that can perceive, reason and act in the real world,” said Jensen Huang, founder and CEO of NVIDIA.

Together, NVIDIA’s computing platforms and Texas Instruments’ sensing and control technologies form a foundational stack for companies developing robots, autonomous vehicles, and industrial automation systems. As robotics moves into more dynamic real-world environments, such integrated ecosystems are expected to play a central role in scaling physical AI deployment across industries.

China Establishes First National Standards for Humanoid Robots

China has introduced its first national standard system for humanoid robotics, aiming to unify technical specifications and accelerate commercial deployment across industries.

By Laura Bennett | Edited by Kseniia Klichova
Officials and industry experts gather in Beijing to unveil China’s first national standard system for humanoid robotics, aimed at accelerating commercialization and ensuring safety alignment.

China has formally introduced its first national standard system for humanoid robotics, marking a coordinated effort to structure one of the country’s fastest-growing technology sectors.

The framework was unveiled at the Humanoid Robots and Embodied Intelligence Standardization meeting in Beijing. It establishes unified technical guidelines intended to streamline development, reduce fragmentation, and accelerate the transition from pilot projects to commercial deployment.

The move signals that policymakers view humanoid robotics not as an experimental field, but as an emerging industrial category requiring formal governance.

Six Pillars for Industrial Alignment

The standard system is organized around six core pillars: foundational and common standards, neuromorphic and intelligent computing, limbs and key components, full-system integration, application scenarios, and safety and ethics.

Together, these categories define technical specifications, interface protocols, and evaluation benchmarks. Committee experts involved in the initiative said the goal is to reduce coordination friction between suppliers, lower production costs, and shorten iteration cycles across the value chain.

By clarifying interfaces and performance metrics, the framework is designed to enable interoperability between hardware platforms, software systems, and embodied AI models. It also embeds safety and ethical considerations into early-stage development, reflecting regulatory awareness as robots move into workplaces and homes.

From Prototypes to Scaled Deployment

According to China’s Ministry of Industry and Information Technology, 2024 marked the country’s first year of humanoid robot mass production. More than 140 domestic companies released over 330 models, with deployments expanding into manufacturing, household services, healthcare, and elderly care.

Until now, much of that growth has occurred in a relatively fragmented environment, with companies developing proprietary architectures and evaluation criteria. National standards are expected to impose structure on a rapidly expanding ecosystem.

The framework could also serve a strategic function. As Chinese firms compete globally in embodied AI and humanoid robotics, standardized technical benchmarks may strengthen export readiness and ecosystem coordination.

While many humanoid deployments remain in early stages, the introduction of national standards suggests the industry is entering a new phase, where commercialization and regulatory alignment advance in parallel.


University of Southampton Develops Adaptive Robot Fin for Underwater Stability

Researchers at the University of Southampton have developed a flexible robotic fin with embedded electronic skin that automatically adapts to changing water currents, improving underwater robot stability and efficiency.

By Daniel Krauss | Edited by Kseniia Klichova
The adaptive robotic fin developed at the University of Southampton integrates electronic skin and hydraulic actuation to automatically counteract flow disturbances in underwater environments. Photo: University of Southampton

Autonomous underwater vehicles are built to withstand unpredictable ocean conditions, but their rigid fins often require significant energy to counteract sudden currents and turbulence. Researchers at the University of Southampton are proposing a different approach: fins that sense water flow and adjust their shape in real time.

The team has developed a flexible robotic fin embedded with electronic skin capable of detecting subtle changes in water movement. The system automatically modifies the fin’s stiffness and curvature to stabilize underwater robots while reducing energy consumption.

The research, published in npj under the title “Harnessing proprioception in aquatic soft wings enables hybrid passive-active disturbance rejection,” reflects a broader push toward soft robotics and adaptive control in marine environments.

Inspired by Biological Sensing

The design draws from biological proprioception mechanisms observed in birds and fish. Birds detect airflow changes through sensory feedback in their feathers, while fish rely on lateral line systems and fin rays to perceive water disturbances.

To replicate similar sensing capabilities, the Southampton engineers embedded flexible liquid metal wiring inside a silicone fin. When water flow deforms the fin, the integrated electronic skin registers changes in electrical resistance. These signals are transmitted to a hydraulic system inside the robot’s body, which adjusts internal pressure through connected hoses to alter the fin’s shape.

Rather than relying solely on active propulsion corrections, the system combines passive flexibility with active hydraulic adjustment.
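The paper's control details are not reproduced in the article, but the sense-adjust loop it describes, e-skin resistance change read as deformation, hydraulic pressure commanded in response, can be sketched minimally. The resistance model, gain, and pressure limits below are invented for illustration:

```python
# Illustrative sketch of the sense-adjust loop described above.
# The resistance-deflection model, gain, and limits are assumptions,
# not values from the Southampton paper.

def resistance_to_deflection(r_ohm: float, r_rest: float = 100.0) -> float:
    """Map e-skin resistance change to estimated fin deflection (a.u.)."""
    return (r_ohm - r_rest) / r_rest


def pressure_correction(deflection: float, gain: float = 5.0,
                        p_min: float = 0.0, p_max: float = 10.0) -> float:
    """Proportional hydraulic correction: stiffen the fin in proportion
    to sensed deformation, clamped to actuator limits."""
    p = gain * deflection
    return max(p_min, min(p_max, p))


# A gust-like disturbance raises resistance; the loop commands
# counter-pressure that tracks the sensed deformation.
readings = [100.0, 104.0, 110.0, 103.0]
commands = [pressure_correction(resistance_to_deflection(r)) for r in readings]
```

The hybrid character the researchers describe comes from the fin's passive compliance absorbing small disturbances on its own, with a loop like this handling only the larger deviations.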

Reducing Energy Use in Turbulent Waters

Rigid AUVs typically expend substantial energy to maintain orientation when struck by waves or shifting currents. According to the researchers, the adaptive fin significantly improves disturbance rejection.

In controlled tests, the fin reduced unwanted buoyancy effects caused by sudden water flow by 87 percent compared with a similar vehicle using rigid fins. The robot demonstrated improved self-stabilization and maneuverability while consuming less energy to maintain position.

The findings suggest potential advantages for underwater inspection, environmental monitoring, and defense applications where energy efficiency and stability are critical.

Technical Constraints Remain

Despite promising results, integration challenges remain. Scaling the flexible system to larger vehicles and embedding it into rigid hull designs could complicate deployment. Long-term durability of the electronic skin and hydraulic components in harsh marine environments also requires further validation.

The researchers note that more robust actuators and structural refinements may help address these constraints.

The project illustrates how bio-inspired sensing and soft robotics are reshaping underwater vehicle design. As offshore energy, marine research, and subsea infrastructure monitoring expand, adaptive control systems such as this may become increasingly relevant to improving endurance and operational stability in dynamic ocean conditions.
