Alibaba Launches RynnBrain Model to Push Deeper Into Physical AI and Robotics

Alibaba has unveiled RynnBrain, an artificial intelligence model designed to help robots understand and interact with the physical world, marking the company’s latest move into physical AI and embodied robotics.

By Rachel Whitman | Edited by Kseniia Klichova
Alibaba Group Corporate Campus (Xixi, Hangzhou, China). Photo: Alibaba Group

Alibaba on Tuesday introduced RynnBrain, an artificial intelligence model built to support robotics, as competition intensifies among global technology companies to define the software foundations of physical AI. The launch reflects how leading AI developers are moving beyond text and images toward systems that can interpret and act within real-world environments.

RynnBrain is designed to help robots perceive their surroundings, identify objects, and coordinate movement accordingly. In a demonstration released by Alibaba’s DAMO Academy, a robot identifies pieces of fruit and places them into a basket. While visually unremarkable, the task requires a tight integration of perception, reasoning, and motion control, areas that have long constrained the commercial deployment of autonomous machines.

Physical AI as a Strategic Priority

Robotics is increasingly grouped under the broader category of physical AI, which includes machines such as industrial robots and self-driving vehicles that rely on artificial intelligence to operate in dynamic environments. China has made physical AI a strategic focus, viewing it as a critical arena in its technological competition with the United States.

Industry leaders have echoed the scale of the opportunity. Nvidia chief executive Jensen Huang has described AI and robotics as a multitrillion-dollar growth market, a framing that helps explain why major model developers are investing in systems that extend beyond digital tasks. For Alibaba, RynnBrain represents an entry point into this emerging layer of the AI stack, complementing its existing work on large language models under the Qwen brand.

Competing World Models Take Shape

Alibaba’s move places it alongside a growing list of companies developing so-called world models, AI systems intended to help machines understand and simulate the physical world. Nvidia has introduced several robotics-focused models under its Cosmos platform, while Google DeepMind has developed Gemini Robotics-ER, aimed at embodied reasoning and control. Tesla is pursuing a similar approach internally for its Optimus humanoid robot.

The competition is particularly pronounced in humanoid robotics, where China is widely viewed as moving faster than the U.S. toward scaled production. Several Chinese manufacturers have signaled plans to ramp output this year, suggesting that software capable of generalizing across tasks and environments will be a key differentiator.

Open Source as an Adoption Strategy

Like Alibaba’s other recent AI releases, RynnBrain is being offered under an open source model, allowing developers to use and modify it freely. Open sourcing has been central to Alibaba’s strategy for expanding the reach of its AI ecosystem, particularly outside China, and contrasts with more closed approaches taken by some Western rivals.

The decision also reflects a broader industry debate over how foundational physical AI systems should be distributed. By lowering barriers to experimentation, Alibaba is positioning RynnBrain as infrastructure rather than a proprietary product, betting that widespread adoption will accelerate progress in robotics while reinforcing its role in the global AI landscape.

World’s First Humanoid Robot Combat League Launches in Shenzhen

China has launched the world’s first humanoid robot free combat league in Shenzhen, turning martial arts-style competition into a real-world testing ground for embodied AI and robotics.

By Daniel Krauss | Edited by Kseniia Klichova
A humanoid robot developed by EngineAI performs martial arts-style movements during the opening of the world’s first robot combat league in Shenzhen, highlighting China’s push to test embodied AI in real-world conditions. Photo: EngineAI

China has taken another step in turning humanoid robotics from laboratory experiments into public spectacle and industrial validation. On February 9, the world’s first humanoid robot free combat league officially opened in Shenzhen, positioning competitive robot fighting as both entertainment and a stress test for embodied artificial intelligence.

The league, known as the Ultimate Robot Knockout Legend (URKL), will run through the end of 2026 and award a championship belt valued at 10 million yuan, or roughly $1.4 million. Organized by EngineAI, the competition brings together development teams from across China and provides each participant with a standardized humanoid robot platform, removing hardware barriers and shifting the focus toward software, control systems, and real-world performance.

Combat as a Testbed for Embodied AI

At the center of the league is EngineAI’s T800 humanoid robot, a full-size platform capable of executing complex martial movements such as aerial rotations, kicks, and rapid directional changes. While the visual appeal is undeniable, organizers and analysts emphasize that the event is designed less as a stunt and more as a proving ground for robotics technology under extreme conditions.

Combat scenarios place unusual demands on humanoid systems, including dynamic balance, motion planning under impact, actuator durability, and rapid decision-making. According to industry observers, these high-stress environments expose weaknesses that may remain hidden in controlled factory or laboratory settings, accelerating iteration cycles and hardware-software integration.

Experts note that robot combat also serves an important cultural function. By framing humanoid robots within a familiar and dramatic format inspired by martial arts, the league challenges public perceptions of robots as purely industrial tools and introduces them as expressive, adaptable machines. This visibility, particularly among younger audiences, may help build long-term talent pipelines into robotics and AI research.

Opportunity and Limits of Robot Fighting

Analysts caution, however, that combat performance does not automatically translate into commercial readiness. Optimizing robots for short bursts of high-impact activity can divert attention from the requirements of industrial, service, or household deployment, where endurance, safety, and cost efficiency matter far more than spectacle.

Even so, supporters argue that entertainment-focused trials can play a meaningful role in early-stage technology development. Real-world combat provides data that simulations struggle to replicate, especially around mechanical stress, failure modes, and system resilience. Some estimates suggest that such environments can shorten development cycles by more than 30 percent.

The league arrives as China’s humanoid robotics sector accelerates rapidly. Industry projections estimate the domestic humanoid robot market could approach 870 billion yuan by 2030, driven by advances in embodied intelligence and growing interest in both industrial automation and consumer-facing robots.

For now, the combat league is less about harvesting immediate commercial products and more about planting ideas. In Shenzhen, humanoid robots are no longer just walking, lifting, or performing demos. They are fighting, failing, adapting, and learning in public – offering a glimpse into how embodied AI might evolve beyond the lab and into everyday life.

AGIBOT Night Brings Humanoid Robots to the Stage in the World’s First Robot-Led Live Gala

Chinese robotics company AGIBOT staged an unprecedented live gala where humanoid robots performed comedy, magic, music, and dance, signaling a new phase in human-robot cultural interaction.

By Laura Bennett | Edited by Kseniia Klichova

AGIBOT has taken an unusual step in demonstrating the maturity of humanoid robotics, hosting a live, robot-led gala that blurred the line between technical showcase and cultural performance. The event, titled AGIBOT Night, featured humanoid robots performing comedy routines, magic tricks, live music, and choreographed dance in front of an in-person audience, marking one of the first large-scale attempts to position robots as entertainers rather than industrial tools.

Held in Shanghai, the gala was designed to highlight AGIBOT’s progress in physical AI, embodied intelligence, and real-time human-robot interaction. Unlike scripted demonstrations common at technology expos, the performances emphasized timing, expression, and coordination, requiring robots to operate in dynamic environments under stage lighting, sound cues, and live audience conditions.

From Engineering Demos to Cultural Presence

AGIBOT Night reflects a broader shift in how robotics companies are framing their technology. Instead of focusing solely on factory automation or laboratory benchmarks, AGIBOT used performance as a way to demonstrate perception, motion control, multimodal interaction, and system reliability in a setting where errors are immediately visible.

The robots on stage executed synchronized movements, interacted with human hosts, and responded to cues with minimal delay. While the performances were carefully choreographed, they still demanded stable locomotion, precise manipulation, and consistent behavioral execution over extended periods – challenges that mirror real-world deployment scenarios in service, retail, and public environments.

Industry observers noted that the event echoed early moments in consumer electronics history, when companies used spectacle to help the public emotionally connect with new technologies. In this case, humor and performance served as an accessible gateway to understanding the capabilities of modern humanoid systems.

A Signal of Where Humanoid Robotics Is Headed

AGIBOT’s decision to stage a robot-only gala comes amid rising global interest in general-purpose humanoid robots, particularly in China, where companies are accelerating commercialization timelines. Rather than presenting humanoids as distant future products, AGIBOT framed them as systems already capable of participating in social and cultural spaces.

The company positioned AGIBOT Night not as a one-off show, but as a signal of how embodied AI could evolve beyond functional labor into roles requiring presence, adaptability, and engagement. While industrial and logistics applications remain the near-term focus for most humanoid platforms, events like this suggest that public-facing use cases are moving closer to reality.

AGIBOT Night ultimately served as both performance and proof point – illustrating how far humanoid robotics has progressed, and how the next phase may be as much about interaction and experience as it is about efficiency and automation.


Humanoid Robots Train with Shaolin Monks in Symbolic Moment for Physical AI

Humanoid robots have joined monks at China’s Shaolin Temple for kung fu training, highlighting how physical AI is increasingly intersecting with human movement, culture, and embodied learning.

By Laura Bennett | Edited by Kseniia Klichova
AGIBOT humanoid robots delivered a series of visually striking stage performances at AGIBOT NIGHT gala show on February 8, 2026. Photo: AGIBOT

Shaolin Temple has long been synonymous with discipline, physical mastery, and martial arts tradition. This month, it became the setting for a striking demonstration of how far physical artificial intelligence has advanced.

Humanoid robots were filmed training alongside Shaolin monks, practicing kung fu forms in coordinated sessions that blended centuries-old tradition with cutting-edge robotics. Videos of the robots mirroring stances, punches, and balance drills quickly went viral, sparking widespread discussion about the future of embodied AI.

The robots involved were developed by Chinese robotics firm AgiBot, which has been working on humanoid systems capable of learning complex human motion through imitation, sensor feedback, and reinforcement learning.

Training the Body, Not Just the Brain

Unlike industrial robots optimized for repetitive tasks, humanoid systems require an understanding of balance, timing, and force distribution. Martial arts training offers a unique testing ground for these capabilities.

Kung fu movements emphasize controlled power, rapid transitions, and precise posture – qualities that are difficult for robots to master using traditional programming alone. By training alongside monks, engineers can expose robots to highly refined human motion patterns and evaluate how well physical AI systems adapt in real time.

Developers have described these sessions as experimental rather than ceremonial. The goal is not to turn robots into fighters, but to improve their ability to coordinate full-body movement, respond to physical constraints, and learn from human demonstration.
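At its simplest, learning from human demonstration can be framed as behavior cloning: fit a policy that reproduces the motions a demonstrator performed. The sketch below is purely illustrative (the data shapes, the linear policy, and the variable names are assumptions for this example; AgiBot's actual training pipeline is not public), but it shows the core idea of fitting recorded poses to next-step joint targets.

```python
import numpy as np

# Illustrative sketch only: behavior cloning, the simplest form of
# learning from demonstration, fitted here as a least-squares policy.

rng = np.random.default_rng(0)

# Stand-in motion-capture data: 200 timesteps of a 12-joint human pose,
# paired with the joint targets the demonstrator moved to next.
demo_poses = rng.normal(size=(200, 12))
true_mapping = 0.1 * rng.normal(size=(12, 12))   # unknown to the learner
demo_targets = demo_poses @ true_mapping

# Behavior cloning: choose weights W minimizing ||poses @ W - targets||^2.
W, *_ = np.linalg.lstsq(demo_poses, demo_targets, rcond=None)

def policy(pose: np.ndarray) -> np.ndarray:
    """Map a current pose to imitated next-step joint targets."""
    return pose @ W

# With enough demonstrations, the cloned policy recovers the motion.
print(np.allclose(policy(demo_poses), demo_targets, atol=1e-6))  # prints True
```

Real systems replace the linear fit with neural networks and add reinforcement learning and sensor feedback on top, but the supervised imitation step above is the usual starting point.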

A Cultural Backdrop for Physical AI

The setting itself added symbolic weight. The Shaolin Temple is globally recognized not only as a religious site, but as a center of physical discipline where the body and mind are trained together. Placing humanoid robots in this environment underscored a broader idea gaining traction in robotics: intelligence does not exist solely in software models, but emerges through interaction with the physical world.

China has been investing heavily in humanoid robotics as part of its broader push into physical AI, with applications spanning manufacturing, logistics, healthcare, and public services. Demonstrations like this serve both as technical validation and as cultural storytelling – positioning robots not just as tools, but as learners embedded in human environments.

While the sight of robots practicing kung fu may feel theatrical, it reflects a serious shift in robotics research. As humanoid machines move closer to everyday settings, their ability to move, adapt, and coexist with humans is becoming just as important as raw computational intelligence.


Addverb Launches India’s First Wheeled Humanoid Robot Called Elixis-W

Indian automation firm Addverb has unveiled Elixis-W, the country’s first domestically developed wheeled humanoid robot, marking a shift from experimental robotics to real industrial deployment.

By Laura Bennett
Addverb's Elixis-W wheeled humanoid robot was unveiled at LogiMAT India 2026, signaling India's transition from experimental robotics to practical, industrial humanoid systems. Photo: Addverb

Indian automation company Addverb has officially entered the humanoid robotics race with the launch of Elixis-W, a wheeled humanoid robot designed specifically for industrial environments. The robot was unveiled at LogiMAT India 2026, one of the country’s largest logistics and supply chain exhibitions, and positions India as an active contributor to the global shift toward physical AI.

Unlike consumer-focused humanoids or lab prototypes, Elixis-W was introduced as a practical system built for warehouses, factories, and intralogistics operations. Industry reaction at the event reflected a broader inflection point: humanoid robotics in India is moving beyond demonstrations and into controlled, real-world deployment.

Addverb executives framed the launch not as a spectacle, but as an extension of the company’s long-standing focus on automation systems that solve concrete operational problems.

Why Addverb Chose a Wheeled Humanoid

Elixis-W combines wheeled mobility with dual-arm manipulation, perception systems, and learning capabilities. The decision to adopt wheels rather than bipedal locomotion was deliberate. Most industrial facilities are built around flat, structured floors, where wheeled platforms offer greater stability, efficiency, and faster integration with existing infrastructure.

According to Addverb leadership, this approach allows teams to validate use cases such as material handling, repetitive tasks, and higher-risk operations without requiring costly changes to factory layouts. Legged humanoids remain part of the company’s longer-term roadmap, but the wheeled form factor enables earlier commercialization and safer deployment today.

The robot is intended to assist with physically demanding and repetitive work while operating alongside human workers. Addverb has emphasized that Elixis-W is designed to augment human labor, not replace it, with people retaining responsibility for supervision, decision-making, and exception handling.

From Automation Systems to Physical AI Platforms

The launch of Elixis-W reflects a broader evolution in Addverb’s strategy. The company initially built its reputation by solving structured automation challenges in warehouses and manufacturing. With Elixis-W, it is moving toward collaborative robotics capable of functioning in environments designed for humans.

Deployment will begin through limited, closely supervised proof-of-concept programs in controlled industrial settings. This phased rollout is intended to validate safety, reliability, and consistency before any wider adoption.

Alongside the robot, Addverb also introduced Addverb.ai, a platform aimed at advancing general-purpose robotics and physical AI. By opening the platform to developers and researchers, the company hopes to accelerate progress toward robots that can adapt across tasks rather than being locked into single-purpose roles.

For Addverb, Elixis-W is not positioned as a final product, but as a foundation. It represents a pragmatic step toward human-robot collaboration at scale, rooted in industrial realities rather than futuristic promises.

Li Auto Signals Strategic Shift Toward AI and Humanoid Robotics

China’s Li Auto is expanding beyond electric vehicles, positioning artificial intelligence and humanoid robotics as core pillars of its long-term strategy.

By Rachel Whitman | Edited by Kseniia Klichova
Li Auto is repositioning itself as an AI-driven company, expanding its focus from electric vehicles to humanoid robotics and physical artificial intelligence. Photo: Li Auto

Li Auto is increasingly redefining what it means to be an automaker. As competition in China’s electric vehicle market intensifies and margins tighten, the company is framing artificial intelligence and embodied robotics as the foundation of its next decade of growth.

Rather than treating AI as a feature layered onto vehicles, Li Auto’s leadership is presenting intelligence as the product itself, with cars, and eventually robots, serving as physical embodiments of that capability. The shift reflects a broader reassessment underway across the auto industry, where autonomy, perception, and decision-making are emerging as the primary sources of long-term differentiation.

From Smart Vehicles to Embodied AI

Li Auto has spent years building in-house AI systems to support advanced driver assistance and autonomous driving. These systems handle perception, motion planning, and real-time decision-making in complex environments. Company executives now argue that the same technology stack can extend beyond vehicles into general-purpose physical AI.

Humanoid robotics has emerged as a focal point of that vision. Like autonomous cars, humanoid robots must operate safely in spaces designed for people, interpret human intent, and adapt continuously to changing conditions. Engineers see deep overlap between the two domains, particularly in vision systems, control software, and large-scale data training.

Li Auto has not announced a standalone humanoid product or timeline. Instead, the company has described its robotics ambitions as a long-term effort focused on foundational intelligence rather than near-term commercialization.

Li Xiang Frames the Car as a Robot

That philosophy was made explicit on Feb. 5, when Li Auto CEO Li Xiang published a lengthy post on Weibo outlining the company’s view of the automobile’s ultimate form. In the essay, Li revisited a question posed when the company was founded in 2015: what should a car ultimately become?

While the industry largely viewed the answer as faster or smarter transportation, Li Auto’s internal conclusion was different. The company saw the car evolving into a robot, a vision Li described as guiding its strategy for the past decade. He emphasized that the company’s long-standing slogan was not symbolic, but a literal roadmap that has shaped product and technology decisions over time.

Li pushed back on speculation that Li Auto’s focus on AI signals a retreat from vehicles. Instead, he argued that embodied intelligence must be grounded in a strong automotive foundation to create real value. He noted that the majority of his time remains devoted to the car itself, ensuring the organization understands how to evolve a vehicle into an intelligent system step by step.

According to Li, the newly released Li Auto L9 represents a turning point in that process. He described the vehicle as an intelligent agent equipped with perception, cognition, and actuation, transforming the car from a passive tool into an active partner that recognizes users, anticipates needs, and responds proactively. In his framing, the L9 is not just a new model, but the first concrete expression of Li Auto’s embodied AI vision as the company enters its second decade.

An Industry Looking Beyond Cars

Li Auto’s messaging aligns with a broader shift across China’s technology and manufacturing sectors. As electric vehicles become increasingly commoditized, companies are looking to intelligence, not hardware alone, as the defining advantage. Humanoid robotics and embodied AI offer a path to reuse autonomy expertise while opening new markets in logistics, services, and industrial automation.

For Li Auto, cars remain the primary product and revenue driver. But the company is increasingly clear about its long-term ambition. It does not see itself solely as an automaker, but as a builder of intelligent machines whose form may evolve over time.

Whether humanoid robots become a commercial reality or remain a research frontier, Li Auto’s strategy underscores a growing belief across the industry: the future of mobility companies may be determined less by what they manufacture, and more by how intelligent those machines can become.

Faraday Future Pivots Toward Humanoid Robots and Embodied AI

Electric vehicle startup Faraday Future is shifting its strategy toward humanoid robots and embodied AI, joining a growing list of automakers betting on robotics to redefine their future beyond cars.

By Daniel Krauss | Edited by Kseniia Klichova
Faraday Future is repositioning itself around embodied AI and humanoid robotics as the company looks beyond electric vehicles toward new growth opportunities. Photo: Faraday Future

Faraday Future is no longer framing intelligence as something confined to cars. The company has officially launched its embodied AI robotics division and introduced three commercial robotic products, marking a decisive expansion beyond electric vehicles and into physical artificial intelligence.

The move positions Faraday Future alongside a growing list of automakers and tech firms redefining mobility as an intelligent, autonomous system rather than a traditional product category. Company executives argue that embodied AI represents a natural evolution of the vehicle-as-robot concept Faraday Future has promoted for nearly a decade.

The company has launched a new internal initiative centered on embodied artificial intelligence and revealed plans to develop a lineup of humanoid robots designed for real-world interaction. Executives described the effort as an expansion of the company’s mission rather than a departure from mobility, arguing that modern vehicles are increasingly becoming intelligent robotic systems.

The announcement arrives amid a broader industry shift. Just days earlier, Tesla announced it would wind down production of its flagship Model S and Model X vehicles to redirect resources toward Optimus, its humanoid robot platform. Together, the moves highlight how legacy automotive players increasingly see physical AI – not just transportation – as the next frontier.

Three Robots, One Ecosystem Strategy

Faraday Future’s first robotics lineup includes three distinct platforms. FF Futurist is positioned as a full-size professional humanoid robot designed for public-facing and enterprise roles, from hospitality and retail to education and exhibitions. FF Master targets consumer and institutional environments as a more athletic, interaction-focused humanoid companion. FX Aegis rounds out the lineup as a quadruped robot aimed at security, patrol, and outdoor assistance tasks.

All three products are being offered with a new ecosystem-based pricing model that separates hardware from software capabilities. Base prices range from $2,499 for FX Aegis to $34,990 for FF Futurist, with optional ecosystem skill packages enabling advanced AI functions, customization, and continuous upgrades. The company describes the model as demand-driven, designed to mirror how software platforms evolve over time rather than traditional one-time hardware sales.

Faraday Future said more than 1,200 units are already covered by paid B2B deposits, signaling early commercial interest ahead of first deliveries expected by the end of February. Production preparation is underway, while customization, testing, and data training continue in parallel to accelerate deployment.

Why Faraday Future Believes Robotics Can Scale Faster Than Cars

Company leadership argues that embodied AI robotics offer structural advantages over vehicles: lighter capital requirements, faster iteration cycles, and earlier paths to positive cash flow. Robotics and vehicles are positioned as dual engines, sharing AI models, manufacturing expertise, and distribution channels.

At the technical level, Faraday Future is betting on a modular AI architecture built around what it calls a three-part system: physical devices, an open AI brain platform, and a decentralized data factory designed to continuously improve robot performance through real-world usage. The company believes this approach allows robots to adapt across industries without being locked into single-purpose designs.

The company also sees automotive dealerships as a natural sales and service channel for robotics. Executives argue that future dealers will evolve into intelligent terminal operators, offering vehicles and robots through shared infrastructure, financing models, and service networks.

Faraday Future’s leadership frames the launch not as a speculative experiment, but as the opening move in a long-term transition. As embodied AI advances and robots move from labs into everyday environments, the company believes ownership could eventually surpass today’s global vehicle fleet.

For Faraday Future, the future of mobility is no longer defined by wheels alone – but by intelligent machines capable of working, learning, and interacting alongside humans.

Moya Debuts as the World’s First Biomimetic AI Robot with Human-Like Movement

A Shanghai-based robotics team has unveiled Moya, a humanoid robot designed with warm synthetic skin, expressive facial reactions, and a walking gait that closely mirrors human movement.

By Daniel Krauss

A robotics team in Shanghai has introduced Moya, a humanoid robot engineered to closely replicate the way humans look, move, and physically interact with their surroundings. Described by its developers as the world’s first fully biomimetic AI robot, Moya combines warm synthetic skin, expressive facial reactions, and a walking gait designed to mirror natural human motion.

The unveiling places Moya at the center of a growing global effort to move humanoid robots beyond industrial environments and into settings where appearance, trust, and physical presence matter as much as technical performance.

Designed to Replicate Human Presence

One of Moya’s defining features is its exterior. The robot is covered in a soft synthetic skin that maintains a temperature similar to the human body, addressing a long-standing challenge in humanoid design. Cold, rigid surfaces have historically reinforced the sense of artificiality, often limiting acceptance in social contexts.

Moya’s face is capable of subtle expressions such as smiling, blinking, and winking. These movements are synchronized with head orientation and eye contact, allowing the robot to respond dynamically to people nearby rather than relying on fixed animations. Developers say these design choices are intended to study human comfort and emotional response during prolonged interaction.

Human-Like Walking and Physical AI

Beyond appearance, Moya’s movement system has attracted the most attention. Engineers involved in the project say the robot’s walking pattern achieves approximately 92 percent similarity to human gait, based on comparisons of stride length, balance transitions, and joint motion derived from motion capture data.
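The team's exact similarity metric has not been published, so the sketch below is one plausible way such a percentage could be computed: normalized correlation between human and robot joint-angle trajectories, averaged across joints. The two-joint sinusoidal "gaits" are hypothetical stand-ins for motion-capture data.

```python
import numpy as np

def gait_similarity(human: np.ndarray, robot: np.ndarray) -> float:
    """Mean Pearson correlation across joints, mapped to a 0-100% score.

    Both inputs are (timesteps, joints) arrays of joint angles.
    """
    scores = []
    for h, r in zip(human.T, robot.T):       # one trajectory per joint
        h = (h - h.mean()) / h.std()         # z-score each trajectory
        r = (r - r.mean()) / r.std()
        scores.append(np.mean(h * r))        # Pearson r for this joint
    return 100 * max(0.0, float(np.mean(scores)))

# Hypothetical two-joint gait cycles; the robot lags slightly in phase.
t = np.linspace(0, 4 * np.pi, 400)
human_gait = np.stack([np.sin(t), np.cos(t)], axis=1)
robot_gait = np.stack([np.sin(t - 0.2), np.cos(t - 0.2)], axis=1)

print(f"{gait_similarity(human_gait, robot_gait):.1f}% similar")
```

A metric like this rewards matching timing and shape of motion rather than absolute joint positions, which is consistent with the stride-length and joint-motion comparisons the engineers describe.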

Achieving stable, fluid bipedal walking remains one of the hardest problems in humanoid robotics. Moya relies on compliant joints, real-time balance correction, and AI-driven motion planning to maintain smooth locomotion rather than rigid step cycles. This reflects a broader shift in physical AI toward adaptive control systems that respond continuously to environmental feedback.

Social Robots and the Uncanny Question

Unlike humanoid robots designed primarily for factories or warehouses, Moya is positioned as a research platform for social interaction. Potential applications include customer engagement, education, and studies of companionship and human-robot communication.

The robot’s realism has reignited discussion around the uncanny valley, the phenomenon where near-human machines provoke discomfort rather than trust. Developers acknowledge these concerns and emphasize that Moya is not a commercial product, but a system designed to explore how far biomimetic design can go before human perception shifts.

Moya’s debut also reflects China’s growing investment in humanoid robotics as a strategic technology area. While the robot is unlikely to appear in everyday settings soon, it signals a shift in focus from what humanoid robots can do to how human they should be while doing it.


China Unveils Bolt, Humanoid Robot that Runs at Near-Human Sprint Speeds

Engineers from Zhejiang University and local startups have revealed Bolt, a full-size humanoid robot capable of running at up to 10 meters per second, pushing humanoid locomotion closer to elite human performance.

By Daniel Krauss

A team of engineers from Zhejiang University, working in partnership with Chinese technology startups, has unveiled a humanoid robot designed for one purpose few machines have mastered – speed. Named Bolt after sprinting legend Usain Bolt, the robot can reportedly run at up to 10 meters per second, placing it within striking distance of elite human athletic performance.

The achievement marks a significant milestone for humanoid robotics, a field that has long struggled to balance speed, stability, and energy efficiency. While many humanoid robots can walk, climb stairs, or carry objects, sustained high-speed running has remained an elusive goal.

Designed to Move Like a Human

Bolt stands approximately 1.75 meters tall and weighs around 75 kilograms. Engineers deliberately chose human-like proportions rather than optimizing purely for mechanical efficiency. The goal was to study how closely a robot built at human scale could replicate natural running dynamics without relying on oversized feet, extended limbs, or non-anthropomorphic designs.

During a public demonstration, Bolt raced against a human competitor – the head of startup Mirror Me, one of the project’s collaborators – and ran alongside Wang Hongtao, president of Zhejiang University. The comparison highlighted how closely the robot’s stride length, cadence, and posture now resemble those of a trained runner.

At its top speed, Bolt approaches the pace of Usain Bolt, who famously completed the 100 meters in a world-record 9.58 seconds. While the robot does not yet match that burst performance or endurance, researchers say the gap is narrowing faster than many expected.

Advances in Control and Balance

Behind Bolt’s performance lies a combination of lightweight structural materials, high-torque actuators, and advanced control algorithms. High-speed bipedal running requires precise coordination between balance, force distribution, and real-time adjustment to ground contact – problems that become exponentially harder as speed increases.

According to researchers involved in the project, Bolt relies on predictive motion planning and rapid sensor feedback to maintain stability at high velocity. Rather than reacting after imbalance occurs, the robot anticipates changes in momentum and adjusts its gait accordingly.
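The “anticipate rather than react” idea can be sketched with a toy capture-point controller built on the linear inverted pendulum model, a standard tool in legged-robot control. Everything here – the model, the frequency OMEGA, the step timing – is an illustrative assumption, not the Bolt team’s published controller:

```python
import math

# Toy predictive balance controller for a running biped (illustrative only;
# Bolt's actual control stack is not public). The next foothold is chosen
# from the *predicted* state one step ahead, rather than by reacting after
# imbalance has already occurred.

OMEGA = 3.1  # linear-inverted-pendulum natural frequency (1/s), assumed value

def predict_com(x, v, horizon):
    """Predict center-of-mass offset and velocity `horizon` seconds ahead,
    using the linear inverted pendulum model about the current foothold."""
    c, s = math.cosh(OMEGA * horizon), math.sinh(OMEGA * horizon)
    return x * c + (v / OMEGA) * s, x * OMEGA * s + v * c

def next_foothold(x, v, step_time):
    """Place the next foot at the capture point of the *predicted* state,
    cancelling the divergence before it happens."""
    xp, vp = predict_com(x, v, step_time)
    return xp + vp / OMEGA

# Center of mass 2 cm ahead of the stance foot, moving at 3 m/s, 0.3 s steps:
print(next_foothold(0.02, 3.0, 0.3))
```

Because the pendulum dynamics diverge exponentially, a controller that waits for the error to appear is always late; predicting the state at touchdown and placing the foot there is what makes high-speed gaits stable.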

This approach mirrors broader trends in robotics research, where machine learning and model-based control are increasingly combined to handle dynamic, unpredictable motion. Similar techniques are being explored in legged robots developed in the United States and Europe, though few humanoids have demonstrated comparable sprinting capability.

Why Speed Matters in Humanoid Robotics

High-speed locomotion is not just a spectacle. Researchers argue that speed is a proxy for overall system performance, touching everything from actuator power density and thermal management to control robustness and mechanical durability.

A humanoid that can run reliably is also likely capable of navigating uneven terrain, recovering from slips, and operating in time-sensitive environments. Potential applications include disaster response, industrial inspection, and security scenarios where rapid movement is critical.

At the same time, the Bolt project reflects China’s growing investment in humanoid robotics as a strategic technology area. Universities, startups, and government-backed research programs are increasingly aligned around building full-stack robotic systems that combine hardware, AI, and large-scale manufacturing.

A Signal of What Comes Next

The developers caution that Bolt remains a research platform rather than a commercial product. Sustaining top speed over long distances, improving energy efficiency, and ensuring safe operation around humans remain open challenges.

Still, the demonstration suggests that humanoid robots are entering a new phase – one where athletic capabilities once thought exclusive to biology are becoming engineering problems rather than physical impossibilities.

As global competition in humanoid robotics accelerates, Bolt’s sprint may be remembered less for how fast it ran and more for what it signaled: that the limits of human-like robotic motion are being rewritten.


Starbucks Bets on Robots and AI to Brew a Customer Comeback

Starbucks is investing heavily in artificial intelligence and automation to speed service, support baristas, and restore growth – betting that technology can strengthen, not replace, human connection.

By Laura Bennett

As sales rebound after a two-year slump, Starbucks is betting that artificial intelligence and automation can improve efficiency without sacrificing the human touch. Photo: Starbucks

At some Starbucks drive-throughs, customers may still hear a friendly greeting, but the voice taking their order is no longer always human. Artificial intelligence is increasingly stepping in, quietly reshaping how the coffee giant operates as it works to regain momentum after years of uneven sales.

The shift is part of a broader push by Starbucks to integrate AI and automation across its business. The company has invested hundreds of millions of dollars in technology designed to speed up service, reduce operational friction, and support baristas with behind-the-scenes tasks. Early results suggest the approach is beginning to pay off. Starbucks recently posted its first U.S. same-store sales increase in two years, a notable turnaround in a market that generates roughly 70 percent of its revenue.

Technology is now embedded throughout the Starbucks workflow. Inside stores, baristas can rely on AI-powered virtual assistants to recall drink recipes, manage shift schedules, and answer operational questions in real time.

In back rooms, automated scanning systems handle inventory checks, replacing one of retail’s most time-consuming manual tasks and helping address out-of-stock issues that have frustrated customers. At select drive-through locations, AI systems are being tested to process orders, freeing employees to focus on drink preparation and customer interaction.

The company frames these tools not as replacements for workers, but as support systems. In a recent statement, Starbucks described AI as a way to “support the moments that matter,” arguing that technology should reduce distractions and allow staff to focus on human connection. New features under development include AI chat tools that help customers discover drinks based on preferences or mood, as well as scheduling options that allow orders to be timed in advance to minimize waiting.

The investment has not come without trade-offs. Starbucks’ spending spree – which also includes about $500 million to increase staffing levels – has weighed on margins, contributing to investor concerns and recent share price volatility. Management has pledged to find $2 billion in cost savings over the next three years, a goal that makes automation central to its long-term profitability.

Leading the effort is Brian Niccol, who took over in 2024 amid rising prices, labor tensions, and intensifying competition. Since then, he has paused price hikes, simplified the menu, set a four-minute service target, and cut thousands of corporate roles. Underperforming stores have been closed, and Starbucks has reduced its exposure to China by selling a large stake in the business there.

Despite the focus on technology, Niccol has repeatedly emphasized that Starbucks lost its way by prioritizing efficiency over experience. His turnaround plan includes a renewed emphasis on handwritten names on cups, more comfortable seating, ceramic mugs, and store redesigns aimed at restoring a neighborhood coffeehouse feel. Each store refresh can cost up to $150,000 and will take several years to complete.

To Niccol, the combination is not contradictory. Automation handles repetitive and invisible work, while baristas are meant to deliver warmth and speed at the counter. In that sense, Starbucks’ AI push mirrors a wider trend across retail and food service, where technology is increasingly used to stabilize operations and enhance consistency rather than replace frontline workers.

Whether robots and algorithms can help Starbucks fully recapture its cultural relevance remains uncertain. But for now, the company is betting that a carefully calibrated blend of automation and human touch can turn improved sales into lasting recovery.

Overland AI Raises $100M as Ground Autonomy Moves From Experiment to Battlefield

Overland AI has secured $100 million in new funding as militaries accelerate the shift from testing autonomous ground vehicles to deploying them directly with operational units.

By Laura Bennett

Overland AI has raised $100 million in new funding, underscoring how quickly autonomous ground systems are moving from controlled trials into everyday military operations. The Seattle-based company said the capital will be used to meet rising demand for its ULTRA vehicle as armed forces integrate ground autonomy directly into frontline units.

The round reflects a broader shift in defense technology. What was once treated as experimental robotics is increasingly viewed as operational infrastructure. Militaries are no longer asking whether autonomous ground vehicles can work, but how fast they can be deployed, trained, and trusted in complex environments.

“Demand for ground autonomy has moved decisively from experimentation to operational integration,” said Stephanie Bonk, Overland AI’s co-founder and president. She said the company is scaling alongside military units, training warfighters directly and refining its systems through continuous field feedback.

From Research to Real-World Operations

Founded in 2022, Overland AI builds on more than a decade of research in robotics and machine learning by its founding team. That research-first foundation helped the company tackle one of the hardest problems in autonomy: enabling individual vehicles to navigate unpredictable terrain without constant human oversight.

In 2025, Overland AI completed the DARPA RACER program (Robotic Autonomy in Complex Environments with Resiliency), a multi-year effort focused on proving that autonomous vehicles can operate reliably under battlefield conditions. The company said lessons from RACER directly informed its current platform, which is now being used in operational settings.

Overland AI’s systems are already supporting missions ranging from intelligence, surveillance, and reconnaissance to contested logistics, counter-drone operations, and resupply in hostile terrain. The company is working with multiple branches of the U.S. military, including Army, Marine Corps, and Special Operations units, signaling growing institutional confidence in ground autonomy.

Making Dangerous Missions Safer

One of the most significant applications of Overland AI’s technology is in breaching operations, among the most dangerous tasks in ground combat. Breaching requires forces to clear obstacles such as minefields, wire, or fortified barriers, often under enemy fire.

By integrating autonomous vehicles into these missions, Overland AI aims to remove combat engineers from the point of greatest risk. In collaboration with Army units, including engineer brigades, the company has demonstrated human-machine formations that allow robots to operate at the front of a breach while soldiers coordinate from safer positions.

Company executives argue that solving autonomy at the single-vehicle level was essential before attempting coordinated fleets. “We were right to solve the hardest problem first,” said CEO Byron Boots, pointing to vehicle-level intelligence as the foundation for future multi-vehicle collaboration.

Investors Bet on Operational Autonomy

The funding round was led by 8VC, with continued backing from Point72 Ventures, Ascend Venture Capital, Shasta Ventures, and Overmatch Ventures. New investors include Valor Equity Partners and StepStone Group, alongside a $20 million venture debt facility from TriplePoint Capital.

For investors, Overland AI represents a category shift in defense technology: autonomy that is not confined to labs or test ranges, but embedded directly into military doctrine and daily operations. The company has also expanded beyond defense, partnering with civilian agencies such as California’s wildfire response authorities, highlighting potential dual-use applications.

As global militaries rethink how ground forces operate in increasingly contested environments, Overland AI’s trajectory suggests that autonomous vehicles are becoming less of a future concept and more of a present necessity. The question now is not whether ground autonomy belongs on the battlefield, but how quickly it becomes standard equipment.


Duke Engineers Create Lego-Like Cubes with Programmable, ‘Living-Like’ Mechanics

Engineers at Duke University have created modular building blocks whose stiffness, damping, and motion can be reprogrammed on demand – without changing their shape.

By Daniel Krauss

Lego-like modular blocks developed at Duke University can switch between soft and rigid behaviors, allowing engineers to reprogram motion and mechanical response without altering form. Photo: FORTYTWO / Unsplash

Mechanical engineers at Duke University have developed a new class of programmable materials that blur the line between passive structures and living systems. The work, which was published in Science Advances, demonstrates solid building blocks whose mechanical behavior can be rewritten on demand – allowing the same structure to behave like soft rubber, rigid plastic, or something in between without being rebuilt or reshaped.

At the center of the research are Lego-like cubes, each composed of 27 internal cells. Every cell contains a gallium-iron composite that can switch between solid and liquid states at room temperature. By selectively heating individual cells with small electrical currents, researchers can liquefy precise regions inside the block, effectively encoding stiffness, damping, and movement into an otherwise rigid object.

The geometry never changes. Only the internal state does.

In early demonstrations, the team assembled multiple cubes into beams and columns. Simply altering which internal cells were liquefied caused the same structure to bend, vibrate, or resist motion in dramatically different ways. Mechanical behavior was no longer fixed at the time of manufacture but became a variable that could be adjusted repeatedly after assembly.
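A rough intuition for how liquefying cells rewrites stiffness without changing shape: treat a column of cells as springs loaded in series. The stiffness values and the series arrangement below are assumptions chosen for illustration, not the Duke group’s published model:

```python
# Toy "programmable stiffness" model (illustrative, not the Duke group's code):
# a stack of cells loaded in series, each either solid or liquefied.
# In a series chain the softest element dominates the compliance, so
# liquefying even one cell changes how the whole structure deforms.

SOLID_K = 1000.0   # stiffness of a solid gallium-iron cell (N/m), assumed
LIQUID_K = 10.0    # stiffness of a liquefied cell (N/m), assumed

def effective_stiffness(cells):
    """Springs in series: compliances (1/k) add, then invert."""
    compliance = sum(1.0 / (LIQUID_K if liquefied else SOLID_K)
                     for liquefied in cells)
    return 1.0 / compliance

all_solid = [False] * 9           # a 9-cell column, fully solid
one_soft = [True] + [False] * 8   # same geometry, one cell liquefied

print(effective_stiffness(all_solid))
print(effective_stiffness(one_soft))
```

In this toy model the fully solid column is more than ten times stiffer than the same column with a single liquefied cell – the geometry is identical, only the internal state differs.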

One of the most striking experiments took place underwater. Researchers connected ten cubes into a straight column and attached it to a motor, forming a programmable tail for a robotic fish. With the same motor input, different internal configurations caused the fish to swim along sharply different paths. Motion was altered not by changing motors or control software, but by reprogramming the material itself.

“We want to make materials that are alive,” said Yun Bai, the study’s first author and a PhD student at Duke. “Traditional manufacturing lets you print a material with a certain stiffness, but to change it you have to start over. We wanted something closer to human muscle – a material that can adjust its mechanical response in real time.”

Unlike shape-shifting systems, the Duke approach does not rely on changing form. Instead, it rewrites how forces propagate through a structure. In two-dimensional tests, thin sheets made from the same composite demonstrated a wide range of stiffness and damping behaviors while maintaining identical shapes. In performance tests, the sheets rivaled or exceeded commercially available materials across multiple mechanical metrics.

The modular design adds another layer of flexibility. Each cube can be attached or removed like a building block, allowing engineers to assemble larger systems with highly customized mechanical behavior. Once a configuration has been tested, freezing the structure at zero degrees Celsius returns all internal cells to a solid state, effectively resetting the system for reprogramming.

“This gives us a way to build three-dimensional structures whose mechanical properties are not fixed,” Bai said. “You can test one configuration, reset it, and try another – again and again.”

The researchers see applications far beyond robotics. By adjusting the metal composition, the freezing and melting points could be tuned for environments such as the human body. Miniaturized versions could one day navigate blood vessels, form adaptive medical implants, or create electronics that physically respond to changing conditions.

“Our long-term goal is to construct larger systems using these composite materials,” said Xiaoyue Ni, an assistant professor of mechanical engineering and materials science at Duke. “We want to enable robots and machines to adapt mechanically to different tasks and environments without redesigning the entire system.”

The work suggests a future where materials are no longer passive components, but active participants – structures that can be programmed, reset, and adapted as easily as software.

Waymo Raises $16 Billion to Scale Robotaxi Operations Globally

Waymo has secured $16 billion in new funding to expand its autonomous ride-hailing service, accelerate fleet deployment, and support international launches in Europe and Asia.

By Daniel Krauss

Waymo is raising fresh capital to expand its autonomous robotaxi fleet, scale operations, and push its self-driving service into new global markets. Photo: Waymo

Waymo has raised $16 billion in a major new investment round, signaling renewed momentum behind autonomous ride-hailing as the company moves to scale operations beyond its early U.S. strongholds. The funding positions Waymo to significantly expand its robotaxi fleet, invest in infrastructure, and accelerate launches in international markets including Europe and Asia.

The round underscores growing confidence that autonomous driving is shifting from long-running pilot programs into a capital-intensive phase of real-world deployment. Waymo said the funding will support vehicle procurement, fleet operations, mapping, safety validation, and continued development of its autonomous driving system.

Waymo is widely regarded as the most mature self-driving operator in the market, with fully driverless commercial services already running in Phoenix, San Francisco, and Los Angeles. Riders in these cities can summon autonomous vehicles without a safety driver, marking a milestone that most competitors have yet to reach.

Scaling Fleets and Expanding Globally

A central goal of the new capital is rapid fleet expansion. Waymo plans to add thousands of vehicles over the coming years, increasing ride availability and reducing wait times in existing markets. The company is also preparing for launches in new U.S. cities while laying the groundwork for international deployments.

Executives have pointed to London and Tokyo as priority markets, reflecting a broader strategy to enter dense, regulation-heavy cities where ride-hailing demand is high and public transit integration is critical. These expansions will require close coordination with regulators, city governments, and transportation agencies, as well as localized mapping and safety validation.

Waymo’s approach differs from earlier autonomy hype cycles by focusing on controlled geographic rollouts rather than nationwide launches. The company has emphasized that scaling safely, even if slower, is essential to long-term viability and public trust.

From Moonshot to Business Unit

Originally founded as a self-driving research project inside Google, Waymo has evolved into a core subsidiary of Alphabet. The latest fundraising reflects its transition from experimental technology to operational transportation business, with unit economics now under closer scrutiny.

Waymo generates revenue through ride-hailing and commercial partnerships, but profitability remains a long-term goal. Autonomous fleets require heavy upfront investment in vehicles, sensors, compute, and operations, making scale essential to improving margins. The company has said the new funding gives it the runway needed to reach that scale without compromising safety standards.

Industry analysts note that Waymo’s progress has helped reset expectations across the autonomous vehicle sector, which has seen several high-profile retrenchments and shutdowns over the past two years.

Competitive Landscape and Investor Confidence

While rivals such as Cruise, Zoox, and various Chinese autonomous driving firms continue development, Waymo remains the clear leader in fully driverless commercial operations. Its ability to raise $16 billion in a challenging capital environment highlights investor belief that autonomy, while slower than once promised, still represents a transformative shift in transportation.

The funding also reflects broader interest in physical AI systems that operate reliably in the real world, a theme gaining traction across robotics, logistics, and industrial automation.

Waymo said it will continue to prioritize safety, transparency, and incremental expansion as it deploys the new capital. For the autonomous vehicle industry, the round marks one of the clearest signs yet that robotaxis are moving from experimental novelty toward global infrastructure.


First Patient Enrolls in Clinical Trial for Wandercraft’s Atalante X Exoskeleton

Wandercraft has enrolled the first patient in a clinical trial of its Atalante X exoskeleton, marking a key step toward broader medical deployment of autonomous walking systems.

By Laura Bennett

Wandercraft has enrolled the first patient in a clinical trial evaluating its Atalante X exoskeleton, a milestone that brings the company closer to expanding the use of autonomous robotic walking systems in rehabilitation settings. The trial is designed to assess safety, performance, and therapeutic outcomes for patients undergoing gait rehabilitation.

The enrollment represents a significant step for Atalante X, which is intended for use in hospitals and rehabilitation centers. Unlike many powered exoskeletons that require crutches or walkers, Atalante X is designed to operate hands-free, allowing patients to walk, turn, and stop autonomously under clinical supervision.

Wandercraft’s technology targets patients with neurological or mobility impairments who require intensive gait therapy, an area where staffing constraints and physical demands often limit treatment frequency and duration.

Autonomous Walking in Rehabilitation

Atalante X builds on Wandercraft’s earlier Atalante platform, which has been used in rehabilitation centers across Europe. The system uses a combination of onboard sensors, real-time control algorithms, and dynamic balance technology to enable stable walking without external support.

The exoskeleton is designed to adapt to each patient’s gait and posture, allowing clinicians to customize therapy sessions based on individual needs. By enabling hands-free movement, the system aims to promote more natural walking patterns and reduce the physical burden on therapists, who often must support patients manually during sessions.

In the clinical trial, researchers will evaluate how patients interact with the system over time, including measures of comfort, safety, and functional improvement. Data collected during the trial will help inform regulatory submissions and guide future product development.

From Research to Clinical Validation

Clinical validation is a critical step for robotic rehabilitation technologies, where safety standards are high and real-world performance can differ significantly from laboratory demonstrations. Wandercraft’s trial will assess Atalante X in controlled clinical environments, with close monitoring by medical professionals.

The company has emphasized that the goal is not to replace therapists, but to augment rehabilitation teams by enabling longer and more consistent therapy sessions. Autonomous exoskeletons can allow patients to practice walking repeatedly, which is often essential for neurological recovery.

Wandercraft’s approach reflects a broader trend in medical robotics toward systems that combine advanced autonomy with clinician oversight, balancing efficiency with patient safety.

Expanding the Role of Physical AI in Healthcare

The Atalante X trial comes amid growing interest in physical AI systems that can operate safely in close proximity to humans. In healthcare, robotics companies are increasingly focused on technologies that address labor shortages while improving patient outcomes.

By enrolling its first patient, Wandercraft moves Atalante X from development into a formal evaluation phase that could pave the way for wider adoption. Successful trial results could support expanded use in rehabilitation centers and potentially open the door to additional indications.

As healthcare systems face rising demand for rehabilitation services, autonomous exoskeletons like Atalante X are being positioned as tools that can help scale therapy without compromising quality. The outcome of Wandercraft’s clinical trial will be closely watched as an indicator of how quickly such systems can move from specialized use to standard care.


New York Robotics Formally Launches as the City Emerges as Global Robotics Hub

New York Robotics has formally launched as the Tri-State region surpasses 160 robotics startups, signaling New York’s rise as a major global center for robotics and physical AI.

By Laura Bennett

Robotics startups and research teams gather in New York City as the region rapidly expands into a global hub for robotics and physical AI innovation. Photo: New York Robotics

New York Robotics has formally launched amid a surge in robotics investment, startup formation, and talent concentration across the New York Tri-State region. The announcement marks a turning point for New York’s robotics sector, which now counts more than 160 robotics startups across New York, New Jersey, and Connecticut, with nearly 100 based in New York City alone.

“Our vision is to leverage New York’s central role in the global economy to build a leading robotics hub,” said Jacob Hennessey-Rubin, founding board member and executive director of New York Robotics. “This is a place where founders, researchers, enterprises, and investors can collaborate to shape the future of robotics, while positioning the sector as a serious investment category on Wall Street.”

The milestone signals New York’s emergence as a serious global contender in robotics and embodied AI, joining established hubs such as Boston, Silicon Valley, Pittsburgh, Munich, and Zurich. Long viewed as strong in finance, media, and enterprise software, New York is increasingly becoming a center for robotics commercialization as the technology moves from research into real-world deployment.

A Dense and Expanding Robotics Network

New York Robotics was formed to address a long-standing gap in the region’s innovation landscape. While the city has deep pools of capital, customers, and technical talent, robotics activity historically developed in silos. NYR’s mission is to unify startups, investors, enterprises, researchers, and government organizations into a coordinated ecosystem capable of supporting companies from early research through large-scale deployment.

Over the past two years, New York Robotics has quietly assembled one of the most comprehensive robotics networks in the United States. The organization now engages more than 450 robotics startups worldwide, including over 160 in the Tri-State region, alongside more than 80 corporations, 20 academic institutions, 40 research labs, 300 venture capital firms, and dozens of domestic and international government organizations.

Founding members include organizations such as J.P. Morgan, New York University, AlleyCorp, EisnerAmper, and Cybernetix Ventures. The group has also hosted or co-hosted more than 20 ecosystem events, including the first dedicated robotics programming at NY TechWeek, helping elevate robotics as a visible and strategic sector within the city’s broader technology economy.

NYR’s official launch coincided with its sponsorship of AlleyCorp’s inaugural Deep Tech NY conference, underscoring the growing convergence of robotics, capital, and enterprise demand in the region.

Capital, Talent, and Commercial Pull

Unlike many robotics hubs driven primarily by academic research, New York’s ecosystem is shaped by proximity to enterprise customers in finance, healthcare, logistics, construction, and real estate. This commercial pull is attracting startups focused on deploying robots in real operating environments rather than remaining confined to labs.

“We see New York Robotics as a kind of exchange,” said Randy Howie, founding board member and managing partner of New York Robotics. “It’s a platform where startups, enterprises, investors, academia, and government can connect efficiently and translate robotics innovation into broad economic impact.”

Industry leaders echo that view. Founders point to New York’s ability to attract global engineering talent while offering access to industrial infrastructure in surrounding areas such as Long Island and New Jersey, where robotics companies can scale manufacturing and testing.

Mapping the Ecosystem

As part of its platform, New York Robotics recently released a private beta of the NYR Index, an ecosystem intelligence tool designed to map startups, investors, labs, and enterprise participants. The tool is intended to give partners visibility into deal flow, talent movement, and sector trends as robotics investment accelerates.

NYR has also partnered with organizations such as C10 Labs, an operator selected by NYCEDC for the NYC AI Nexus, to support applied AI ventures across robotics, advanced manufacturing, healthcare, and sustainability.

With robotics increasingly viewed as essential infrastructure for addressing labor shortages, productivity challenges, and demographic shifts, New York Robotics’ formal launch reflects a broader reality: New York is no longer just a consumer of robotics technology. It is becoming a place where the next generation of robots is built, funded, and deployed.