Tennis Playing Humanoid Robot Learns from Imperfect Data and Beats Its Creator

Researchers from Tsinghua University and Galbot have developed a humanoid robot that learned to play tennis using imperfect human motion data. Within days of deployment, the robot improved enough to outperform its human creator.

By Daniel Krauss | Edited by Kseniia Klichova
A Unitree G1 humanoid robot trained through the LATENT learning system returns a tennis ball during testing, demonstrating how robots can learn complex athletic motion from imperfect human data. Photo: Tsinghua University / Galbot

The ability to teach robots complex physical skills has long depended on carefully curated training data and highly controlled demonstrations. A new research project suggests that requirement may be loosening.

Researchers from Tsinghua University and robotics company Galbot have demonstrated a humanoid robot capable of learning tennis using imperfect human motion clips rather than idealized training data. The system, called LATENT, enabled a Unitree G1 humanoid robot to improve rapidly in real-world play – eventually defeating the researcher who trained it.

The project highlights a growing shift in robotics toward learning systems that can extract usable behaviors from messy, incomplete demonstrations. Instead of relying on precise instruction, the robot learns how to combine imperfect examples into effective motion strategies.

According to project lead Zhikai Zhang, the robot’s progress was strikingly fast. On its first day of real-world deployment it failed to return a single serve. By the final day of testing, Zhang reported that he could no longer beat the robot in rallies.

Teaching Robots with Imperfect Demonstrations

Traditional robotic skill learning typically requires large datasets of clean, carefully labeled demonstrations. Capturing these datasets can be expensive and time-consuming, particularly for tasks involving dynamic whole-body motion such as sports.

The LATENT system approaches the problem differently. Instead of relying on perfect motion capture data, the system trains on fragmented human tennis clips that include errors, inconsistencies, and incomplete movements.

From these clips, the model constructs what the researchers call a latent action space – a structured representation of primitive movements extracted from imperfect examples. A higher-level AI policy then functions as a coordinating controller, selecting and refining those primitive actions to produce effective gameplay behavior.
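The hierarchy described above can be sketched in miniature: a library of motion primitives stands in for the latent action space, and a high-level policy scores them against the current observation before a small residual refines the chosen one. This is an illustrative sketch, not the published LATENT implementation; the sizes, the linear scorer, and all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent action space: K primitive motions, each a short
# trajectory of joint-position targets distilled from noisy human clips.
K, HORIZON, N_JOINTS = 8, 10, 23           # illustrative sizes
primitives = rng.normal(size=(K, HORIZON, N_JOINTS))

def select_primitive(obs: np.ndarray, weights: np.ndarray) -> int:
    """High-level policy: score each primitive against the observation.
    A real system would use a trained network; a linear scorer stands in."""
    return int(np.argmax(weights @ obs))

def refine(primitive_idx: int, residual: np.ndarray) -> np.ndarray:
    """Refinement step: nudge the chosen primitive with a small residual."""
    return primitives[primitive_idx] + residual

obs = rng.normal(size=16)                  # e.g. ball state + robot state
weights = rng.normal(size=(K, 16))
idx = select_primitive(obs, weights)
trajectory = refine(idx, 0.05 * rng.normal(size=(HORIZON, N_JOINTS)))
print(trajectory.shape)                    # (10, 23): joint targets per step
```

The key design point is the division of labor: the primitives absorb the messiness of the demonstrations, while the policy only has to learn which primitive to play and how to adjust it.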

The training process occurs first in simulation, where the robot practices thousands of interactions without risk. Once the controller stabilizes, the policy is transferred to a physical robot using sim-to-real techniques, allowing the learned behaviors to operate on the Unitree G1 humanoid platform.
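One common ingredient of sim-to-real transfer is domain randomization: each simulated episode samples slightly different physics so the learned policy does not overfit the simulator's idealizations. The article does not detail LATENT's transfer recipe, so the parameters and ranges below are purely illustrative.

```python
import random

def sample_sim_params(rng: random.Random) -> dict:
    """Sample randomized physics for one simulated tennis episode.
    Parameter names and ranges are hypothetical examples."""
    return {
        "ball_restitution": rng.uniform(0.70, 0.90),  # bounciness varies
        "floor_friction":   rng.uniform(0.60, 1.00),  # court surface varies
        "motor_latency_ms": rng.uniform(5.0, 25.0),   # actuation delay varies
        "link_mass_scale":  rng.uniform(0.95, 1.05),  # model mismatch
    }

rng = random.Random(0)
episodes = [sample_sim_params(rng) for _ in range(3)]
for params in episodes:
    print(sorted(params))  # each episode gets its own physics draw
```

A policy that succeeds across all of these draws is more likely to cope with the one set of physics the real robot actually encounters.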

This approach allows the robot to learn usable skills even when the input demonstrations are flawed, something that more traditional robotics pipelines struggle to accommodate.

Why Messy Data May Be the Future of Robot Training

While a tennis-playing robot may look like a demonstration project, the underlying method addresses a broader challenge in robotics: scaling physical skill acquisition.

Most robots today still depend on highly structured environments and carefully engineered training pipelines. In real-world settings such as warehouses, construction sites, or disaster response zones, collecting perfect demonstrations is rarely practical.

Learning from imperfect data could significantly lower the barrier to training robots for complex tasks. Instead of requiring a flawless example every time, robots could observe ordinary human activity and gradually assemble functional behaviors.

This shift mirrors trends already underway in large-scale AI systems, where models increasingly learn from vast amounts of noisy real-world data rather than tightly curated datasets.

For robotics, the implications are particularly significant because physical tasks often involve unpredictable conditions, subtle motor control, and continuous feedback from the environment. Systems that can tolerate imperfect training signals may adapt more quickly to these realities.

A Testbed for Embodied AI

Sports have become a useful proving ground for embodied AI research because they combine perception, motion planning, balance control, and fast decision making.

Tennis, in particular, requires precise timing, whole-body coordination, and real-time reaction to an unpredictable opponent. Successfully sustaining rallies demonstrates that a robot can integrate visual perception with dynamic locomotion and arm control.

In this case, the tennis court served as a compact test environment for evaluating how well the LATENT system could convert imperfect demonstrations into coordinated action.

The research team has made the project details and code publicly available, allowing other researchers to replicate and extend the approach.

If the underlying method proves scalable, it could influence how robots are trained for tasks far beyond sports – from industrial manipulation to collaborative human-robot work. Instead of waiting for perfect datasets, robots may increasingly learn from the same imperfect movements humans produce every day.


Foxconn Turns to Physical AI Robotics as AI Server Boom Reshapes Manufacturing

Foxconn is expanding its robotics strategy as demand for AI infrastructure surges. The company is working with partners including Skild AI, ABB, and NVIDIA to deploy intelligent robots in electronics assembly.

By Daniel Krauss | Edited by Kseniia Klichova
Robotic assembly systems operate on an electronics production line at Foxconn, where the company is exploring AI-powered robotics platforms to support the growing demand for AI hardware. Photo: Foxconn

The rapid expansion of artificial intelligence infrastructure is reshaping the world’s largest electronics manufacturing operations, and Foxconn is increasingly turning to robotics to keep pace.

The Taiwanese manufacturing giant, formally known as Hon Hai Precision Industry, said strong demand for AI servers is expected to drive growth in 2026, even as geopolitical tensions and supply chain pressures continue to affect global technology markets. At the same time, the company is expanding partnerships aimed at introducing AI-driven robotics into its production lines.

Foxconn has traditionally been known for assembling consumer electronics, most notably Apple’s iPhone. But the company has spent the past several years shifting toward higher-value sectors including AI infrastructure, electric vehicles, and advanced manufacturing automation.

Chairman Young Liu told analysts that demand for AI servers remains strong and is expected to accelerate further. AI-related hardware has become one of the fastest-growing segments of the company’s business, reflecting the global surge in spending on data centers and machine learning infrastructure.

Robotics Moves into Electronics Assembly

As manufacturing volumes for AI hardware grow, Foxconn is experimenting with new robotics systems designed to increase precision and throughput in complex assembly tasks.

The company is piloting an AI robotics platform developed with ABB and NVIDIA, aimed at bringing advanced perception and decision-making capabilities to industrial robots working on electronics assembly lines. The system uses simulation tools and digital twin technology to model factory operations before deploying robots on the production floor.

Another initiative involves integrating a generalized robotics intelligence system developed by Skild AI. The technology is designed as a shared “robot brain” that can be deployed across different types of robots and tasks, allowing machines to adapt to multiple workflows without extensive reprogramming.

Foxconn plans to use the system to support electronics assembly processes tied to its AI hardware production, particularly as the complexity of advanced computing components continues to rise.

The push reflects a broader shift in robotics, where manufacturers are moving away from narrowly programmed automation toward AI-enabled systems that can adapt to changing production environments.

AI Hardware Demand Reshapes Foxconn’s Business

Foxconn’s move into robotics coincides with a major change in its revenue mix driven by AI infrastructure.

Cloud and networking products, which include AI servers, now represent a significantly larger share of the company’s business than in previous years. The segment accounted for roughly 40 percent of revenue in 2025, up from about 30 percent the year before.

The growth comes as technology companies worldwide increase spending on computing power required to train and operate large AI models. Foxconn is one of the key manufacturers producing servers used in these systems, including hardware built for NVIDIA’s AI platforms.

The company reported net profit of NT$189.4 billion in 2025, a 24 percent increase from the previous year, with total revenue reaching NT$8.1 trillion.

At the same time, executives acknowledged that the broader environment remains uncertain. Tariffs, geopolitical tensions, and supply chain disruptions continue to affect global technology manufacturing. Rising energy prices linked to international conflicts have also introduced cost pressures across logistics and industrial operations.

Despite those challenges, Liu said the company expects strong growth in AI server shipments, forecasting high double-digit quarter-on-quarter increases in AI rack demand early in 2026.

For Foxconn, the combination of AI infrastructure demand and robotics deployment signals a strategic shift in how electronics manufacturing will evolve. As factories become more automated and AI-driven, manufacturers may increasingly rely on intelligent robotic systems not just for efficiency but to manage the growing complexity of advanced computing hardware.

RoboForce Raises $52 Million to Deploy Physical AI Robots for Industrial Labor

RoboForce has raised $52 million in a round led by YZi Labs to expand deployment of its TITAN physical AI robots. The company is targeting labor shortages across sectors including solar energy, logistics, mining, and data center construction.

By Rachel Whitman | Edited by Kseniia Klichova
RoboForce’s TITAN industrial robot is designed for demanding field environments such as solar construction and logistics infrastructure, where companies face growing labor shortages. Photo: RoboForce

A new robotics startup focused on industrial labor automation has raised $52 million to accelerate deployment of physical AI systems designed for some of the most demanding jobs in modern infrastructure.

Silicon Valley-based RoboForce announced the funding round led by YZi Labs, with additional backing from investors including technology entrepreneurs and institutional partners. The company is developing a full-stack robotics platform aimed at replacing or augmenting human labor in sectors such as renewable energy construction, logistics, mining, and data center development.

The investment signals growing investor interest in what many industry leaders describe as the next phase of artificial intelligence: machines capable of operating in the physical world rather than purely digital environments.

RoboForce’s flagship system, known as TITAN, is designed to work in environments where heat, repetition, and safety risks make human labor increasingly difficult to sustain. The company says it has already received letters of intent representing demand for more than 11,000 robots as it transitions from pilot deployments to larger-scale production.

Automation for the Hardest Industrial Jobs

The company’s founding thesis emerged from firsthand observation of labor-intensive industrial work.

Co-founder and CEO Leo Ma, who previously worked on autonomous systems and mobility technologies, has described visiting numerous industrial sites where the same challenge repeatedly appeared: physically demanding jobs that were difficult to staff consistently.

Solar energy construction offers a clear example. Utility-scale solar installations require workers to secure millions of panels across large outdoor sites, often in extreme heat. In the United States alone, labor shortages contributed to delays affecting tens of gigawatts of solar capacity in recent years.

Similar gaps exist across logistics hubs, mining operations, and infrastructure construction. These jobs require endurance, precision, and safety compliance, but often struggle to attract or retain workers.

RoboForce is positioning its robots as a solution to this structural workforce gap. TITAN is designed for millimeter-level precision and sustained operation in harsh environments, allowing it to perform tasks such as assembly, installation, and materials handling in large industrial projects.

Building a Physical AI Data Flywheel

Beyond the hardware itself, the company’s strategy centers on what it calls a “physical AI data flywheel”.

Each deployed robot collects operational data from real-world environments. That data feeds back into RoboForce’s foundation model, allowing the system to improve its capabilities over time and adapt to new industrial tasks.

The concept mirrors trends in autonomous vehicles and large-scale AI systems, where real-world data becomes a key competitive advantage. The more robots operating in the field, the faster the learning cycle accelerates.

RoboForce is developing its platform in collaboration with NVIDIA’s robotics ecosystem. Its systems use NVIDIA Jetson Thor for edge computing while relying on Isaac simulation tools, Isaac Lab training frameworks, and Cosmos world models to train robotic behaviors before deploying them in physical environments.

The approach allows robots to practice tasks in simulation and refine them with real-world feedback, narrowing the gap between experimental demonstrations and production deployment.

The company’s visibility increased when NVIDIA CEO Jensen Huang highlighted RoboForce’s technology during a keynote presentation at GTC, framing AI-powered robotics as a key driver of a broader industrial transformation.

Investors Bet on Physical AI Infrastructure

For YZi Labs, which manages more than $10 billion in assets, the investment reflects a growing conviction that robotics will become a central layer of future infrastructure.

Ella Zhang, managing partner and head of the firm, said the investment aligns with the belief that the next wave of AI innovation will extend beyond digital applications into machines that interact directly with the physical world.

Zhang will join RoboForce’s board as part of the investment.

The company was founded in 2023 by engineers and researchers from organizations including Carnegie Mellon University, the University of Michigan, Amazon Robotics, Google, Waymo, Tesla Robotics, and ABB.

The funding will be used to expand the company’s robot foundation models, scale manufacturing of its robotic systems, and convert existing pilot programs into full production deployments.

For the broader robotics sector, the deal reflects a wider shift in how automation is framed. Rather than focusing solely on factory efficiency, a growing number of companies are targeting labor-intensive sectors where workforce shortages threaten economic growth.

If those systems prove reliable at scale, robots may increasingly become a structural component of infrastructure development itself – helping build the energy systems, data centers, and logistics networks that underpin the global economy.

NVIDIA Expands Physical AI Ecosystem to Accelerate Real World Robotics Deployment

NVIDIA is expanding its robotics platform with new world models, simulation frameworks, and partnerships with leading robot manufacturers. The move aims to accelerate the deployment of AI-powered robots across manufacturing, logistics, healthcare, and humanoid robotics.

By Laura Bennett | Edited by Kseniia Klichova
NVIDIA CEO Jensen Huang presents new physical AI infrastructure at GTC, highlighting partnerships with global robotics manufacturers building next-generation intelligent machines. Photo: NVIDIA

The race to bring artificial intelligence into the physical world is accelerating, and NVIDIA is positioning itself at the center of the emerging robotics stack.

At its recent announcements surrounding the GTC conference, the company unveiled a broader physical AI platform combining simulation software, world models, and robotics foundation models designed to support the development and deployment of intelligent machines. The initiative is backed by partnerships with major robotics companies including ABB Robotics, FANUC, KUKA, Agility Robotics, Figure, Universal Robots, and Yaskawa.

The effort reflects a wider shift across the robotics industry. As robots become more autonomous and adaptable, companies are moving beyond traditional automation toward systems that can perceive environments, reason about tasks, and act with greater flexibility.

NVIDIA founder and CEO Jensen Huang framed the shift as a structural change in industrial technology. “Physical AI has arrived,” Huang said, arguing that many industrial companies will increasingly operate as robotics companies as intelligent machines become embedded in manufacturing, logistics, infrastructure, and transportation systems.

Building the Infrastructure for Robot Intelligence

The company’s robotics strategy centers on providing the underlying computational and software infrastructure required to train and operate intelligent robots at scale.

New components include updated NVIDIA Isaac simulation frameworks, the Cosmos family of world models, and Isaac GR00T robot foundation models designed to help robots learn generalized skills across different environments. Together, these tools allow developers to generate synthetic environments, train policies in simulation, and transfer those behaviors to real machines.

Simulation plays a central role. Industrial robotics companies including ABB, FANUC, Yaskawa, and KUKA are integrating NVIDIA’s Omniverse and Isaac technologies to create digital twins of production lines, allowing engineers to design and test robotic systems virtually before deploying them on factory floors.

The companies are also incorporating NVIDIA Jetson edge computing modules into their controllers to enable real-time AI inference directly on robots. With millions of industrial robots already operating globally, these integrations aim to gradually layer advanced intelligence onto existing automation infrastructure.

The approach reflects a broader industry consensus that robotics development will increasingly rely on large-scale simulation, synthetic data generation, and foundation models rather than traditional rule-based programming.

A Push Toward General Purpose Robot Brains

Another key focus of the initiative is the development of generalized robotic intelligence.

Companies such as Skild AI and FieldAI are using NVIDIA’s Cosmos world models and Isaac simulation environments to train AI systems that can operate across different robotic embodiments. Instead of building task-specific software for every application, developers are attempting to create “robot brains” capable of adapting to new environments and tasks with limited retraining.

One of the most visible deployment efforts involves Skild AI working with ABB Robotics and Universal Robots to integrate generalized AI systems into widely deployed industrial and collaborative robots. The goal is to expand automation into more dynamic tasks that traditionally required human adaptability.

Skild AI is also collaborating with Foxconn on assembly systems used in NVIDIA’s Blackwell chip production lines. These systems rely on AI-driven dual-arm manipulators designed to perform highly precise electronics assembly operations.

The broader strategy aligns with NVIDIA’s belief that the next generation of robots will combine the reliability of industrial automation with the adaptability of modern AI systems.

Humanoid Robots and Surgical Systems Join the Platform

Beyond industrial automation, NVIDIA’s ecosystem now extends into humanoid robotics and healthcare.

Developers including Agility Robotics, Figure, NEURA Robotics, and AGIBOT are using the company’s simulation tools and robotics models to accelerate development of humanoid robots capable of operating in human environments. Building such machines requires integrating perception, locomotion, dexterous manipulation, and decision-making within tightly constrained safety requirements.

Healthcare robotics is another area of expansion. Companies including CMR Surgical, Johnson & Johnson MedTech, and Medtronic are using NVIDIA simulation and computing platforms to train and validate AI-assisted surgical systems before clinical deployment.

These applications require particularly strict validation processes, making simulation and digital twin technology especially valuable.

The expansion of NVIDIA’s robotics ecosystem comes as demand for AI computing continues to surge. Huang recently projected that AI chip sales could eventually reach $1 trillion annually as industries transition toward what he described as a new computing era driven by AI systems embedded across both digital and physical infrastructure.

For robotics, the implication is clear: as machines become more capable of perceiving and interacting with the real world, the boundary between AI software and industrial hardware is increasingly dissolving. Companies that control the infrastructure connecting those layers may shape how quickly intelligent machines move from research labs into everyday operations.

Pokémon Go Data Is Now Training Delivery Robots

Location data collected from millions of Pokémon Go players is now being used to train delivery robots, highlighting the unexpected role of consumer AR games in robotics development.

By Laura Bennett | Edited by Kseniia Klichova
A small delivery robot navigates a city sidewalk as spatial data originally collected through mobile AR games helps improve robotic navigation in urban environments. Photo: Coco Robotics

Millions of people who spent years chasing virtual Pokémon through city streets unknowingly helped create one of the largest real-world datasets now being used to train robots.

Niantic Spatial, a company spun out of the augmented reality developer behind Pokémon Go, has partnered with Coco Robotics to improve navigation for urban delivery robots. The collaboration uses spatial mapping data collected from players of Niantic’s games to help robots move through complex city environments.

The project highlights an unexpected intersection between gaming, artificial intelligence, and robotics: the same technology used to place digital creatures on a phone screen can also guide autonomous vehicles through real-world streets.

Turning AR Gameplay into Spatial Intelligence

When Pokémon Go launched in 2016, millions of players explored cities while using their smartphones to capture virtual creatures layered onto real-world environments.

Behind the scenes, the game relied on Niantic’s Visual Positioning System (VPS), a technology designed to understand a user’s location by analyzing surrounding landmarks rather than relying solely on GPS signals.

Players contributed to this system by scanning buildings, monuments, and other public spaces from different angles using their phones.

Over time, these scans created detailed three-dimensional maps of real-world locations.

The data helped Niantic improve AR accuracy in its games, but it also built a massive spatial dataset describing how cities look from ground level – exactly the type of information robots need to navigate sidewalks and intersections.

The Same Problem as Catching Pikachu

Niantic Spatial now aims to apply that dataset to robotics.

The company’s first robotics partnership is with Coco Robotics, which operates a fleet of small autonomous delivery robots designed to transport food and groceries through city streets.

Coco’s robots currently operate in several cities, including Los Angeles, Chicago, Miami, Jersey City, and Helsinki.

Navigating urban environments is one of the hardest problems in robotics. Tall buildings interfere with GPS signals, sidewalks are crowded with pedestrians, and conditions change constantly.

Niantic’s VPS technology helps solve this problem by allowing robots to identify their exact location by comparing camera images to its spatial database of landmarks.
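At its core, this style of visual localization is a matching problem: the robot's camera view is turned into a descriptor and compared against a database of landmark descriptors whose real-world positions are known. The sketch below uses random vectors and cosine similarity as stand-ins for real visual features; the database sizes and the k-nearest averaging are assumptions, not details of Niantic's VPS.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical landmark database: each entry pairs a visual descriptor
# with the surveyed (x, y) position where it was scanned.
db_descriptors = rng.normal(size=(500, 64))
db_descriptors /= np.linalg.norm(db_descriptors, axis=1, keepdims=True)
db_positions = rng.uniform(0, 1000, size=(500, 2))   # metres on a city grid

def localize(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Estimate position by averaging the k best-matching landmarks.
    Cosine similarity stands in for real feature matching."""
    q = query / np.linalg.norm(query)
    sims = db_descriptors @ q
    best = np.argsort(sims)[-k:]
    return db_positions[best].mean(axis=0)

# A camera descriptor that closely resembles landmark 42:
query = db_descriptors[42] + 0.01 * rng.normal(size=64)
estimate = localize(query)
print(estimate.shape)   # (2,): estimated (x, y) in metres
```

The practical advantage over GPS is that the match quality does not degrade in urban canyons, since it depends on what the camera sees rather than on satellite visibility.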

According to Niantic Spatial CEO John Hanke, the underlying technical challenge is surprisingly similar to what players experienced in the AR game.

In both cases, software must understand how objects move through a complex physical world.

A Dataset Built by Millions of Players

Much of the data powering the system was collected indirectly by players.

Niantic introduced features that encouraged users to photograph and scan locations in exchange for rewards within the game, such as items or rare Pokémon.

These contributions helped build a detailed visual model of cities under different lighting conditions, weather, and viewing angles.

While Niantic has long acknowledged that its games collect environmental data, the revelation that these datasets are now helping train robots may surprise some players.

Still, for robotics developers, such datasets are extremely valuable.

Unlike simulation environments or small-scale robotics experiments, Niantic’s data reflects the messy complexity of real-world cities.

Gaming Data Meets Urban Robotics

The collaboration illustrates a broader trend in robotics development: companies are increasingly relying on large-scale datasets collected outside traditional robotics research.

Consumer technologies – including smartphones, cameras, and games – are generating vast amounts of real-world visual information that can help train autonomous systems.

For delivery robots attempting to navigate dense urban environments, that data may prove critical.

If the partnership succeeds, the hours players spent exploring parks, sidewalks, and landmarks in search of digital creatures may end up helping robots find their way through the same streets.

In other words, catching Pikachu may have helped teach a robot how to deliver pizza.


Samsung Targets Robotic Hands as the Next Breakthrough in Humanoid Robotics

Samsung has launched a dedicated robotics research group focused on developing advanced robotic hands, betting that dexterity will be the key to unlocking practical humanoid robots.

By Rachel Whitman | Edited by Kseniia Klichova
A robotic hand prototype designed for precision manipulation illustrates the growing focus on dexterous robotics systems for manufacturing and humanoid robots. Photo: Kseniia Klichova / RobotsBeat

Samsung Electronics is placing a major strategic bet on one of the most difficult problems in robotics: building robot hands capable of manipulating objects with human-like precision.

The company recently established a new research group called Hand Lab within its Future Robotics Task Force. The initiative focuses on developing advanced robotic hands that could eventually enable humanoid robots and automated manufacturing systems to handle delicate tasks currently performed by humans.

Industry analysts view the move as a signal that Samsung intends to compete more aggressively in the emerging humanoid robotics market.

While robots have become increasingly capable of walking, navigating environments, and maintaining balance, engineers say the real challenge lies elsewhere: dexterous manipulation.

Why Robotic Hands Matter

In robotics research, the ability to move like a human is no longer the primary obstacle.

Modern robots can climb stairs, recover from falls, and navigate complex environments with increasing reliability. But performing tasks that humans consider simple – tightening a screw, picking up a fragile object, or assembling small components – remains extremely difficult.

These tasks require a combination of force control, tactile feedback, and coordinated finger motion that traditional industrial robots struggle to achieve.

Most factory robots rely on simple grippers designed for highly structured environments. Humanoid robots, however, must interact with tools, components, and devices originally designed for human hands.

The result is a growing consensus within robotics research that the future of humanoid robots depends heavily on hand design.

Samsung’s decision to create a specialized research group dedicated to robotic hands reflects this shift in priorities.

A Tendon-Driven Approach to Dexterity

According to industry reports, Samsung’s robotic hand project is exploring a tendon-driven design, a system inspired by the anatomy of the human hand.

Instead of placing motors directly inside each finger, artificial tendons – cables running through the arm – pull and control finger movements. This architecture allows for smoother motion, finer force control, and potentially greater energy efficiency.
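The basic geometry of a tendon drive is simple: a cable wrapped around a pulley at the joint converts linear cable displacement into rotation, so the joint angle is arc length divided by pulley radius. The toy calculation below illustrates that relationship; the pulley radius and displacement are invented values, not Samsung specifications.

```python
import math

PULLEY_RADIUS_M = 0.005   # hypothetical 5 mm pulley at the finger joint

def joint_angle_deg(cable_displacement_m: float) -> float:
    """Angle swept by the joint: arc length / radius, in degrees."""
    return math.degrees(cable_displacement_m / PULLEY_RADIUS_M)

# Pulling the cable 4.4 mm flexes the finger by roughly 50 degrees.
print(round(joint_angle_deg(0.0044), 1))  # 50.4
```

This is also why tendon routing is hard to engineer: friction and cable stretch along the path distort that clean linear relationship and must be compensated in control.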

The approach is significantly more complex to engineer and manufacture than conventional robotic grippers, which is why most industrial robots avoid it.

However, tendon-driven systems can produce more natural and adaptable movements, making them well suited for humanoid robotics.

Samsung also plans to incorporate tactile sensors that allow robotic fingers to detect pressure, texture, and contact forces. These signals could feed into machine-learning systems that help robots adjust their grip in real time.
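A minimal version of that feedback loop can be written as a grip controller that tightens while the tactile sensors report slip and backs off when contact pressure gets high. This is a hand-written illustrative sketch, with made-up thresholds and step sizes; real systems would learn these adjustments from data.

```python
def adjust_grip(force: float, slip: bool, pressure: float,
                step: float = 0.2, max_pressure: float = 5.0) -> float:
    """One control step of a toy tactile grip loop (all values illustrative)."""
    if slip:
        return force + step   # tighten until the object stops slipping
    if pressure > max_pressure:
        return force - step   # relax before crushing a fragile part
    return force              # stable grasp: hold the current force

force = 1.0
# Simulated sensor readings: (slip detected, measured pressure)
readings = [(True, 1.0), (True, 1.5), (False, 2.0), (False, 6.0)]
for slip, pressure in readings:
    force = adjust_grip(force, slip, pressure)
print(round(force, 2))   # 1.2: tighten, tighten, hold, then relax
```

Replacing the two fixed thresholds with a learned model of slip and pressure signals is essentially what the machine-learning layer described above would do.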

Such capabilities are considered essential for what researchers increasingly call physical AI – systems that combine artificial intelligence with real-world robotic interaction.

Building a Robotics Ecosystem

Samsung’s focus on robotic manipulation is part of a broader strategy to build a vertically integrated robotics ecosystem.

Over the past several years, the company has expanded its investments in robotics technology across multiple business units.

Samsung SDI is developing batteries tailored for robotics systems, while Samsung Electro-Mechanics is working on actuators and components for robotic motion.

The company also acquired a controlling stake in Korean robotics developer Rainbow Robotics, known for its humanoid and dual-arm robotic platforms.

Together, these initiatives could allow Samsung to integrate hardware, sensors, computing, and AI into a unified robotics platform.

The company has also outlined a longer-term plan to create AI-powered autonomous factories by 2030, where intelligent robots perform tasks ranging from logistics and inspection to complex assembly.

In such environments, robotic hands capable of delicate manipulation could become the key enabling technology.

Global Competition Intensifies

Samsung’s push into robotic manipulation also reflects rising global competition in humanoid robotics.

China’s robotics sector is expanding rapidly, with analysts projecting tens of thousands of humanoid robots could be produced annually within the next few years.

Chinese manufacturers have already achieved scale in service robots such as delivery and cleaning machines, often competing on cost.

Samsung appears to be taking a different approach – focusing on technological differentiation rather than mass production.

If the company succeeds in developing robotic hands capable of human-level dexterity, it could unlock new applications not only in electronics manufacturing but also across logistics, construction, and industrial automation.

Within robotics circles, engineers often summarize the challenge with a simple observation:

Many robots can walk.

Very few can truly use their hands.

Samsung’s new Hand Lab is designed to change that.