Elon Musk Unveils ‘Terafab’ to Build AI Chips for Cars, Robots, and Space Systems

Elon Musk has unveiled Terafab, a massive semiconductor facility planned near Austin, Texas, designed to produce custom AI chips for Tesla vehicles, Optimus robots, SpaceX satellites, and xAI models.

By Rachel Whitman | Edited by Kseniia Klichova
Concept rendering of a large-scale semiconductor fabrication facility designed to produce specialized AI chips for robotics, autonomous vehicles, and space-based computing systems. Photo: Kseniia Klichova / RobotsBeat

Elon Musk has announced plans for a massive semiconductor manufacturing project aimed at producing specialized AI chips for Tesla vehicles, humanoid robots, and space systems.

The facility, called Terafab, will be built near Austin, Texas, and jointly developed by Tesla and SpaceX. According to Musk, the plant will focus on producing custom processors designed specifically for artificial intelligence workloads used in autonomous driving, robotics, and satellite-based computing.

The project reflects a growing shift in the AI industry toward vertically integrated hardware ecosystems, where companies design their own chips to support increasingly complex AI systems.

Terafab’s planned production capacity could eventually reach one terawatt of computing power, an enormous scale intended to support the expanding compute requirements of Musk’s companies.

Building Chips for Physical AI

The chips produced at Terafab are expected to serve multiple applications across Musk’s technology portfolio.

One category will focus on edge and inference computing, powering real-time AI decision-making in Tesla vehicles, robotaxis, and the company’s humanoid robot platform, Optimus. These chips are optimized for running trained AI models directly on devices where latency and energy efficiency are critical.

A second category will target high-performance AI training, supporting xAI models and large-scale data processing for SpaceX satellite systems.

As robotics and autonomous systems become more sophisticated, the demand for specialized AI processors has surged. Standard GPUs designed for cloud computing are often inefficient for real-time robotic systems that must process sensor data, control motors, and make split-second decisions.

By designing chips tailored to these tasks, companies can dramatically improve performance and energy efficiency.

A Compute Backbone for Musk’s Ecosystem

Terafab is part of a broader strategy to integrate hardware and AI development across Musk’s companies.

Tesla’s autonomous driving systems rely heavily on AI models that process visual and sensor data from vehicles. Meanwhile, Tesla’s Optimus humanoid robot is expected to require similar computing capabilities for perception, navigation, and manipulation tasks.

SpaceX also has growing computing demands. Musk has suggested that future satellite networks could support orbital data centers, enabling AI processing in space for applications ranging from communications to scientific analysis.

Under the Terafab plan, these systems would share a common computing architecture built around custom chips produced at the facility.

The project also reflects Musk’s increasing emphasis on AI as the central technology linking his companies. Tesla, SpaceX, and xAI are all developing systems that rely on large-scale machine learning models operating in physical environments.

A Massive Manufacturing Bet

The proposed facility represents one of the largest semiconductor manufacturing ambitions outside traditional chipmaking giants.

Initial construction is expected to begin in Texas, with production capacity expanding over time. Musk has suggested that the long-term vision could extend beyond terrestrial computing infrastructure.

In addition to ground-based production targets of 100 to 200 gigawatts of computing power, the broader concept includes eventually supporting space-based computing systems capable of delivering up to a terawatt of processing power.

Such scale would dramatically increase the computational capacity available for training AI models and operating distributed intelligent systems across vehicles, robots, and satellites.

The Hardware Race Behind AI

The Terafab announcement underscores a broader industry trend: the race to build specialized computing infrastructure for artificial intelligence.

As AI expands beyond software into physical systems – autonomous vehicles, humanoid robots, and industrial machines – companies are increasingly designing hardware optimized for these workloads.

For Musk, the strategy aims to create a tightly integrated ecosystem where chips, software, and machines evolve together.

If successful, Terafab could become a central piece of infrastructure powering Tesla’s robots, SpaceX’s satellites, and the next generation of AI-driven machines operating both on Earth and in orbit.

Genesis AI Unveils GENE-26.5 Foundation Model and Human-Scale Robotic Hand for Dexterous Manipulation

Genesis AI has unveiled GENE-26.5, a robotics foundation model paired with a human-scale robotic hand and a data-collection glove that enables 1:1 skill transfer from humans to robots, targeting complex long-horizon manipulation tasks at commercial scale.

By Laura Bennett | Edited by Kseniia Klichova

Genesis AI has unveiled GENE-26.5, a robotics foundation model designed to give robots human-level dexterous manipulation capability, alongside a proprietary robotic hand and data-collection glove system built to generate training data at scale. The San Carlos, California-based company, which emerged from stealth with $105 million in funding last year, simultaneously announced that a first general-purpose robot built on the technology will be revealed soon.

The announcement targets what Genesis AI frames as the central bottlenecks in physical AI development: the shortage of high-quality manipulation data and the gap between human hand capability and what robotic end effectors can reliably execute.

What GENE-26.5 Can Do

Genesis AI released a demonstration video showing GENE-26.5 performing a range of complex, multi-step manipulation tasks. These include cooking a 20-step meal involving chopping, one-handed egg cracking, and two-hand coordination; preparing a smoothie with mid-air serving; conducting laboratory experiments that require pipetting and liquid transfer with delicate instrumentation; wire harnessing, described by the company as one of the most difficult tasks in electronics manufacturing; solving a Rubik’s Cube through continuous in-air manipulation; simultaneously grasping four objects of varying sizes with one hand and sorting them into bins; and playing piano at a human performance level.

The task range spans from domestic service to precision industrial applications – a deliberate demonstration that the model generalizes across contexts rather than being optimized for a single domain.

The Hardware System

The robotic hand mirrors the human hand in form and function, designed to close the embodiment gap that has historically limited robots’ ability to learn from human demonstration data. It pairs with a data-collection glove equipped with tactile-sensing electronic skin. When worn by a human operator, the glove creates a 1:1:1 mapping between the glove, the human hand, and the robotic hand – allowing human task execution to translate directly into robot training data without the lossy conversion that conventional teleoperation introduces.

Genesis AI says the glove costs roughly one-hundredth as much as typical data-collection hardware and, in internal testing, achieved up to five times the data-collection efficiency of traditional teleoperation. The company is engaging partners to deploy the glove in real-world work environments, where workers wearing the device during normal operations would continuously generate new categories of training data – building what Genesis AI describes as a potential global human skill library.

The data engine also draws on egocentric video from humans wearing cameras and large-scale internet video of human activity, giving the model exposure to the full range of how people interact with physical environments.

Simulation and the Sim-to-Real Gap

Genesis AI has developed a proprietary simulation system using hyper-realistic physics and rendering to narrow the gap between synthetic training environments and real-world conditions. The system allows teams to train and evaluate models significantly faster than physical testing, which is slow, expensive, and difficult to scale.

“General-purpose robotics stands to reshape the global economy while opening an entirely new chapter for AI,” said Eric Schmidt, former CEO of Google and an investor in Genesis AI. “This marks an important milestone for their team and the robotics industry more broadly.”

Genesis AI is backed by Eclipse, Bpifrance, and HSG, alongside Schmidt, Xavier Niel, and AI researchers Daniela Rus and Vladlen Koltun.

1X Technologies Begins Full-Scale NEO Humanoid Production at Hayward, California Facility

1X Technologies has launched full-scale production of its NEO humanoid robot at a 58,000-square-foot facility in Hayward, California, with consumer shipments scheduled for 2026 and early units already operating on the factory floor to generate training data.

By Rachel Whitman | Edited by Kseniia Klichova

1X Technologies has launched full-scale production of its NEO humanoid robot at a new 58,000-square-foot facility in Hayward, California. The factory is the primary manufacturing hub for NEO, a home-focused humanoid designed to operate quietly – at a noise level below that of a modern refrigerator – while navigating domestic spaces. Consumer shipments are scheduled to begin in 2026, with the first year’s production allocation selling out within five days of the October pre-order launch.

The Hayward facility is vertically integrated, processing raw materials into finished components on-site rather than sourcing from global suppliers. 1X describes the operation as a “factory OS” – a real-time production management system covering every stage from raw input to completed robot.

What the Factory Produces

The production floor is organized around specialized zones. Copper coils are wound on automated lines to produce custom motors. A joint and limb assembly area constructs the robot’s tendon-driven actuators and 3D-lattice cushioned limbs. At the final integration stage, robots stand for the first time and receive their machine-washable nylon knit suits, available in tan, gray, and dark brown. A dedicated reliability lab subjects hardware to more than 20 million stress-test cycles to identify failure modes before units reach consumers.

Early NEO units are already operating inside the Hayward facility, handling internal logistics and stocking parts. The dual function is deliberate: the robots perform useful labor while generating real-world operational data that feeds back into training the NEO Cortex, the robot’s AI brain built on the NVIDIA Jetson Thor platform.

“Humanoid robots require high-performance, real-time AI inference and continuous training and testing in simulation for safe and reliable operation,” said Deepu Talla, Vice President of Robotics and Edge AI at NVIDIA. “By using NVIDIA Jetson Thor as the brain and the NVIDIA Isaac open robotics platform as its training ground, 1X is able to accelerate the development and deployment of intelligent robots like NEO that can work safely alongside humans.”

Strategic Positioning

1X Technologies, headquartered in Palo Alto and backed by Norwegian investors, is positioning the Hayward facility as evidence that consumer humanoid production can be built and scaled in the United States. The vertically integrated model gives 1X tighter control over component quality and supply chain risk than an assembly-only approach would allow, at the cost of higher capital requirements and operational complexity.

“This is more than just a factory opening – it’s proof that the future of humanoid robotics is being built right here in the U.S.,” said Bernt Børnich, founder and CEO of 1X. “We’re not dreaming about abundance; we’re manufacturing it.”

The consumer home robotics market that NEO is targeting remains largely unproven at scale. 1X is among the first companies globally to move a home humanoid from pre-order into production, placing it ahead of most competitors on manufacturing readiness while still carrying the uncertainty of whether consumer demand at the necessary price point will materialize as units reach homes later this year.

Morgan Stanley: China’s Humanoid Robot Lead Will Drive Its Global Manufacturing Share to 16.5% by 2030

A Morgan Stanley report argues that China’s early investment in humanoid robotics mirrors its EV strategy a decade ago, and projects the country’s share of global manufacturing will rise from 15% to 16.5% by 2030 as robotics deployment and supply chain dominance accelerate.

By Daniel Krauss | Edited by Kseniia Klichova
Humanoid robots operating on a Chinese manufacturing floor, part of a national strategy to expand automated production capacity across export industries. Photo: UBTECH Robotics

China’s early investment in humanoid robotics will help drive the next phase of its global manufacturing dominance, according to new research from Morgan Stanley. Economists led by Chetan Ahya, the bank’s chief Asia economist, project China’s share of global manufacturing will expand from 15% today to 16.5% by 2030, powered by humanoid robot deployment across its industrial base.

The report draws an explicit parallel with China’s EV strategy. A decade ago, China identified electric vehicles as a strategic growth driver and invested ahead of the curve in manufacturing capacity, battery technology, and supply chain control – a playbook that produced the world’s dominant EV and battery industries. Morgan Stanley’s economists argue that humanoid robotics has followed the same path.

Supply Chain as the Structural Advantage

China’s edge in humanoid robotics is not primarily a matter of individual robot performance but of supply chain depth. The country is building capacity across the full humanoid supply chain – from harmonic reducers and joint motors to sensors, actuators, and AI chips – giving it cost and scale advantages that competitors including the U.S., Japan, and South Korea cannot easily replicate in the near term, as those countries often rely on Chinese-manufactured components.

Government procurement is accelerating domestic adoption, creating a high-volume testing ground that is simultaneously a market and a development environment. Chinese tech parks, factories, and universities are among the most active humanoid deployment sites in the world, generating the operational data that improves AI systems faster than lab-based development allows.

“China has a track record of spotting the next big growth areas early and planning ahead,” Ahya wrote. “The robotics industry has followed a similar path.”

U.S. vs China: Different Approaches

The Morgan Stanley report contrasts the two countries’ strategic approaches. American firms, including Tesla, have focused on high-specification prototypes with an emphasis on testing and validation before scaling production. Chinese manufacturers have moved faster to deploy models at scale, using the domestic market as a live proving ground. The speed-versus-rigor tradeoff reflects different risk tolerances and capital structures – U.S. companies are more exposed to liability and regulatory scrutiny in production environments, while Chinese firms benefit from state support that absorbs some of the commercial risk of early deployment.

Protectionism as a Constraint

The report identifies trade restrictions as a meaningful risk to China’s humanoid export ambitions. Chinese EVs have encountered tariffs and market access barriers across the U.S., Europe, and other regions, and similar dynamics could emerge in robotics as the technology matures and strategic dependence concerns rise. However, the economists note that humanoid robotics is a newer industry with less existing domestic production to protect in importing countries, meaning the protectionist impulse may be slower to develop than it was with EVs.

The scale of the market at stake is significant. Morgan Stanley has separately projected the global humanoid robot market could reach $5 trillion by 2050. Whether China captures a dominant share of that market – or faces the same trade friction that has constrained its EV exports – will depend on how quickly other governments identify robotics as a strategic sector requiring protection.

Meta Develops Agentic AI Assistant and Internal Agent “Hatch” as It Accelerates Physical AI Push

Meta is developing a highly personalized agentic AI assistant powered by its Muse Spark model, alongside an internal agent codenamed Hatch, as the company builds toward an autonomous AI and robotics strategy across its consumer platforms.

By Laura Bennett | Edited by Kseniia Klichova
More than 1 billion people use Meta AI every month. Photo: Meta

Meta is developing a highly personalized AI assistant designed to carry out everyday tasks autonomously for its billions of users, the Financial Times reported on May 5, citing people familiar with the matter. The assistant is powered by Meta’s new Muse Spark AI model and is currently being tested internally. The goal, according to the report, is to build a product with capabilities similar to OpenClaw, the agentic AI system owned by OpenAI that can connect hardware and software tools and learn from the data it generates with minimal human intervention.

Separately, The Information reported that Meta is training an internal AI agent codenamed Hatch, also inspired by OpenClaw, with a target of completing internal testing by the end of June. Meta is also planning to integrate a standalone agentic shopping tool into Instagram, with a launch targeted before Q4 2026.

What Agentic AI Means for Meta’s Robotics Ambitions

The agentic assistant development is strategically connected to Meta’s accelerating push into physical AI. Last week, Meta acquired Assured Robot Intelligence, a humanoid robotics startup whose founding team – Lerrel Pinto and Xiaolong Wang – joined Meta’s Superintelligence Labs division. The ARI acquisition was explicitly framed around developing foundation models for whole-body humanoid control and robot self-learning.

An agentic AI layer that can autonomously manage tasks, connect tools, and learn from real-world data is a prerequisite for robots that operate independently in human environments. The same architecture that allows a digital assistant to book a restaurant, manage a calendar, and process a purchase can, in a physical AI context, allow a robot to perceive a task, plan an approach, and execute it without continuous human instruction. Meta’s simultaneous development of consumer-facing agentic software and humanoid robot intelligence reflects a coherent underlying strategy – building the AI capability stack from both ends.

Competitive Context

Meta faces intensifying competition in the agentic AI space. OpenAI’s OpenClaw has established a reference point that multiple companies are now building toward. Google, Apple, and Amazon are each developing agentic systems with varying degrees of hardware integration. For Meta, whose primary distribution channel is social media rather than devices or cloud infrastructure, the Instagram shopping agent represents the most immediate commercial application – a constrained, high-intent environment where agentic AI can generate measurable revenue.

The broader ambition – a personalized assistant that manages everyday tasks across Meta’s platforms for billions of users – requires the kind of persistent personalization and real-world task completion that current AI assistants have not reliably achieved at scale. Meta has not responded to requests for comment on the reports.

Agibot A2 Humanoid Robot Makes Fashion Debut at Met Gala Pre-Event with Alexander Wang

Agibot deployed its A2 humanoid robot at The Mark Hotel in New York ahead of the 2026 Met Gala, in collaboration with designer Alexander Wang, marking the first appearance of an embodied AI robot at a Met Gala pre-event.

By Daniel Krauss | Edited by Kseniia Klichova
A full-size humanoid robot posing for photographers outside a luxury hotel in New York during a high-profile fashion event, dressed and styled for a red carpet appearance. Photo: AGIBOT

Chinese robotics company Agibot deployed its A2 humanoid robot at The Mark Hotel in New York on May 5, ahead of the 2026 Met Gala, in a collaboration with fashion designer Alexander Wang. The appearance marked the first time an embodied AI robot has attended a Met Gala pre-event, drawing significant attention from media, photographers, and bystanders gathered outside the hotel – a traditional staging ground for celebrity arrivals before the evening’s main event.

The A2 robot posed for photographers, changed posture on request, held items, and served drinks to guests – a set of tasks that demonstrated real-world manipulation and crowd navigation capability in one of the more operationally unpredictable environments a humanoid robot has been deployed in publicly. At one point the robot became briefly stuck exiting an elevator and required assistance from handlers before continuing. The moment was captured on video and circulated widely on social media alongside footage of the robot’s red carpet posing.

A2’s Operational Profile

The A2 is Agibot’s full-size humanoid platform, built with human-like proportions and bipedal locomotion designed for stability in crowded, uncontrolled environments. Its perception and decision-making systems allowed it to navigate the dense scene outside The Mark – cameras, crowds, and continuous movement – without significant incident beyond the elevator delay. The drink-serving and object-handling tasks performed during the event reflect the same dexterous manipulation capabilities Agibot has been demonstrating in its industrial deployments at Longcheer Technology’s electronics manufacturing facility.

Technology Meets Fashion

The collaboration with Alexander Wang aligned with the Met Gala’s 2026 theme of “Fashion is Art”. For Agibot, the appearance served a dual purpose: demonstrating the A2’s capability in a chaotic, high-visibility public environment, and positioning embodied AI as a presence in cultural spaces beyond its established industrial context.

The Met Gala deployment is the latest in a series of public-facing appearances by Chinese humanoid robots – from the Beijing half-marathon to the Spring Festival gala – that serve as demonstration platforms in front of large audiences. The A2’s appearance at one of the most globally watched fashion events of the year extends that pattern into Western cultural territory, with the added dimension of a commercial fashion partnership providing the access and framing.

Agibot has previously stated its ambition to expand embodied AI deployment from manufacturing and logistics into service and consumer environments. The Met Gala appearance, whatever its promotional character, put the A2 in a setting that tested navigation, interaction, and object handling in conditions no factory floor replicates.
