The leader of OpenAI’s robotics and hardware initiatives has resigned after raising concerns about the company’s agreement with the U.S. Department of Defense, highlighting growing tensions over how advanced artificial intelligence technologies should be used in military settings.
Caitlin Kalinowski, who joined OpenAI in late 2024 to lead the company’s renewed robotics and hardware group, stepped down over the weekend following the announcement of a defense agreement that would allow OpenAI’s AI systems to be deployed within secure Pentagon computing environments.
In public statements posted on social media, Kalinowski said the decision was driven by concerns about governance and the potential risks associated with surveillance and autonomous weapons systems.
“I resigned from OpenAI,” Kalinowski wrote. “AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”
A Dispute Over AI Governance
OpenAI announced its agreement with the Department of Defense in late February. The deal allows the company’s generative AI technology to operate within classified government systems, expanding the use of advanced AI models in national security environments.
The arrangement reflects growing demand from defense agencies for access to large-scale AI systems capable of analyzing data, generating intelligence summaries, and assisting with operational planning.
But the partnership has also intensified debate inside the technology sector about the ethical and governance frameworks surrounding military applications of artificial intelligence.
In a follow-up message explaining her resignation, Kalinowski said her primary concern was not the people involved but the process behind the announcement.
“To be clear, my issue is that the announcement was rushed without the guardrails defined,” she wrote, describing the matter as a governance issue that required deeper deliberation.
OpenAI CEO Sam Altman has said the agreement includes safeguards designed to prevent the company’s technology from being used for mass surveillance or fully autonomous weapons systems.
Robotics and Defense Technology
Kalinowski’s departure is notable in part because she led one of OpenAI’s most strategically important emerging areas: robotics and physical AI.
The company revived its robotics efforts in recent years as advances in machine learning and large language models began to influence the development of autonomous machines. Kalinowski was recruited to lead that effort and help scale the company’s hardware initiatives.
Before joining OpenAI, she led augmented reality hardware development at Meta, where she oversaw teams building next-generation AR glasses.
Although OpenAI’s robotics research has historically focused on manipulation and learning systems rather than military hardware, the broader fields of AI, robotics, and defense technology have become increasingly intertwined.
AI systems capable of perception, planning, and decision-making are now being integrated into a wide range of autonomous platforms, from drones to surveillance systems and logistics automation.
A Wider Debate Across the AI Industry
The controversy surrounding OpenAI’s defense agreement is part of a broader debate unfolding across the AI sector.
The Pentagon has recently encouraged leading AI companies to make their technologies available for “all lawful purposes,” a position that has sparked pushback from some developers concerned about how their systems might ultimately be used.
Anthropic, another major AI company, previously attempted to negotiate a separate agreement with the Defense Department that would include explicit limitations on the use of its models for domestic surveillance or fully autonomous weapons systems.
After those negotiations ended without an agreement, the Pentagon reportedly designated Anthropic as a supply-chain risk, a classification the company has said it intends to challenge in court.
These disputes highlight the increasingly strategic role that AI developers play in national security infrastructure.
What This Signals for AI and Robotics
Kalinowski’s resignation illustrates the growing pressure facing companies developing advanced AI systems as governments seek access to their technologies.
While many researchers agree that AI can play a role in national security applications, the boundaries between defensive use, surveillance, and autonomous weapons remain contentious.
For companies building the next generation of robotics and physical AI systems, those questions may become even more significant.
Autonomous machines capable of operating in the physical world introduce new layers of risk and responsibility compared with purely digital AI systems. Decisions about governance, safety, and oversight will likely shape not only how the technology evolves but also who controls it.
As governments and technology companies deepen their relationships around AI infrastructure, debates over ethical guardrails and transparency are likely to intensify.