Caitlin Kalinowski dedicated 16 months to developing OpenAI’s physical AI initiatives. On Saturday, she expressed concerns that the company advanced too rapidly on a matter of critical importance.
The week that started with Anthropic being barred by the Pentagon and ended with OpenAI securing the contract has now resulted in the departure of OpenAI’s top hardware executive.
Caitlin Kalinowski, who took charge of OpenAI’s robotics and consumer hardware division in November 2024, announced her resignation on Saturday via X. Her message was succinct, forthright, and more transparent than any official OpenAI statement regarding the agreement.
“AI plays a vital role in national defense,” she stated. “However, surveillance of American citizens without judicial oversight and lethal autonomous systems without human consent are boundaries that warranted far more careful consideration.”
In a follow-up post, Kalinowski clarified her concerns: “This is primarily a governance issue,” she explained. “Decisions of this magnitude should never be rushed for the sake of deals or announcements.”
She emphasized that her resignation was driven by principles rather than personal conflicts. “This is about values, not individuals,” she wrote. “I hold great respect for Sam and the entire team.”
This statement carries significance, especially since Sam Altman himself admitted the Pentagon contract was “definitely rushed” and acknowledged the considerable backlash it triggered.
Kalinowski’s exit highlights a critical internal dissent: the highest-ranking OpenAI hardware executive, responsible for integrating AI into physical platforms, has concluded that the process for incorporating AI into weapons and surveillance systems lacked sufficient rigor.
Details of the Pentagon Contract
The events leading to this situation unfolded over approximately one week. Anthropic, previously the sole AI firm authorized to operate on the Pentagon’s classified networks following a $200 million contract awarded in July 2025, engaged in intense negotiations with the Department of Defense regarding contract terms.
Anthropic maintained that its AI models should not be used for widespread domestic surveillance or fully autonomous weaponry. Conversely, the Pentagon, led by Defense Secretary Pete Hegseth, demanded language allowing use “for all lawful purposes” without explicit exceptions.
On February 28, after talks broke down, President Trump ordered all federal agencies to cease using Anthropic’s technology and labeled the company “radical woke” on Truth Social.
Subsequently, Hegseth classified Anthropic as a supply-chain risk to national security, a designation previously reserved for foreign adversaries, requiring Department of Defense vendors and contractors to certify that they do not use Anthropic’s AI models.
Shortly thereafter, Altman announced on X that OpenAI had secured its own agreement to deploy AI models on the Pentagon’s classified network.
OpenAI asserts that its contract includes the same fundamental safeguards Anthropic sought: prohibitions on mass domestic surveillance and autonomous weapons deployment.
The company released a detailed blog post explaining its approach, emphasizing its cloud-only deployment model, integrated safety mechanisms, and contractual terms grounded in existing U.S. law rather than bespoke restrictions. OpenAI argues this framework offers stronger protections than any prior classified AI deployment, including Anthropic’s.
Implications of Kalinowski’s Resignation for OpenAI
Kalinowski’s professional background is notable for its diversity. She spent nearly six years at Apple as a technical lead on products including the Mac Pro, the MacBook Air, and the original unibody MacBook Pro. She then transitioned to Meta’s Oculus division, where she led virtual reality hardware development for over nine years.
Her last role at Meta was leading Project Nazare, later renamed Orion, an augmented reality glasses initiative unveiled as a prototype in September 2024 and hailed as the most advanced AR glasses to date.
She joined OpenAI the following month.
During her 16 months at OpenAI, Kalinowski spearheaded the company’s physical AI program, including managing a San Francisco-based lab with approximately 100 data collectors training a robotic arm to perform household tasks.
Her departure leaves OpenAI’s hardware efforts without their most seasoned leader at a pivotal moment when the company is ambitiously expanding beyond software.
OpenAI confirmed her resignation and stated: “We believe our agreement with the Pentagon establishes a responsible framework for national security applications of AI, clearly defining our boundaries: no domestic surveillance and no autonomous weapons. We acknowledge the strong opinions surrounding these issues and remain committed to ongoing dialogue with employees, government entities, civil society, and global communities.”
Broader Consequences and Industry Impact
The repercussions of OpenAI’s Pentagon contract extend beyond internal disagreements. Following the announcement, ChatGPT uninstall rates reportedly surged by 295%, while Anthropic’s Claude app climbed to the top spot in the U.S. App Store, overtaking ChatGPT. As of Saturday afternoon, the two apps held the first and second positions respectively.
The resignation of OpenAI’s robotics chief underscores that the full cost of this deal is still unfolding. Altman aimed to ease tensions between the government and the AI sector, and while he may have partially succeeded, the toll in terms of talent loss, trust erosion, and debates over ethical safeguards remains to be fully assessed.