Key Points
- Anthropic CEO Dario Amodei is set to meet White House Chief of Staff Susie Wiles despite the company being blacklisted by the Pentagon
- A federal judge blocked the government from punishing Anthropic, but an appeals court sided with the Pentagon
- The White House is in talks to access Anthropic’s new Mythos AI model for cybersecurity
Anthropic CEO Dario Amodei is set to meet White House Chief of Staff Susie Wiles even as his artificial intelligence company fights a legal battle against the Trump administration over being blacklisted by the Pentagon. The meeting, first reported by Axios, comes amid a deepening standoff between one of America’s leading AI firms and the US government over military use of its Claude AI model.
The high-stakes encounter signals the administration’s attempt to balance its hardline stance against Anthropic with the national security value of the company’s breakthrough technology.
The dispute centres on the Pentagon’s demand for unrestricted access to Claude, including for autonomous weapons systems that can select and engage targets without human intervention, and for mass surveillance programmes. Anthropic has refused these terms, arguing that AI models are not yet reliable enough for such applications and that US law does not adequately protect citizens from AI-driven surveillance.
Pentagon Labels Anthropic a Security Risk
The conflict escalated after President Donald Trump announced the administration would sever ties with Anthropic following the breakdown in negotiations. The Pentagon subsequently designated Anthropic a “supply chain risk”, a classification previously reserved for companies linked to foreign adversaries such as China. This designation effectively bars Anthropic from government contracts.
Until recently, Claude was the only AI model available on the Pentagon’s classified network, a system used for sensitive military and intelligence operations. The designation strips Anthropic of this privileged access.
Anthropic sued the Trump administration in response. A federal judge in California last month blocked the government’s effort to punish the company, ruling that federal agencies outside the Department of Defense cannot use the supply chain risk label to cut ties with Anthropic.
The government has appealed that ruling.
Appeals Court Sides with Defence Department
Two weeks after the California ruling, the administration secured a victory at the DC Circuit Court of Appeals. In a unanimous decision, the appeals court declined to bar the Defence Department from cutting ties with Anthropic while the legal challenge proceeds.
“Intervening at this stage in the litigation would force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict,” the court said.
The Pentagon has maintained it needs complete freedom to use AI tools, particularly during wartime operations. Anthropic has countered that it wants to work with the military but that safety guardrails are essential given the current limitations of AI technology.
Mythos Model Draws Government Interest
Despite the blacklist, the White House is in active discussions with Anthropic about deploying the company’s forthcoming AI model, Mythos Preview, within the federal government. Mythos is designed to identify cybersecurity vulnerabilities, but experts have warned the same capability could provide hackers with a roadmap to attack companies or government systems.
The Office of Management and Budget has already informed federal agencies it is preparing to give them access to Mythos to assess cybersecurity risks.
The standoff carries significance beyond American borders. India, which is developing its own AI governance framework and has sought partnerships with US technology firms, is watching how democratic governments negotiate the balance between military utility and safety constraints on frontier AI systems.
The outcome of Anthropic’s legal challenge and its negotiations with the White House could set precedents for how AI companies worldwide engage with defence establishments.
Your Questions, Answered
Why did the Pentagon blacklist Anthropic?
The Pentagon designated Anthropic a “supply chain risk” after the company refused to grant unrestricted military access to its Claude AI model for autonomous weapons and mass surveillance. This classification, previously used only for companies linked to foreign adversaries, effectively bars Anthropic from government contracts.
What is Anthropic’s Mythos AI model?
Mythos is Anthropic’s forthcoming AI model designed to identify cybersecurity vulnerabilities. While it can help organisations assess security risks, experts warn the same capability could provide hackers with a roadmap to attack systems. The White House is in discussions to gain access to the model.
What did the courts rule in Anthropic’s lawsuit?
A federal judge in California blocked the government from using the supply chain risk designation to punish Anthropic outside the Defence Department. However, the DC Circuit Court of Appeals separately ruled that the Pentagon can cut ties with Anthropic while the legal challenge continues.
Why did Anthropic refuse the Pentagon’s terms for Claude?
Anthropic argued that AI models are not yet reliable enough for use in autonomous weapons systems and that US law does not adequately protect citizens from AI-driven mass surveillance. The company said it wants to work with the military but with appropriate safety constraints.
