Anthropic Investors Engage Officials to Prevent Pentagon Ban on AI Systems
Several investors in artificial intelligence developer Anthropic are working to defuse a growing dispute between the company and the U.S. Department of Defense over limits on military uses of its technology, according to seven people familiar with the matter. The investors are concerned that an escalating conflict could damage the company’s business prospects.
Chief Executive Dario Amodei has discussed the issue in recent days with major investors and partners, including Amazon Chief Executive Andy Jassy, two of the people said. Venture capital firms Lightspeed and Iconiq have also contacted Anthropic executives about the situation, two sources added. Some investors have simultaneously reached out to contacts within the administration of U.S. President Donald Trump in an effort to reduce tensions between the company and the Pentagon.
The discussions are centered on preventing a potential government move to bar Pentagon contractors from using Anthropic’s artificial intelligence systems, the sources said. One person familiar with the situation said Anthropic and the Defense Department continue to hold discussions, though details of those talks were not clear.
The White House has publicly called on Anthropic to assist the government as it phases out the company’s AI systems. Neither the Pentagon nor investors including Amazon responded to requests for comment.
The dispute follows months of disagreement between Anthropic and the Defense Department, which the Trump administration has renamed the Department of War, over how the military may deploy the company’s technology in operational settings. The conflict has become a broader test of how much control AI developers can retain over their systems once those systems are integrated into government and commercial applications.
Pentagon officials have urged AI companies to abandon internal usage restrictions and instead accept a contractual framework allowing any use that complies with U.S. law. Anthropic has refused to remove certain safeguards governing its flagship Claude AI models, maintaining prohibitions against the technology being used to operate autonomous weapons or to support large-scale domestic surveillance programs.
Anthropic was the first major AI developer to handle classified information through a supply agreement routed through its cloud partner Amazon. Last week, rival OpenAI said it had also reached a classified agreement with the Pentagon and added that Anthropic should not be treated as a security risk to the department.
During discussions with Anthropic leadership, investors have reaffirmed their support for the company while urging a negotiated solution with defense officials, the seven people familiar with the talks said. Some investors privately expressed frustration that Amodei’s approach had intensified tensions with the Pentagon rather than easing them.
One person briefed on the discussions described the situation as partly a diplomatic challenge. At the same time, investors acknowledge that Amodei faces internal constraints. Several people familiar with the matter said that if the company appeared to fully concede to administration demands, it could alienate employees and customers who have supported Anthropic partly because of its public stance on AI safety restrictions.
Amodei has not responded to requests for comment. In prior statements, he said the company could not “in good conscience accede” to government demands to remove its safeguards. According to one person who participated in a call with investors late Tuesday, Amodei said Anthropic would continue attempting to find a workable arrangement with the Department of War.
Investors are particularly focused on preventing Anthropic from being designated a “supply-chain risk” by the U.S. government. Such a designation could require federal contractors to discontinue use of the company’s technology, potentially affecting commercial customers that also conduct government work.
Defense Secretary Pete Hegseth has said that a supply-chain risk determination would compel all government contractors to stop using Anthropic’s systems across their operations. Anthropic has publicly challenged that interpretation, stating that Hegseth lacks the statutory authority to prohibit the use of its AI technology outside of direct defense contracts. The Pentagon has not responded to questions about that claim.
Anthropic said last week it would contest any supply-chain risk designation in court.
Even without a formal ban, some investors fear the confrontation could deter potential customers who prefer to avoid conflict with the administration, one person familiar with the matter said.
The dispute comes at a critical stage for the San Francisco-based startup. Anthropic has raised tens of billions of dollars from investors betting on rapid growth in enterprise adoption of its AI systems. The company has previously said enterprise customers account for roughly 80% of its revenue.
Demand for products including its Claude chatbot and the Claude Code programming assistant has expanded rapidly. On Monday, the Claude app ranked as the most downloaded free application in Apple’s App Store, surpassing OpenAI’s ChatGPT.
One person familiar with Anthropic’s finances said the company’s annualized revenue run rate has reached about $19 billion based on current sales, compared with roughly $14 billion only weeks earlier.
Investors say maintaining that growth trajectory is important for the company’s longer-term capital plans. Anthropic is currently allowing employees to sell shares to outside investors in secondary transactions, and the company has previously said no decision has been made regarding a potential initial public offering.
The investor push to calm tensions intensified after several U.S. government agencies began discontinuing use of Anthropic’s technology. Following an order issued by President Trump on Friday directing federal agencies to replace Anthropic systems within six months, the State Department switched to OpenAI’s products, according to people familiar with the change.