On Tuesday, Anthropic CEO Dario Amodei released a statement aiming to “clarify the facts” regarding the company’s stance on the Trump administration’s AI policies, addressing what he described as “a recent surge in misleading reports about Anthropic’s policy positions.”
“Anthropic was founded on a clear idea: AI should advance humanity, not put it at risk,” Amodei stated. “That involves creating truly valuable products, being transparent about both the dangers and advantages, and collaborating with anyone genuinely committed to getting this right.”
Amodei’s remarks followed criticism last week from AI industry leaders and senior Trump administration officials, including White House AI and crypto czar David Sacks and senior White House policy advisor for AI Sriram Krishnan, who accused Anthropic of fueling alarmism to harm the sector.
The initial criticism came from Sacks after Anthropic co-founder Jack Clark voiced “reasonable fears” about AI, describing it as a powerful, enigmatic, and “somewhat unpredictable” entity rather than a controllable tool that can be easily directed.
Sacks responded: “Anthropic is orchestrating a sophisticated regulatory capture campaign based on spreading fear. It is largely to blame for the current regulatory chaos that’s hurting startups.”
California Senator Scott Wiener, who authored the AI safety bill SB 53, defended Anthropic and criticized President Trump’s “attempt to block states from regulating AI without offering federal safeguards.” Sacks then reiterated his stance, alleging that Anthropic was collaborating with Wiener to “enforce the Left’s approach to AI regulation.”
The debate continued, with opponents of regulation like Groq COO Sunny Madra arguing that Anthropic was “disrupting the entire sector” by supporting even moderate AI safety regulations instead of allowing unrestricted progress.
Amodei’s statement emphasized that addressing AI’s societal effects should be about “policy, not politics,” and expressed his belief that everyone wants the U.S. to maintain its AI leadership while ensuring technology serves Americans. He defended Anthropic’s cooperation with the Trump administration on significant AI policy matters and highlighted instances where he personally worked with the president.
As an example, Amodei referenced Anthropic’s collaboration with the federal government, including providing Claude to federal agencies and a $200 million deal with the Department of Defense (which Amodei referred to as “the Department of War,” echoing Trump’s terminology, though any official name change would require Congress). He also mentioned that Anthropic publicly supported Trump’s AI Action Plan and backed Trump’s initiatives to boost energy supply to “win the AI race.”
Despite these efforts to work together, Anthropic has faced criticism from industry colleagues for diverging from the prevailing Silicon Valley views on certain policy topics.
The company first drew criticism from officials with Silicon Valley connections when it opposed a proposed decade-long prohibition on state-level AI regulation, a measure that faced significant bipartisan resistance.
Many Silicon Valley figures, including OpenAI executives, have argued that allowing states to regulate AI would slow innovation and give China an advantage. Amodei countered that the real danger is the U.S. continuing to supply China’s data centers with advanced Nvidia AI chips, noting that Anthropic limits sales of its AI products to China-affiliated firms even at the expense of revenue.
“There are certain products we refuse to develop and risks we won’t take, even if they could be profitable,” Amodei said.
Anthropic also lost favor with some influential figures after it backed California’s SB 53, a moderate safety law requiring the largest AI companies to disclose their frontier model safety measures. Amodei pointed out that the bill exempts companies with less than $500 million in annual revenue, sparing most startups from excessive requirements.
“Some have claimed we’re trying to undermine the startup ecosystem,” Amodei wrote, referencing Sacks’s comments. “Startups are among our key clients. We support tens of thousands of startups and work with hundreds of accelerators and venture capitalists. Claude is enabling a whole new wave of AI-driven businesses. It would be illogical for us to harm that ecosystem.”
Amodei also noted that the company’s run-rate revenue has grown from $1 billion to $7 billion in the past nine months, all while deploying “AI in a careful and responsible manner.”
“Anthropic is dedicated to constructive participation in public policy discussions. When we agree, we say so; when we disagree, we offer alternatives,” Amodei wrote. “We will continue to be open and direct, and will advocate for the policies we believe are best. The implications of this technology are too significant for us to do otherwise.”