

Things over at Anthropic are getting wild.

On Friday, the Trump administration ‘fired’ the woke serial copyright infringer industry disruptor and software-engineer-extinctor after a bruising dispute with the Pentagon came to a head over ethical concerns surrounding Claude’s military use – specifically, domestic surveillance and fully autonomous weapons. The Pentagon demanded to use Claude AI for “any lawful purpose” with no guardrails – and without having to ask permission from a bunch of blue-haired Karens in a life-or-death scenario. The chatbot’s supposedly idealistic leader (whose sister and Anthropic co-founder, Daniela Amodei, is married to Holden Karnofsky, the founder of Effective Altruism himself) had to virtuemaxx for his employees, and said no. OpenAI’s Sam Altman, a different kind of opportunistic sociopath with zero moral qualms, pretended to side with Amodei at first, only to immediately swoop in and poach Anthropic’s Pentagon contract. Meanwhile, Amodei’s investors – who had just dumped the next 20 years of cocaine cash into his company at a $380 billion valuation, and realized they would never see their money again if the government blacklisted the company and banned it from all government supply chains – were terribly vexed.

The spat resulted in three things: first, in addition to getting ‘fired,’ Anthropic was deemed a “supply-chain risk” (making them radioactive to the defense industry), and federal agencies were given six months to ditch Anthropic products. Second, OpenAI’s Sam Altman slid into Hegseth’s DMs (through proper channels, we’re sure) and landed Anthropic’s contract – which was revised to beef up and clarify safety protocols. And third, Anthropic CEO Dario Amodei threw a ripper of a tantrum in a leaked memo sent to over 2,000 employees, attacking the Trump administration and OpenAI.

For Silicon Valley investors and allies, it immediately sank in how absolutely fucked they are if this stands. Now in a PR crisis, Amodei is scrambling to salvage his company’s relationship with the Pentagon (read: the goodwill of his investors) – and has begun last-ditch talks with senior officials in hopes of striking a new deal, FT reports, adding that he’s now personally negotiating with Emil Michael, the Pentagon’s undersecretary of defense for research and engineering, who on Thursday called Amodei a “liar” with a “God complex” after talks with the Pentagon collapsed.

Pentagon Showdown

Anthropic drew several red lines against allowing its technology to power fully autonomous lethal weapons or mass domestic surveillance, arguing that the level of protections the Pentagon wanted would be ineffective, and that the Defense Department’s language was suspicious.

“Near the end of the negotiation the department offered to accept our current terms if we deleted a specific phrase about ‘analysis of bulk acquired data,’” Amodei wrote in a memo to employees. “That was the single line in the contract that exactly matched the scenario we were most worried about. We found that very suspicious.”

Pentagon officials, meanwhile, claim that Anthropic was demanding they ask permission in life-or-death nuclear scenarios, which Anthropic denied.

A defense official said the Pentagon’s technology chief whittled the debate down to a life-and-death nuclear scenario at a meeting last month: If an intercontinental ballistic missile was launched at the United States, could the military use Anthropic’s Claude AI system to help shoot it down?

It’s the kind of situation where technological might and speed could be critical to detection and counterstrike, with the time to make a decision measured in minutes and seconds. Anthropic chief executive Dario Amodei’s answer rankled the Pentagon, according to the official, who characterized the CEO’s reply as: You could call us and we’d work it out.

An Anthropic spokesperson denied Amodei gave that response, calling the account “patently false,” and saying the company has agreed to allow Claude to be used for missile defense. But officials have cited this and another incident involving Claude’s use in the capture of Venezuelan leader Nicolás Maduro as flashpoints in a spiraling standoff between the company and the Pentagon in recent days. The meeting was previously reported by Semafor. –Washington Post

Does the last-ditch effort to save things mean that Anthropic is going to budge on its red lines – perhaps matching whatever OpenAI has stipulated or agreed to?

Memo Meltdown

After OpenAI snaked their contract, Amodei dismissed the rival’s safeguards as little more than “20% real and 80% safety theater” – claiming that OpenAI’s Pentagon deal appears to rely on “safety layers” and monitoring systems intended to block prohibited uses – safeguards he says are easily bypassed.

“Refusals aren’t reliable and jailbreaks are common,” he wrote, adding that AI models cannot reliably determine whether they are being used for surveillance or autonomous weapons because they lack visibility into the real-world context of how their outputs are used.

Amodei also blasted the idea that contractors such as Palantir could enforce restrictions through software filters.

“Our sense was that it was almost entirely safety theater,” he wrote, claiming such tools were designed mainly to placate concerned employees rather than actually prevent abuses.

‘We Haven’t Given Dictator-Style Praise To Trump’

Amodei argued that the real reason the Trump administration is targeting Anthropic has nothing to do with technology or national security.

“The real reasons the DoW and the Trump admin do not like us is that we haven’t donated to Trump… we haven’t given dictator-style praise to Trump… and we have supported AI regulation,” he wrote.

Amodei claimed OpenAI leadership – including president Greg Brockman – had donated heavily to pro-Trump political groups while Anthropic refused to play the same game.

He also accused the Pentagon of coordinating messaging with OpenAI to portray Anthropic as unreasonable in contract negotiations.

“Sam is trying to make it more possible for the admin to punish us by undercutting our public support,” Amodei wrote.

Which, again, raises the question of whether Anthropic is now willing to budge on its red lines.

Investors Alarmed

Needless to say, Anthropic’s investors and partners are freaked out – with backers including Amazon, Nvidia, Lightspeed Venture Partners and Iconiq Capital scrambling to hold urgent talks with the company in recent days as they attempt to defuse the conflict with Washington.

A major technology industry group representing many of these companies sent a letter to Hegseth on Wednesday warning against the Pentagon labeling any AI provider a supply-chain risk amid a procurement dispute.

But what really matters are Anthropic’s investors – both current and, especially, future (after all, someone has to fund those billions in perpetual losses) – many of whom blame Amodei’s confrontational approach for escalating the situation.

“It’s an ego and diplomacy problem,” one person familiar with the talks told Reuters.

Some investors have reportedly reached out to contacts inside the Trump administration in hopes of calming tensions.

Following Trump and Hegseth’s Friday announcement, several agencies began shifting away from the company. The State Department has reportedly moved to OpenAI following an order from the White House to phase out Anthropic systems within six months.

Meanwhile, Anthropic has raised tens of billions of dollars and is widely expected to pursue a public offering. Enterprise customers account for roughly 80% of the company’s revenue, and its projected annual run rate has reportedly surged from about $14 billion to $19 billion in recent weeks (and do we believe this?).
