AI Safety Meets the War Machine

When Anthropic last year became the first major AI company approved by the US government for classified use, including military applications, the news did not make a big splash. But this week a second development hit like a cannonball: The Pentagon is reconsidering its relationship with the company, including a $200 million contract, apparently because the safety-conscious AI company objects to participating in certain lethal operations. The so-called Department of War may even designate Anthropic a “supply chain risk,” a scarlet letter normally reserved for companies that do business with countries under federal investigation, such as China, which would mean the Pentagon would not do business with companies that use Anthropic's AI in their defense work.

In a statement to WIRED, Pentagon spokesman Sean Parnell confirmed that Anthropic was in the hot seat. “Our nation requires that our partners be prepared to help our warfighters win in any battle. Ultimately, this is about our troops and the security of the American people,” he said. The move is also a message to other companies, including OpenAI, xAI, and Google, which currently have Department of Defense contracts for unclassified work and are jumping through the necessary hoops to obtain their own high-level clearances.

There is plenty to unpack here. For one thing, there's the question of whether Anthropic is being punished for complaining that its AI model Claude was used as part of the coup to depose Venezuela's President Nicolás Maduro (that is what has been reported; the company denies it). There's also the fact that Anthropic publicly supports AI regulation, an outlier stance in the industry and one at odds with the administration's policies. But there is a bigger, more disturbing issue at play: Will government requirements for military use make AI even less safe?

Researchers and executives believe that AI is the most powerful technology ever invented. Almost every current AI company was founded on the premise that it is possible to achieve AGI, or superintelligence, in a way that avoids widespread harm. Elon Musk, the founder of xAI, was once among the loudest voices warning of AI's dangers; he co-founded OpenAI because he worried the technology was too dangerous to be left in the hands of profit-seeking companies.

Anthropic has carved out a space as the most safety-conscious of them all. The company's mission is to have guardrails so deeply integrated into its models that bad actors cannot exploit AI's darkest potential. Isaac Asimov said it first and best in his laws of robotics: A robot may not injure a human or, through inaction, allow a human to come to harm. Even if AI becomes smarter than every human on Earth, an eventuality in which AI leaders fervently believe, those guardrails must hold.

So it seems contradictory that leading AI labs are vying to get their products into advanced military and intelligence operations. As the first major lab with a classified contract, Anthropic gives the government a “custom set of Claude Gov models built exclusively for US national security customers.” Anthropic says it did so without violating its own safety standards, including a ban on using Claude to manufacture or design weapons. Anthropic CEO Dario Amodei has specifically said he doesn't want Claude involved in autonomous weapons or government surveillance. But that stance doesn't fly with the current administration. Department of Defense CTO Emil Michael (formerly an executive at Uber) told reporters this week that the government will not tolerate an AI company limiting how the military uses AI in its weapons. “If there's a swarm of drones coming at a military base, what are your options to take it down? If the human response time isn't fast enough … how do you go about it?” he asked rhetorically. So much for the first law of robotics.

There is a good argument that effective national security requires the best technology from the most innovative companies. And although some tech companies flinched at working with the Pentagon even a few years ago, in 2026 they are generally flag-waving military contractors. I have yet to hear an AI executive talk up their models' involvement in lethal force, but Palantir CEO Alex Karp is not shy about saying, with apparent pride, “Our product is used on occasion to kill people.”
