There are many worrisome aspects of AI, but most of them lie in the distant future, or so it seems to me. One immediate concern, however, has burst into the open this week. The essential problem is that the so-called Department of War wants to use AI for surveillance of the general population and, even more troubling, wants to incorporate AI into autonomous weapons that could determine on their own whom to target.
The New York Times reports that the Department of War has been negotiating with Anthropic, a leading AI developer, to use its products in classified weapon systems. Anthropic has agreed to supply the software, as long as it is not used for domestic mass surveillance or autonomous weapons. Pete Hegseth, Secretary of the Department of War, is furious that AI use would be restricted in those ways, and he has given Anthropic until 5 pm today (Friday, March 27, 2026) to drop the restrictions. If it fails to do so, the Department of War may declare Anthropic a “supply chain risk”, which would trigger the cancellation of hundreds of millions of dollars of defense contracts that Anthropic depends on.
Gary Marcus, a long-time critic of AI, has been raising concerns about this situation in his blog. He is worried that AI software is untrustworthy, and so am I. Back in September, I did a little test of AI software and it didn’t go well. I fed Google’s AI software a file of Chester County voter registration data and asked questions about it. The software answered confidently but gave wrong answers. When that was pointed out, it apologized but gave different wrong answers. Ultimately, I decided it was simply ignoring the data I had given it and was giving me plausible text based on the data it was initially trained on. These confident but wrong answers from AI are called “hallucinations” in the industry. The AI companies are very aware of hallucinations but have not been able to keep them from occurring.
Marcus sees potential disaster looming if incompetent officials in the Department of War start insisting that AI software be incorporated into weapons. He writes on his blog that “we are on a collision course to catastrophe. Paraphrasing a button that I used to wear as a teenager, one hallucination could ruin your whole planet.” (The button in question, from around 1980, read “one nuclear bomb could ruin your whole day”.)
Marcus quotes a New Scientist article about a study of AI software from OpenAI, Anthropic, and Google. When used in simulated war games, the software chose to use nuclear weapons in 95 percent of the cases. Do we want this software embedded in our weapons, with no human in the loop when life-or-death decisions are made?
Anthropic isn’t backing down. On Thursday (March 26), Marcus reported that the company had rejected the ultimatum. A statement from Anthropic CEO Dario Amodei said that Anthropic has worked closely with the Department of War on a variety of projects and has cut off the use of its software by Chinese companies linked to the ruling Communist Party, despite having to forgo “several hundred million dollars in revenue”.
Amodei writes, “…in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now….” He goes on to describe the two exceptions: mass domestic surveillance and fully autonomous weapons. On the subject of mass domestic surveillance, Amodei writes, “To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI.” He concludes that “we remain ready to continue our work to support the national security of the United States.”
It goes way beyond Anthropic. Anthropic is raising a very important issue: what limitations should be placed on the use of AI in war? This issue needs to be addressed regardless of how the Anthropic/Department of War dispute turns out. AI, at least in its current form, is an unreliable technology that cannot be trusted in life-or-death situations. The AI companies, including Anthropic, understand this and still have not been able to solve the reliability problem. It may be inherent in the nature of “large language models”, the basis for all commercial AI software.
We have no idea whether it is even possible to make AI software reliable. We must insist that the deployment of AI be slowed down, and it certainly must not be put into autonomous weapons.
Rein in AI. Contact your representatives in DC!
