The American company refuses to allow its technologies to be used by the military for fully autonomous systems. But the real problem is not an ethical one: it concerns the reliability of artificial intelligence in warfare. By Andrea Monti – initially published in Italian by La Repubblica-Italian Tech
The news on 9 March 2026 that Anthropic had decided to take legal action against the US Department of War marks a new phase in the controversy with the Trump administration.
The subject of the dispute is Anthropic’s refusal to allow its AI to be used to develop mass surveillance systems and weapons with varying levels of autonomy, following which the US administration excluded Big Tech from the most critical (and therefore profitable) sectors of military contracts.
Protection of rights or negotiation strategy?
The decision to announce an intention to go to court, rather than filing suit first and making the news public afterwards, seems more like an attempt to induce the administration to back down than the manifestation of a real desire for justice.
One clue supporting this hypothesis is the convoluted reasoning on which Anthropic intends to build its case: the violation of freedom of expression. According to the company, exclusion from military contracts would be a “retaliation” that amounts to an undue restriction of its right to freely express its thoughts.
If the case does go ahead, it will be up to the judges to assess the issue. Nevertheless, such a decision demonstrates once again how fundamental rights have become merely a tool for protecting economic interests that have nothing to do with the purpose for which they were conceived. From an instrument for protecting human beings, they have turned into part of the arsenal for negotiating economic interests.
However, the latest news leaves unchanged the three fundamental issues that emerge from the statement by Anthropic CEO Dario Amodei published on the company’s website.
When Big Tech negotiates on equal terms with governments
The first concerns the issue (certainly not new, given that we discussed it in these columns back in 2021) of what has been called technological neo-medievalism, i.e. the political context in which the state is no longer the sole holder of sovereign power, but must share it with Big Tech.
This has already happened with Apple, which refused to design iOS in such a way as to facilitate judicial investigations; with Starlink, which, after being tested in the Ukrainian conflict, received an expression of interest from Japan; with Meta, which has become a defence contractor; and with OpenAI, which has prohibited the military from using its technology to create mass surveillance systems.
The common thread linking these events is not the merits of the choices (some companies have opted to support military activities, others have imposed restrictions) but the fact that they have negotiated (or want to negotiate) on equal terms with the executive powers, not on issues relating to costs and supplies, but on aspects involving the strategic sovereignty of a state.
Mass surveillance between rhetoric and security
The issue becomes clearer when addressed in relation to the specific case.
In negotiations with the Department of War, both Anthropic and OpenAI stated that they would not allow their technologies to be used to build systems for the mass surveillance of American citizens (a restriction that does not extend to the citizens of other countries).
Given the widespread global sensitivity to “privacy protection”, such a choice is certainly advantageous from a marketing perspective and in terms of exploiting the rhetoric of “digital rights” to support the “ethical” positioning of products and services. In reality, however, this only reaffirms Big Tech’s positioning as an entity intent on deciding what is and is not a right, and on dictating the agenda to executive powers.
To be clear, on the one hand, OpenAI and Anthropic’s “ethical commitment” to fundamental rights stops at national borders and therefore does not apply, for example, to citizens of EU Member States who use their services. On the other hand, the fact that the military wants to encroach on areas reserved for the police and the judiciary with internal surveillance projects is certainly worrying. But deciding whether to allow the military, in addition to the police, prosecutors and secret services, to manage internal surveillance programmes is, or should be, the task of the judiciary or parliament, certainly not of a private company.
Furthermore, as a side note, it should be considered that if Anthropic’s problem is the nature of the client, hybrid entities such as the National Security Agency (of which the military is a structural part) would in any case give the military access to specific surveillance technologies, even if payment for the service came from a bank account other than the Pentagon’s.
War and the myth of technological ethics
Another argument Anthropic used to refuse the use of its technologies in the military sphere was opposition to the military use of AI, on grounds that go well beyond the debate on the need for ethics in autonomous weapons.
Let’s start with the basics.
As disturbing as the concept may be, there is little doubt that wars are won by killing more enemies than “our own” fall in battle. Just as there is no doubt that war requires weapons, and that weapons are used to kill.
Furthermore, it is quite clear that prevailing in a conflict requires (among other things) having more lethal weapons than your opponent. This means that the arms industry aims to build increasingly lethal weapons for actual use or, as in the case of the atomic bomb and bacteriological weapons, for use as a deterrent.
Finally, as demonstrated by the non-adherence of some countries to the treaty banning anti-personnel mines, or by their failure to comply with it, even the “ethical” rejection of certain weapons is not an absolute value. As already mentioned, the definitive words on the subject were spoken long before today’s debates, in the 19th century, by two giants of military doctrine, Carl von Clausewitz and Helmuth von Moltke.
The former wrote in 1832 that “people of good heart might naturally think that there is an ingenious way to disarm or defeat an enemy without too much bloodshed, and might imagine that this is the true goal of the art of war. However pleasant this may seem, it is a mistaken belief that must be exposed: war is such a dangerous activity that mistakes arising from kindness are the worst.” The latter, in 1880, added fuel to the fire by stating that “the greatest kindness one can do in war is to end it quickly.”
The conclusion is quite clear: if AI allows us to create weapons that are more lethal than those of the “enemy”, there is no reason in principle to avoid using it, as long as its effects are confined to the adversary and spare those who use it.
This consideration introduces the third theme that emerged from Anthropic’s statement.
The problem with AI in warfare is not ethics but reliability
According to Amodei, the real issue to be resolved regarding the use of AI in the construction of autonomous weapons is its current (and actual) degree of reliability.
“Partially autonomous weapons, like those used today in Ukraine,” writes Amodei, “are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. […] fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today.” (emphasis added)
Therefore, Anthropic’s position on the military use of AI, although not stated explicitly in these terms, could be interpreted as follows: we are not opposed to the use of our software to create fully autonomous weapons, provided there is a guarantee that these weapons pose no risk to those who use them (no such guarantee being required for those who suffer their effects).
Put differently, and in line with a philosophy of war based on political realism, what matters is that autonomous weapons are safe for those who “press the button”, and it matters little if civilians are among the targets, because in war there are always casualties or collateral damage. As long as they are not “our” civilians, but those of the other side.
Regulate machines or political choices?
The (almost) naive sincerity of Amodei’s words once again highlights the structural error that parliaments and governments around the world systematically fall into: thinking that they need to regulate the tools instead of the consequences of their use.
Agreements on the control of nuclear technologies and those on the absolute ban on the development of bacteriological weapons were motivated not by “ethical” or “technological” issues but by the consideration that the lethality of their use would guarantee mutually assured destruction or, in any case, uncontrollable outcomes. Conversely, therefore, anything that does not reach this threshold is “freely” usable, including autonomous weapons controlled by AI.
Reframed in these terms, the general question of the military use of AI becomes much more understandable and can be summarised in three very simple questions: how lethal should the weapons in our arsenal be; must we design them in such a way that we can control their operation; and how much should we worry about the fact that they can kill non-combatant civilians, pardon, cause collateral damage.
