Anthropic isn’t exactly a saint. At least, not the saint it would have us believe

For about a week now, the dispute between Anthropic and the US Department of Defence has been dominating international news. Much has been written on the subject, but the legal action announced and subsequently brought by the company provides concrete evidence for understanding the nature of the dispute and, more generally, the current challenges surrounding the use of AI in the military sector (and not only there). Initially published in Italian by La Repubblica – Italian Tech.

Anthropic’s statements

Let us summarise the facts as stated by Anthropic in its ‘complaint’ against the Trump administration.

Fact number 1: Anthropic has developed AI systems based on guidance from national security agencies

Anthropic has also developed specialized “Claude Gov” models tailored specifically for the national security context. These models have been built based on direct feedback from national security agencies to address real-world requirements, like improved handling of classified information, enhanced proficiency in critical languages, and sophisticated analysis of cybersecurity data. Claude Gov models undergo rigorous safety testing consistent with Anthropic’s commitment to responsible AI.

Fact number 2: Claude Gov models are built with fewer restrictions (though how many fewer is unclear) than those available to “normal” customers. Thus, for example, the blocks that, in the commercial versions, prevent the AI from generating attack plans, critical vulnerability analyses or lethal tactical support may have been removed

To make Claude more useful for the military and intelligence components of the federal government, Anthropic does not impose the same restrictions on the military’s use of Claude as it does on civilian customers.

Fact number 3: Anthropic permits the use of its AI to develop weapons, provided there is a human operating them

By its terms, the Policy has always prohibited the use of Anthropic’s services for lethal autonomous warfare without human oversight and surveillance of Americans en masse

Fact number 4: Anthropic permits the creation of mass surveillance programmes targeting anyone, provided they are not American, and targeting American citizens, provided the surveillance is not conducted en masse.

Fact number 5: Anthropic does not rule out that Claude may one day be capable of killing and conducting surveillance autonomously, but it is not certain that the company would want it to do so

Anthropic did not develop Claude (or the specialized Claude Gov models) to deploy lethal autonomous warfare without human oversight. Claude has not been trained or tested for that use. At least at present, Claude is simply not capable of performing such tasks responsibly without human oversight.

Fact number 6: as with autonomous weapons, Claude may also make errors in mass surveillance, to such an extent that it is not (yet) suitable for unsupervised operation

Second, Anthropic is unwilling to agree to Claude’s use for mass surveillance of Americans. … And surveillance conducted using AI poses significantly greater potential to make mistakes—and to amplify the effect of any mistakes—than traditional techniques

What Anthropic’s statements mean

Read in context and translated from legalese into plain language, Anthropic’s statements clearly highlight certain aspects of economic strategy and technological geopolitics that should be given due consideration on our side of the Atlantic.

The following are analytical considerations, not necessarily ‘intended’ by Anthropic, but deducible from the overall picture of the relationship between Big Tech and public authorities.

No protection for non-Americans

Firstly, Anthropic’s ethical concerns relate exclusively to American citizens (whether military or civilian).

Therefore, there is nothing to prevent Claude Gov from being used to develop weapons and surveillance systems, with varying degrees of autonomy, directed against anyone else. This marks the end of the era of innocence for a company founded by former OpenAI executives with the aim of focusing on ‘safety’, and which still holds the legal status of a ‘Benefit Corporation’.

Weakened versions for non-US partners?

If Claude Gov were also to be sold to allied countries, which version would they be permitted to use? Certainly not the one available to US defence and security agencies, and so it is realistic to think that such a choice could create problems in strategic and tactical coordination in the event of joint military operations. The difference in analytical capability, even if we limit ourselves to this aspect alone, would result in operational decisions that partners would essentially have to accept, without having any real say in the matter.

It is not certain that Anthropic intends to develop Claude Gov towards total autonomy

Claude Gov still produces many errors, and for this reason a human operator is required to make it function: in other words, a scapegoat. That the human being tasked with deciding how to use Claude Gov is indeed a scapegoat is demonstrated by a fairly trivial and long-standing consideration: the value of an AI lies in its ability to analyse enormous quantities of data and propose choices; but how is the decision-maker supposed to know which choice is correct? Human involvement in such a process is clearly purely formal, because the operator will have no real ability or opportunity (especially in emergencies) to make a conscious and informed decision.

‘Human in the loop’ is a way of making the government pay for damages caused by malfunctions

From a purely industrial perspective, it seems unlikely that Claude Gov will ever be built to achieve full operational autonomy.

With the presence of a human operator who initiates Claude Gov’s operation, Anthropic absolves itself of legal liability towards the victims of operational errors (or at least reserves the right to involve the government). If, in fact, Claude Gov were to operate in a fully autonomous manner, then it would be up to Anthropic to pay compensation for the damages suffered by the innocent victims of its lethal automation. Conversely, if Claude Gov requires a human being to ‘switch it on’ in order to function, then the responsibility lies with that human being, that is, with the administration to which the scapegoat belongs. In other words, this is not a matter of ethical prudence but of caution aimed at limiting legal risk.

What general lessons can be drawn from the Anthropic case

Regardless of the outcome of this legal action aimed at protecting Anthropic’s legitimate economic interests, it is worth reflecting on some broader issues of interest to Europe.

The first is that the use of AI for lethal purposes by Member States cannot be a taboo. This means that, outside the EU framework (which Article 4 of the EU Treaty permits, since national security remains the sole responsibility of each Member State), individual countries may, if they wish, enter into direct agreements for the development of technologies that incorporate AI into decision-making and weapons platforms. This further implies that, within the scope of these restricted agreements, it would be necessary to develop defence-specific AI, trained on data and information not publicly available.

The second is that, particularly in the defence sector, unless we decide to develop a fully autonomous IT ecosystem (operating systems, development environments, datasets, models and platforms to utilise them), the most we will be able to use is the bare minimum the US is willing to grant us.

In other words, we risk finding ourselves in a position of tactical subordination, reminiscent of that faced by the Italian Navy during the Second World War. The Navy’s use of a weaker version of the Enigma cipher machine, whose traffic was ‘broken’ by the British, produced disastrous results culminating in the defeat at Cape Matapan. Something analogous could happen with AI systems that make too many mistakes and/or are not powerful enough.

The third point is that, with all due respect to the EU regulation on artificial intelligence (or rather, precisely because of that regulation), the obligation to apply the human-in-the-loop principle eliminates, or at least significantly reduces, the legal liability of the Big Tech company concerned. It is hard to see what real incentive there is, within a regulatory framework such as the European one, to adopt systems that are in theory under the operator’s control whilst in reality merely turning the operator into a switch that sets the wheels in motion.

Finally, the fact that other companies in the sector have also decided to side with Anthropic highlights how the issues raised by this specific case are a source of concern for the entire industry. If, in fact, the line advocated by the Trump administration were to ‘prevail’, whereby refusing to comply with the Defence Department’s requests would mean losing the opportunity to work for the military sector, the impact on profits and (above all) on competition would be enormous, and is therefore something the industry wants to avoid at all costs.

The Anthropic case is also extremely useful for Europe because it allows us to look to the future and understand the real problems we will face if we truly wish to utilise AI technologies. The alternative is to carry on as we are doing today, thinking in abstract terms, relying on confused perceptions of the AI phenomenon and thus allowing the rest of the world to move forward without us.