The public debate on the use of AI is affected by an ‘ethical drift’ that diminishes the role of democratic confrontation, denies the plurality of political options and leaves the power to make critical choices in the hands of a few.

by Andrea Monti – Initially published in Italian by La Repubblica – Italian Tech

Why should we be concerned about the ‘ethical use’ of AI in warfare while not really questioning – not just on paper, but in its very premises – the use of ‘traditional’ instruments such as white phosphorus bombs, anti-personnel mines (the ban on whose use is largely ignored) or expanding bullets, which cause irreparable damage to the human body rather than ‘merely’ passing through it?
And why should the (real or supposed) autonomy of an AI-controlled weapon system be more dangerous than carpet bombing or dropping a nuclear bomb?
Finally, when it comes to the ethics of AI applied to conflict (and not only), the first question to ask is: whose ethics? And why should the ethics of one philosophical view be preferred to another? But above all, and to get to the point, in the name of what should a state be ‘ethical’ in war?
The definitive words on this subject were spoken in the 19th century by two giants of military doctrine, Carl von Clausewitz and Helmuth von Moltke.
The former wrote in 1832 that “good-hearted people may naturally think that there is an ingenious way of disarming or defeating an enemy without too much bloodshed, and may imagine that this is the true aim of the art of war. Pleasant as it may sound, this is a misconception that must be exposed; war is such a dangerous business that the mistakes that result from kindness are the worst”, while the latter made the same point in 1880 by stating that “the greatest kindness that can be done in war is to bring it to a speedy end”.
If, therefore, the debate on the ethics of international humanitarian law is in general rather worrying, it becomes even more so when it is extended to more or less formally declared conflicts, because it is based on a factually incorrect philosophical assumption: the existence of universal values that would therefore be applicable across and beyond individual legal systems.
This universalism is the same assumption – this time applied in the political sphere – that led to the construction of the artificial category of ‘universal rights’. Both are based on a monotheistic approach, according to which there are only ‘certain’ values and ‘certain’ rights. They therefore have little to do with the democratic dimension that has developed since the French Revolution, in which ethics (and religion) lie outside the state, and in which rules – laws – are the result of the political mediation of conflicting values, adapted to the spirit of the times and to changing social conditions (and beliefs) through the activity of the judiciary.
On the contrary, the proclamation of absolute ‘values’ and ‘rights’ is a geographical, temporal and cultural peculiarity that does not belong to a secular – and above all pragmatic – view of reality. Whether we like it or not, choices based on political realism have, since the days of Grotius and then Kant, pushed aside those inspired by some form of higher law – even of an ethical nature – conceived as a limit on state action.
US strategic doctrine has coined a specific term to describe this approach: lawfare – the use of law as a weapon in the geopolitical arsenal, although the formalisation of the argument actually dates back to the time of Thucydides.
In the section of The Peloponnesian War devoted to the escalation of tensions between Athens and Sparta, the dialogue between the Athenian ambassadors and the Melians stands out. The former applied in no uncertain terms what Otto von Bismarck would later call Machtpolitik – the politics of force – while the latter appealed to the reasons of law; and this is how it went. “We demand instead,” said the Athenian ambassadors – while the fleet that had accompanied them on their ‘diplomatic’ mission did not exactly inspire feelings of brotherhood and democracy – “that you do what is possible according to the real conviction that each of us has, for we are certain before you, informed persons, that in human considerations law is recognised as the result of equal necessity for both sides, while those who are stronger do what they can and those who are weaker yield. … For we believe that by the law of nature he who is stronger commands: whether it be by divinity, we believe it out of conviction; whether it be by man, we believe it because it is evident. And we make use of this law without having first established it ourselves, but because we have received it already existing, and we shall leave it in force for all eternity, certain that you and others would have behaved in the same way if you had found yourselves masters of our own power.”
Now, coming back to the warlike (but not only warlike) use of AI, and to conclude the argument, it is quite clear that no state can accept, or run the risk of, finding itself in a position of strategic and tactical inferiority, and that each therefore seeks to maximise its offensive and defensive apparatus (assuming the difference exists, but that is another matter). And it is equally clear, as the choices made in the field of nuclear deterrence show, what misunderstanding underlies the whole discourse on the ethics of AI: it is not the technologies, but those who control them, that must impose (or be subjected to) limits on their use.
Applying the same reasoning as the proponents of AI ethics to nuclear weapons would lead to the paradox of demanding that, next time, Little Boy (the bomb dropped on Hiroshima) live up to its nickname and cause massacres, yes, but not too many – even though this objective is somewhat incompatible with the principle naively established in the Hague Declaration of 1899, in which states agreed that “the only legitimate object which states should seek to achieve in war is to weaken the military forces of the enemy; that for this purpose it is sufficient to disable the greatest possible number of men; that this object would be exceeded by the use of weapons which uselessly aggravate the sufferings of disabled men, or render their death inevitable; that the use of such weapons would therefore be contrary to the laws of humanity”.
The real issue, then, is not the development of a technology, but the individual responsibility of those who have the de facto or de jure power to control it, and the role of citizens as guardians of power.
To think of constructing an ‘ethics of AI’ – whatever that might be – is to open the way to the abdication of human, civil and political responsibility for decisions that may dramatically and irreparably affect the lives of each of us and society as we (still) know it. Or, worse, it means entrusting the choice to a small number of people, with no accountability and no public debate, because whatever happens will be the fault of the computer – or rather, of the artificial intelligence.