Once again, you don’t need AI to harm people, but the EU still doesn’t realise it

Once again, malfunctioning software has caused damage on an international scale.
This is the case of the very recent flaw in a product of CrowdStrike, a well-known cybersecurity company, which, between 18 and 19 July 2024, paralysed machines running the Microsoft Windows operating system.

by Andrea Monti – Initially published in Italian by Italian Tech-La Repubblica

Fortunately, there do not appear to have been any consequences as tragic as those of the Boeing 737 MAX aircraft that crashed due to a software defect, or of the British Post Office case, where sub-postmasters falsely accused by software of having committed crimes went so far as to take their own lives. Still, the impact of this programming error grounded planes and stopped trains, as well as hospitals, subways, newsrooms and television stations in different parts of the world. All this without any fault of ‘high-risk artificial intelligences’ or other ‘sophisticated deep learning-based platforms’. It was, quite simply, the same old, dear old bug that has been buzzing around in programmes all over the world for as long as computers have existed, and that pops up to block their operation when least expected.

This incident, and the fact that such events keep recurring with alarming frequency, should prompt reflection on some issues that are fundamental to the survival of a highly digitised society which increasingly resembles a colossus with feet of clay.

No right to properly developed software
The first and most important issue is the widespread conviction (instrumentally supported by the software industry) that it is fine if computer programs only ‘sort of’ work – as Alan Cooper wrote in The Inmates Are Running the Asylum. Therefore, if something goes wrong, it is considered entirely normal and, even though we pay for the licence, we are not entitled to any particular expectation that the software will perform.

The second issue, directly related to the previous one, is having allowed the belief to spread that software houses are not responsible for the damage they cause. Even in the case of CrowdStrike, in fact, the crisis-management machine is working at full capacity, and the ‘wording’ of the communications is all about the commitment to cooperate with the affected organisations, to provide timely information, and so on. Not a word, however, on a fundamental question: who pays for the damage?

Yeah, who pays?

The irresponsibility of large software houses
In theory, the answer is simple: software houses, too, have a legal obligation not to cause damage with their products. Yet in the annals of case law – certainly in Italian case law – there is no record of a large multinational being found liable for damages caused by outages, malfunctions and programming errors.

It would take a long time to go into the merits of why things are the way they are but, to keep it simple, one can say that much depends on the impossibility for the ‘victim’ to inspect how the software is built in order to verify the existence of the defect, the difficulty of proving that the defect was the sole or main cause of the damage and, last but not least, the costs of such a trial.

The EU’s inertia
The third issue is the EU legislator’s disarming inertia on the one hand and its compulsion to repeat itself on the other.

Instead of applying existing rules and established legal principles, Brussels persists in issuing directives and regulations (such as the one on AI) that do not address the underlying issue: the chain of responsibility in the marketing of digital products and services.

Some concrete proposals
And yet, it would not take much to issue a directive harmonising the criteria for allocating liability in the IT sector, adopting a number of principles to protect the weaker party (i.e. any of us).

One, for instance, could be an obligation on the vendor/producer to prove that it did everything possible to avoid the damage, instead of forcing the ‘victim’ to produce evidence it could hardly obtain.

Another could be the obligation to make the source code at issue available to the parties to the lawsuit and to the judge. Of course, when a piece of software is made up of millions upon millions of lines of code, such a rule would be hard to enforce in practice, but it would in any case act as a deterrent, because a manufacturer would face the loss of secrecy over how its products work.

Another could be compulsory insurance for software producers/vendors, with the insurer obliged to compensate the injured party and then entitled to recourse against the software house. The deterrent nature of such a rule is quite clear and requires no further explanation.

Software (including AI) is a simple business tool and should be treated as such.
These are certainly not the only solutions, nor necessarily the most effective ones, but they are an example of what could be done if there were the political will to tackle the problem in structural terms and, above all, the awareness that even when it comes to software (AI included) we are not talking about science fiction, but about simple, banal, ordinary work tools that must, quite simply, be well designed so as not to do harm.

Forget ‘human rights risk analysis’…
