Why we have accepted that software can fail (and why we can no longer afford to)

From aeroplanes to medical care, from algorithms to cars: software errors are everywhere, and without rules on liability the risk will continue to grow. By Andrea Monti – originally published in Italian by Italian Tech – La Repubblica

For once, a fault in software essential to the proper functioning of an aeroplane was detected and corrected before a catastrophe could occur. While passengers on Airbus A320s can continue to fly safely thanks to a preventive technical intervention, those aboard two Boeing 737 Max aircraft in 2018 and 2019 met a tragically different fate, when a malfunction in the flight-control software caused the deaths of 346 people.

When software fails: thirty years of forgotten accidents

These are only the most recent cases involving the aerospace sector, but reports of accidents caused by faulty code, fortunately not fatal ones, go back to at least 1993. To name a few, it is worth recalling the loss of the Mars Polar Lander, the explosion of the Ariane 5 launcher on its maiden flight, and the loss of the Mars Climate Orbiter.

However, “bad software” does not afflict one specific sector: it is a cross-cutting problem for industry as a whole. To mention just a few other cases, in no particular order, it is enough to recall that between 1985 and 1987 a software error (or rather, an error by those who wrote the code and by those who failed to verify that it was safe) exposed patients undergoing radiotherapy with the Therac-25 machine to massive doses of radiation. Or that the first series of the Mercedes A-Class, launched in 1997, failed the moose test and, in addition to mechanical modifications, required intervention on the software governing its electronic stability control. Or that it took more than twenty years for hundreds of British sub-postmasters to be cleared of wrongful accusations of theft and false accounting, generated from 1999 onwards by errors in the Horizon accounting software, which “reported” shortfalls that did not exist.

And since we are proceeding in chronological order, how can we forget the hysteria generated by the design error in how dates were handled, with only two digits for the year, in software and in motherboard BIOSes at the transition from 1999 to 2000 (a year that marked not the beginning of the new millennium but the end of the old one)?

Collective addiction to digital errors

Even the new millennium has not been spared from the damage caused by programming errors. Indeed, the irrational rush towards indiscriminate digitalisation has laid the foundations for a small problem to become a global catastrophe.

We are so accustomed, so anaesthetised, to the fact that software “just about works” that we no longer get too upset when a poorly designed security update paralyses half the world, a superficially designed app renders a smartphone unusable, or an operating system update deletes files stored on a computer.

In short, we have stopped expecting software to work properly; or rather, we never started.

What Kaner and Cooper tell us: “bad software” is not an unexpected event, but a pattern

The problem of “bad software” has been known since the dawn of the information society. In 1998, Professor Cem Kaner wrote a book with that very title, and the following year Alan Cooper, the father of Visual Basic, published The Inmates Are Running the Asylum, in which he documented at length the consequences of failing to take programming seriously.

Nearly thirty years have passed and we are still there; indeed, we find ourselves in an even worse situation.

If we cannot manage stupid software, how will we manage “intelligent” software?

The ease with which AI platforms are being integrated into all kinds of decision-making processes, from judicial decisions to medical diagnosis to the control of industrial machinery, greatly increases the likelihood of ever more widespread and significant damage.

This is not because there is such a thing as “high-risk AI”, as the bureaucratic and useless European regulation calls it, but because if we have not managed to define simple rules establishing the liability of those who develop “stupid” software, it is unlikely that we will manage to do so for “intelligent” software.

We can no longer afford unreliable software

The Airbus bug is not an isolated event, but yet another sign that software has become a critical component of our world without an adequate system of accountability. We have built industrial processes, public services and entire economic sectors on programmes that can fail without anyone really being held accountable.

If we do not untangle this knot, any discussion of artificial intelligence risks being an ideological exercise or an intellectual pastime.

Like any emerging technology, early artificial intelligence is immature, crude and inefficient, and here too, the god of progress demands his toll of sacrificial victims.

However, it is hardly comforting to think that we must take part in a bloodbath only to lose yet more control over fundamental components of our society.