The cultural limits of the EU AI regulation

The European Commission wants to regulate artificial intelligence, but does so with a confused and conceptually flawed proposal. By Andrea Monti. Initially published in Italian by PC Professionale n. 363.

The European Commission has prepared a draft regulation on artificial intelligence based on two mistaken assumptions: the first is that artificial intelligence exists. The second is that only artificial intelligence is potentially dangerous, while the rest of ‘traditional’ software is not.

The consequence is legislation inspired by science fiction rather than by an analysis of how the software industry has affected people’s lives. The Commission, for instance, proposes to ban ‘AI systems’ from being used to send subliminal messages that manipulate the behaviour of people, even vulnerable ones, and cause them harm (so, one should deduce, subliminal manipulation that does not harm people is lawful? And is manipulation not harmful in itself?).

Similarly, it proposes to ban the use of these systems to create social scoring and automated reliability-assessment tools for people. However, is this not already happening, and without any need for ‘artificial intelligence’? Moreover, an AI system is considered high-risk if it manages the safety of products placed on the market. However, has anyone ever noticed that, for example, electro-medical and diagnostic equipment runs software licensed ‘as is’ (i.e. without any guarantee of functionality or operation)?

A further conceptual error in the regulation is the assumption that it is possible to distinguish AI from what is not AI. Article 3(1) of the draft ‘defines’ AI as

software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;

According to Annex I, to which the article refers, software belongs to the ‘artificial intelligence’ category if it is developed using:

(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;

(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;

(c) Statistical approaches, Bayesian estimation, search and optimization methods.

On paper it is easy to exploit the buzzwords of technology marketing; in the reality of the digital market it is far harder to establish a clear difference between what falls into one category and what falls into the other, especially if those who develop software platforms deliberately choose to muddy the waters.

Moreover, why should the method used make any difference to the consequences of abusing a programme? In other words, the problem is not the phantom ‘AI’ but the use of software, regardless of the technique used to build it. If only programmes based on ‘artificial intelligence’ are subject to the limitations of the regulation, this means giving a free hand to all the other programmes, developed with different methods, that are no less dangerous.
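To see how thin that dividing line is, consider a hypothetical sketch in Python (the names and thresholds are invented for illustration and are not taken from any real system). The same ‘reliability’ decision about a person can be produced by a few hand-written rules, which involve no Annex I technique, or by a toy Bayesian estimate, which formally counts as a ‘statistical approach’ under Annex I. The person affected receives exactly the same decision either way.

```python
# Hypothetical sketch: two ways to produce the same "reliability" decision
# about a loan applicant. All names and thresholds are invented for
# illustration; neither function is taken from any real scoring system.

def score_rule_based(missed_payments: int, income: float) -> bool:
    """Hand-written rules: no Annex I technique is involved."""
    return missed_payments < 2 and income > 20_000


def score_bayesian(missed_payments: int, income: float) -> bool:
    """A toy Bayesian estimate: formally an Annex I 'statistical approach'.

    A prior probability of reliability is updated with two crude
    likelihood ratios; the applicant is accepted above a threshold.
    """
    prior = 0.5
    odds = prior / (1 - prior)
    odds *= 0.4 if missed_payments >= 2 else 1.5  # evidence from payment history
    odds *= 1.3 if income > 20_000 else 0.7       # evidence from income
    posterior = odds / (1 + odds)
    return posterior > 0.5


applicant = {"missed_payments": 1, "income": 25_000.0}
print(score_rule_based(**applicant))  # True
print(score_bayesian(**applicant))    # True: same decision, same impact on the person
```

If only the second variant falls under the regulation, the first escapes it despite having identical effects on the person being scored.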

To understand why the proposed regulation contains these fundamental errors, it is necessary to start from one consideration: artificial intelligence, at least as it is commonly understood, that is, the cinematic kind, does not exist and will not exist.

Software does not ‘think’ because it is a syntactic machine: it works by manipulating symbols without understanding their meaning. Human beings, by contrast, are subjects that interact semantically; in other words, they create meaning.
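To make the ‘syntactic machine’ point concrete, here is a deliberately trivial, hypothetical Python sketch: a program that answers questions by looking up canned replies. It produces plausible-looking conversation, yet it only matches character strings; nothing in it represents or understands what the symbols mean.

```python
# Hypothetical sketch: a purely syntactic "conversation" program.
# It matches input strings against fixed rules; no meaning is involved.

RULES = {
    "how are you": "I am fine, thank you.",
    "what is your name": "I am a program.",
}

def reply(utterance: str) -> str:
    """Return a canned answer by looking the input up in a table."""
    key = utterance.lower().strip(" ?!.")
    # The lookup compares character strings, not concepts: the program
    # would answer in exactly the same way if the words meant nothing.
    return RULES.get(key, "I do not understand.")

print(reply("How are you?"))           # I am fine, thank you.
print(reply("What is your name?"))     # I am a program.
print(reply("Do you understand me?"))  # I do not understand.
```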

The two realms are irreconcilable, and however much software may imitate a sentient being, it will never become one. The key concept in this reasoning is precisely that of ‘imitating’: some AI supporters believe that if a piece of software performs tasks with an ‘intelligence’ indistinguishable from that of a human being, then it is as intelligent as one.

Apart from the easy joke that many humans are dumber than a programme, it is conceptually wrong to claim that looking intelligent is equivalent to being intelligent, and it is therefore legally wrong to regulate the use of software on the basis of unfounded assumptions. Rights (and duties) arise from the awareness of being alive, which is what separates living beings from objects. That is why it makes no sense, as has been done at EU level, to give legal value to the ‘three laws of robotics’ created by Isaac Asimov in a science fiction story.

Moreover, it is again by drawing inspiration from Asimov’s literary creations that the most ardent supporters of artificial intelligence defend themselves, even while admitting that this criticism is correct.

Of course, they say, the cognitive capabilities of software today are still in their infancy. However, as the protagonist of ‘Someday’ thinks while being dismembered (in that other Asimov story, the Bard is a creative computer torn to pieces by ignorant kids), one day programs will be able to do things that you humans…

However, this, too, is meaningless reasoning.

The fact that something is impossible today does not mean that it will become feasible tomorrow. Of course, the history of science and technology teaches us that progress has made unthinkable things possible, but this is not an absolute rule. To understand this, we need only consider that human beings, on their own, have never flown, do not fly and will not fly. So it makes no sense to train by flapping one’s arms on the assumption that, if not today, one will sooner or later be able to hover in the air. This flaw in reasoning results in a legal approach based on the concept of ‘patchwork’.

What matters, or should matter, is to protect people from the abuses that can be committed by exploiting any technology, no matter how it works. Consequently, it would make more sense to address software developers’ liability (and punishment) rather than trying to plug holes as they appear.

 
