EU rules on artificial intelligence still leave doors open

The latest draft of the AI regulation risks handing the race to China and the United States, leaving European countries behind, and creating surveillance problems. By Andrea Monti – Initially published in Italian by Wired.it

On 11 May 2023, the European Parliament’s Internal Market (IMCO) and Civil Liberties (LIBE) committees approved the “compromise” text of the AI regulation proposed by the European Commission.

The amendments do not affect the conceptual framework of the future legislation, which, like other important measures such as the GDPR, is characterised by an extended application of the precautionary principle and by bureaucratisation, culminating in the creation of yet another “regulatory body”. If this regulation is approved, researchers and companies based in the EU will find it harder to test the effectiveness of theoretical models and turn them into products than their counterparts in countries such as the US, China and Japan, which have already shown they can exploit greater regulatory flexibility.

However, the problem – or rather the problems – of the upcoming regulation lie not in its individual provisions or in the technicalities of legal engineering, but in the underlying assumptions on which it was designed, and in an idea of artificial intelligence more suited to Hollywood screenplays than to reality.

The political discussion about this regulation gained momentum after the global hysteria caused by the free availability of ChatGPT. It does not matter that the “problems” are the same ones that have always affected, for example, search engines, or that they stem from a misguided way of using software that is irrationally equated with a sentient oracle.

Faced with the hype generated by alarms, real or perceived, raised by individuals of various “backgrounds”, it became politically necessary to “do something”, anything, just to ride the wave in pure social media style. And that “something” meant speeding up the decision-making process on an ill-conceived measure that should have been slowed down rather than accelerated.

Haste is always a bad advisor, and in the case of the proposed AI regulation this is particularly true: the underestimation of the technical and political aspects of this technology is apparent throughout, from the overall approach down to the drafting of individual provisions.

Starting from the basics, the first critical point concerns the decision to subordinate science to politics.

The regulation sets out to define what AI is and what it is not, and which techniques are used to build it. This means two things. First, every technological development will have to be tracked to determine whether it belongs on the list of “dangerous software”, leaving a regulatory gap in the meantime, much as happens with new drugs, which (in Italy) only become illegal once they are added to an official “list”. Secondly, the EU Parliament, à la Trofim Lysenko, is arrogating to itself the right to decide by law what science and technology are, or how they should be defined. On this approach, it would not be surprising if, ironically, the speed of light or the gravitational constant were soon regulated as well, with accompanying penalties for going too fast or exceeding the limits of gravity’s strength.

Another critical aspect of the regulation is the choice to treat AI as “dangerous” and regulate it specifically, while leaving all other software, which by contrast we might call “dumb”, in oblivion. Yet everything today runs on this “differently intelligent” software, which is no less dangerous or less capable of causing even catastrophic damage, from a medical misdiagnosis to a plane crash. Does it make sense, then, to regulate a specific technology instead of applying a general principle whereby those who create something that causes harm must face the civil consequences (compensating for the damage) and the criminal ones (serving a sentence)?

The real issue, therefore, is not this or that specific gimmick, but the responsibility of those who develop, produce and sell software, regardless of the technology used to build it.

One would have expected the EU finally to put an end to the legal fiction of treating software like a Shakespeare play, that is, as an artistic work, by granting it the legal status of a “product” and thus extending product-liability protections to those who must use it. That is not the case, even though it would have been a fairly simple way to address the issue of responsibility associated (also) with AI-driven programs.

The regulatory choice has instead leaned towards yet another application of the “precautionary principle” and thus the imposition of the omnipresent (ill-defined and essentially unquantifiable) “risk analysis”. This creates great uncertainty about how AI-based software, hardware and services are supposed to be evaluated. It is not clear what the threshold of acceptable risk should be, how it should be identified, or in relation to which events and subjects. Furthermore, from a strictly legal perspective, even if a risk threshold were identifiable, it would allow “manufacturers” to claim that they should not bear the consequences of damage caused by their “creations” whenever the incident falls within the “expected” scope. In other words: blame the user, as has always been the case in the software world.

This choice is connected to another flaw in the regulation’s approach: the creation of a complex bureaucratic infrastructure involving one or more “competent authorities” and a “supervising authority” responsible for monitoring and (probably) sanctioning certain uses, some of which are allowed and many of which are heavily limited or even prohibited.

The most significant and controversial prohibition concerns AI-based biometric identification, even for police purposes, which, barring unlikely changes, will not be allowed in any member state.

The issue is extremely delicate and complex and goes beyond the traditional and irreconcilable dialectic between proponents of security at all costs and those who invoke privacy as an absolute value.

On the one hand, the European Parliament intends to block the use of surveillance technology perceived as too dangerous for fundamental rights. At the same time, the European Commission is pushing for the adoption of client-side scanning: the imposition of automated “pre-emptive search” systems that inspect smartphones and computers before the content of a communication is encrypted and sent. All this in the name of “protecting minors”, at the cost of an unacceptable, systematic and continuous intrusion into the everyday use of communication systems.

It cannot be said, therefore, that Europe has rejected outright the idea of building a society of control. On the contrary, looking at the project of constructing a huge biometric database to manage entries into the EU, one might conclude that mass profiling and surveillance are by now (politically) accomplished facts within the European Union as well, and that the “sacrifice” of AI-based biometric recognition is little more than a collateral loss in the overall economy of the control system being put in place.

However, and this is the real sore point, it is not even certain that the EU can impose a ban on the use of AI for police activities. Public and national security are areas over which Brussels has no jurisdiction, so it is questionable whether it can even adopt a regulation with direct effect on the legal systems of member states.

There is already a precedent in the field of personal data protection that supports this conclusion: data protection in general is governed by an EU regulation (the famous GDPR), but when it came to police and investigative activities a mere “directive” was issued, i.e. an act that must be transposed into each national legal system by a state law. In other words, legal technicalities aside, there is a concrete risk that the prohibition on using AI in police activities will remain ineffective unless member states decide to intervene autonomously.

This news is, in reality, less negative than it seems. If political decisions about surveillance and security must be made at the national level, it is still possible to fuel a broad public debate that forces the political parties of each country to confront the task of setting limits on people’s rights and to take direct responsibility for that choice, without hiding behind the traditional excuse that “Europe asked for it”.
