In 2003, commenting on the proceedings of the “Open-Source Commission” established by the then government, I wrote in the glorious (and alas, now defunct) Linux&C magazine: “We are creating generations of functional illiterates subservient to the uncritical use of a single platform. People are already using systems with no awareness of their actions. Thus, when the spell-checker suggests that ‘democracy’ is not in the dictionary, they will, without question, simply cease to use the word and forget about its existence.” Twenty years on, these words retain extraordinary relevance when applied to the current developments in generative AI, which unfold under the collective gaze of a substantially indifferent populace.

by Andrea Monti – Initially published in Italian by Italian Tech – La Repubblica and in English by Inforrm
Back then, our concern was the loss of control over the knowledge of one’s mother tongue caused by lazy, uncritical reliance on spelling and grammar checkers. Admittedly, those were mere toys compared with today’s technology, but the issue of the appropriation of language, and thus of ideas, by private enterprises remains exactly the same.
Microsoft’s AI, ChatGPT/Dall-E 3, and Stable Diffusion 2.0 are just a few examples of how filtering activities performed during the construction of a generative AI model translate into actions ranging from the uncritical application of rules worthy of the blindest bureaucracy to outright acts of preemptive censorship.
An instance of the former is the refusal to generate images representing copyright-protected content. On more than one occasion, while using OpenAI’s platform (on a paid basis), I was informed that my prompt referred to protected works and thus could not be processed. Unfortunately for OpenAI, my request was entirely legitimate and legal, since I wished to use the images in my digital law course, hence within the scope of the “fair use” permitted even under U.S. law.
To be clear, the point is not to claim the right to violate copyright, but to be able to exercise all legitimate prerogatives granted by law. In other words, if ChatGPT is to be built to respect copyright law, it must do so in its entirety, allowing the exercise of fair use and not just safeguarding the interests of rights holders.
An example of the latter scenario occurred when I asked Dall-E to generate a “head shot” and was challenged for using inappropriate language. Regrettably for OpenAI, “head shot” is a completely legitimate and inoffensive term: it identifies a particular type of portrait, not, as the software’s foolish automated moderation (or the upstream choice of its programmers) deemed, a “shot in the head.”
Of the two scenarios, the latter is the closer to what we hypothesized two decades ago about the impact of spell-checkers, and it is undoubtedly the more perilous: the choice not only to “filter” the data on which a model is trained, thereby conditioning its results, but also to “moderate” the prompts themselves represents an unacceptable preemptive limitation on freedom of expression.
Indeed, these systems can be used to violate laws and rights, and it is beyond dispute that both must be protected, including by prosecuting those who do not comply. However, this cannot happen preemptively, indiscriminately, and, above all, not in relation to content that is not even “illicit” (itself a debatable qualification) but entirely legal, hypocritically labelled “inappropriate” on the basis of “ethical values” imposed by unknown actors or for unclear reasons (except in theocracies, where no distinction exists between ethics and law).
The most disturbing aspect of this preemptive censorship (by default and by design, as data protection experts would say) is that it is practiced not at the behest of states or governments, as in China, for example, but by private companies more concerned with the risks of media criticism, online backlash, and legal action initiated by individuals or by regulators such as the European data protection authorities.
Thus, we are faced with yet another example of how Big Tech has appropriated the right to decide what constitutes a right and how it should be exercised, outside and above any public debate.
This drift towards the systematic compression of constitutionally guaranteed rights is the result of replacing a culture of sanction with one of prohibition. A great achievement of liberal (criminal) law is the concept that human freedom extends to the point of being able to infringe upon the rights of others, but that the culprit must then accept punishment. The law does not “forbid” killing; it punishes those who kill. This is the substantive difference between an ethical-religious imposition that applies “no matter what” and a principle of freedom under which a person must accept the possibility of losing that freedom if they choose not to respect the rules.
Indeed, a “panties-less” generative AI can sometimes embarrass, as Michelangelo’s David did in Japan or as the statues of the Capitoline Museums did during a visit by foreign dignitaries; yet the consequences of using this tool are solely and exclusively the fruit (and responsibility) of the choices made by those who wield it. To apply preemptive justice, and a private form of it at that, is a way to absolve the individual of responsibility and to entrench the notion that respect for rights, especially those of victims, can be enforced by and through a machine, without anyone being able to do anything about it.
“Computer says no,” Carol Beer of Little Britain would invariably reply to her customers’ requests; but nearly twenty years on, what was at the time “merely” a scathing critique of British customs has proved to be an accurate, dystopian prediction of the world we are leaving others to build.