ChatGPT, Regulation and Superstition

The news of the lawsuit against ChatGPT brought by a person “accusing” it of defamation has been widely commented on as yet another example of the “dangerousness of artificial intelligence” and of the “need for rules” to “tame the beast”, implicitly confirming the ‘concerns’ advanced by various public bodies, in Italy and elsewhere. The real news, however, is yet another demonstration of how irrationalism, fideism and ignorance can prevail over facts, history and reason.

by Andrea Monti – Initially published in Italian by Strategikon – an Italian Tech Blog

ChatGPT is not a ‘subject’; it is a tool. It is much less ‘dangerous’ —for contingent and structural reasons— than a search engine. Yes, its use creates problems, but they are problems that go back to the dawn of computing and even further back in time: the responsibility of the producer (yes, “producer,” not “author”) of software, that of the so-called “intermediaries” and, finally, that of platform service providers. To these we could add, where proven in the individual case, the issue of the ‘appropriation’ of data of various kinds and the need to find a legal solution to the sorites paradox.

These are real problems, some of which have been solved by laws and judgments and some of which are still in search of a reasonable solution, but they have nothing to do with the superstitious belief that an LLM-based generative AI is endowed with free will and, as such, capable of being held ‘responsible’ for what it does.

In the case of ChatGPT, it would suffice to read this article by Stephen Wolfram —the creator of Mathematica— to clear the fog and understand that many of the alleged ‘problems’ arising from the use of this software do not exist, and that others should certainly be scaled back. For those who find the mathematics of Wolfram’s article too complex, however, the hypothesis underlying this piece can be demonstrated by following a purely argumentative path.

ChatGPT is not a “person”

First, as I explain in detail in The Digital Rights Delusion, one can fantasize about conferring ‘legal personhood’ on software, and therefore also on an AI, but doing so would be unnecessary, wrong and dangerous.

Technically, this is what legal jargon calls a ‘fictio juris’ —a legal fiction— the device by which, for instance, a corporation is treated as having autonomy similar to that of a natural person.

The concept may seem abstruse —it is— but it can be understood if one thinks, for example, of the national soccer team. There is no such thing as a ‘Mr National Team’, and its members change over time. Still, the National Team always remains the same and, therefore —like a business corporation— can be considered (note: ‘considered’) an autonomous entity.

A legal entity, like its natural counterpart —a human being— can

  • operate autonomously through its own organs (the people who compose it: to return to the example of the National Team, the players, the coach and the staff),
  • be the holder of rights and duties but, above all,
  • bear responsibility —that is, the ability to suffer the consequences of its actions.

In the case of software, no matter how sophisticated and ‘autonomous’, this is simply impossible.

Functioning intelligently does not mean being intelligent, nor being aware of being so. And even if the difference were nullified or became factually irrelevant, this would not allow the status of ‘legal personhood’ to be extended to a machine.

Indeed, legal personhood presupposes the presence of human beings who embody its behaviour. As a side note, it is worth pointing out that even the concept of ‘legal personhood’ is beginning to creak, given the amount of abuse and fraud, and not only in tax matters, that it has allowed to be committed. So much so that its fundamental element (shareholders not answering for the debts of the company) has been progressively, though never sufficiently, weakened. But that is another matter.

Competence does not imply knowledge or awareness

Second, as Daniel Dennett, a philosopher and epistemologist who has been working on these issues for a very long time, explains very well, there is a difference between “competence” and “comprehension.”

The observation may be trivial, but it is no less true for that: living beings —including humans— do not necessarily need to ‘understand’ something in order to ‘learn’ how to do it.

Without going back to the days of repetita juvant, in a great many areas, from the military to sports to the use of tools and machinery, it is a fact that, on the one hand, it is not (immediately) necessary to understand ‘why’ things should be done in a certain way and, on the other, that understanding what one is doing can come later.

Indeed, even without adhering to a mechanistic view of existence, it is undeniable that a machine does not need to ‘understand’ in order to function intelligently. Ergo, the issue of ‘autonomous’ liability is not even relevant as a matter of principle. At most, the role of a machine in a harmful event is only that of an efficient cause, mechanical precisely, but certainly not a voluntary or culpable one —as required by the fundamentals of legal theory.

It should be clear at this point that it is time to stop talking about the ‘responsibilities of AI’ and to start asking what the responsibilities are of those who design, make and use it.


Liability lies with the producer, not the product

The first issue, which Big Tech has no interest in discussing, is the still unresolved question of the liability of software producers.

Unlike other industries, software development is not subject to product liability because programs are regarded like the Divine Comedy: as creative works. Because of that, unlike the manufacturer of a car, a machine or a drug, software developers are not required to protect people’s health, safety and rights. They only need to avoid a (highly unlikely) lawsuit for damages.

If software were a product, it would have to provide adequate security and protection for people —not for data, as we persist in claiming. Therefore, if a program like ChatGPT were an inadequately secure product, it could not be made available to the public.

Injury by Algorithm has been a known issue for at least a decade

The second issue is liability for the results of using the software. The topic of Injury by Algorithm is quite old. In Italy, it was raised as early as 2013, when the Court of Milan ruled that Google was not liable for potentially defamatory errors in the autocomplete feature of its search queries. If this liability was excluded at the time for the search engine, a fortiori it does not exist in the case of ChatGPT, for one contingent reason and one structural reason.

The contingent reason is that the software is still a development version and, as such, its results cannot be taken at face value. The structural reason is that even if ChatGPT were made available in a (tentatively) stable version, its results still could not be relied upon, precisely because of the way a generative AI works.
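
As Wolfram’s article explains at length, a generative model of this kind essentially keeps picking a plausible next token given the text produced so far. The toy sketch below (a deliberately crude illustration with made-up probabilities, not a description of how ChatGPT is actually built) makes the point: the output is a probabilistic continuation of the prompt, not a statement of verified fact.

    # Toy illustration of generative text: sample the next token from a
    # probability distribution conditioned on the current context.
    # The vocabulary and probabilities below are invented for the example.
    import random

    # Hypothetical next-token probabilities after the context "The court ruled"
    next_token_probs = {
        "that": 0.40,
        "against": 0.25,
        "in": 0.20,
        "yesterday": 0.15,
    }

    def sample_next_token(probs: dict[str, float]) -> str:
        """Pick one token at random, weighted by its probability."""
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    context = "The court ruled"
    print(context, sample_next_token(next_token_probs))
    # Different runs can yield different, equally fluent continuations;
    # none of them is an assertion of fact about the world.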

In other words, if software produces results that are not necessarily reliable, those results cannot be ‘taken seriously’, much like the outputs of a search engine, which does not necessarily apply a ‘reliability filter’. We would also have to ask on what basis a provider of services such as search engines or generative AI could be held liable if the cause of the damage is the ignorance or credulity of the user.

Which leads, once again, to the issue of the liability of intermediaries in the automated production of results.

The Gordian Knot is service neutrality

This topic, the third in order, has been under discussion since the days of BBSs, when some argued for the liability of the SysOp, who “could not not know” what other users were doing via the board. That was, more or less, in the late 1980s and early 1990s.

Over time, the debate evolved to involve network access providers, content-sharing services (forums and newsgroups, personal sites, blogs and news outlets), social networks, and other platform services.

A milestone was set, in 2000, by the EU directive on electronic commerce: the intermediary is not responsible for what happens through its system if it remains extraneous to its operation, while it loses this shield the moment it actively intervenes.

The question of OpenAI’s (and not ChatGPT’s) liability —at least from the standpoint of EU law— is purely factual: does the way the service works allow one to argue that the results provided are neutral with respect to the intervention of those who designed and operate it? Depending on the answer, the consequences can differ (even greatly) for the parties involved.

OpenAI does not assume a contractual obligation to provide reliable results, and there is no legal duty to do so. In fact, like any other software, ChatGPT is made available in the factual and legal condition in which it is found or, as the program licenses say, “as is.” One should therefore consider whether the limitations stated on the site about blocking prompts that contain terms likely to produce offensive or dangerous results constitute a waiver of the neutrality of the AI platform’s operation.

The need for a GPL-like license for data

The last issue to consider is the alleged ‘misuse’ of data used for LLM training. It involves the protection of personal, genetic and any other kind of data as susceptible to economic use and thus qualifying as an ‘intangible asset.’

There is no need for a specific rule to attribute this legal protection to data: if there is a market in which someone is willing to sell them and someone else to buy them, data enjoy legal protection regardless of their intrinsic value.

In the case, again, of search engines, it is generally accepted that the company operating the platform is not required to pay for the use of content freely made available by individual users in order to build and deliver the service. Moreover, those who do not want their content indexed have the means to prevent it.

In the case of AI companies, the issue becomes whether the system gave individual users the ability to exclude the use of certain content, with tools similar to Google’s noindex and nofollow. Likely, OpenAI did not make this functionality available, but the preliminary question is whether it should have.
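
For search engines, the opt-out rests on well-established conventions: a robots.txt file, or a robots meta tag such as <meta name="robots" content="noindex, nofollow">, tells compliant crawlers what not to index or follow. The sketch below is purely illustrative: it uses Python’s standard urllib.robotparser and a hypothetical ‘ExampleAIBot’ user agent (the bot name and URLs are placeholders, not anything OpenAI actually operates) to show how a well-behaved crawler, whether building a search index or a training corpus, could check those directives before fetching a page.

    # Minimal sketch: how a compliant crawler could honour robots.txt before
    # fetching a page. "ExampleAIBot" and the URLs are hypothetical placeholders.
    from urllib import robotparser

    # A site owner wishing to opt out might publish rules such as:
    #   User-agent: ExampleAIBot
    #   Disallow: /

    def may_fetch(page_url: str, robots_url: str,
                  user_agent: str = "ExampleAIBot") -> bool:
        """Return True only if the site's robots.txt allows this user agent."""
        parser = robotparser.RobotFileParser()
        parser.set_url(robots_url)
        parser.read()  # downloads and parses robots.txt
        return parser.can_fetch(user_agent, page_url)

    if __name__ == "__main__":
        allowed = may_fetch("https://example.com/some-article",
                            "https://example.com/robots.txt")
        print("crawl allowed:", allowed)

The point of the sketch is only that the technical means for such an opt-out already exist and are trivial to honour; whether OpenAI ought to offer or respect them is the legal and policy question, not a technical one.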

The answer certainly involves legal aspects: OpenAI, unlike Google, is not in a preeminent position in the market (not least because one would first have to understand which market we are talking about) and therefore does not exercise such a market-influencing role as to require special operational limitations.

There are, however, also issues related to Big Tech’s business models, which are increasingly based on the principle of ‘better to ask for forgiveness than for permission’. Is this model still ‘legal’ or, at least, ‘viable’? Should it be banned because it overrides individual rights for the sake of private profit? The answers to these questions are particularly critical when it comes to data-driven research.

We should ask ourselves, for example, whether or not to impose by law some kind of ‘free use’ of data, similar to the GPL license, when the purpose of the processing is to experiment with and apply research results. Companies could make use of somebody else’s data and, in exchange, they would have to make the dataset publicly available to other entities.

Conclusions

Like all technologies, AI raises issues to be addressed in terms of balancing interests and protecting individuals.

Legally, and this is good news, there is little new because many solutions to alleged ‘problems’ are already available. However, and this is bad news, there is also little new in the way the public debate on the subject is conducted.

It is based on ignorance, superficiality and superstition, which lead the learned and the common people alike to worship AI as a new Glycon, the rag snake artfully created by Alexander of Abonuteichus, the charlatan who, at the time of the Antonine plague, promised miraculous healings in the name of the new god. The difference with the past is that, in the case of Glycon, Alexander had to jump through hoops to convince people that they were dealing with a deity, whereas in the case of ChatGPT the audience did it all on their own, and OpenAI certainly cannot be blamed for that.
