Who is responsible for ChatGPT’s mistakes? A US ruling provides an answer (for now)

A Georgia court has ruled the obvious, namely that results provided by a chatbot should not necessarily be taken seriously. But things are not that simple.

By Andrea Monti – Originally published in Italian on Italian Tech – La Repubblica

On 19 May 2025, the Gwinnett County court in Georgia (USA) issued a ruling which, in a nutshell, establishes that OpenAI is not liable for ChatGPT’s hallucinations.

What happened

Mark Walters, a well-known US radio commentator, brought a lawsuit after being named in an article as responsible for misappropriation of funds from a foundation that supports the Second Amendment to the US Constitution (on the right to bear arms).

The source of the news was a ChatGPT hallucination: the author of the article had used the chatbot to obtain a summary of the lawsuit filed by the foundation against the Washington State Attorney General.

Although the author of the article, an experienced ChatGPT user, knew that the software does not necessarily provide correct results, had seen the warnings displayed by the platform and could see the obvious inconsistencies in some passages of the text, he decided to publish the piece without further checking.

What did the court decide?

First, the court found that, in order to be defamatory, a statement must make assertions about the allegedly defamed person that appear, at first glance, to be true, or that refer to facts in which that person was actually involved.

Furthermore, it specified that the presence of warnings about the potential unreliability of the information produced — as in the case of ChatGPT — requires the user to exercise greater caution before accepting the results as valid.

Thirdly, and this is certainly the most interesting part of the decision, the court condemned the conduct of the journalist (not the same person as the one allegedly defamed), highlighting his negligence both in “forcing the hand” of the software to obtain the result and in disregarding the obvious errors contained in the response.

Why this ruling is important

The principles of law applied by the Georgia court are clear and, in some ways, trivial.

Liability is linked to how the model is constructed

The first is that for defamation to occur, there must be an awareness of damaging someone’s reputation by communicating false information. The results produced by ChatGPT were certainly false, but this does not mean that OpenAI intended to create them as such in order to commit a wrong, or that it trained the model negligently.

This is a fundamental issue because, even if the ruling does not say so, it establishes the limits of liability for those who develop an AI platform.

Much like the liability criteria for internet providers, who are not responsible for the behaviour of their users as long as they remain neutral with respect to their actions, a platform trained without applying “ethical standards” or other forms of intervention on its output can plausibly invoke that neutrality. Conversely, the use of unreliable data, flawed alignment, preventive filtering of prompts or other “safety checks” that alter the output could give rise to (joint) liability on the part of the platform operator, because the operator “intervenes” between the user and the software.

The user must provide proof of the platform’s negligence

In finding against the allegedly defamed party, the court reaffirmed a principle that also applies in Italy: the burden of proof lies with the accuser. In this specific case, the allegedly defamed party provided no evidence of negligent behaviour by OpenAI, and his claim for compensation therefore could not be upheld.

The problem, however, is precisely this: how is it possible to provide such proof without having access to confidential information, which is subject to intellectual property rights and, in any case, can only be understood through expensive expert reports that few can afford?

As things stand, it is highly unlikely that any Big Tech company — or even small start-ups — will be held liable for the damage they cause.

One solution would be to provide for a “reversal of the burden of proof”: if damage occurs, it is the platform service provider who must prove that it did everything possible to prevent it, rather than the victim having to prove the negligence of the platform operator. This principle already applies to dangerous activities such as driving motor vehicles or operating nuclear power plants. So the question is not whether it can be done, but whether it should be done.

The user is responsible for the use of the information provided by the software

The third, equally important point established by the ruling is that the user is not exempt from the duty to critically evaluate the answers provided by the platform — and, more generally, by software.

ChatGPT-like services are openly described as unable to guarantee accurate results, which makes them more akin to an oracle than to a keeper of revealed truth. Yet users persist in relying on these platforms for answers they are not necessarily able to understand.

It can therefore be inferred from the US ruling that a user who asks a question, is unable to understand the answer and nevertheless uses it assumes responsibility for the consequences of that choice, and cannot shift it onto the service.

This point is also, and above all, crucial for the application of the European AI regulation, because it corrects a long-standing imbalance in liability for damage caused by data processing, which treats the user as the “injured party” by default, without taking any negligence on their part into account.

The risks of AI washing

At the same time, however, this principle of law also means that those who produce or market chatbots and similar products must be very careful about how they handle AI washing. Sweeping statements about the role of AI in a product’s functionality, claims about the superior performance guaranteed by artificial intelligence, or other marketing assertions that raise customer expectations can come back like a boomerang when they become the subject of legal action.

It is worth repeating that OpenAI got off lightly because of the way it handled communication with customers, the disclaimers it provided to users and the warnings issued by the chatbot itself. Had it instead implied that the platform produced correct results, the trial would probably have taken a very different course.

A free-for-all for AI manufacturers?

No, this ruling cannot be read as a “free-for-all” but, on the contrary, as a clear indication of how a platform that provides information to users should be developed and put on the market.

Not least because “better to ask for forgiveness than permission” should no longer be an acceptable way of offloading the consequences of one’s industrial strategies onto others.
