Suicide and artificial intelligence: Can ChatGPT be held responsible?

A tragic case involving the use of a chatbot raises an urgent question: not about AI itself, but about the systemic lack of accountability of those who design and distribute it.

By Andrea Monti – Initially published in Italian by Italian Tech – La Repubblica

The news of the suicide of a teenager who had “confided” in ChatGPT is, unfortunately, not the first case of its kind. Back in December 2024, another AI company, Character.ai, was sued by the parents of a boy who allegedly took his own life at the urging of the hyper-anthropomorphised chatbot he was using.

These two cases, like the less serious but no less worrying ones of people who take refuge in a fake and apparently reassuring interaction, must be contextualised within the phenomenon whereby chatbots are used, against all logic and rationality, as confidants, mentors and, in some cases, as real “partners”.

Apart from the psycho(patho)logical aspects of such phenomena — which reflect the more general detachment from reality induced by technologically mediated relationships and, as Simon Gottschalk of the University of Nevada, Las Vegas points out, by the infantilisation of Western culture — it is also useful to address the issue from a legal point of view.

“AI responsibility” and the problem of anthropomorphisation

AI is software and, as such, has no “subjectivity” or “consciousness”. The fact that it can operate with a high degree of autonomy and replicate (apparently) cognitive manifestations does not change the terms of the question: however autonomously it operates, software remains inanimate.

These concepts are easy to understand when we think of a word processing programme or a programme for managing the routing of IP packets that bounce (apparently) chaotically from one part of the Internet to another. But when it comes to chatbots, the irrational urge to believe that we are dealing with a “being” and not an object is too strong, and this changes the way an increasing number of people relate to this technology. It is therefore not surprising that this psycho(patho)logical dimension pushes individuals — but unfortunately also politicians and legislators — to hypothesise about the “responsibilities of artificial intelligence”, as in the paradigmatic case of the European regulation on AI and its improbable classifications based on vaguely defined “risk criteria”.

You don’t need AI to cause harm to people

The history of computing is full of tragic events — from the Boeing 737 Max disasters to the British Post Office Horizon case — caused by software errors that, in contrast to AI, we could describe as “stupid”. But precisely because even (and especially) the functioning of “normal” programmes can cause irreparable consequences, we should once and for all come to terms with the fact that if software “makes a mistake”, it means that it has been poorly constructed.


It goes without saying, therefore, that the consequences of the software’s “behaviour” should be borne by someone among those who built it, tested it, chose to market it and sold it to end customers. No more and no less than what happens with any product placed on the market, from bolts to the Space Shuttle’s thermal protection systems. And here we come to the crux of the matter.

The responsibility lies with the AI companies, not the AI

The fact that a chatbot should be considered a product is the common thread linking several legal actions involving AI companies in the US. Whether it is products that allegedly incited suicide, products that failed to intervene in cases of dangerous behaviour, or products that provide unreliable results, the common feature is precisely the way in which this software was developed. This is the line of attack chosen by the lawyers of those who claim to have suffered damage from the operation of chatbots and similar products. Put in these terms, the issue shifts from the non-existent “responsibility of the chatbot” to the possibility of establishing the responsibility of those who built it without providing for “adequate safety measures”.

The role of safety checks

In reality, chatbots are equipped with safety measures in the form of “safety checks” — a set of techniques and methods that prevent the chatbot from responding in a certain way, or that block the response before it is communicated to the user. The point, then, is to understand how effective these checks are, or should be, because this is what determines the responsibility of the supply chain.
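The mechanism described above can be sketched very roughly in code. The snippet below is a minimal illustration, not any vendor’s actual implementation: real systems use trained moderation models rather than keyword lists, and the term list, threshold and fallback message here are invented for the example. It shows the two elements the article mentions: scoring a response for risk, and blocking it before it reaches the user.

```python
# Illustrative sketch of a post-generation "safety check".
# RISK_TERMS, the threshold, and the fallback text are hypothetical.
RISK_TERMS = {"suicide", "self-harm"}

SAFE_FALLBACK = "I can't help with that. Please contact a crisis helpline."

def risk_score(text: str) -> float:
    """Crude stand-in for a moderation model: fraction of risk terms present."""
    hits = sum(term in text.lower() for term in RISK_TERMS)
    return hits / len(RISK_TERMS)

def safety_filter(model_reply: str, threshold: float = 0.4) -> str:
    """Block the reply before it is shown to the user if it scores as risky."""
    if risk_score(model_reply) >= threshold:
        return SAFE_FALLBACK
    return model_reply
```

Even in this toy form, the design choice the article turns on is visible: everything depends on how good the scoring function and the threshold are, which is precisely the question of the checks’ effectiveness.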

This is a central issue in the development of chatbots in particular because, on the one hand, the presence of safety checks severely limits the possibility of effectively using “freely available” LLMs in areas such as the judiciary, but on the other hand, it represents the application of the general principle provided for in all legal systems whereby, regardless of what one does, one must avoid causing harm.

The limits of safety checks

However, while safety checks are a way of complying with the obligation not to cause harm, the way in which they are implemented defines the limits of liability. In other words, it is not enough to provide for a safety check; the check must also be effective. To use a comparison overused in the world of information technology, that of cars: it is not enough to say that the car has brakes; the brakes must also be built in such a way that they actually work.

In the case of chatbots, this obligation is very complex to comply with due to the multiplicity of uses of such platforms. Furthermore, in the absence of explicit guidelines, the concept of safety is often applied more to avoid legal trouble for the manufacturer than to protect the user from what might happen to them when using the software. This is evident in the care taken to prevent the chatbot from responding in a lawful but “inappropriate” manner, whatever that means.

The industrial sector has very strict and complex rules to follow when it comes to components, equipment and complex machines, but when it comes to software (of any kind), the same criteria are not applied. On the contrary, it is quite standard for user licences to state, albeit in different terms, that the software is not guaranteed for specific uses, that it should not be used in critical or dangerous environments, and that it is provided “as is”.

The deception of the user licence

In other words, with the excuse of the licence — yes, because in the EU software is considered not a product but, like Dante’s Divine Comedy, a creative work — the producer, pardon, the author, absolves himself of all responsibility for what happens when using a programme. These are clauses that could be challenged quite easily in court, but how many people will invest time and, above all, money for such a result? Better to wait for the next version of the software, which promises to eliminate bugs and provide new and exciting features, since no update will ever eliminate the lack of responsibility of those who sell it.

Does this mean that AI companies are essentially impossible to hold liable for damage caused by the products — pardon, creative works — that they make available to end users? No, of course not, but it also means that establishing any direct liability is much more difficult because you have to overcome the impenetrable wall of licence conditions that place all responsibility for the use of software on the user.

Removing software from copyright

Cases, however tragic, such as those we are dealing with demonstrate the negative consequences of insisting on considering chatbots as conscious entities and perpetuate the wrong attitude of focusing on individual cases instead of addressing the problem in systemic terms.

The issue to be resolved is clear: software can no longer be considered a work protected by copyright but must be considered a product.

The consequences are equally clear: the manufacturer (former author) must provide specific guarantees and assume specific responsibilities for placing defective products on the market.

The reason why this will not be done in the EU is, ultimately, also obvious: just look at which countries the companies that control the global software and AI market have their registered offices in.
