Formally motivated by good intentions, but more likely concerned about avoiding legal action, OpenAI states that it monitors all chats with its customers and informs the authorities in cases of imminent danger to other people
by Andrea Monti – Initially published in Italian by La Repubblica – Italian Tech
The news has been around for a few days (it dates back to a post published on the company’s blog on 26 August 2025) but it had gone almost unnoticed: OpenAI candidly states that its chatbots monitor interactions with users as part of a process that may result in reporting to the authorities. It should be noted, however, that OpenAI only takes this latter step if the chats concern third parties. By contrast, in order to “respect the privacy” of its users, it does not inform the authorities of potential cases of self-harm, limiting itself to activating a series of essentially automated checks.
More specifically, as part of the design of these safety checks, ChatGPT has been programmed to provide friendly, non-aggressive responses, to encourage the user to interrupt overly long sessions (probably to break the immersive effect of the interaction and allow rational functions to regain control of the situation), and to bring the person to their senses by suggesting that they seek help, including through emergency numbers and dedicated websites.
What are the checks for?
The goal of these generalised checks, OpenAI states in the post, is to be “genuinely helpful”.
Translating from the muffled language of institutional communication supervised by lawyers and public relations consultants, the message seems to be: “we have done everything possible”, and therefore we are not to blame if something happens. If we then move to a more formal dimension, the meaning of “genuinely helpful” becomes: “we have taken proactive steps to reduce our exposure to legal actions for damages caused to third parties by customers’ misuse of the platform”.
The scope of responsibility
From a defensive point of view, OpenAI’s choice is perfectly understandable and legitimate.
If a customer decides to use a product or service to harm themselves, that is their own business; but if a customer wants (or is able) to use it to cause damage to third parties, the problem is (also) the manufacturer’s.
In other words, you certainly cannot blame a car manufacturer if someone decides to use a car to crash into a wall. What matters is that the car is free of recognisable defects (i.e., not subject to recall campaigns) and that it has the safety systems required by law (ABS, airbags and other ADAS—advanced driver assistance systems) and therefore does not cause (too much) damage to those who use it in accordance with the purpose for which it was built.
By extension, therefore, if ChatGPT is designed not to cause stress to its users, blocks certain types of requests and provides results aimed at protecting the user, OpenAI cannot go further in judging individual behaviour that goes beyond the purpose for which the chatbot was created.
This reasoning, however, works if the measures taken are actually fit for purpose.
Once again, the problem is the (lack of) responsibility of software developers
Returning to the comparison with motor vehicles, the technical characteristics of a car’s components and its performance are subject to rules and testing before it is put on the road. Consequently, if the product is “compliant” and someone is injured while using it, the manufacturer is not liable.
This is not the case with OpenAI’s safety checks, nor, more generally, with any “off-the-shelf” software or software used to provide platform services. Staying with the automotive comparison: for software there are no state-regulated approvals or technical specifications to be followed by law, nor are there engineers who must sign off on designs and assume the related responsibilities. At every level, when the software malfunctions, the manufacturer (pardon, the author) releases a “fix” and moves on, without anyone paying for the consequences of the defect.
Who pays for the damage caused by software?
In reality, at least from the minimal standpoint of so-called “tort liability”, whoever causes damage must compensate for it, regardless of how the damage occurred. Whether it stems from incompetence in driving a vehicle or from a design and implementation error in a piece of software matters little.
However, this general liability is not stringent enough and, given the pervasiveness of digitalisation, it might be appropriate to apply a more rigorous legal regime to the development of programmes and the provision of services that make them possible. Today, however, this is not yet the case, so software houses and platforms can enjoy a somewhat unclear regulatory situation that negatively affects the rights of each of us.
The systemic consequences of OpenAI’s choice
Whatever the reason that prompted OpenAI to make such a choice, there are two things to keep in mind with regard to responsibility for what happens in the interaction with its chatbot. One directly concerns OpenAI itself, while the other involves — in general — the operators of platform-based services or, as they say, “as-a-service”.
The potentially binding value of official blog posts
Having stated that it carries out these checks with specific limits and purposes, OpenAI assumes contractual responsibility for the effectiveness of the technical solutions adopted. In other words, returning to the car comparison, it is not enough to say “the car has brake assist”: the technology also has to work properly.
Therefore, if the system for preventing acts of self-harm or harm to others does not work, the victims (or their heirs) will have the right to request that it be verified whether this system was indeed fit for purpose.
It matters little that, perhaps, in some convoluted passage of the terms and conditions, these responsibilities are limited or even excluded. Unilateral statements, such as those made by a company through its institutional channels, contribute to determining the overall content of the obligations that the service provider assumes towards the potential customer. It is no coincidence that, for example, advertisements for financial products or drugs contain, albeit written in small print or recited in a hurried voice in radio and television commercials, the indication that, before making a decision, one must read the prospectus or the package insert.
So, returning to the point, one of two things is true: either the OpenAI post does indeed have contractual value, in which case, from the company’s point of view, the remedy is worse than the disease, because it is not certain that the reduction in liability is actually effective; or the post is the result of a well-calibrated defensive legal manoeuvre, in which case the aggrieved user may find that they have little or nothing to claim against OpenAI.
The lawsuit may be won, but user confidence will probably be weakened because there is nothing worse than trusting someone only to discover that you have been deliberately misled into thinking that things were a certain way when in fact they were not.
The problem with checkboxes
Of course, the principle still applies that before signing a contract you should read it and, if you have any doubts, consult a professional. However, given the number of acceptable use policies, terms and conditions, privacy notices and so on that users must accept every time they use a programme or service, it is somewhat unrealistic to think that they can seek individual legal advice. So, every time we tick the box declaring that we have “read and understood” the rules for using a service, what we are really doing is waiving our right to defend ourselves.
Extending control over user activity
From another point of view, OpenAI’s decision lifts the veil of hypocrisy that covers the issue of intermediary liability.
Since the beginning of legal discussions on “sysop liability” thirty years ago, one side has argued that it is “impossible to know” what happens in a system (at the time, we were talking about simple BBSs), while the other has claimed that the machine administrator “could not not know” or, in any case, should have known what users were doing.
A compromise solution came in 2000 with the EU Directive on electronic commerce: service providers are not obliged to monitor users in advance and are only liable for what happens if, for various reasons, they “intervene” in what people do via their platforms or if they become aware — in any way — of illegal activities and fail to take action.
For a long time, and still today, platforms have used these rules to limit their liability by claiming, albeit with less and less conviction, that they are pure intermediaries and that, in order not to violate privacy and freedom of speech, they do not monitor their users.
However, it is an undisputed fact, demonstrated by their profound profiling capabilities, that large platforms have the technological resources to control what happens on their systems. It is equally undisputed, as demonstrated by the personalisation of content, that the intermediary actively intervenes in the users’ use of the service. Consequently, a person who actually possesses these capabilities should not be able to avail themselves of the general exemption from liability allowed by the e-commerce directive.
This explains why, for some time now, the second line of defence has been built around the principle — as mentioned, thirty years old — of “technically impossible”. But while this line has so far withstood attacks from politicians, legislators and judges, it is not certain that it will continue to hold after OpenAI’s public statement.
The end of automatic liability exemptions
OpenAI’s statement effectively closes a historic phase: the one in which platforms could more easily declare themselves neutral, and therefore not liable, by invoking technical impossibility. Those that can filter, report and prevent can also be held legally liable if these tools malfunction, are poorly built or are undersized.
The privatisation of rights and prevention
For some time now, as evidenced by the EU’s surrender formalised in the Digital Services Regulation, Big Tech has assumed the power to decide what a right is and how it should be exercised. Now a further step has been taken, from regulation to prevention: allowing a private entity to decide when, and to what extent, a behaviour should be considered dangerous. In other words, the point is not whether OpenAI, or any platform, reports certain events. The point is who authorised them to do so, based on what criteria and with what public oversight.
Perhaps platforms really do operate “for our own good”. But if what is good is decided unilaterally by a private company, evaluated by an automated system and justified with a blog post, then it is legitimate to ask for whose benefit all this is happening.
Without answers to these questions, being “genuinely helpful” will remain just a slogan, and a rabbit to be cleverly pulled out of a hat in the middle of a courtroom.