The case of Character.AI and the suicide of a 14-year-old in the United States, attributed to interaction with the chatbot, sheds light on the legal consequences of the anthropomorphisation of software

by Andrea Monti – Originally published on Wired.it
Character.AI (the company behind a platform, particularly popular among minors, that allows users to create and interact with highly anthropomorphised chatbots via computer and smartphone), its founders and Google (as the project’s financier) have been sued in civil court in Florida (USA) by the parents of a fourteen-year-old boy who, according to their account, took his own life at the urging of the chatbot he had built.
An analysis of the charges brought against the defendants highlights the dramatic consequences, largely predictable and indeed anticipated but ignored, of three converging phenomena. The first is that of disruptive business models, practised for at least thirty years under the slogan “better to ask for forgiveness than permission”. The second is the transformation of minors into “customers”, also practised for decades, which, with the connivance of politicians and legislators, effectively disregards the rules on the age of majority as a condition for entering into contracts (including online). The third is the software industry’s continued refusal to consider programmes as “products” rather than “creative works”.
The accusations against Character.AI
The boy’s parents base their arguments on four main points. The first is that Character.AI is a poorly designed product and therefore not reasonably safe for ordinary consumers or underage customers. The second is the failure to give parents and children adequate advance warning of the possible mental and physical dangers of using the product.
The third is that Character.AI could not have failed to realise that targeting a customer base of minors would require special precautions, yet the product was deliberately designed and manufactured to function in a deceptive and “hypersexualised” manner. The fourth argument is that neither parent had entered into a contract with Character.AI, which had instead interacted directly with the minor and therefore should not have allowed the use of the service.
Whether and how the defendants will defend themselves remains to be seen, but the available information allows us to make some more general observations about the effects of Big Tech’s hyper-aggressiveness, the consequences of building a business strategy based on loneliness and isolation, and the irrational tendency of people to consider a piece of software “alive” simply because it functions with some degree of autonomy.
The problem of anthropomorphisation
Let’s start with what the family’s lawyers call “Anthropomorphising by Design”, i.e. the deliberate design of a product (or software?) to appear human or show interaction capabilities similar to those of a living being.
This is certainly not a new phenomenon, if we consider that around twenty years ago, Sony gave “life” to the AIBO project, a robot dog that is still on the market today, and that even earlier, in the 1970s, roboticist Mori Masahiro was already reflecting on the limits that should not be exceeded in the construction of human-like robots. Today, the anthropomorphisation of objects is one of the most effective marketing levers in communication strategies for selling entertainment (films) and technology products, as demonstrated most recently by the bold choice to present AI not as “simple” software but as a “sentient being”. An example of the consequences of this approach is the marriage celebrated in 2018 in Japan between a human being and a hologram.
The consequences of anthropomorphisation by design have influenced not only the market but also, unfortunately, the choices of intellectuals, politicians and legislators, who adopt decisions and strategies based on the assumption, false yet irrationally held to be true, of the “humanity” of technology.
The removal of responsibility from human beings
Among the undesirable consequences of anthropomorphising technologies, the elimination of individual responsibility is the most serious. Blaming software for the irrational behaviour of users, behaviour that can in some cases have dramatic consequences, means, on the one hand, failing to give adequate weight to individual problems and to the need to listen to people in difficulty. On the other hand, it means breaking the chain of responsibility that binds a product to its creator, as demonstrated by the mind-boggling debate on the “responsibilities” of self-driving cars.
The elephant in the room, in this discussion, is the legal nature of software, which, against all evidence, continues to be considered a creative work and not, as it should be, a product in its own right.
Is software or a platform a product?
Software houses, and subsequently the industries that have incorporated software into their products, have always taken advantage of the political choice to treat software as a “simple” creative work, despite the fact that no copy of the Divine Comedy states that “this book will function substantially in accordance with the instructions”, that “the contents are not suitable for specific purposes” or that “the work should not be used in critical and life-threatening environments”.
Therefore, despite the facts saying otherwise, keeping software under copyright prevents the application to its “creators” of the rules on product liability, which, in short, prohibit the marketing of objects that are too dangerous for users.
This is precisely the argument used in the lawsuit against Character.AI by the victim’s family: “C.AI is similar to a tangible product for the purposes of product liability law. When installed on a consumer’s device, it has a defined appearance and location and is operated by a series of physical steps and gestures. It is personal and mobile. Downloadable software such as C.AI is a ‘good’ and is therefore subject to the Uniform Commercial Code despite not being tangible. It is not simply an ‘idea’ or ‘information’. The copies of C.AI available to the public are uniform and not customised in any way by the manufacturer.”
Conclusions
The implications of the Character.AI case may have consequences that go far beyond the reasons (more or less well founded, as the court will decide) of the family of the boy who took his own life. First, in very concrete terms, if the idea that a programme is a product rather than a creative work were accepted, the impact on the software industry and on the lives of citizens, businesses and institutions would be enormous. The former would be forced to fundamentally rethink the way the “products” it places on the market are designed and built, and to bear the consequences of strategies not always based on respect for users’ rights. The latter would finally be able to stop suffering unilaterally from industrial choices made essentially on the assumption that “it is the software’s fault” rather than the fault of those who build it.
Secondly, the case forces us to reflect on the social consequences of the spread of anthropomorphic or, worse, anthropomorphised objects that foster interactions in which the machine is placed on the same level as human beings.
Assessing these aspects of the social impact of information technology can no longer be postponed. Big Tech has long made clear the path it intends us to take: the extreme virtualisation of social interactions through the construction of software interfaces which, giving substance to Neal Stephenson’s intuition in In the Beginning… Was the Command Line, risk becoming the only form of contact with a totally virtualised reality, one that is intrinsically fake and, above all, controlled by those who generate it.