Social media and ‘addiction’: the Los Angeles ruling raises more questions than it answers

Attributing responsibility for individual behaviour outright to product design can profoundly alter the boundary between technological neutrality and active intervention in users' decision-making processes, with effects that may extend far beyond the individual court case.

by Andrea Monti

The first-instance ruling handed down by the Los Angeles Superior Court in the case brought by ‘KGM’ against Meta and Google creates more problems than it solves because, without prejudice to any liability on the part of the defendants, it reinforces the notion that users are not responsible for their own behaviour.

The dispute, in brief

In short, the Court found that Meta and Google had deliberately designed their services to induce ‘addiction’ in users.

In practical terms, this means that the platforms, on the one hand, designed their content recommendation systems and user-facing features to foster habitual use and, on the other, failed to inform users of the consequences of using the platform and implemented ineffective age-verification systems.

Neutrality and (non-)liability

This decision, along with others issued during this period, highlights a shift in the approach to platforms' liability for users' behaviour and for the harm users suffer as a result of how the platforms are designed.

The mere provision of an online service does not automatically imply liability on the part of the provider for users’ actions. For a platform to be held liable, it must actively influence individual choices.

This rule, established for example by the venerable EU e-commerce directive dating back to 2000, has often allowed Big Tech companies to invoke their technological neutrality and thus claim no liability for what occurred via their platforms.

At the time, indeed, tracking and profiling capabilities were far less effective at influencing users, so for some time there was genuine scope to argue that platforms were not responsible for the consequences arising from their use. However, the refinement of methods for delivering personalised content and products, and the progressive accumulation of vast amounts of data, have made it increasingly difficult to draw a clear line between mere 'marketing activity' and active manipulation of individual behaviour from which specific legal liabilities arise.

Doubts regarding the Californian ruling

In the specific case of the ruling handed down by the Los Angeles Superior Court, this legal principle appears to have been applied without adequately assessing certain aspects that would have required greater consideration.

Firstly, the ruling seems to ignore the responsibility of 'KGM's' parents, who for years left her alone in front of the screen without looking after her and without exercising the supervision they should have. This does not, of course, exclude in principle the service provider's liability for the way in which the services were designed and made available; but the possible presence of defects, or of features deliberately designed to create addiction, does not eliminate parental responsibility.

Secondly, the ruling does not clearly explain how Meta could have known about the girl’s mental health issues and, even if it had discovered them, what it should have done — a common issue in the management of AI platforms.

Thirdly, there does not appear to be any direct evidence linking the way a platform is designed to the effects of those design choices on an individual. In fact, the reasoning appears to have been as follows: social media is designed to be addictive, the girl was addicted to social media, therefore social media is responsible. But even if it were proven that a platform had indeed been developed to generate addiction, is there any evidence that, in this SPECIFIC case, the design choices made by Meta and Alphabet actually influenced KGM's capacity for self-determination?

Statistical liability is the crux of the matter

Regardless of the specific merits of the judgment, the issue that deserves to be addressed is whether to attribute general liability for the way a product is designed, irrespective of whether the specific harm suffered by an individual can be proven.

The issue is not new in case law (there has been talk of ‘probabilistic causality’ for decades) or in the context of corporate social responsibility. Although the decision has been presented as ‘revolutionary’, it actually forms part of a debate spanning more than a decade involving the sugar, alcohol and tobacco industries. In these sectors too, the question has been raised as to whether product design and communication strategies were intended to create addiction in consumers.

To date, the issue remains under debate, but certainly if the Californian approach were to be upheld in subsequent court rulings, the effect would not only be to redefine the boundaries of digital platforms’ liability, but to alter more profoundly the balance between individual autonomy and the industry’s duty of care.

It must be established that software is a product, not a literary work

However counterintuitive it may seem, in the EU — and therefore also in Italy — software is not a product but a creative work: no different, in other words, from a novel, a painting, a photograph or a song. The result of this legal classification is that, unlike in the industrial world, product liability rules do not apply to software, particularly the obligation to guarantee a certain level of safety.

If, to resort to a well-worn historical comparison, software were a car, it would have to be type-approved before being put on the road, demonstrating that it had been designed and built in compliance with all safety and security requirements for its users. This would fundamentally resolve the issues that legal experts, ethicists and technicians have long been grappling with. Yet, whilst in practice and in the courts there is an increasing move in this direction, parliaments do not seem interested in addressing the issue.

It is rather paradoxical that whilst the EU churns out mammoth regulations on data and system security, it remains largely indifferent to the safety of the people who must use – or, rather, endure – these data and systems.