From mass surveillance to individual control, the path goes through videogames and exposes the limits of the GDPR

by Andrea Monti – initially published in Italian by Strategikon, an Italian Tech blog

A peculiar feature of the third iteration of Call of Duty: Modern Warfare has gone almost unnoticed by the media: it will increase the use of AI to block – ‘filter’, as marketing experts would euphemistically say – ‘toxic’ conversations. In other words, an AI will analyse what players are saying in real time, and ‘toxic’ language – whatever that means – will be reported to the moderation team.
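To make the mechanism concrete, here is a deliberately naive sketch of the kind of pipeline being described – toy classifier, hypothetical names, no relation whatsoever to Activision’s actual system: each utterance is scored in real time, and anything above a threshold is silently queued for human moderators.

```python
from dataclasses import dataclass, field
from queue import Queue


@dataclass
class ModerationPipeline:
    threshold: float = 0.8
    review_queue: Queue = field(default_factory=Queue)

    def toxicity_score(self, utterance: str) -> float:
        """Toy stand-in for an ML classifier scoring each utterance."""
        toxic_markers = ("die", "idiot")
        return 1.0 if any(m in utterance.lower() for m in toxic_markers) else 0.0

    def process(self, player: str, utterance: str) -> None:
        if self.toxicity_score(utterance) >= self.threshold:
            # No court order, no police report: the line is silently
            # queued for the platform's own moderation team.
            self.review_queue.put((player, utterance))


pipeline = ModerationPipeline()
pipeline.process("player1", "Die, !&?@#!#%&!")
pipeline.process("player2", "nice shot!")
print(pipeline.review_queue.qsize())  # 1 - only the 'toxic' line is escalated
```

The architecture, not the toy classifier, is what matters: detection, qualification and escalation all happen inside the platform, before any external authority is involved.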

The news, however, is not the ubiquitous reliance on AI, this time to catch the ‘criminals’: that is just a technicality. What should be considered is the next phase of the transition from mass surveillance to privatised, individualised control, exercised not only for ‘national security’ but also to protect ‘trivial’ business results, limit reputational damage, minimise claims for damages and, worse, to ‘educate’ users.

As a veteran Doom player, I find it challenging to maintain British composure while playing an FPS shoot-’em-up, letting the shotgun unleash a barrage of pellets against the seemingly invulnerable Minotaur while it fights back with all its might. But that’s just me. So, back to Call of Duty: in the middle of a raid, while setting up our Lachmann Sub (a replica of the H&K MP5) in full auto, or brandishing a TAQ-V (i.e. an FN SCAR) to kill someone before they can return the courtesy, it would not be acceptable to shout ‘Die, !&?@#!#%&!’ (with the player replacing the meaningless characters with the profanity of their choice). To avoid being flagged by the system as ‘toxic language users’, we should stay well below the line set by the (vague) Call of Duty code of conduct. This means addressing an enemy in a more polite manner, such as ‘Dear Sir, I am sorry for any inconvenience this may cause you, but I would greatly appreciate it if you would hasten your departure from this world by allowing me to strike your vital points with my hardware, ejecting tiny pieces of metal at high speed to cause irreparable tearing of your organic tissues.’ It would be as if, during a football match, when asking for the ball from a ‘selfish’ teammate intent on emulating Pelé in Escape to Victory, one had to say, ‘Could you please pass the ball to me so that I can contribute to the development of the play’, rather than the more effective ‘Give me the ball, idiot!’.

This is not the place to return to the age-old debate about the role of certain categories of video games in the unleashing (note: ‘unleashing’, not ‘deterministically causing’) of violent behaviour and their exploitation for military recruitment purposes; and let us leave for another occasion some reflections on the role of aggressiveness in social relations and how to defuse it, for example through sport or other forms of controlled and sublimated management. What matters here and now is to point out how technologically mediated individual control is expanding to the point of invading the most private spheres of the individual – intercepting an idea as soon as it materialises in speech, before it turns into writing or behaviour.

One could justify such a thing by arguing that the moment a person publicly makes a statement that hints at criminal behaviour, he or she should be punished, or at least stopped. If indeed ‘thought is action’, then manifesting an idea is tantamount to putting it into action, and therefore there is no need to wait for the act itself in order to take appropriate countermeasures.

This is the reasoning that implicitly characterises, for example, the EU’s Digital Services Act and copyright regulations. By extending the preventive control obligations of the very large online platforms (VLOPs) to the actions of individuals, the rules in effect delegate to them the power to decide, in an essentially unilateral way, where the concrete boundary lies between freedom of expression and unlawful conduct, replacing the courts. Worse still, this power is exercised over content that is merely ‘inappropriate’ and therefore perfectly legal, even if it offends some people. The power attributed to the VLOPs is to impose a de facto, but not de jure, sanction on whatever is considered (by whom, for whom and according to what parameters?) to fall within this moralising category.

Since the time of the Digest, the collection of legal principles commissioned by Justinian and published in 533, the principle ‘cogitationis poenam nemo patitur’ – no one may be punished for his thoughts – (Digest, 48.19.18) has been part of Western culture, to the extent that, in Italy, it underlies the criteria established by Article 25 of the Constitution and Article 1 of the Penal Code for the attribution of criminal responsibility. However, what seemed an insurmountable limit – punishing the act and not the thought – has long since been largely overcome, not (only) through new legal provisions, but above all through the instrumental and now unstoppable use of an undefined ‘ethics’ that replaces the primacy of the law.

No one has ever clarified in whose name the personal convictions of an individual or a group should become binding on all those who think differently; this is precisely why the law had to be ‘invented’ as an instrument for mediating between different readings of reality. However, the attitude of politicians and legislators has changed radically since information technologies made possible the spontaneous and uncontrollable aggregation of groups of individuals, the dissemination of self-produced content and the (more or less) public expression of personal ideas and creeds. It is hardly worth mentioning that these are the three areas in which the state’s need for self-defence has always been regulated in the past: by imposing limits on the freedom of assembly, control over the means of producing and disseminating printed matter, and the preemptive surveillance of dissidents and political activists.

On paper, especially in the EU sphere, there continues to be a professed devotion to the fundamental rights of the Nice Charter and respect for the rule of law. In reality, not only is whistleblowing legitimised by the ‘trusted flaggers’ of the Digital Services Act, not only does the same act take away from the courts the exclusive power to police freedom of expression, but the European institutions are beginning to seriously consider imposing a ban on end-to-end encryption, i.e. the function that encrypts messages and content before they are sent, making it much more difficult for institutional and non-institutional malicious parties to intercept them. Such a ban would allow an individualised control conceptually similar to the one about to be activated in Call of Duty chats. In other words, the point (and the goal) is always to control a person’s behaviour in advance and to decide unilaterally how to qualify it.
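For readers unfamiliar with the mechanics, here is a minimal sketch – using the PyNaCl library, with illustrative names and no relation to any real messenger – of why end-to-end encryption frustrates in-transit inspection: the content is sealed on the sender’s device, and only the recipient’s private key can open it.

```python
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts BEFORE the message is handed to any network or server.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at the extraction point at 8")

# Whatever sits in the middle (server, interception probe) sees only this:
print(ciphertext.hex()[:48], "...")  # opaque bytes, nothing to 'filter'

# Only Bob, holding his private key, can recover the plaintext.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at the extraction point at 8'
```

Anything between the two endpoints – a server, a lawful-interception probe, a ‘filter’ – handles only opaque bytes, which is exactly what the proposed ban is meant to change.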

For the record, Apple tried to implement such a system, only to backtrack; and in 2022, Google’s automated systems for the preventive analysis of content transmitted through its services led to a criminal investigation against a parent who had sent pictures of his sick child to his paediatrician. In practical terms, banning end-to-end encryption means imposing by law the installation, on every fixed or mobile terminal, of a system for the preventive analysis of content and the reporting of whatever the ‘system’ has classified as illegal or, worse still, simply (and this is where it gets tricky) ‘inappropriate’.
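What that looks like in practice can be sketched in a few lines. Everything below is hypothetical – the toy classifier, the labels, the reporting hook – but the architecture is the point: the scan runs on the device, before encryption, so the protection illustrated above never comes into play.

```python
from typing import Optional

# Hypothetical on-device blocklist standing in for an AI classifier.
BANNED_PATTERNS = {"illegal": ("contraband",), "inappropriate": ("idiot",)}


def classify(text: str) -> Optional[str]:
    """Toy stand-in for the on-device content classifier."""
    lowered = text.lower()
    for label, words in BANNED_PATTERNS.items():
        if any(w in lowered for w in words):
            return label
    return None


def report(text: str, label: str) -> None:
    # Hypothetical reporting hook: a real deployment would call home here.
    print(f"flagged as {label!r}: {text!r}")


def encrypt(text: str) -> bytes:
    return text.encode()  # stub; see the end-to-end sketch above


def transmit(blob: bytes) -> None:
    print(f"sent {len(blob)} opaque bytes")


def send_message(text: str) -> None:
    label = classify(text)   # inspection happens BEFORE encryption
    if label is not None:
        report(text, label)  # and before any judge is involved
        return               # the message never leaves the device
    transmit(encrypt(text))


send_message("Give me the ball, idiot!")        # flagged as 'inappropriate'
send_message("Could you please pass the ball")  # sent
```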

Such an invasion of privacy is simply unthinkable, and there is no need to explain why.
Even with this prospect, one would have expected at least a heated public debate, but nothing of the sort has happened; and even more deafening is the silence of the data protection authorities, who ‘forget’ that the GDPR is not just about ‘privacy’, but about protecting fundamental rights – all fundamental rights – from processing that attacks them.

To put it even more clearly: intercepting conversations in real time, deciding whether their content amounts to a breach of contract, and applying sanctions that can lead to the termination of the relationship is a processing operation that has nothing to do with ‘privacy’, but is no less damaging to personal rights.

It remains to be seen what measures, if any, the national data protection authorities will take once they become aware of this. In the meantime, however, the boiled-frog strategy – plant a stake that marks a boundary, then quietly start moving it – is clear. We start by blocking the spread of ‘inappropriate’ content that has already been published on online platforms, then move on to tackling ‘toxic language’ in video games and, once the practice has become socially accepted, push the line ever further from (genuinely necessary) preventive safety measures towards an unacceptable individualised, ex ante control over what you can and cannot say but, above all, what you can and cannot think.

Not even in his worst nightmares could George Orwell have imagined a dystopia of this magnitude, but, as the Bard warned, ‘there are more things in heaven and earth, Horatio…’.
