The Hypocrisy of Predictive Justice

Mountain View’s algorithms reported a father who sent photos of his son’s inflamed genitals to the child’s doctor. The charge: child pornography. The case confirms the idiocy of algorithms and legislators alike. By Andrea Monti – initially published in Italian on Strategikon, an Italian Tech blog.

According to the New York Times, Google reported to an anti-child-exploitation hotline a photo of a child that had passed through its systems, sent by the father to a doctor for a consultation. In addition to the report, Google reportedly deactivated the parent’s account and refused to review the case despite receiving explanations and clarifications. Inevitably, outraged reactions came from many sides at what looks to everyone like an abuse without any justification.

Google’s defence rests on two arguments. The first: yes, we use “artificial intelligence”, but human beings are also involved in the decision. The second: the law requires us to make these reports. Both defences, however, are rather fragile.

Why Google’s arguments don’t hold up

The first argument fails because the use of automated systems in decision-making strips the human operator of his role. How often, faced with a report, will the reviewing officer take responsibility for discarding it when “someone” – or rather, “something” – above him has already decided that a piece of content is ugly, immoral or against the law? The question does not concern only this case but all those in which, irresponsibly, one wants to introduce an automatic “policeman” – or worse, an automatic “judge”. Especially when the number of cases to review is high, human intervention is unlikely to make a difference.

This brings us to the weakness of the second argument. Does American law require reporting “neutral” images – such as the one sent to the paediatrician – that are not clearly qualifiable as child pornography? It does not.

Excluding clear cases in which the meaning of an image admits no doubt, in many others it is the eye of the beholder that semantically connotes the content. The image of a dissected body, for example, can seem gruesome if exchanged in a group of necrophiles, or expected, in all its drama, if shared between coroners performing an autopsy.

The senselessness of a legal automatism

Many other examples could be given, but the concept is clear: there can be no legal automatism between the nature of an image and the obligation to report it. In fact, the reference provision, Section 2258A of Title 18 of the US Code, does not impose an active duty to search for illegal content but only a duty to report to the CyberTipline, verbatim, “after obtaining actual knowledge” of the facts.

In other words, the provider is not required to actively search for illegal content, and before proceeding with any report the lawfulness or otherwise of the content must be assessed. This implies the presence not so much of a human operator as of someone capable of performing comprehensive assessments, including legal ones.

Furthermore, the National Center for Missing & Exploited Children and the CyberTipline (the latter mentioned by the law) are neither government bodies nor judicial offices. As stated on its website, its legal status is that of “a private, non-profit 501(c)(3) corporation” partially funded by the US Department of Justice. The latter, though, is not responsible for “this Web site (including, without limitation, its content, technical infrastructure, and policies, and any services or tools provided)”.

A US-funded pre-court

With a Pontius Pilate-like choice, the American public authorities finance a sort of private pre-court but take no responsibility for its actions. This public policy approach is diametrically opposed to Directive 2000/31/EC – transposed in Italy by Legislative Decree 70/2003 – which requires providers to immediately and directly involve the judiciary.

Unfortunately, policies are also changing in the EU.

The American approach of using private intermediaries to combat illegal content has also been embraced by the European Union, which has gone even further. If US legislation has created a sort of pre-court, EU legislation pushes for the creation of a pre-police system: the “trusted flagger”.

It is a reporting system – in the original meaning of the term – under which private entities are, in essence, authorized to take the place of the judiciary and the police in searching for illegal content and reporting it to platforms and providers.

Things get worse because of the interest, on both sides of the ocean, in circumventing end-to-end encryption (the encryption of content on one’s own device before sending it) through the generalized adoption of client-side/cloud-side scanning, i.e. the systematic and generalized search of what is stored both on the individual device and in “cloud” backups. Some time ago, Apple announced such a project; it was suspended due to the controversy but not definitively abandoned.
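
To make the mechanism concrete, here is a minimal sketch of what client-side scanning amounts to, under simplifying assumptions of mine: content is hashed and checked against a blocklist of known illegal material before it is encrypted and uploaded. The function names, the blocklist entry and the use of a plain cryptographic hash are illustrative; deployed proposals rely on perceptual hashing, which matches visually similar images and is precisely where misclassification can creep in.

    import hashlib

    # Hypothetical blocklist of digests of known illegal images (placeholder value).
    # Real systems use databases of perceptual hashes, not exact SHA-256 digests.
    BLOCKLIST = {"0" * 64}

    def scan_before_upload(image_bytes: bytes) -> bool:
        """Return True if the content may be uploaded, False if it is flagged.

        The check runs on the client, before end-to-end encryption is applied,
        which is exactly why this architecture sidesteps E2E guarantees.
        """
        digest = hashlib.sha256(image_bytes).hexdigest()
        if digest in BLOCKLIST:
            report_to_provider(digest)  # hypothetical reporting hook
            return False
        return True

    def report_to_provider(digest: str) -> None:
        # Placeholder: in a deployed system this would notify the provider,
        # which may in turn forward a report to the CyberTipline.
        print(f"flagged content with digest {digest}")

Note that hash matching of this kind can only recognize already known material; flagging newly taken photos, as in the case described above, requires classifiers that guess at the nature of previously unseen content, which is where the risk of error grows.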

Needless to say, the privacy zealots have now risen up again – not without some merit. However, they fail to consider the impact of the structural changes caused by the uncontrolled spread of technologies and services under the exclusive control of private companies.

The criminogenic downward spread of knowledge and technology

As I wrote in another article, the downward spread of knowledge and technology is inherently criminogenic, and the state – the states – does not have enough resources to prosecute, if not all, at least a significant portion of the offences committed daily. Like it or not, the use of automated systems is pragmatically inevitable, not because they are “better” or “infallible” but because there are no alternatives. You need something that does the rough work, which, by definition, cannot be refined. And when these systems get it wrong, as they say in Rome, a chi tocca nun se ’ngrugna – when your turn comes, don’t complain.

However, if the assessment of individual responsibilities really must be automated, then precise and effective counterweights should be established against those who get it wrong.

Especially when shameful crimes are involved, it is not tolerable that the negligence of some operator in some remote data centre or, worse, of the author of the “artificial intelligence algorithms” should go unpunished. A minimal statistical error may be acceptable from the point of view of the numbers; in criminal trials, however, accepting that error means destroying the lives of innocent people.
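
A back-of-the-envelope calculation shows why a “minimal” error rate is anything but minimal at platform scale. The volume and error rate below are assumptions chosen for illustration, not figures published by Google or NCMEC:

    # Illustrative, assumed numbers - not actual platform statistics.
    images_scanned_per_year = 10_000_000_000  # assumed volume of scanned images
    false_positive_rate = 0.0001              # assumed 0.01% error rate

    wrongly_flagged = images_scanned_per_year * false_positive_rate
    print(f"Wrongly flagged items per year: {wrongly_flagged:,.0f}")
    # Prints 1,000,000: an error rate that looks negligible on paper still
    # translates into an enormous number of innocent people reported.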

It is inevitable that this will happen, because perfection is not of this world and there are already too many judicial errors without the need for some abstruse automatism. But it is not the machine that is frightening; it is the hypocrisy of human beings who have found a way to unload the weight of the responsibility of choice onto the shoulders of software – software to which, on top of everything, they would even grant legal personhood.

Long live Carol Beer and her “Computer says no”!
