The artificial intelligence of the virtual prosecutor

On 26 December, the South China Morning Post published an article headlined "Chinese scientists to develop AI 'prosecutor' that can press charges on its own". According to the article, the project, which began in 2015, has now reached the operational stage: the software can support prosecutors in deciding whether to send eight types of crime to trial, including dangerous driving, fraud and gambling. The field of application is therefore restricted, because the crimes that can be analysed are few and the magistrate still has the last word. Nevertheless, there has been no lack of the usual "alarms" about the "robotic judge" and the umpteenth demonstration of how dangerous this "artificial intelligence" can be – by Andrea Monti – Initially published in Italian on Strategikon, an Italian Tech Blog.

It is useless to point out that the uses of this technology are restricted and will not extend beyond a specific boundary due to the intrinsic limitations of the way computers work. The usual counterargument is, "this is how things work today, but what about tomorrow?" This logically flawed argument, a paralogism as the experts would say, has excellent rhetorical power: it is compelling to those who are not experts in a given subject. The fallacy of this approach is apparent: one can wave one's arms all one's life, and have one's children, grandchildren and great-grandchildren do the same, but one will never be able to fly without the help of some technological tool.

At the root of this paralogism lies the confusion between science and science fiction, a confusion that industry exploits and that politicians and media professionals passively endure. It leads to the humanisation of technological tools by attributing to them the characteristics of living beings. It is a script similar to the one we have seen for 'cyberspace', a word which, according to its inventor, the writer William Gibson, means absolutely nothing, yet has become a constant presence in Western legislative policy and legal doctrine. Thus, with the entertainment industry's help, the perception has taken root that sooner or later we will live in scenarios like those of Bicentennial Man or I, Robot (the motion pictures, not Isaac Asimov's stories). This attitude is reflected in the number of articles in which commentators marvel at the "emotional manifestations" of new iterations of Eliza, such as Sophia.

Let us return to the question of the ‘robot judge’ (as many mistakenly call it).

The software relies on natural language processing: it analyses the investigators' reports and the various documents that make up the investigation file, and assesses whether the elements extracted from this analysis match facts punishable under Chinese criminal law. If this is all it does, there is little to worry about.
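To make the point concrete, here is a deliberately naive sketch, in Python, of the kind of element-matching pipeline the article describes. Everything in it is invented for illustration: the offence definitions, the keywords and the matching logic are hypothetical, and the real system presumably relies on trained language models rather than keyword lookup. The structural point, however, stands: the software checks whether facts extracted from the file satisfy the statutory elements of a closed list of offences.

```python
# A purely illustrative sketch of a charging-recommendation pipeline.
# Offence names, statutory elements and the extraction step are all
# invented here; they do not describe the actual Chinese system.

from dataclasses import dataclass


@dataclass
class Offence:
    name: str
    required_elements: set[str]  # statutory elements the file must evidence


# Hypothetical, drastically simplified offence definitions.
OFFENCES = [
    Offence("dangerous driving", {"vehicle", "public road", "excessive speed"}),
    Offence("fraud", {"deception", "property transfer", "intent"}),
    Offence("illegal gambling", {"wager", "organised venue", "profit"}),
]


def extract_elements(file_text: str) -> set[str]:
    """Stand-in for the NLP step: a real system would use a trained model
    to annotate facts in the reports; here it is naive substring matching."""
    known = {e for o in OFFENCES for e in o.required_elements}
    text = file_text.lower()
    return {e for e in known if e in text}


def candidate_charges(file_text: str) -> list[str]:
    """Return the offences whose statutory elements all appear in the file."""
    found = extract_elements(file_text)
    return [o.name for o in OFFENCES if o.required_elements <= found]


if __name__ == "__main__":
    report = ("The suspect drove a vehicle on a public road at excessive "
              "speed while fleeing an organised venue.")
    print(candidate_charges(report))  # -> ['dangerous driving']
```

Note that such a system can only ever recommend charges from the fixed list it was built around, which is precisely why the field of application described in the article is so narrow.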

Assuming that the software works, perhaps aided by standardising the way police write reports and magistrates issue orders, crimes that can be assessed objectively lend themselves to automated handling. Even in our part of the world, detecting speeding, quantifying maintenance payments in divorce, verifying the correctness of a balance sheet in a fraud case, and granting loans are essentially automated. The computer programmes may not be state-of-the-art, but the fact remains that an essential component of the decisions affecting a person's (social) life is entrusted to automated tools.

As for the fears about the 'automatic judge', it is enough to consider that in Italy, in addition to offences detected by speed cameras, even failure to comply with the vaccination requirement is ascertained by software. One receives a complaint and then has to prove that one is not responsible, or had a good reason to disobey the rule. In legal jargon, this is called 'reversal of the burden of proof'; it works for minor violations and tax violations, but not for offences punishable under criminal law.

Therefore, in principle, there is no difference between the instrument developed by the Chinese scientists and those used on this side of the Iron Curtain. It is unthinkable that human operators could handle an enormous number of (potential) violations entirely by hand, so much so that case law has jumped through hoops to uphold the legitimacy of ex post infringement notices, or of notices lacking the signature of the official who detected the violation. Those cases concerned traffic law, but the mechanism is systemic in the legal domain: once a ruling establishes a principle, other decisions can extend it to any other field.

The issue, therefore, is not 'whether' to automate the analysis of evidence. What matters is understanding what actual possibilities the suspect has to exercise the right of defence when the public prosecutor's assessments derive from a software-based analysis of the investigations.

This issue is hardly new: all trials with technical content (tax and financial fraud, medical liability, environmental disasters, computer crimes) rely upon expert opinions and technical assessments. These activities are expensive and managed by consultants who do not always live up to their reputation. Objectively speaking, suspects who can bear the financial burden of their own consultants have a better chance than those who must rely on the experts working for the prosecution. The use of software to analyse the results of investigations does not escape this rule. A suspect has the right to know where the incriminating data came from and, therefore, to ask that the functioning of the 'algorithms' be verified. However, Italian case law has already established that this type of expert examination is not admitted in matters of computer evidence: the suspect must explain exactly 'where' and 'how' the alleged technical error occurred.

However, in the case of software as complex as that discussed in this article, such proof would be practically impossible to provide, and a person would therefore find himself facing trial without any real possibility of defending himself.

The bogeyman, therefore, should not be the artificial intelligence of a virtual prosecutor, but the real intelligence of an actual magistrate, or, more precisely, the lack of it.
