The EU's contradictions on cryptography

Two recent leaks reveal the European Union's choices on cryptography. Child protection and national security are invoked to justify the impossible mission of ensuring security through weakened encryption. By Andrea Monti, Adjunct Professor of Public Order and Security Law at the University of Chieti-Pescara. Originally published in Italian by Formiche.net

Two very recent leaks, one published by Politico and the other by StateWatch, suggest that Europe is considering restricting the use of cryptography in the private sector and/or seeking ways to circumvent it. In particular, as the analysis by the American Electronic Frontier Foundation points out, the idea would be to block end-to-end encryption (encryption performed locally, on the user's terminal) or to circumvent it before (and regardless of whether) the message passes through a public communications network.

The reasons behind this choice are, years later, still the same: "fighting terrorism" and "protecting minors"; and they have become even more difficult to sustain than in the past, in both legal and political terms. Moreover, following up on these proposals would, paradoxically, have disruptive effects on the management of national security.

Twenty years ago it was already clear that there was no point in banning cryptography or in allowing only a weakened version of it; and that was a time when the debate was still largely abstract, because electronic communication services and information society services were still in their infancy. Today, the inescapable dependence of citizens, companies and institutions on information technologies (and therefore on information security) makes it unthinkable, and indeed impossible, to weaken encryption systems for non-military use or to reduce their effectiveness.

THE HISTORICAL EVOLUTION OF THE DEBATE ON CRYPTOGRAPHY FOR CIVIL USE

The first hints of regulation of cryptography outside the military, diplomatic and security domains date back to the 1920s. Article 41 of Royal Decree no. 965 of 30 April 1924 stated that "each professor must diligently keep the class register, on which he shall progressively record, without cryptographic signs, the marks awarded, the subject explained, the exercises assigned and corrected, and the absences and misconduct of the pupils". Similarly, the later rules governing amateur (ham) radio operators prevented them from "speaking in code" and allowed the use of only four languages, so that the surveillance offices of the then Ministry of Posts and Telecommunications could check that operators did not use the medium for illicit purposes. The reasons for these regulatory choices are clear: the State, or rather the Executive, must always be able to control what happens, from the microcosm of a classroom to the ethereal world of electromagnetic waves.

Although the penal code did not explicitly penalise the use of encryption systems, it is fair to conclude that, puzzle magazines and literary inventions aside, for a long time these methods could not be used outside their traditional domains of application.

The regulatory framework changed drastically with Presidential Decree 513/97, which regulated the use and legal value of digital signatures ahead of the EU and excluded the possibility of using key escrow or key recovery systems. This regulatory choice had two consequences: first, strong cryptography became indispensable to allow electronic documents to be used without risk of fraud; second, its civilian use had necessarily to be entirely lawful.

The evolution of the digital signature framework under Presidential Decree 445/00 and its amendments, the transposition of Directive 95/46 on the protection of personal data by Legislative Decree 196/03, the obligation established by Law 48/08 to guarantee the integrity of computer data seized during criminal investigations and, finally, the entry into force of EU Regulation 679/16 (not to mention the forthcoming e-Privacy Regulation) reinforced this last concept. In other words, there is a strong double regulatory perimeter (one EU, the other national) that does not allow the adoption of rules imposing weakened cryptography or circumventing its protections. At the same time, however, in the US and other countries, Italy included, there has been no shortage of institutional voices that over time have "accused" strong cryptography of being an obstacle to investigations and a support for criminals.

CLIENT-SIDE SCANNING: A VALID ALTERNATIVE?

Aware of the extreme difficulty, bordering on impossibility, of dismantling not so much the regulatory apparatus protecting cryptographic freedom as the intricate network of interdependencies between public services, private services and citizens' "digital life", European policymakers are considering an alternative solution: client-side scanning (CSS).
In short, CSS is a system that "intercepts" content locally (i.e. on the user's device) before it can be encrypted, checks, through a mechanism of "fingerprinting" and matching against a "blacklist", whether the content in question is legal or legitimate (a far from trivial distinction) and, if not, blocks it and reports the fact to the authorities. It is clear that, in this case, the use of cryptography would be irrelevant, because the check takes place before the content is hidden.
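By way of illustration only (the leaked documents do not describe an implementation, and deployed systems would likely rely on perceptual hashing rather than exact digests), the logic can be reduced to a few steps: compute a fingerprint of the outgoing content on the device, compare it against a locally stored blacklist, and hand the content to the encryption layer only if no match is found. The sketch below uses hypothetical names and a plain SHA-256 digest purely as a stand-in.

```python
# Purely illustrative sketch of the client-side scanning logic described above,
# not an actual EU or vendor design. Assumptions: the "fingerprint" is a plain
# SHA-256 digest (real systems would use perceptual hashes that survive
# re-encoding), and the blacklist is a set of digests pushed to the device.
import hashlib

# Hypothetical blacklist of prohibited content, stored on the device.
BLACKLIST = {hashlib.sha256(b"example prohibited content").hexdigest()}


def fingerprint(content: bytes) -> str:
    """Compute the device-side fingerprint of a message or file."""
    return hashlib.sha256(content).hexdigest()


def report_to_authority(content: bytes) -> None:
    """Placeholder reporting hook; in a real CSS design this step is precisely
    where the policy concerns discussed in this article arise."""
    pass


def scan_before_encryption(content: bytes) -> bool:
    """Return True if the content may be handed to the encryption layer."""
    if fingerprint(content) in BLACKLIST:
        report_to_authority(content)  # match found: report and block
        return False                  # content never reaches encryption
    return True                       # no match: content can be encrypted
```

The point of the sketch is simply that the check happens entirely before encryption, which is why even flawless end-to-end protection offers no defence against it.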
Today, CSS is mainstream. Its roots, however, are old.
In 2005, an independent researcher pointed out that Sony BMG had included a copy-protection rootkit on Van Zant's "Get Right with the Man" CD. The rootkit made users' computers vulnerable to external control and attack. Perhaps also because of the public outcry, the technology was not used again, but it remained a valid proof of concept.
In the meantime, the decades-old, boiling-frog commercial strategy of the large software houses is almost complete. Nowadays it is perceived as entirely normal that, in order to work, a communication device must be "registered" with the manufacturer's systems, and that the manufacturer can know, remotely, what the user does with it.
Antivirus software also works on a principle similar to that of client-side scanning: it searches the files on a computer for "signatures" matching those in a threat database.
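As a comparison only, and with the caveat that real antivirus engines combine signature matching with heuristics, emulation and cloud lookups, the core principle can be sketched as a search for known byte patterns in local files; the signature set below is hypothetical, apart from the harmless EICAR industry test string.

```python
# Minimal sketch of signature-based scanning, analogous in principle to CSS.
# The signature set is illustrative; real engines use far richer databases
# and detection techniques.
from pathlib import Path

SIGNATURES = {
    b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",  # harmless industry test marker
}


def scan_file(path: Path) -> bool:
    """Return True if the file contains any known signature."""
    data = path.read_bytes()
    return any(sig in data for sig in SIGNATURES)


if __name__ == "__main__":
    # Scan every regular file in the current directory.
    for candidate in Path(".").iterdir():
        if candidate.is_file() and scan_file(candidate):
            print(f"signature match in {candidate}")
```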
Summing up, it is clear that the preconditions for the large-scale acceptance of CSS are already in place.
However, this scenario does not necessarily mean that the European solution will succeed, because, regardless of its practical feasibility, there are political limits that would be very dangerous to cross.

THE PROBLEMS GENERATED BY CLIENT-SIDE SCANNING

The first thing that springs to mind when studying the mechanism behind client-side scanning is, of course, the threat to human rights and the potential abuse of this technology by States. It is not hard to imagine a system that, starting from its original objectives ("protecting minors" and "fighting terrorism"), is "discreetly" reprogrammed to block or map political or social dissent, even dissent that poses no danger to public order and security.

In reality, however, the critical points are of much greater magnitude, because the arguments that have historically been used to oppose a legislative stranglehold on cryptography can be condensed into a single concept: mathematics cannot be banned, and the creation of software cannot be prevented.

Mathematics (of which cryptography is a part) and computer science are branches of knowledge protected by the freedom of science. Prohibiting specific lines of theoretical research and their applications, or outlawing the software that translates that research into usable tools, would not prevent criminals from equipping themselves with these tools or even from financing their development.

Worse, it would amount to allowing only selected individuals to carry out these activities: a Pythagorean system in which initiates alone have access to knowledge, while the rest of the community is left out.

The loss of openness and information sharing would harm scientific research. There are, of course, areas in which particular research is classified and technologies are subject to embargo. In principle, however, freedom of research accounts for most of the gains in knowledge and their practical application. Further fragmentation would weaken a State's ability to cope with threats that might endanger its citizens.

THE IMPACT ON NATIONAL SECURITY

To get to the point: national (and not only national) cybersecurity is protected not only by classified research and secret technologies, but also by the possibility for researchers all over the world to help prevent or eliminate vulnerabilities in the infrastructure and applications of the information society. The most evident result of the positive impact of a model based on the sharing of technical and scientific information is the ecosystem of so-called "free software". One example above all: the Linux operating system, created decades ago by a Finnish student and made freely available and modifiable by anyone who wishes, now runs a very significant percentage of internet servers around the world. Another example, closely linked to the protection of critical infrastructure, concerns bodies such as the CSIRT of the Presidency of the Council. Today this structure can not only access information on attacks and vulnerabilities released by software and equipment manufacturers, but also interact with the community of independent security experts, from whom it can obtain information that would otherwise be denied or unavailable.

In practical terms, a strategy that, as said, we could call "Pythagorean" would necessarily imply not only agreements with technology multinationals to create an extended perimeter of control, but also a substantial limitation of individual countries' access to high-level Western training, in order to curb the technological growth of geopolitically problematic actors.

It would no longer only be a question of blocking the development of nuclear or strategic programmes, but also of preventing the acquisition of knowledge that is less immediately critical for the international order, yet useful for strengthening a potential adversary.

We face the outright "weaponisation" of knowledge, which translates, for example, into the US decision to limit access for Chinese students and scholars, or Beijing's decision to attract intellectual elites from the West.

CONCLUSIONS

National cybersecurity and technological public order are intertwined with a broad circulation of ideas, scientific methods and even independent technological development.

Imposing systems such as client-side scanning would, first, hand countries over to software multinationals, limiting a nation's technological independence. Second, it would structurally weaken democratic resilience by creating an authoritarian control system. Third, it would require regulations that prohibit (or reserve to selected actors) theoretical research and the development of applications that escape this pervasive and unstoppable control system.

The fragmentation of knowledge, a necessity in a context of knowledge weaponisation, implies bans on the study and implementation of algorithms and security software, and thus weakens a country's ability to prevent and react to local and global threats.
