The Zyxel Firewall Bug. Twenty Years Passed in Vain

by Andrea Monti – originally published in Italian by Infosec.News

Routers … are affected by a severe vulnerability that makes it possible, without any artifice or hack, to obtain the router’s access password.
Therefore, it is possible to block the operation of the device, making services inaccessible and, in some cases, to gain access to the user’s internal network. This would make it possible to intercept e-mails and, more generally, the information they contain, all without the user’s knowledge. We wonder … how is it possible that equipment with vulnerabilities so harmful to the privacy of citizens and the activities of companies can be placed on the market without any control, without any information or caution, without any assumption of responsibility on the part of manufacturers and distributors and without any protection for defenceless (and unsuspecting) users?

These words could have been written just a few days ago, as a comment on the vulnerability recently discovered in several Zyxel firewalls and VPN gateways. The bug, now the subject of the inevitable “patch”, makes it possible to gain administrative privileges from outside. It has obvious negative implications for the security, privacy and wallets of blameless users who must stop their activities to verify that they have not suffered adverse consequences from this previously unknown vulnerability.

But they are not: these words are almost twenty years old, and come from the call to action that ALCEI (an NGO active since 1994 in the field of digital rights) sent in 2002 to the Italian Data Protection Authority. The digital rights NGO asked for its intervention in a worryingly similar case: routers vulnerable to external attack, distributed by telephone operators to households and businesses.

At the time, no one – neither the Data Protection Authority, nor the Communications Authority, nor the Market and Competition Authority – lifted a finger. Prosecutors, the various law enforcement ‘technology special squads’ and even consumer associations remained inactive. It was just one of the many, usual defects that invariably afflict software and equipment. Why bother?

The answer is simple, but no one wanted to hear it – and no one wants to hear it now: individual security may be a ‘process’ and not a ‘product’ (as the ‘experts’ say), but if a car brakes at random, skids even with the engine off and goes into reverse when put into first gear, even Tazio Nuvolari would have trouble driving it safely. Especially since, unlike the cars of the Flying Mantuan’s era, it is impossible to ‘get your hands on’ the computers and equipment we use.

In other words, security is born of and evolves from well-designed, well-constructed and well-maintained products. Then, and only then, come the policies, risk assessment frameworks, certifications and all the consultancy paraphernalia that accompany the sector.

Product robustness comes first. It is no coincidence that the (confused and bureaucratic) legislation on ‘cyberspace’ requires validation of the security of equipment deployed in critical infrastructures and essential services. Nor is it a coincidence that the “Conte-Huawei decree” required TIM, as a condition for using the Chinese giant’s 5G devices, to organise a complex system of controls extending even to designs and source code.

However, and here we come to the point: are IT providers obliged to market secure products? Who determines when a product is secure enough? And didn’t we say that it is impossible to write error-free software?

Let us start with the first question. Yes, IT vendors are obliged to market secure products, just as the manufacturers of any other industrial product are. The point is that, unlike ‘traditional’ products, there are no laws preventing the marketing of vulnerable equipment and software, and no penalties for those who do so.

The commonplace that ‘there is no such thing as secure software’ must also be put into context. While it is impossible to write programs entirely free of defects, it is also true that one should do one’s best to minimise the number of bugs. In other words, on an ideal scale from zero to one hundred, the fact that ‘full marks’ is an asymptote does not justify staying close to zero.

These considerations also have a direct impact on compliance with the security obligations imposed by the Data Protection Regulation, the ‘GDPR’.

Article 25 of the Regulation imposes the obligation of ‘data protection by design and by default’. This means that anyone who wants to use information technology to process data as part of their business or institutional activities must carry out a prior assessment, one that goes as far as analysing individual devices and programs to check whether they comply with this principle.

It is evident that no user, however large, can afford to invest time and money in such an analysis. Intellectual and industrial property law prohibits such checks without entering into onerous confidentiality and liability agreements with the rights holder. Therefore, one would expect that manufacturers would issue some ‘declaration of compliance’ with the requirements of Article 25 of the GDPR.

However, the way the rule is structured does not formally require the issuance of such a certification. The manufacturer declares its product’s characteristics, perhaps adopting commercial strategies by which it offloads all responsibility for managing end customers onto its distributors. It is then up to the customer-processor to make the appropriate choices and bear their consequences.

As a formal application of the rule, this reasoning is sound. In practice, however, it turns out to be unworkable.

End users cannot and will not embark on costly and complicated technical evaluations, because otherwise there would be no point in outsourcing services or buying third-party products. Service providers – especially smaller ones – often lack bargaining power vis-à-vis manufacturers, which puts them in a “sink or swim” position. Finally, producers often shelter themselves from the jurisdiction of the Italian authorities: they apply the laws of their own country and ‘exclude’ themselves from any responsibility (reading the end-user licence of proprietary software is like reading a medicine leaflet: even for freely available drugs, there would be more reasons not to take them than to take them).

Therefore, the paradox is that the last link in the chain, the one who has no alternative but to use specific equipment and software without any guarantee, is the one who risks sanctions and lawsuits for damages.

Twenty years ago, the development of information technology in Italy and the institutions’ perception of risk were (public declarations aside) very low. Today this is no longer the case, and it is no longer possible to tolerate a state of affairs caused by business strategies rather than by objective needs.

Today, therefore, as twenty years ago, I once again publicly address the Data Protection Authority and the other ‘competent authorities’: the rules you are called upon to enforce allow you to intervene with equipment and software manufacturers to demand the protection of personal data, of consumers’ security and of the security of the public telecommunications network.

There is no need for more laws, only the will to interpret and apply the existing ones in a way appropriate to the times we are living in and the threats we are facing. I hope that, in another twenty years, I will not have to write an article like this for the third time.
