An Italian Data Protection Authority Secret Report Leak?

According to an Italian newsmagazine, a not-for-public-eyes investigation by the Italian Data Protection Authority reportedly found severe security problems in the management of the Internet Exchange Points (the points of the Italian telecommunication network where the various telco networks are mutually interconnected).

A first remark is that the King is – or might be – naked. If this secret report actually exists (and the IDPA didn’t deny its existence) and has been leaked, the Authority’s information security is not that good, and – therefore – the IDPA should fine itself for this non-compliance, instead of just targeting the rest of the (industrial) world.

Coming to the heart of the matter, in the words of the journalists who authored the article:

there is an enormous black hole in the security of Italian telecommunications. A hole so wide that it allows anyone with the proper equipment to access phone calls, SMS, emails, chats, and content posted on social networks.

The journalists claim that the report says, verbatim:

These devices are equipped with technical features that allow the duplication, in real time, of the traffic in transit, diverting it to another port (port mirroring)

and that

if somebody wanted to look at the traffic in transit, this could easily be done with specific analysis tools …

It is amazing how this article – and the IDPA findings, if proven true – show so little legal and technical awareness, because:

  • the possibility of performing port mirroring is necessary for public prosecution and intelligence agency activities. The point, then, is how and by whom these features are exploited, rather than their mere existence, which – like it or not – is necessary for investigative purposes. One day, maybe, it will be possible to disclose some of the ways traffic data are requested, but this is another story…
  • there is no evidence of the port mirroring features being abused, misused or cracked,
  • performing port mirroring at an Internet Exchange Point is not as easy as the article and the IDPA report(?) suggest: it is not like the virus upload in Independence Day or Hugh Jackman’s “under pressure” hack in Swordfish,
  • there is an easy way, available almost since day one of the pre-Internet era, to protect users’ communications regardless of what the ISPs do: client-based encryption (see the sketch after this list). But I assume that the Minister of Home Affairs wouldn’t like an IDPA endorsement of the “crypto-for-the-masses” slogan,
  • oddly enough, the IDPA secret report (if genuine) doesn’t address the serious problem of network devices’ proprietary firmware and operating systems, which prevent an ISP from checking on its own for backdoors (as in the recent Cisco affair) and other security flaws.
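To make the client-based encryption point concrete, here is a minimal Python sketch. It is only an illustration under my own assumptions: it uses the third-party cryptography package and a symmetric Fernet key, and it leaves the key-exchange problem aside; tools like PGP have delivered the same protection since the early 1990s. The idea is simply that content encrypted on the user's device shows up as ciphertext on any mirrored port, no matter what the ISP or the IXP does.

```python
# Minimal sketch of client-based encryption (illustrative only).
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

# In real use the key would be agreed out of band or via a key-agreement
# protocol; generating it here keeps the example self-contained.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"See you at the usual place at nine."
ciphertext = cipher.encrypt(plaintext)   # this is all a network tap would see
print(ciphertext)

# Only whoever holds the key can recover the original message.
print(cipher.decrypt(ciphertext).decode())
```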

The Economics of Personal Data And The (Reckless?) Use Of Unreliable Statistics

A paper by a scholar of the University of Trento (IT), co-authored by people from the Kessler Foundation, Telefonica Network, Telecom Italia and Google, finds that we are ready to sell our personal data for about two euros.

Although the conclusions are – in principle – fair enough and match the gut feeling of whoever works in the field of personal-data handling, I wonder how statistical evidence could be drawn with the criteria adopted.

I’m not a statistician, but the only part of the paper dedicated to the sample’s composition reads:

All volunteers were recruited within the target group of young families with children, using a snowball sampling approach where existing study subjects recruit future subjects from among their acquaintances … A total of 60 volunteers from the living lab chose to participate in our mobile personal data monetization study. Participants’ age ranged from 28 to 44 years old (μ = 38, σ = 3.4). They held a variety of occupations and education levels, ranging from high school diplomas to PhD degrees.
All were savvy Android users who had used the smartphones provided by the living lab since November 2012. Regarding their socio-economic status, the average personal net income amounted to €21169 per year (σ = 5955); while the average family net income amounted to €36915 per year (σ = 10961). All participants lived in Italy and the vast majority were of Italian nationality.

While, again, I have a limited knowledge of statistics, there are a few oddities in the method applied by the researchers that undermine the value of the findings:

  1. The sample is made of only 60 people, all belonging to young (and wealthy enough) families with children. This isn’t actually a fair depiction of Italian socio-economics. Furthermore, there is neither enough information about the socio-economic status nor about the geographic location of the participants to actually assess the sample quality (see the back-of-the-envelope sketch after this list).
  2. Even Wikipedia knows that the “snowball” sample selection method is prone to biases. No evidence is given in the paper of how these biases were handled.
  3. Though broadly used, Android isn’t the only platform. A well-balanced sample should have taken into account BlackBerry, iOS and Windows Mobile (or whatever its name is.)
  4. The “measurement” of individual traits relies upon psychological categories and methods. Psychology is not a science, and putting a bunch of equations into a highly subjective discipline doesn’t turn it into hard science (I know, I know, positivism is dead, natural sciences aren’t so “absolute”, etc. But try to send a rocket to the moon by assessing the “mood” of a ballistic trajectory and tell me the results.)
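To make the sample-size objection concrete, here is a back-of-the-envelope Python sketch. It is my own illustration, not something from the paper: I only plug the income standard deviation quoted above into the textbook formula for the standard error of a mean, z·σ/√n, to show how coarse any estimate based on 60 subjects necessarily is.

```python
# Back-of-the-envelope sketch (my own illustration, not taken from the paper):
# how precise can an estimate based on a sample of 60 people be?
import math

def ci_half_width(sigma: float, n: int, z: float = 1.96) -> float:
    """Half-width of an approximate 95% confidence interval for a mean."""
    return z * sigma / math.sqrt(n)

n = 60
income_sigma = 5955  # std. deviation of personal net income quoted in the paper (EUR)

print(round(ci_half_width(income_sigma, n)))  # ~1507 EUR either side of the sample mean
print(round(1 / math.sqrt(n), 2))             # ~0.13: the standard error is ~13% of sigma
```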

Before concluding that this paper offers no scientific evidence for its findings, I would like to have these (and maybe other, expert-made) questions answered. But I’m afraid the final judgement wouldn’t change.

A final remark: the lack of scientific method shown in this paper is dangerous because, as often happens, poorly informed journalists jump on the news and “sell” it without any warning to the readers, thus luring them – and the Data Protection Authority, I fear – into thinking that a limited, partial and non-representative work actually leads to factual conclusions.


My Answers to the House of Lords EU Committee about the Right To Be Forgotten

A LinkedIn post by Luciano Floridi announces a British House of Lords EU Committee hearing about the Google Spain ECJ decision and the right to be forgotten. Here are my two cents (sorry, this isn’t going to be a short post):

Q. Do you agree with the Court’s ruling that Google (and other search engines) can be classed as data controllers?

A. NO. Search engine activity as such doesn’t involve handling personal data under Directive 95/46/EC. The collection and organization of the retrieved data are the automatic output of a search algorithm. The issue arises when the retrieved data are used for purposes other than merely providing search results, i.e. to identify a natural person and build his/her profile. To give an example: Duckduckgo.com and, before it, Cuil are no-user-data-collection search engines, so it is not possible to bring them within the legal definition of “data controller”.

Q. The question put by the Spanish court to the Court of Justice referred to the data subject wishing to have information “consigned to oblivion”. Isn’t the true position that information removed from websites will always continue to exist, but will simply not be so easily accessible?

A. Yes. And the fact is that information still available remains accessible by alternative means (word of mouth, newsgroups, social networks, etc.) The point is that we are lured into thinking that there isn’t anything on the Internet outside Google, but this is simply not true. Google is used because it is quick and effective, but when proper information is needed nobody relies on a search engine: they try to connect with an expert on the matter instead.

Q. The Court has ruled that the data subject’s fundamental right to privacy “as a rule” overrides the right to receive information, but that this will not be the case if there is a public interest in “the role played by the data subject in public life”. Do you agree with this order of priorities? Can it in practice be implemented?

A. It is a legal mistake to build the right to be forgotten on the EU Data Protection Directive. The right to privacy is set forth by the European Convention on Human Rights, while data protection is a principle set forth in an EU Directive. Thus data protection is a subordinate and particular right that doesn’t necessarily imply privacy issues. The EU Data Protection Directive, indeed, runs contrary to the right to be forgotten because it sets a precise legal duty to handle personal data so that they are readily available, updated and exact. This contradicts the idea of being forgotten, because a messy way of handling personal data (i.e. unreliable information) would be the best protection for an individual, whose personal whereabouts wouldn’t be easily found.

Q. Do you think it is in practice possible for Google to comply with the Court’s ruling?

A. Yes, but the decision is wrong and Google shouldn’t be forced to comply. The balance between individual rights and public needs can only be assessed by a court, and we can’t bear the risk of letting a private company decide what we should and shouldn’t find. The Google Spain ECJ decision shifts the burden of protecting the public interest onto a private company’s shoulders. To put it shortly: the ECJ ruling gave Google the legal power to rewrite history.

Q. What do you consider to be a ‘reasonable time’ for companies to put in place an acceptable response to the CJEU’s ruling?

A. I don’t think a general answer is possible. There are issues to be considered, such as the number of users’ claims, the kind of legal issues involved in every single claim, the impact on the technical infrastructure and so on, that make giving a figure a roll of the dice.

Q. The proposed new EU Data Protection Regulation would give data subjects an even stronger ‘right to be forgotten’. Do you think the UK Government are right to oppose this?

A. Again, data protection doesn’t equal the right to privacy. The upcoming EU regulation shouldn’t deal with the right to be forgotten, because it is an out-of-scope issue that should be handled within the European Convention on Human Rights framework.

Q. How do you think an acceptable balance can be achieved at EU level between the public’s right to know, and the right to privacy?

A. By re-affirming and strengthening the principle that online (as offline) the main legal liability lies with the natural person who performs an action. In this specific case, if a fact is true and reported in a proper way, there is no reason to erase it. Following the contrary opinion, today we wouldn’t know anything about Lucius Catilina’s attempted coup, because his heirs might legitimately ask, after about 2,000 years, that their ancestor be left to rest in peace.

Italian Data Protection Act As Censorship Tool

The news of the day is that the lawyers of an indicted Italian politician will ask the Italian Data Protection Authority to block the publication of a video covertly made by a journalist, portraying the politician while serving his sentence in an elder-care facility (as a substitute for a four-month jail term.)

While it is (still) not known whether the request will actually be filed, the news is a confirmation that the Data Protection Act is now seen as an effective tool to remove “unpleasant” information from public sources in the name of “privacy protection”.

It will be interesting to see if, in this case, the Italian Data Protection Authority will follow the censorious attitude it showed back in 2006, in the case of a TV show that exposed several Italian MPs as drug users.

It really doesn’t matter whether, in this case, the Data Protection Authority blocks the video or not. The point is that by confusing “privacy” with “data protection”, and by giving room to a devious interpretation of the “right to be let alone” – as in the Google Spain case – we are, in the long term, making the work of the future historian impossible and, in the short term, favouring the powers-that-be’s return to that dark, quiet obscurity where anything can happen, hidden from public scrutiny.

In the name of “privacy”.

A Homicide Investigation And The (Still Alive) Data Retention Regulation

The investigation into the homicide of the young girl I’ve talked about in a previous post reveals other interesting information, this time about the telcos’ role in supporting the public prosecution service through traffic data retention.

The media are reporting (Italian only, sorry) that more than 120,000 individual mobile calls are under scrutiny, spanning from a few months before the killing onwards. But since the fact dates back more than three years, these data aren’t even supposed to exist, because the Data Retention Directive forbade their preservation once the (maximum) two-year term expired.

So, hopefully for justice and for the family of the poor girl, at the beginning of the investigation the public prosecutor, as required by law, either issued a traffic data “freezing” order or, better, seized the data as dictated by the Italian Criminal Rules of Evidence.

As in the case of the DNA-based evidence, the collection of traffic data without complying with the Rules of Evidence might allow the defense lawyers to challenge the reliability of this information, especially because the original traffic data have been (or should have been) destroyed once collected by the public prosecution service, thus preventing any possibility of double-checking their actual evidentiary “weight” during the trial.