Data can be anonymous in one context and personal in another: the EU Court reaffirms the changing nature of information. Big Tech and authorities will have to assess each case individually

by Andrea Monti – Initially published in Italian by La Repubblica – Italian Tech
What has become commonly known as the “Deloitte ruling” is a decision by the EU Court of Justice in case C-413/23 P published in early September 2025.
The judges reiterated the concept that the status of personal data is not absolute but depends on the actual ability of the data holder to identify or make identifiable a natural person. In other words, if someone receives pseudonymised data (and therefore does not know to whom it refers) but possesses additional information which, when cross-referenced with the data received, reveals the identity of the persons involved, this data is, to all intents and purposes, classified as “personal” and therefore fully protected by law.
A (non-)revolutionary decision
The “Deloitte ruling” has been hailed as “revolutionary”, but in reality it is nothing new. The Court had affirmed the same principle in the Breyer case, which is referred to in the decision and analysed below. Furthermore, and above all, for once, the legislation was already clear in itself.
Be that as it may, this new decision no longer allows national data protection authorities to apply questionable interpretations of the rules and requires them to tackle head-on the issue of Big Tech’s accumulation of user data.
The difference between personal data, pseudonymised data and anonymised data
To understand the nature of the problem solved by the Deloitte ruling, a legal introduction to a topic characterised by a certain amount of confusion is necessary.
The relevant regulation is the General Data Protection Regulation, which defines personal data as any information that, alone or in combination with other data, makes a natural person identified or identifiable. This data is subject to a complex and detailed set of requirements to ensure the protection of the rights and freedoms of data subjects.
From a piece of personal data (Mario Rossi, engineer, born in Vattelapesca on 20 May 1980), it is possible to remove enough information to make it no longer referable to a particular person (Mario, born in 1980). The data therefore loses its “personal” nature and can be freely transferred without any regulatory restrictions.
However, the devil is in the details, and so we must be absolutely certain that the situation is as described, considering two possible options.
The first is that the recipient of the data “Mario born in 1980” has no way of retrieving the other information initially associated with the person. In this case, we refer to anonymised data for those who deliver it and anonymous data for those who receive it. The law does not apply.
The second option is that the recipient of the data initially “disconnected” from the identity of the subjects to whom it refers can retrieve it in another way with reasonable effort, or possesses other data, perhaps anonymous, which, when cross-referenced with the new data, makes the person identifiable again. The data then becomes entirely personal again according to the definition of the law, and its use is subject to duties and responsibilities.
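The two options above can be sketched with a toy example. All names, fields, and records below are invented for illustration; they are not drawn from the ruling. The point is simply that a record with direct identifiers removed can become personal data again for a recipient who holds matching auxiliary information:

```python
# Toy illustration of pseudonymisation and re-identification by cross-referencing.
# All names and fields are invented for this example.

# Pseudonymised record as received: direct identifiers (surname, exact date) removed.
received = {"first_name": "Mario", "birth_year": 1980, "city": "Vattelapesca"}

# Auxiliary data the recipient happens to hold (e.g. a local registry).
auxiliary = [
    {"first_name": "Mario", "last_name": "Rossi", "birth_year": 1980, "city": "Vattelapesca"},
    {"first_name": "Luca", "last_name": "Bianchi", "birth_year": 1975, "city": "Vattelapesca"},
]

def reidentify(record, registry):
    """Return registry entries matching the record's remaining quasi-identifiers."""
    return [
        p for p in registry
        if p["first_name"] == record["first_name"]
        and p["birth_year"] == record["birth_year"]
        and p["city"] == record["city"]
    ]

matches = reidentify(received, auxiliary)
if len(matches) == 1:
    # A unique match: the "anonymous" record is personal data for this recipient.
    print("Re-identified:", matches[0]["first_name"], matches[0]["last_name"])
else:
    print("No unique match: the data stays anonymous for this recipient.")
```

For a recipient without the auxiliary registry, the same `received` record matches nothing and remains anonymous; the status of the data depends entirely on what the holder can cross-reference it with.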
The principle of law expressed in the judgment
To simplify matters to the extreme, the European judges considered that when it comes to personal data, 2+2 can equal 4 or 5 because the sum of two individually anonymous databases can result in a personal database, i.e. with more information than the individual components.
In order to decide whether the result is 4 or 5, the judges write, it is necessary to verify on a case-by-case basis whether the party communicating the data has actually removed the elements that identify the persons concerned, and whether the recipient has the concrete possibility of reconstructing the individual's informational identity by reasonable means.
Access to IP-associated data makes the difference
This conclusion is better understood by analysing the aforementioned Breyer case, which concerned the processing of the dynamic IP addresses of users accessing German public administration websites.
When a terminal connects to a network, it receives an IP address that can either stay the same (static IP) or change with each connection (dynamic IP).
The mobile access operator can certainly associate the SIM card, and therefore the contract holder, with the device used and the dynamic IP address assigned to it. This means that, for the operator, the IP address (together with other information) is personal data.
By contrast, those who manage the network resource to which the user connects, for example a newspaper, only receive the IP address and some other technical information about the browser used, the operating system, and so on. It is clear that in this case, the same IP address that was part of a set of personal data for the access operator is anonymous data for those who manage the platform hosting the newspaper. Reassociating a dynamic IP address with the person using it would require access to the telecommunications operator's records, which is obviously not possible.
However, if the user has registered by declaring their identity or has passed a paywall, then the IP address, even if dynamic, becomes personal data again.
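The asymmetry can be sketched in a few lines of code. All IP addresses, identifiers, and mappings below are invented for illustration: the operator holds a lease-to-subscriber mapping, while the website initially holds none, so the same address is personal data for one party and anonymous for the other:

```python
# Toy illustration: the same dynamic IP address is anonymous for the website
# but personal data for the access operator, which can map it to a subscriber.
# All values are invented for this example.

site_log = [{"ip": "203.0.113.7", "path": "/article", "browser": "Firefox"}]

# Mapping held only by the telecom operator (IP lease -> contract holder).
operator_leases = {"203.0.113.7": "subscriber #4521"}

# Registration data the site collects only once a user signs up or passes a paywall.
site_accounts = {}

def can_identify(ip, mapping):
    """True if the given mapping links this IP to an identified person."""
    return mapping.get(ip) is not None

ip = site_log[0]["ip"]
print("Operator can identify:", can_identify(ip, operator_leases))  # True
print("Site can identify:", can_identify(ip, site_accounts))        # False

# Once the user registers, the same IP address becomes personal data for the site too.
site_accounts[ip] = "mario.rossi@example.com"
print("Site can identify after registration:", can_identify(ip, site_accounts))  # True
```

The legal status of the address tracks the contents of the mapping each party actually holds, which is exactly the case-by-case assessment the Court requires.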
The sensitive issue of analytics
This reasoning applies even more so to decentralised analytics services.
Those who install a simple plugin to manage their website’s access statistics can choose to do so in such a way that they do not know who is connecting to their site. However, when using a third-party service, these parties may have additional information that, as mentioned, “unmasks” the anonymous user.
This is where the most critical aspect of the entire discussion on the responsibilities of the links in the data collection chain comes in.
Applying the principle reiterated in the “Deloitte ruling”, in such a case, the party processing personal data (and therefore subject to obligations and responsibilities) is the analytics service provider, not the party sending anonymous data (for example, because there is no paywall, because the user is using the freely accessible part of the site, or because they have blocked trackers and personal profiling tools).
According to a commonly held interpretation, however, it does not matter that a website only collects technical data without having information about the user’s identity. It is sufficient that someone in the chain can cross-reference that data with other data to extend the obligations to all links in the chain. However, following this interpretation, which has been disavowed again by the European Court, it would be virtually impossible to have truly anonymous data. Thus, for example, scientific research into cures and therapies for (even) incurable diseases would find itself mired in the quicksand of bureaucratic requirements that add nothing to the protection of rights but reduce the hopes of a cure for those who suffer. Regulations such as the AI law currently being approved partially solve the problem, but in structural terms it remains unchanged.
What changes for Big Tech
Until now, Big Tech has built its technological and industrial models on the assumption that the data it receives from external sources is collected anonymously and is therefore exempt from European regulatory obligations. The industry’s adoption of systems based on differential privacy and the spread of privacy enhancing technologies among users seek to solve the problem, but the underlying issues remain the same.
With this ruling, but in reality since the enactment of the GDPR, it is no longer possible to apply this “rule” indiscriminately, but it will be necessary to verify on a case-by-case basis whether the flow of information received actually keeps it anonymous or whether, as mentioned, it allows the user to be “unmasked”.
At the same time, the supervisory authorities will no longer be able to apply automatic rules and treat entities that forward non-personal data to large analytics providers as subject to the legislation. Instead, they will have to ascertain on a case-by-case basis whether those who use such services can actually know who the person behind the screen is.
The impact of reaffirming a principle that has been known for some time but equally neglected for some time could be significant.
It will certainly be significant for Big Tech, which will have to address the issue of reviewing the way it interacts with the links in the profiling chain.
It will be equally significant for national data protection authorities, which will have to decide whether and how to apply sanctions not only to tech giants, but also to all those who, at the feet of these giants, contribute to satisfying their insatiable hunger for information by sending many small anonymous pieces of our informational body.