Why it is wrong (and dangerous) to impose copyright on human beings

To ‘combat deep fakes,’ Denmark is proposing a law to give people copyright over their physical features and voices. But this makes it easier to sell data and objectify individuals, and risks paralysing research.

by Andrea Monti

In June 2025, the government in Copenhagen put forward a proposal to extend copyright to biometric data, including physical features and voices. The aim is to ensure that Danish citizens retain control over their own features, which are increasingly being manipulated to create deep fakes in the form of images, videos and voices. The proposal applies only to Denmark but, given how the European Union works, it will have to be assessed beyond national borders before it can be definitively approved.

The issue of combating deep fakes is certainly a real one. The widespread availability of tools that can replicate a person’s outward appearance fairly accurately enables anything from jokes in poor taste to damage to an individual’s reputation, and even the exploitation of another person’s image for advertising or for criminal purposes. However, the solution proposed by Denmark is meaningless from a legal point of view, dangerous for individuals and, as far as these pages are concerned, capable of blocking scientific research.

Why it is wrong to think of copyright as a means of protecting an individual’s image

A first-figure syllogism exposes the structural error in the Danish proposal: copyright protects the creative acts of human beings; data of any kind, including biometric data, are discovered, not created; therefore, biometric data cannot be protected by copyright.

In addition to being logically and legally unfounded, the Danish proposal is also unnecessary because the use of data that identifies a person is already regulated at EU level by personal data protection rules and at national level — for example, in Italy — by rules on the protection of personal image and those punishing impersonation and various other types of fraud.

Finally, asserting copyright over biometric data is dangerous because it definitively transforms informational identity, the set of data that defines us, into an asset that can be freely disposed of and traded.

Copyright on data makes it irrevocably licensable

Albeit in a somewhat confused fashion, European regulation has so far established the principle that individuals always retain control over the data (including biometric data) that concern them. If the Danish law is approved, such data would effectively become permanently licensable, i.e. “saleable”, with no possibility for individuals to get it back, to object to the way it is used or to share in the profits derived from its use. What is more, such an option would imply that in some jurisdictions, such as Italy, using the data without a licence or in violation of its terms could even amount to a criminal offence.

The historical precedent: copyright on the human genome

The Danish government’s idea is not, in fact, new: attempts have already been made in the past to envisage some form of copyright over the informational components of an individual. Nor are the consequences of such a choice new.

In an interview with the Washington Post almost forty years ago, Walter Gilbert, winner of the 1980 Nobel Prize in Chemistry, explained in no uncertain terms why, in his view, the human genome should be subject to copyright rather than to patents.

‘I don’t believe the genome is patentable,’ Gilbert told the journalist, anticipating by decades the US Supreme Court, which in Association for Molecular Pathology v. Myriad Genetics, Inc. ruled that genetic sequences are not even eligible for industrial property rights (i.e. patents) because they are discoveries and not inventions. Rather, continued Gilbert, who was entering the market with his own company, ‘we intend to impose copyright on the sequence. This simply means that in order to read it, you have to pay a fee to consult it. Our aim is to make the information available to anyone, but for a fee.’ Gilbert was not entirely clear about the nature of copyright, but this does not detract from the clarity of his position: there must be a right over the data extracted from organs, tissues and cells, and that right must belong to those who find the data.

It is not difficult to see strong similarities between this position and the arguments put forward today by AI companies to justify their claim to freely use whatever data they can find to train their models.

Another historical precedent: (non-)ownership of tissues, bodily fluids and the data that can be extracted from them

There is little doubt that data is of enormous value to scientific research and that, at the same time, it has economic value not only in the construction of datasets for training AI models, but also for its individual capacity to contribute to discoveries and inventions beyond the field of ML/AI. It would therefore be logical to conclude that whoever holds — or rather, contains — this data, i.e. the individual, has the right to be paid for allowing its use by third parties. However, this conclusion is not so straightforward.

In 1990, the California Supreme Court ruled in the case of Moore v. Regents of the University of California that a person is not entitled to a share of the profits from research based on their own organic tissue and bodily fluids. Once removed, the Court held, these “pieces” become the property of the researchers, who can therefore exploit them freely and, reading the decision in today’s terms, also extract data from them for reuse and economic exploitation.

A further precedent: not “ownership” of tissue but “unjust enrichment”

Standing in contrast, and still a source of debate today, is the case of HeLa cells, the first immortalised cell line, derived from Henrietta Lacks, an African American woman who died of cancer in 1951.

HeLa cells have become an indispensable tool for a wide range of studies, from virology to the effects of radiation, without the need for in vivo experiments. Johns Hopkins Hospital, where the peculiarity of Lacks’ cancer cells was discovered, never sought to exploit them economically, choosing instead to make this extraordinary tool available to the entire scientific community. This did not, however, eliminate the controversy, which culminated in a series of lawsuits filed in 2023 by Lacks’ heirs against biopharmaceutical companies accused of profiting from the unauthorised and unpaid use of HeLa cells.

Limits and consequences of the Danish proposal

Putting the reasoning developed so far into a systematic framework, we can understand what the side effects of the Danish proposal might be.

On the one hand, imposing copyright on individuals transforms their identity into a saleable commodity without necessarily guaranteeing them any benefit; on the other, it exposes research to the risk of legal action or unsustainable costs, as Prof. Yu Takagi highlighted in an interview published on these pages about the importance of free access to AI models for neuroscientific research.

Paying each person for the use of their data seems, in theory, a fair solution. After all, even volunteers in clinical drug trials are remunerated, albeit poorly, for taking part in studies and research. This approach could work for cases such as Moore’s or Lacks’, but with large amounts of data relating to an equally large number of people, even if a price were negotiated, the compensation due to each individual would be negligible.
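To make the order of magnitude behind this claim concrete, here is a minimal back-of-the-envelope sketch in Python. The figures it uses (an annual licensing revenue of 50 million euro, a 10% share set aside for compensation, five million data subjects) are hypothetical, chosen purely for illustration, and do not come from the Danish proposal or from any real dataset.

```python
# Back-of-the-envelope sketch: per-person compensation when licensing revenue
# is spread across a very large number of data subjects.
# All figures are hypothetical and purely illustrative.

def per_person_payout(total_revenue_eur: float,
                      compensation_share: float,
                      num_data_subjects: int) -> float:
    """Payout each data subject would receive under a flat revenue-sharing scheme."""
    return total_revenue_eur * compensation_share / num_data_subjects

if __name__ == "__main__":
    revenue = 50_000_000   # hypothetical annual licensing revenue, in EUR
    share = 0.10           # hypothetical share earmarked for individual compensation
    subjects = 5_000_000   # hypothetical number of people whose data is in the pool

    payout = per_person_payout(revenue, share, subjects)
    print(f"Per-person compensation: {payout:.2f} EUR per year")  # prints 1.00
```

Even under assumptions this generous, each person would receive roughly one euro a year: the value lies in the aggregate, not in the single face, voice or record.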

The value of data emerges only if and when the enormous amounts of information analysed produce scientific discoveries or products that can be placed on the market. If those who collect and manage the data were required to pay a price of some kind, they would therefore have to bear costs in advance in the hope of obtaining a result that might never materialise, all the more so because in research even results that lead to dead ends count as “results”.

The unstoppable bullet against the indestructible wall

The debate on these issues has been going on for some time and is polarised between, on the one hand, those who argue for the primacy of a set of more or less explicitly recognised rights, such as the claims of minorities, the protection of individual identity and the protection of bodily privacy, and, on the other, the freedom of public research and the industrial interests of private companies.

It is clear that as long as each side frames its position in abstract and ideological terms, those positions will not be negotiable, making it highly unlikely that a balance will be struck that takes both collective and individual interests into account.

A possible solution

Instead of turning a debate that is crucial for human beings into a crusade, it seems preferable to adopt a pragmatic approach that does not involve individual commodification but is inspired by collective mechanisms of compensation and redistribution. In such a scheme, a share of the revenues generated by the collection and use of biometric data could finance funds for scientific research, biosecurity and the development of public data infrastructures.

In this way, individuals would benefit both individually (through compensation) and collectively (through the development of methods for diagnosis and treatment), the social function of data would be enhanced, and scientists could work without fear of legal action.

All this shows how reductive, as well as wrong, the Danish proposal is. Even before the legal issues, the real challenge is not to decide “how much” a single face or voice is “worth” or “who it belongs to”, but to work out how to transform the cumulative value of data into a widespread benefit that balances the interests of research with those of the community.
