Building AI-animated replicas of deceased people is the archetype of our era’s individualistic selfishness and of a gradual self-condemnation to loneliness
by Andrea Monti – Originally published in Italian by La Repubblica-Italian Tech
On 8 May 2025, in a murder trial in Arizona, USA, the victim’s relatives asked the court – and were granted permission – to present an animation of the victim, created from the deceased’s data, so that it could address the defendant and “forgive him” for his actions.
No one, not even those directly involved, thought that this meant “resurrecting” the deceased, but precisely for this reason, creating a digital reproduction is symptomatic of a toxic relationship with generative technologies.
The “forgiveness” granted to the murderer was decided, in form and content, by the victim’s relatives and not by the victim himself. No one can say whether the victim would have agreed with such a choice, or whether he would have used those words. It is therefore factually false that the victim forgave his murderer. Yet even authoritative newspapers such as the New York Times ran headlines like “Reincarnated by AI, Arizona Man Forgives His Killer”, demonstrating how ambiguously these phenomena are perceived.
The victim is not “reincarnated”, the clone is not the victim, and the victim has “forgiven” no one. Yet this is the perception that is conveyed.
Even confining ourselves to the trial itself, we should ask whether such tools are admissible in court at all, given their extreme capacity to sway judges and juries by appealing to the non-rational part of the mind. In the US judicial system this debate is moot: since 1998, courts there have admitted digitised reconstructions of crime scenes and of how a crime might have been committed (the conditional is essential here). But that is not the point today.
What happened in the Arizona courtroom is one manifestation of one of the most toxic behaviours latently enabled by the possibility of creating (more or less) interactive deepfakes: the practice of digital resurrection.
This term defines a service that, using data from a deceased person — writings, images, videos, sounds and so on — builds an avatar or chatbot that interacts with those left behind.
It is clear, and worth repeating, that this is not a real “resurrection”. Obvious though it may be, it must be said: the result of processing the data of the deceased is an inanimate object, devoid of consciousness and awareness. Above all, it is not the deceased who returns in a different form of life, like the father of Hiroshi Shiba, the protagonist of the anime Jeeg Robot d’Acciaio.
Therefore, “digital resurrection” serves only those who remain and not those who have departed on their final journey, and represents a condemnation to (re)live the pain of loss every time we delude ourselves into thinking we are in the presence of our loved one.
One might ask what is wrong, after all, with such a service. Ultimately, the cult of the dead has always been celebrated first with symbolic objects and then with effigies, images and videos, with screens taking the place of altars. Why, then, should we not take this further step, given that technology allows us to do so?
There are several reasons why we should not go down this road.
One of the first studies on the subject, published in 2023, found that people do not readily accept the idea that their data could be used after their death to build these clones, and argues that this should be allowed only if the deceased clearly expressed that wish during their lifetime. In the absence of an explicit decision, therefore, a digital replica of the deceased should not be created. Then there is the question of who should be authorised to request the service: only the heirs, or also relatives, collateral relatives, in-laws and friends?
However complex they may be, the legal aspects of the issue matter less than the personal and collective impact of such services. The grieving process is a fundamental part of everyone’s life, and the ability to “historicise” a traumatic event is a way of healing the wound and, perhaps, of rethinking one’s priorities and one’s relationships with other people. By contrast, deluding oneself that a deceased person is still with us, that we can “interact” with them and receive comfort, risks paralysing the mourner in an eternal present, stuck on the day of their first interaction with the chatbot.
Not everyone reacts to death in the same way, and this (fake) digital resurrection will not necessarily cause psychological damage to individuals and their relationships. The fact remains, however, that there is a growing tendency to irrationally humanise technological objects which, although inanimate, seem to behave as if they really recognise us. Paradigmatic in this sense is the behaviour of many owners of AIBO, the robotic dog developed by Sony, some of whom have gone so far as to hold religious rites for the “passing” (that is, the irreparable breakdown) of what was, to them, no longer a toy.
In reality, the story of AIBO is much more complex. It cannot be dismissed as a “whim” or, conversely, elevated to an anthropological paradigm. Together with similar phenomena, however, it is indicative of the prevalence of form over substance, and of appearance over objectivity. If an object appears to be sentient and behaves like a sentient being, then it makes no difference whether it is a who or a what. What matters is that it serves its purpose, which in this case is to anaesthetise pain by confining the person to an ever deeper, and lonelier, recess of their consciousness.