ChatGPT Block. Why the Italian Data Protection Authority is wrong

The “ChatGPT block” was ordered on 30 March 2023 by the Italian data protection authority on the grounds that the data used to train the model had been collected without informing the people to whom it related and without verifying their age. This, according to the order, exposes minors who use the service “to answers that are totally inappropriate to their level of development and self-awareness”.

The order, it must be said, is highly questionable from a technical, legal and cultural point of view. It reveals, on the one hand, the weakness of the national data protection authorities in dealing with the matter and, on the other, the substantial inapplicability of the ‘privacy protection’ legislation. Finally, it triggers a very dangerous reciprocity mechanism whereby other countries with similar regulations – including Russia and China – could use them as a ‘legal’ tool to target companies on this side of the new Iron Curtain. By Andrea Monti. Continue reading “ChatGPT Block. Why the Italian Data Protection Authority is wrong”

We should fear ourselves, not ChatGPT

The consequences of the Big Tech industrial model, based on the indiscriminate commercialisation of immature products at all costs to generate profits as quickly as possible, are coming to the surface, with not only economic but above all social and cultural effects for society at large. By Andrea Monti – Initially published in Italian by Strategikon, an Italian Tech Blog. Continue reading “We should fear ourselves, not ChatGPT”

ChatGPT and Knowledge Loss

ChatGPT is yet another ‘trend’ that, like blockchain, NFTs and their offspring, will sooner or later disappear from the headlines (and from the professional qualifications of the ‘experts’). Meanwhile, the warnings of millenarians, Luddites, Canutes and catastrophists are multiplying; they never miss an opportunity to predict the ‘dangers to privacy’, the job losses caused by the use of AI to produce editorial content, studies and research, and the ‘bias’ that will lead AI to utter ‘oracles’ that are inappropriate or out of line with political correctness. Then there are the heirs of the ‘patients’ of Eliza – the software that in the 1960s imitated a psychotherapist of the Rogerian school – who ask ChatGPT existential questions and are amazed by the answers, and the plagiarists who, in the arts and in the workplace, take advantage of these platforms by claiming as their own the results of the automated processing of a topic (be it text, images or sound). By Andrea Monti – Initially published by Strategikon, an Italian Tech blog. Continue reading “ChatGPT and Knowledge Loss”