The consequences of the Big Tech industrial model, based on the indiscriminate commercialisation of immature products at all costs to generate profits as quickly as possible, are coming to the surface, with not only economic but above all social and cultural effects for society at large.

By Andrea Monti – initially published in Italian by Strategikon, an Italian tech blog.
The alarm raised by Elon Musk and a group of other experts over the ‘perils of ChatGPT’, which they accuse of endangering a humanity supposedly unready for such a revolution, is as misleading as it is conceptually wrong. It starts from the unstated assumption that ChatGPT is an ‘interlocutor’ and not a tool, that it ‘behaves’ without rules and that it must therefore be subjected to ‘ethical governance’. This is yet another iteration of a misconception that has been around since the days of the first computers: the insistence, against all evidence, on attributing some form of ‘subjectivity’ to a piece of software merely because it appears to work intelligently.
ChatGPT is not an ‘individual’, it does not ‘escape’ anyone’s control, and it is very clear what it is capable of doing: it broadens the base of ignorant people who gain access to information they are unable to understand and use. Moreover, it is not society that has to adapt to technology but the other way around. Therefore, if a gadget malfunctions, it should be fixed, instead of forcing those who use it to adapt to the defect. Finally, ethics cannot be the instrument for regulating the use of a technology, because in a democratic world the state does not impose a system of values and individual morality remains a private matter. Hence it is unacceptable for a single entity (a fortiori a company) to claim to dictate rules valid for all.
Nevertheless, the allure of the illusion of having joined the Baron Frankenstein Social Club - that of makers whose creatures rebel against their master - is too strong to resist. If one adds to this the blissful condition of the futurologist - not living long enough to be proven wrong - and, why not, an understandable craving for media visibility, it is not surprising that the target of criticism is the ‘intelligent’ software rather than those who use it.
Rather than revealing an intrinsic danger, software such as ChatGPT is once again bringing into the open a fairly widespread trait of human behaviour in our times, one that had already manifested itself at the dawn of consumer electronics: a disinterest in self-improvement and an unscrupulous veneration of appearances, in whose name it is enough to seem capable of knowing (or doing) something rather than actually being able to do it.
I was, I think, fourteen years old when I bought software for my ZX Spectrum that plotted mathematical functions. Initially, I used it to check the correctness of my calculations, but then the temptation to use it directly for homework prevailed. At the first test, which I obviously had to tackle without the help of the programme, a less than satisfactory grade confronted me with a stark reality: the availability of a tool does not replace the need to study. In other words, it is fine to use a programme that performs complex functions, but it is essential to retain an awareness of what one is doing.
It is obvious that scientific research and technological applications cannot do without computing power and programmes. It is unthinkable that, in the name of mistrust of machines, we should go back to reckoning with pen and paper (which, incidentally, are also machines). This does not, however, justify switching off the brain and taking everything produced by software at face value.
The transition from search engines to generative AI, and the fact that the issues have remained the same, provide empirical evidence that this reasoning is correct.
Those who were there when AltaVista promised wonders, and who witnessed its replacement by Google and the proliferation of specialised search engines, will remember the perplexity triggered by access to unverified content, by news disseminated by non-professional information providers, and by the deliberate alteration of results through SEO techniques. The risk of misinformation, and the need to assess the reliability and usefulness of a search engine’s results before organising them into conclusions, were already clear at the time.
Today, technologies such as ChatGPT pose similar, and therefore not new, problems. They make the phase of searching for, selecting and organising information superfluous, and directly produce the final result, i.e. the structured elaboration of an argument around a given topic. For now, however, they are essentially concerned with reworking information, not with assessing the reliability of sources and results. Thus they sometimes give the impression of producing ‘random’ answers, but that is not the point.
Many people, those with ‘native’ knowledge of their field, will undoubtedly work better thanks to technologies like ChatGPT, which already manage to put some order into the information they have access to and will probably be trained to do so better and better.
Many others will lose their jobs. They are no longer the farriers, coachmen and stable boys reduced to poverty by the steam engine and then the internal combustion engine, but graphic designers, illustrators and content producers. Even the ‘intellectuals’ are discovering that they are not so different from the ‘proletarians’ and that they too can be victims of modernity. It is called progress, and it is a bloodthirsty god that demands human and social sacrifices before dispensing its favours.
Then there is a vast audience of ignorant and functionally illiterate people who, both in private and at work, use these tools (and suffer their results) without any understanding of what they are doing.
If, as the ‘futurologists’ say, this is where the ‘concerns’ about ChatGPT’s ‘social consequences’ come from, then, setting aside all egalitarian hypocrisy, we should stop complaining about generative AI and simplistically calling for a (temporary) suspension of its use to give society time to metabolise it. Instead, we should ask ourselves the far more ‘disturbing’ question of whether access to such a technology should not be denied altogether to those who do not know how to use it.
Knowledge and its applications are not for everyone; they should be accessible only to those who have developed the skills to handle them in (reasonable) safety. So, like the mathematical function programme of my school days, tools like ChatGPT should be available only to those who need them. But if one narrows the user base, one might ask, ‘who pays’ for the development of the product? ChatGPT is not a finished product. It still does not work properly, and more time and training – i.e. economic and financial investment – are needed for it to reach a maturity that stabilises it.
The question is more than legitimate. If ChatGPT had been destined to remain a research project, or had not offered a reasonable expectation of profit, it would probably have developed much more slowly or been shelved after a while, as happened with NFTs and the Metaverse, the former already ‘dumped’ by Meta and the latter by Disney after the first phase of futuristic intoxication.
These considerations highlight once again the negative consequences of Big Tech’s industrial model, based on the indiscriminate commercialisation of immature products at all costs in order to generate profits as quickly as possible, offloading onto society effects that are not only economic but, above all, social and cultural.
So, if alarm is to be raised, it should not be about the use of ChatGPT by time-wasters, improvised ‘professionals’ who boast knowledge they do not have, or those who skimp on hiring ‘creatives’. Rather, the concern should be with those who put immature technologies on the market, those who, failing to recognise the value of knowledge, exploit it unscrupulously, and, above all, those who live by practising appearance capitalism: profit and success built upon pretending to know rather than actually knowing.
Perhaps Pythagoras had a point when he decided to keep the knowledge of mathematics to himself and his disciples; and perhaps, to reduce the consequences of the distorted use of generative AI, we should increase the diffusion of culture and critical spirit among people, instead of continuing to work on turning them into the equivalent of the inhabitants of the Matrix.