While the world is focused on the release of GPT-5, the almost simultaneous publication of GPT-OSS has gone virtually unnoticed. This is an “open” model with 120 billion parameters that can also run locally on older machines. It is censored but, above all, it is designed to resist jailbreaking.

by Andrea Monti
Initially published in Italian by Italian Tech – La Repubblica
On 5 August 2025, OpenAI released two versions of GPT-OSS, described on the OpenAI blog as capable of competitive performance at low cost, with results almost on par with some of OpenAI’s own previous models.
‘Available under the flexible Apache 2.0 licence,’ the blog states, ‘these models outperform open models of similar size in reasoning tasks, demonstrate strong tool use capabilities, and are optimised for efficient deployment on consumer hardware.’
So much for OpenAI’s marketing. Now let’s take a closer look at what GPT-OSS really is.
Open source or open weight?
Despite the statement above that the models are ‘Available under the flexible Apache 2.0 licence…’, GPT-OSS is not, as one might be led to believe, open source, but only ‘open weight’, which is something quite different.
OpenAI has, in fact, applied the Apache 2.0 licence only to the weights of GPT-OSS and not to everything else. In other words, OpenAI has decided to disclose and make reusable the parameters that determine how the neural network responds after training (the weights, precisely), but has not done the same with the software components used, for example, for training.
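To make the distinction concrete: having the weights means being able to download the published parameters and run the model locally, nothing more. The sketch below, which assumes the Hugging Face transformers library and an illustrative model identifier, shows roughly what that looks like in practice; it says nothing about the data or the code used to produce those weights.

```python
# Minimal sketch: running a model from its released weights alone.
# Assumes the Hugging Face `transformers` library (with `accelerate` for
# device placement) and that the weights are published under the
# "openai/gpt-oss-20b" identifier: an assumption for illustration,
# not a detail taken from the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The weights fully determine the model's behaviour at inference time,
# but nothing here reveals the training data or the training code.
inputs = tokenizer("What does 'open weight' mean?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```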
This choice is entirely legitimate and understandable from a commercial point of view, but it does not allow the meaning of the words to be twisted to imply that users have access to everything that is part of the model. Therefore, it would have been correct to write “With the weights available under the flexible Apache 2.0 licence, these models…” — three words that radically change the form of the sentence and the substance of its meaning.
What does it mean that GPT-OSS is not open source
GPT-OSS as a whole is not “open source” or “free” in the legal sense of the word as defined by the Free Software Foundation, i.e. totally free even in its software components. Rather, as far as software is concerned, GPT-OSS is a “proprietary” model in the sense that OpenAI retains control and secrecy over how it was built and how it is managed.
An AI platform is made up of many parts, which can be roughly (and perhaps overly) simplified as follows: the raw data organised into a dataset, the software used to create and manage the dataset and to train and run the model, and the “weights”. Only the latter, as mentioned, are released under the Apache 2.0 licence, which allows users to “reproduce, prepare derivative works, publicly display, publicly perform, sublicense and distribute the work and such derivative works in source or object form”.
The fact that “flexibility” only concerns the reuse of weights is crucial because if only these are reusable and modifiable, then it is possible for anyone to “customise”, but only partially, the way the model works. This poses a serious problem that could make one think twice before creating “accurate” GPT-OSS models and using them to offer products and services.
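In practice, this “partial customisation” usually takes the form of fine-tuning on top of the released weights. The following is a minimal, hypothetical sketch using a LoRA-style parameter-efficient fine-tune; the libraries are Hugging Face transformers and peft, while the model identifier, module names and hyperparameters are illustrative assumptions rather than details from the article or from OpenAI.

```python
# Hedged sketch of what "customising the weights" can look like in practice:
# a LoRA-style adapter layered on top of the released weights.
# The model identifier, target module names and hyperparameters below are
# illustrative assumptions, not details taken from the article or from OpenAI.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("openai/gpt-oss-20b")  # assumed id

lora = LoraConfig(
    r=8,                                   # low-rank update size (illustrative)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],   # assumed attention projection names
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)

# Only a small set of adapter weights gets trained; the base weights stay as
# released. Everything upstream of the weights (training data, data pipeline,
# original training code) remains outside the user's control.
model.print_trainable_parameters()
```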
Open weight, safety checks and jailbreaking
In addition to creating (partially) specialised versions of GPT-OSS, working on weights would, in theory, make it possible to eliminate or at least reduce the effectiveness of the safety checks built into the model, which are designed to prevent the generation of responses that designers have deemed unacceptable — albeit according to subjective standards that are not necessarily imposed by law.
The introduction to the GPT-OSS model card, and the model card itself in greater detail, highlight a particular focus on filtering data relating, for example, to the chemical, biological, radiological and nuclear (CBRN) fields. The aim is to prevent the model from acquiring a high capacity to provide dangerous responses in these areas, even after “malicious” fine-tuning.
Two hypothetical consequences and one certain consequence seem to follow from this.
The first is that it would nonetheless seem possible, although extremely difficult, to fine-tune GPT-OSS for illicit purposes (unless, for example, mechanisms are implemented that “break” the model in the event of unwanted fine-tuning).
The second, which builds on the first, is that it is not specified what harm could still be done with such illicitly tuned models, which would be less efficient and less capable but still functional on the “dark side”.
Whatever the answers to these two questions, in the absence of experimental evidence any statement would be purely speculative. What is certain, however, is that, for the same reasons, not all fine-tuning of GPT-OSS would be possible. The legal classification of the model therefore needs to be further qualified, specifying that the weights can be modified only within the limits set independently by OpenAI and that the model is consequently “partially open weight”, or that the Apache 2.0 licence is not fully applicable.
So do (almost) all
In conclusion, it is clear (and in some ways obvious) that GPT-OSS, like its proprietary versions and those of (almost) all its competitors, allows very broad use but is in any case restricted by the design choices of its creators, which is not acceptable.
It matters little whether this is done to avoid legal action, to limit the dissemination of information unwelcome to a government (as in the case of DeepSeek), or to avoid being overwhelmed by the inevitable waves of social outrage that arise every time someone blames the tool (and not those who use it) for being used illegally or disruptively.
Preventively limiting — censoring — the use of an LLM because someone could abuse it to commit illegal acts means treating all users as potentially dangerous individuals who must therefore be monitored regardless, and moreover by a private entity according to its own rules.
This approach rightly sparked controversy and protests when Apple and the European Commission began talking about client-side scanning (the preventive, automatic search of all users’ devices for illegal content before it can be sent). If that is so, it is unclear why OpenAI and all the other AI companies should be allowed to do what we ask others to ban.
On the other hand, if the concern that a model without security controls is too dangerous is well-founded, then it should be up to states to take responsibility for defining the scope of these controls, rather than delegating it to private entities whose agendas do not necessarily coincide with the protection of the public interest (of citizens who are not necessarily US citizens).