Japan’s new AI law has something to teach the US and EU

What does the package of measures approved by the Tokyo government to boost development and competitiveness say? It challenges European rigidity, but also American laissez-faire. by Andrea Monti – Initially published in Italian by La Repubblica – Italian Tech

On 4 June 2025, Japan’s law on artificial intelligence was enacted; its title clearly signals the political choice made by Tokyo: Jinkō chinō kanren gijutsu no kenkyū kaihatsu oyobi katsuyō no suishin ni kansuru hōritsu (Law on the Promotion of Research, Development and Utilisation of Artificial Intelligence-Related Technology).

Unlike the European Union, which is paralysed by the application of a precautionary principle not anchored to objective and measurable elements, Japan has made an extremely pragmatic and conscious choice: not to “regulate AI” (whatever that may mean) but to strengthen what is needed to build the technologies necessary for its functioning.

In short, where the EU is pulling the handbrake on a parked car with the engine off to prevent it from tipping over, Japan is building efficient roads to get to its destination faster and better.

“In the past,” explains the observatory Keiyaku Watch, “the EU promoted the adoption of so-called hard law, the AI Act, establishing strict rules for those types of AI considered high risk. In response, the US, fearful that such a choice could slow down innovation … coordinated with Japan and other countries to adopt a regulatory approach based on soft law. However, the EU approach and the shift towards a ‘legalistic’ route imposed by the Biden administration’s AI Executive Order have convinced Japan to continue along the soft law path.”

The focus is on applied research and international competition

Article 3 lays the foundations for guiding the development of AI in the knowledge that competition in this sector knows no borders. The regulation is therefore designed from the outset to improve the international competitiveness of industrial sectors (and it is worth emphasising the word “industrial”) related to AI. At the same time, a brief aside, seemingly inserted by chance, establishes the importance of AI-related research and development for national security. The significance of this aside lies in the fact that, in a manner far less hypocritical than the Western debate, it recognises without pretence that AI-related technologies can and must be used for the defence of the country (it is worth remembering that Japan, owing to the pacifist nature of its Constitution, cannot maintain an army with offensive capabilities).

Technological transparency as a tool to prevent illegal activities

The law also addresses illegal uses, and uses that harm the normal life of the nation, in a structured manner rather than through a myriad of articles regulating individual cases, an approach that risks leaving unforeseen events unmanageable in the absence of specific rules.

The political choice embodied in the law was to focus first and foremost on transparency at every stage of the research, development and deployment cycle of all AI-related technologies.

This choice deserves further consideration because, unlike the EU regulation on artificial intelligence, it does not impose “explainability” requirements that are impossible to achieve, but instead creates the conditions for those who need to, and who have the necessary skills, to verify what has been done, how, and by whom.

In other words, imposing “explainability” of AI by law would mean defining its level. What should be the benchmark for measuring explainability? That of a researcher working in a Big Tech company? That of a mathematics graduate? Or that of an ordinary citizen with a high school diploma?

On the contrary, the obligation of transparency means, much more pragmatically, enabling qualified individuals to access all the information necessary to understand the causes of damage to private individuals or attacks on institutions.

The framework of public, private and individual duties

The “architectural” approach of the law on the promotion of AI-related technologies divides duties and responsibilities into three areas.

Unlike EU legislation, the Japanese law imposes on all parties a duty to cooperate in achieving the stated objective, namely technological leadership. Central and local public administrations must therefore use AI to improve their efficiency; universities must actively promote research and the dissemination of results, and build a broad and robust knowledge base in cooperation with the state and the administrations; the private sector must improve process efficiency and create new industries through the use of AI-related technologies; and citizens must cultivate an interest in these technologies.

At the top, it is up to the state to take the necessary measures to ensure that all actors move in a coordinated manner on this technological stage without hindering each other or individual performance.

The role of the Prime Minister in implementing the strategy

The strategic approach of this law results in the Naikaku—the Prime Minister’s Cabinet—being assigned the powers/duties of coordinating and monitoring the implementation of regulatory objectives. This is done through the establishment of what, in Italy, could be equated to a department of the Prime Minister’s Office, to which all other state bodies, including administrations and independent agencies, must provide opinions, clarifications and the necessary cooperation.

Access to technological infrastructure and datasets

An extremely interesting aspect of the Japanese AI law is the requirement to share facilities and equipment (supercomputing centres, telecommunications networks and more) and, above all, datasets, which must also be made available to the private sector. Meanwhile, in the West (perhaps with the exception of Italy, whose draft AI bill proposes a compromise in the name of the public interest), no solution has yet been found to balance the interests of copyright holders and the (anti-historical) claims to control by national data protection authorities, on the one hand, with the need for access to the resources required to build machine learning and AI models, on the other.

The geopolitical role of knowledge and the importance of training

The differences between Japanese law and the approach taken by the European Union are also evident in the development of knowledge and training.

Japan clearly recognises the importance of developing a national knowledge base — i.e. one that is not dependent on foreign patents and intellectual property — and, consequently, the need to develop training in AI-related technologies at all levels. This applies not only to scientific research but also to areas where the results are to be used.

The importance of strategic vision

No plan survives first contact with the enemy, according to a much-quoted aphorism attributed to General von Moltke, but this does not mean that planning is wrong or impossible. This is precisely the approach that emerges from the Japanese AI law, based on the awareness that it makes no sense to cage technological evolution in regulations, and that it is instead necessary to create an ecosystem that allows its development to be guided, adopting corrective measures case by case where necessary.

It matters little whether this approach is the result of a “political vision” or the consequence of the need to compensate, through extensive automation, for the shortcomings arising from population decline and ageing. Either way, it represents a third way between the US approach based on “better to ask forgiveness than permission” and that of the EU, which has been unable to free itself from bureaucratic dirigisme despite repeated attempts.

It is too early to say which of these approaches will succeed, although the (negative) effects of the first two are already evident. In the US, Big Tech is clamouring for the relaxation of restrictions on access to data relating to copyright-protected works (the subject of legal disputes such as that brought by the New York Times for the unlawful exploitation of its articles) and has launched a massive campaign to obtain users’ consent (or non-dissent) to the reuse of their data.

The European Union, for its part, is producing weighty implementing acts for the AI Regulation, undermining a virtuous approach that would use rules to support research, which in practice is incentivised essentially through economic leverage alone.

In this regard, if it is true that control over AI technologies is fundamental to the development of individual Member States and to the European Union’s political autonomy, then strong criticism of the EU’s choices is necessary and right. Not because those choices are necessarily wrong, but because they should be judged through the lens of pragmatism rather than that of statements of principle which, instead of being tested against reality, pretend to bend it.
