Researchers advise government on regulation of ChatGPT and other generative AI

Fabian Ferrari, Antal van den Bosch (photo: Dirk Gillissen), and José van Dijck

Governments around the world find themselves confronted with pressing challenges presented by artificial intelligence (AI), with one of the major questions being how to regulate generative AI systems like ChatGPT. The technologies are developing at lightning speed and the interests of the companies behind the AI are predominantly commercial. It is up to the government to keep them in check, researchers Fabian Ferrari, Antal van den Bosch, and José van Dijck advise the Dutch Ministry of Economic Affairs and Climate Policy in a technical briefing.

Regulating AI

According to the government, more than one and a half million Dutch people use generative AI, with ChatGPT being the best-known programme. The programmes do not only offer convenience, however; using them also entails risks, Ferrari, Van den Bosch, and Van Dijck warn. While AI may seem objective at first glance, it is not. "As a user, you may be exposed to risks such as misinformation, discriminatory outputs, and privacy violations. Especially in the absence of rules on the production of the underlying models and codes."

Several Dutch ministries have now joined forces to develop an overarching 'vision for generative AI'. The UU researchers advise the Ministry of Economic Affairs and Climate Policy based on an article on the regulation of AI they previously published in Nature Machine Intelligence. "In our briefing, we stressed that the governance of generative AI systems depends on precise definitions and clearly specified transparency obligations. We also offered policy recommendations to achieve this precision."

A European challenge

AI governance is not only a challenge for Dutch regulators, Ferrari, Van den Bosch, and Van Dijck emphasise, it is a European challenge. The EU is currently drafting the EU Artificial Intelligence Act, which includes transparency requirements. "However, lawmakers have not yet specified which technical details need to be made accessible by the firms behind AI, nor to whom. Independent researchers should be granted access to clearly defined layers of information, which would allow them to deeply inspect and test the models."

The governance of generative AI systems depends on precise definitions and clearly specified transparency obligations.

Fabian Ferrari, Antal van den Bosch, and José van Dijck

"At the national level, it is useful to bear in mind that regulation alone is not a panacea. Given that a handful of commercial American and Chinese technology firms dominate the field of generative AI, investments in open and public alternatives are indispensable." These alternatives, such as BLOOM, an open-source large language model, are promising, the researchers say. "Nevertheless, such AI systems will continue to require state support, especially for competing with Big Tech's edge in computational infrastructure."

"Both on the national and the EU level, it is crucial for regulators to keep up with the rapid pace of AI development. To regulate those technologies, synergistic collaborations between AI and governance researchers lead to policy recommendations that otherwise remain unelaborated", Ferrari, Van den Bosch, and Van Dijck conclude. "A strong vision for generative AI necessitates leaving behind ministerial and disciplinary silos."