At the beginning of March 2021, there is a big fuss among the more fanatical wing of CDA supporters. What is going on? Just before the Dutch parliamentary elections, charismatic party leader Hugo de Jonge made a slip of the tongue that even Joe Biden would not have made. ’As Jesus would say, don’t pin me down on it,’ he said on a respected talk show. The ultra-short but oh-so-explosive fragment spreads across the internet like wildfire. The indignation is enormous - not least on the part of De Jonge himself, who claims never to have made the statement.
An implausible story? It may sound strange, but something similar really happened. CDA supporters did indeed become angry about a misplaced statement by a prominent politician of their party. The difference with the scenario above is that they were participating in a study by Natali Helberger, university professor of Law and Digital Technology at the University of Amsterdam. In the study it was not De Jonge but Sybrand Buma who made the statement - in a deepfake: a video doctored with the help of AI.
With the experiment, Helberger and colleagues wanted to find out not only whether voters would fall for it and lose confidence in ’their’ politician, but also whether this effect would be amplified if the images were shown mainly to a specific, more susceptible target group: microtargeting. The amplifying effect of microtargeting turned out not to be that spectacular. ’But I was worried by the number of people who thought it was real,’ says Helberger. ’People apparently have great faith in moving images. Deepfakes are therefore a pretty effective tool for deception. We were quite proud of our video, but it wasn’t really a very good deepfake. And as the technology improves, probably within a few years, it will become very difficult for a non-professional to recognise whether a video is fake.’
Should we now fear that the VVD will actually distribute a doctored video in order to bring down the competition? ’I hope that the Dutch democratic system is solid enough that parties will not resort to such measures. But you can expect people to use it for non-democratic purposes, to undermine. Think, for example, of foreign interference, or of individuals who want to cause trouble - they exist, yes, also in the Netherlands.’
Corona app
That the rise of AI matters for the elections has been made abundantly clear, but politics will have to engage with it in other ways as well. Just think of the privacy issues that a corona app raises. And that is exactly the kind of topic Helberger deals with on a daily basis. As a professor of information law, she studies the legal, ethical, and policy challenges associated with the use of algorithms and AI in media, political advertising, commerce, and the health sector, and their implications for users and society. Perhaps she can give voters interested in AI - and that should be all of us - a few insights to prepare them before entering the polling booth in March.
In April, she and a number of fellow scientists sounded the alarm: we should take a critical look at that corona app, they wrote in a letter to the Lower House. ’The debate was only about whether the app should be created or not. The question was never: what is the real problem? Which technological solution is the best fit? What more do we need to make sure the technology really works the way we want it to? A second point we made is that contact tracing, or any digital solution, interferes with fundamental rights. All the more reason to weigh carefully what the purpose is, and what the role of the law is in ensuring that fundamental rights are safeguarded.’
In any case, there should be more parliamentary discussion of this subject, says Helberger. ’The proposal for the corona act led to an enormous amount of discussion just before the summer. Strangely enough, the provisions on technological measures in the act have hardly been discussed.’
In response to the letter about the corona app, Helberger was asked by ZonMw, an organisation for health research and care innovation that works closely with the ministry, to conduct a study on the legal, social, and ethical implications of digital technology. ’We are also developing a monitoring tool to measure how people use the app and what social consequences its introduction could have. Think, for example, of stigmatisation in the sense of social exclusion of people who have tested positive or of people who refuse to use the app, or of employers who won’t let you in if you haven’t downloaded the app.’
Back to the future: the 2021 elections. As March 17 approaches, the flow of political news swells, and hopefully we will all read the (digital) newspaper a bit more carefully in order to make an informed choice. But how do we know that we haven’t ended up in a nasty filter bubble of algorithmic recommendation systems that offers us nothing but tunnel vision? If we only consume our news through Facebook, we’re on the wrong track. ’Facebook is one big recommendation system. They use it mainly to show you relevant content and personalised ads.’ But in the news personalisation that the media themselves use, Helberger does not immediately see a threat to democracy. ’We see that news media such as DPG Media and RTL are experimenting with news personalisation in order to provide targeted information and to recommend content that is relevant to your reader profile. Usually these recommendation systems are limited to parts of the website; it is not as if the entire Volkskrant is suddenly personalised. More and more attention is also being paid to the responsible use of news personalisation and its effects on, for example, diversity and privacy.’
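For readers curious about the mechanics: a content-based recommender of the kind described above can be sketched in a few lines. This is a minimal, purely illustrative example - the article titles, topic dimensions, and reader profile below are all invented, and real systems at news media are far more sophisticated - but it shows the core idea: articles whose topics best match a reader’s profile are ranked first, which is also why such systems can narrow what you see into a filter bubble.

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length topic vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical topic dimensions: [politics, sports, tech]
articles = {
    "election analysis": [0.9, 0.0, 0.1],
    "football results":  [0.0, 1.0, 0.0],
    "AI regulation":     [0.6, 0.0, 0.8],
}

# A reader profile built from past clicks: mostly politics and tech.
reader_profile = [0.8, 0.1, 0.5]

# Rank articles by similarity to the profile. The recommender keeps
# surfacing more of what the reader already likes - the filter bubble.
ranked = sorted(articles,
                key=lambda title: cosine(reader_profile, articles[title]),
                reverse=True)
print(ranked)  # sports news sinks to the bottom for this reader
```

In practice the "topic vectors" come from analysing article text and the profile from click histories, but the ranking step - score every candidate against the profile and sort - works on the same principle.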
And last but not least: how do we choose the party that is most AI-ready? Helberger advises us to read the election programmes critically. ’When they talk about technology, look for depth. Are the phrases hollow? Or has the party really thought about the positive aspects, but also about how we can protect civil rights? It’s very fashionable to say: we want to invest in AI, but to really think it through you have to be aware of the implications. Only then can you invest in AI responsibly. And how do parties want to deal with the large American platforms that lead the development of the technology and have sole control over large amounts of data? How do we ensure that sufficient talent is developed in the Netherlands to produce good technology ourselves? How do we stimulate innovation in Dutch companies? How do we ensure that there is sufficient funding for research into the effects of technology?’
With this last aspect, we arrive at initiatives such as AI Technology for People, in which Amsterdam knowledge institutes and the municipality work together to put AI on the map.
’For the Netherlands, that’s super important. I am very happy that attention is being paid to the broader social interests.’
Could Helberger perhaps give us a little push in the right direction for March 2021? ’No, I don’t think it’s a good idea to give voting advice.’ Perhaps that’s not so surprising: responsible handling of the implications of artificial intelligence remains, above all, a matter of using our own common sense.