Fairer AI systems

(c) Image generated with Fotor AI image generator

Artificial Intelligence (AI) systems are increasingly used across society. However, we do not always know how their algorithms work or whether the choices they make are fair. To address this, VUB AI specialist Dr. Carmen Mazijn conducted research around the central question: "How can we better understand AI systems so that they do not harm our society by making unfair, discriminatory decisions?"

Mazijn conducted interdisciplinary research as part of her PhD, affiliated with the Data Analytics Laboratory at the Faculty of Social Sciences & Solvay Business School and the Applied Physics Research Group at the Faculty of Science and Bioengineering. She focused on decision algorithms: AI systems that support human decisions or even replace humans entirely in the decision-making process. In a selection process, for example, an AI model may make seemingly gender-equal choices, yet when the algorithm is scrutinized, those choices may turn out to be motivated quite differently for men and women.
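To make this concrete, here is a minimal toy sketch (not Mazijn's actual method or data; the rule and candidates below are invented for illustration). A hypothetical hiring model accepts the same share of men and women, so its outcomes look gender-equal, yet it applies a completely different rule to each group:

```python
# Toy illustration with made-up data: equal acceptance rates per gender,
# but entirely different decision logic behind them.

def model_decision(gender, experience, referral):
    """Hypothetical opaque rule: men are accepted on experience alone,
    women only when they have an internal referral."""
    if gender == "M":
        return experience >= 5
    return referral

candidates = [
    ("M", 6, False), ("M", 3, False), ("M", 7, False), ("M", 2, False),
    ("F", 6, True),  ("F", 3, False), ("F", 7, True),  ("F", 2, False),
]

accepted = {"M": 0, "F": 0}
for gender, experience, referral in candidates:
    if model_decision(gender, experience, referral):
        accepted[gender] += 1

# Outcomes look fair: 2 of 4 accepted in each group.
print(accepted)  # {'M': 2, 'F': 2}

# But the reasoning is not: an experienced woman without a referral
# is rejected, while an equally experienced man is accepted.
print(model_decision("M", 7, False), model_decision("F", 7, False))  # True False
```

An audit that only checks aggregate acceptance rates would pass this model; only by cracking open the decision logic does the disparity appear.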

"The algorithm can sometimes make decisions that seem fair, but not necessarily for the right reasons," Mazijn said. "To know for sure whether an AI system is biased or makes socially acceptable choices, you have to crack open the system and its algorithms."

During her PhD, Mazijn developed a detection technique called LUCID to crack open those AI algorithms and examine whether a system's underlying logic is acceptable. It makes it possible to test whether a system can responsibly be deployed in the real world. The research also showed that AI systems readily interact with one another, so that bias in one or more of them can produce problematic feedback loops.

"A police department can use AI to determine which streets to patrol more heavily," Mazijn offers as an example. "By deploying more patrols there, more violations are also identified. When that data is fed back into the AI system, any bias, whether originally present or not, is reinforced and you get a self-fulfilling prophecy."
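The mechanism can be sketched in a few lines. The following toy simulation (illustrative only, not taken from the dissertation) models two districts with identical true violation rates; a naive policy sends all patrols to the district with the most recorded violations, and since only patrolled streets generate new records, a small initial skew in the data grows every year:

```python
# Toy feedback-loop simulation with invented numbers: both districts have
# the SAME underlying violation rate, yet the recorded gap keeps widening.

TRUE_RATE = 0.1          # identical true rate in both districts
recorded = [55, 45]      # slightly skewed historical record
PATROLS_PER_YEAR = 100

for year in range(10):
    # Argmax policy: patrol only the district with more recorded violations.
    target = 0 if recorded[0] >= recorded[1] else 1
    # Only the patrolled district produces new records.
    recorded[target] += int(PATROLS_PER_YEAR * TRUE_RATE)

print(recorded)  # [155, 45] -- the gap grew from 10 to 110
```

District 1's record never changes because it is never patrolled, so the data "confirms" the allocation that produced it. Breaking the loop requires, for instance, also sampling unpatrolled areas or down-weighting patrol-driven discoveries before feeding data back into the model.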

The key message is that AI systems should be used intelligently, with their long-term effects taken into account. To make the results of the research accessible, Mazijn also formulated policy recommendations that clarify how the technical and social insights of her PhD can be applied.

The dissertation was published Sept. 7, 2023, under the title: "Black Box Revelation: Interdisciplinary Perspectives on Bias in AI" and was supervised by Vincent Ginis and Jan Danckaert.


The Vrije Universiteit Brussel is an internationally oriented university in Brussels, the heart of Europe. By delivering high-quality research and customized education, the VUB aims to make an active and committed contribution to a better society.

The World Needs You

The Vrije Universiteit Brussel takes its scientific and social responsibility with love and vigor. The VUB therefore launched the platform "The World Needs You." Here, ideas, actions and projects are brought together, started and developed around six P's. The first P stands for People, because that's what it's all about: giving people equal opportunities, prosperity, well-being, respect. Peace stands for fighting small and large injustices in the world. Prosperity combats poverty and inequality. Planet stands for actions concerning biodiversity, climate, air quality, animal rights, etc. With Partnership, the VUB seeks collaborations to make the world a better place. The sixth and last P stands for Poincaré, the French philosopher Henri Poincaré, from whom the VUB derives its slogan that thinking should not submit to anything except the facts themselves. The VUB is an "urban engaged university," strongly anchored in Brussels and Europe and working according to the principles of free inquiry. www.vub.be/dewereldheeftjenodig