Preventing incorrect decisions using AI

NoBias consortium with participation of the University of Stuttgart investigates bias in artificial intelligence. [Picture: pixabay/Gerd Altmann]

Whether credit reports, car insurance, or medical tests: artificial intelligence (AI) is currently being used for decisions that have major consequences for individuals and society as a whole. However, the underlying algorithms may interpret data incorrectly or unfairly, leading to discriminatory decisions. Detecting and preventing such bias is the goal of the European research project NoBIAS (Artificial Intelligence without Bias), in which Dr. Steffen Staab, Chair of "Analytic Computing" and Cyber Valley Professor at the University of Stuttgart, is involved.

In credit rating, for example, artificial intelligence is used to estimate whether a loan can be successfully repaid. Data such as the applicant's salary is part of the decision criteria. Evaluated automatically, however, a seemingly plausible criterion can lead to an unfair decision: because women earn less on average, an automated decision might deny them a loan, even though they most likely would have repaid it.
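One simple way to make the kind of bias described above measurable is the so-called disparate impact ratio: the approval rate of one group divided by that of another. The sketch below is purely illustrative, with hypothetical loan decisions, and is not code or data from the NoBIAS project itself.

```python
# Illustrative sketch of a disparate impact check on loan decisions.
# All data is synthetic; group sizes and outcomes are invented.

def approval_rate(decisions):
    """Fraction of positive (approved) decisions, coded as 1."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of approval rates between two groups.

    A common rule of thumb (the "80% rule") flags possible
    discrimination when this ratio falls below 0.8.
    """
    return approval_rate(group_a) / approval_rate(group_b)

# Hypothetical decisions: 1 = loan approved, 0 = loan denied
women = [1, 0, 0, 1, 0, 0, 0, 1]   # 3 of 8 approved
men   = [1, 1, 0, 1, 1, 0, 1, 1]   # 6 of 8 approved

ratio = disparate_impact(women, men)
print(round(ratio, 2))  # 0.5 -> below 0.8, flagging possible bias
```

A metric like this only detects a statistical imbalance; deciding whether that imbalance is actually unfair, and how to correct it, is exactly the harder question projects such as NoBIAS investigate.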