Making data management responsible

In order to improve the ethical and legal compliance of automated decision systems, it is crucial to inspect the full life cycle of the data being used: from the moment the data are collected to the moment they are fed into a machine learning system. This is the thesis that UvA assistant professor Sebastian Schelter and four international colleagues argue in a recent article in the Communications of the ACM.

Automated decision systems

Computer scientist Sebastian Schelter takes his inspiration from a grand challenge that links the fundamental rights of citizens to data management systems: 'How can we build automated decision systems in such a way that people's fundamental rights are respected by the very design of the system?' Schelter is assistant professor at the Informatics Institute of the University of Amsterdam, working at the intersection of data management and machine learning in the INtelligent Data Engineering Lab (INDElab).

In recent years, more and more automated decision systems have been introduced into society: from systems that automatically select job candidates or recommend products, to systems that predict where certain crimes will happen or determine who can get which loan. Unfortunately, the application of such systems has led to various forms of discrimination, for example by gender, age, ethnicity or socio-economic status. A famous example is a hiring algorithm used by Amazon that showed bias against women.