New mathematical principle used to prevent AI from making unethical decisions

A new mathematical principle has been designed to combat AI bias towards unethical and costly commercial choices. Researchers from the University of Warwick, Imperial College London, EPFL (Lausanne) and Sciteb Ltd have found a mathematical means of helping regulators and businesses manage artificial intelligence (AI) systems' biases towards making unethical, and potentially very costly and damaging, commercial choices.

"It may be necessary to rethink the way AI operates in very large strategy spaces, so that unethical outcomes are rejected by the optimisation process," said Dr Heather Battey.

AI is increasingly deployed in commercial settings, for example to set the prices of insurance products sold to specific customers. The AI chooses from many potential strategies, some of which may be discriminatory or may otherwise misuse customer data in ways that later lead to severe penalties for the company: regulators may levy significant fines, and customers may boycott it. Ideally, such unethical strategies would be removed from the pool of candidates beforehand, but as the AI has no moral sense, it cannot distinguish ethical from unethical strategies on its own.
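The idea of rejecting unethical outcomes inside the optimisation process can be illustrated with a minimal sketch. This is not the researchers' actual method: the strategy names, the `is_ethical` predicate, and the profit figures below are all invented for illustration. The point is simply that the ethics filter is applied to the strategy space before profit maximisation, rather than relying on the optimiser to avoid bad strategies by itself.

```python
def choose_strategy(strategies, profit, is_ethical):
    """Return the highest-profit strategy among those passing the ethics filter.

    strategies  -- iterable of candidate strategy labels
    profit      -- callable mapping a strategy to its expected profit
    is_ethical  -- callable flagging strategies allowed by the regulator/business
    """
    # Restrict the strategy space first, so unethical options can never win.
    allowed = [s for s in strategies if is_ethical(s)]
    if not allowed:
        raise ValueError("no ethical strategy available")
    return max(allowed, key=profit)


# Hypothetical insurance-pricing example: the discriminatory strategy is the
# most profitable, but it is excluded before the optimiser ever sees it.
strategies = ["uniform_pricing", "risk_based_pricing", "discriminatory_pricing"]
profit = {"uniform_pricing": 1.0,
          "risk_based_pricing": 1.4,
          "discriminatory_pricing": 2.1}.get
is_ethical = lambda s: s != "discriminatory_pricing"

best = choose_strategy(strategies, profit, is_ethical)
print(best)  # risk_based_pricing
```

Without the filter, the optimiser would pick `discriminatory_pricing` (profit 2.1); with it, the best remaining strategy is chosen instead. In a real system the hard part, as the article notes, is defining the predicate over a very large strategy space.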
myScience