Two new AI research projects awarded seed funding

The UvA's research priority area Human(e) AI has awarded seed funding to two research projects following its third annual call for funding. One project will investigate the conditions under which an autonomous agent should take the responsibility to act; the other will examine AI transparency in brand-based communication.

Responsible artificial agency: a logical perspective

When should an artificial agent intervene to resolve a dilemma? And when should it alert its user or a relevant authority instead? Given the growing number of safety-critical applications of autonomous systems in areas such as medicine, engineering, surveillance, transportation and media, it is increasingly urgent to develop rigorous tools for determining when it is responsible for an agent to act.

The aim of this project is to develop logics for reasoning about the conditions under which an autonomous agent should take the responsibility to act. We will first use logic as a meta-analytical tool to analyse artificial systems and to formalise the main criteria for assessing whether an AI intervention is responsible. Our medium-term aim is to apply this analysis to developing fully formalised, decidable systems that can in principle be used by AI for its internal reasoning about its own and others' actions and their consequences.
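To give a flavour of the act-versus-alert question the project studies, the following is a minimal, purely illustrative sketch of a rule-based decision procedure; the criteria, thresholds, and names used here are hypothetical assumptions, not the project's actual logics.

```python
# Toy sketch: deciding whether an autonomous agent should act itself,
# alert a user/authority, or keep monitoring. All criteria and
# thresholds are hypothetical, chosen only for illustration.
from dataclasses import dataclass


@dataclass
class Situation:
    criticality: float   # severity of the dilemma, in [0, 1]
    confidence: float    # agent's confidence in its assessment, in [0, 1]
    reversible: bool     # whether the agent's intervention can be undone


def decide(s: Situation) -> str:
    """Return 'act', 'alert', or 'wait' for a given situation."""
    if s.criticality >= 0.8 and s.confidence >= 0.9 and s.reversible:
        return "act"    # high stakes, high confidence, undoable: intervene
    if s.criticality >= 0.5:
        return "alert"  # serious but uncertain or irreversible: escalate
    return "wait"       # low stakes: keep monitoring


print(decide(Situation(criticality=0.9, confidence=0.95, reversible=True)))   # act
print(decide(Situation(criticality=0.9, confidence=0.60, reversible=False)))  # alert
print(decide(Situation(criticality=0.2, confidence=0.99, reversible=True)))   # wait
```

A formal logic of responsible agency would replace such ad hoc thresholds with precisely defined, decidable criteria over actions and their consequences.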