Berenice Boutin (photo: TMC Asser Institute)
Although types of automated weapons have existed for hundreds of years - with anti-personnel mines being the earliest example - the development of such systems controlled by artificial intelligence (AI) has brought with it a whole raft of new issues and concerns. And as such systems have proliferated over recent years, so has disquiet about their use grown. Berenice Boutin of the Amsterdam Law School and the TMC Asser Institute is head of the DILEMA project - Designing International Law and Ethics into Military Artificial Intelligence - which is examining legal, ethical, and technical approaches to safeguarding human agency over military AI.

What is the DILEMA project about?

'We're looking at military AI in its broadest sense. Much of the coverage of this issue focuses on 'killer robots' (i.e. autonomous weapon systems), and that is certainly an important aspect of our research, but our project seeks to look at the implications of the whole sweep of military uses of AI.