AI Ethics Experts Propose Driverless Car Regulations Similar To Drug Approval Process
By Shilo Rea

As autonomous systems such as driverless cars increasingly perform tasks that previously could only be done by humans, two artificial intelligence ethics experts at Carnegie Mellon University argue that regulations governing such vehicles should be phased in, much like the drug approval process.

In an opinion article published in IEEE Intelligent Systems, David Danks and Alex John London argue that current safety regulations do not anticipate these systems and are therefore ill-equipped to ensure that they will perform safely and reliably.

"Currently, we ensure safety on the roads by regulating the performance of the various mechanical systems of vehicles and by licensing drivers," said London, professor of philosophy and director of the Center for Ethics and Policy in the Dietrich College of Humanities and Social Sciences. "When cars drive themselves, we have no comparable system for evaluating the safety and reliability of their autonomous driving systems."

"Autonomous vehicles have the potential to save lives and increase economic productivity. But these benefits won't be realized unless the public has credible assurance that such systems are safe and reliable," London said.

Danks and London cite the Department of Transportation's recent attempt to develop safety regulations for driverless cars as an example of traditional guidelines that do not adequately test and monitor the novel capabilities of autonomous systems. Instead, they suggest creating a staged, dynamic system that resembles the regulatory and approval process for drugs and medical devices, including a robust system for post-approval monitoring.