U-M study finds facial recognition technology in schools presents many problems, recommends ban
Research reveals inaccuracy, racial inequity and increased surveillance are the touchstones of a flawed technology
Facial recognition technology should be banned for use in schools, according to a new study by the University of Michigan’s Ford School of Public Policy that cites the heightened risk of racism and potential for privacy erosion.
The study by the Ford School’s Science, Technology, and Public Policy Program comes at a time when debates over returning to in-person school in the face of the COVID-19 pandemic are consuming administrators and teachers, who are deciding which technologies will best serve public health, educational and privacy requirements.
Among the concerns is facial recognition, which could be used to monitor student attendance and behavior as well as to support contact tracing. But the report argues this technology will "exacerbate racism," an issue of particular concern as the nation confronts structural inequality and discrimination.
In the pre-COVID-19 debate about the technology, facial recognition was seen as a potential panacea for security concerns in the aftermath of school shootings. Schools also have begun using it to track students and automate attendance records. Globally, facial recognition technology represents a $3.2 billion business.
The study, "Cameras in the Classroom,” led by Shobita Parthasarathy, asserts that not only is the technology not suited to security purposes, but it also creates a web of serious problems beyond racial discrimination, including normalizing surveillance and eroding privacy, institutionalizing inaccuracy and creating false data on school life, commodifying data and marginalizing nonconforming students.
"We have focused on facial recognition in schools because it is not yet widespread and because it will impact particularly vulnerable populations. The research shows that prematurely deploying the technology without understanding its implications would be unethical and dangerous,” said Parthasarathy, STPP director and professor of public policy.
The study is part of STPP’s Technology Assessment Project, which focuses on emerging technologies and seeks to influence public and policy debate with interdisciplinary, evidence-based analysis. The study used an analogical case comparison method, looking specifically at previous uses of security technology such as CCTV cameras and metal detectors, as well as biometric technologies, to anticipate the implications of facial recognition. The research team also included one undergraduate and one graduate student from the Ford School.
Currently, there are no national laws regulating facial recognition technology anywhere in the world.
"Some people say, ‘We can’t regulate a technology until we see what it can do.’ But looking at technology that has already been implemented, we can predict the potential social, economic and political impacts, and surface the unintended consequences,” said Molly Kleinman, STPP’s program manager.
Though the study recommends a complete ban on the technology’s use, it concludes with a set of 15 policy recommendations for those at the national, state and school district levels who may be considering using it, as well as a set of sample questions for stakeholders, such as parents and students, to consider as they evaluate its use.