Two new studies of facial recognition and policing in the southeastern U.S. do little to increase the public’s trust in AI-equipped law enforcement.
The first study examines a law passed last year in the state of Virginia that the researchers say has failed to “properly account for the harms” of police using facial recognition.
A second, broader study is based on qualitative interviews with police officers in North Carolina.
Both documents indicate that law enforcement’s use of AI in the United States is going ahead without answers to fundamental ethical and operational questions.
‘From Ban to Approval,’ published in the Richmond Public Interest Law Review, examines the history of one of the first state facial recognition laws. Its lead author is a director of Virginia’s public defender’s office; her co-authors are from Georgetown University and the Future of Privacy Forum, which is partly funded by corporations.
The authors outline the accuracy and bias risks common to many facial recognition algorithms, but also raise how pervasive, always-watching surveillance changes the balance of power between citizens and the government: citizens have no comparable way to secretly watch officials from a distance, nor any way to do so without first going through a judge.
The paper goes deep into the weeds, finding, for instance, that Virginia law requires the federal National Institute of Standards and Technology to certify that the state’s algorithm achieves a true-positive rate of at least 98 percent.
No corresponding measurement of false positives, however, is mandated.
In fact, according to the researchers, the state’s law lets police use facial recognition “in a generally unregulated manner, and in ways that can harm privacy, free speech, due process, and other civil rights and liberties.”
The North Carolina-focused study, by researchers from North Carolina State University, found that police are optimistic that AI in general is making them better and more effective at preserving public safety.
However, given unresolved ethical concerns and the potential for AI to harm civil rights, officers feel it “will not necessarily increase trust” between law enforcement and the communities they serve.
At least in the context of the university study, not enough is being done to create well-rounded policies for achieving positive outcomes from AI-enhanced policing, grounded in principled ethics, harm mitigation, and societal benefits that are obvious and lasting.