Cops Have a Database of 117M Faces. You’re Probably in It

Posted by on Nov 25, 2016 in IT News | 0 comments

It’s no secret that American law enforcement has been building facial recognition databases to aid in its investigations. But a new, comprehensive report on the status of facial recognition as a tool in law enforcement shows the sheer scope and reach of the FBI’s database of faces and those of state-level law enforcement agencies: Roughly half of American adults are included in those collections. And that massive assembly of biometric data is accessed with only spotty oversight of its accuracy and of how it’s used and searched.

The 150-page report, released on Tuesday by the Center for Privacy & Technology at the Georgetown University law school, found that law enforcement databases now include the facial recognition information of 117 million Americans, about one in two U.S. adults. It goes on to outline the dangers to privacy, free speech, and protections against unreasonable search and seizure that come from unchecked use of that information. The report finds that at least a quarter of all local and state police departments currently have access to a facial recognition database—either their own or another agency’s—and that law enforcement in more than half of all states can search against the trove of photos stored for IDs like driver’s licenses.

“Face recognition technology lets the police identify you from far away and in secret without ever talking to you,” says Alvaro Bedoya, the executive director of the Center for Privacy & Technology. “Unless you’ve been arrested, the chances are you’re not in a criminal fingerprint database or a criminal DNA database either, yet by standing for a driver’s license photo at least 117 million adults have been enrolled in a face recognition network searched by the police or the FBI.” He went on to describe the databases as an unprecedented privacy violation: “a national biometric database that is populated primarily by law-abiding people.”

The report notes that no state has passed comprehensive legislation defining the parameters of how facial recognition should be used in law enforcement investigations. Only a handful of departments around the country have voluntarily imposed limits on searches, such as requiring reasonable suspicion or restricting the technology to investigations of serious crimes. Similarly, few departments have enacted standards for testing the accuracy of their digital systems or for training staff to visually confirm face matches—a skill that seems like it would be innate, but actually requires specialized training.

The report also raises unexpected concerns about the potential for racial bias in the facial recognition databases. Law enforcement agencies have, in many cases, argued that the biometric tools reduce racial policing. After all, a computer doesn’t know the societal meaning of race or gender; it simply sorts and matches photos based on numeric analysis of features and patterns. But research has shown that facial recognition algorithms aren’t as impartial as they seem. Depending on the data sets used to train machine learning systems, they can become far better at identifying people of some races than others. For example, some research indicates that facial recognition systems in the United States have lower accuracy when attempting to identify African Americans. Meanwhile, since law enforcement facial recognition systems often include mug shots and arrest rates among African Americans are higher than in the general population, algorithms may be disproportionately able to find a match for black suspects.

The FBI declined to comment specifically on the report, but referred to previous statements about its facial recognition program in which the agency said that its use of the technology prioritizes privacy and civil liberties “beyond the requirements of the law.” The agency also noted that when an investigator searches a facial recognition database, two separate human reviewers check the potential matches the system returns before any individual is identified to that investigator, and only about 12 percent of searches result in a positive identification. It’s not clear whether any such safeguards apply to state and local police using facial recognition systems.

Perhaps the most dystopian aspect of the report is its finding that real-time facial recognition—identifying people in public as they pass a live-feed video camera—is increasing in popularity among police departments. The researchers found that five departments in major cities like Los Angeles and Chicago either already use real-time face recognition, own the technology to do it, or want to buy it. That pervasive surveillance raises concerns similar to those posed by image databases, but significantly expands questions about the expectation of privacy and the ability of police to perform this new form of surveillance en masse and in secret.

In reaction to the report, a coalition of more than 40 civil rights and civil liberties groups, including the American Civil Liberties Union and the Leadership Conference for Civil and Human Rights, launched an initiative on Tuesday asking the Department of Justice’s Civil Rights Division to evaluate current use of facial recognition technology around the country. With facial recognition, “Police are free to identify and potentially track anyone even if they have no evidence that that person has done anything wrong,” says Neema Singh Guliani, legislative counsel for the ACLU. “We don’t expect that the police can identify us when we are walking into a mosque, attending an AA meeting, or when we’re seeking help at a domestic violence shelter.”

For about half of American adults, it’s too late to keep their faces out of law enforcement’s biometric surveillance systems. Now privacy advocates’ best hope is to limit how that collection of faces can be used—and abused.