In the aftermath of the horrific 2013 Boston Marathon bombing, which killed three people and injured hundreds, police had one big thing going for them: video and photos of two suspects. But despite intense efforts, they couldn’t identify them. Three days later, the FBI released images in hopes that someone would recognize the pair and contact authorities. It wasn’t until Tamerlan and Dzhokhar Tsarnaev got into a shootout with police that evening that the manhunt ended.
At the outset, police hoped matches could quickly be made using new facial recognition software. It failed, even though government databases had photos of both brothers. Laments were heard about the innovative technology's unfortunate shortcomings in catching terrorists.
Different laments are being heard lately. San Francisco has become the first city to prohibit police and other city agencies from making use of facial recognition technology. Critics fear it could be used to conduct continuous mass surveillance of the entire populace, keeping track of every person’s every movement.
Orwellian fears are premature. No one is plotting to implement such a system in this country. Though the prospect of privacy violations is a legitimate concern, the potential of this technology for protecting ordinary people is too great to dismiss. Facial recognition technology, correctly applied, would be the worst nightmare of every dangerous criminal.
The Indiana State Police has used it not only to flag potential suspects but also to locate crime victims. Pop star Taylor Swift, who has been the target of many death and kidnapping threats, reportedly has used it to detect stalkers who show up at her concerts.
And wouldn’t it have been a blessing for police to quickly identify the vicious killers who set off the marathon bomb? Notes George Washington University law professor Jonathan Turley: “Instead, the police did area searches which were not only ineffectual but arguably unlawful. The ‘old school’ approach in Boston was to isolate whole parts of Boston for door-to-door searches.”
The Chicago Police Department says it seldom uses the technology and only after a crime has been committed. In response to a Freedom of Information Act request last year from the American Civil Liberties Union of Illinois, the Illinois State Police indicated that agency doesn’t use it. But neither has been terribly forthcoming on the matter. Transparency is essential to fostering public understanding and support.
The key to this law enforcement method is the same as with previous ones: not banning it entirely, but subjecting it to rules that weigh the needs of law enforcement against reasonable expectations of privacy — as we do with searches, street stops, wiretaps and DNA swabs. Police may employ all of these tools, but only under specified restrictions designed to limit intrusions and minimize abuse.
One option, surely subject to debate about its real-time practicality, is to require police to get a search warrant before putting facial recognition technology to use to identify a suspect in a particular crime. Another is to stipulate that, because the software is not infallible, a match by itself should be taken not as definitive proof but merely as suggestive evidence, warranting additional investigation. To that end, law enforcement officials frequently say they use possible matches as clues, not as sufficient grounds for criminal charges. In sum, police and prosecutors should be humble about the technology's accuracy and ever alert to the risks of misidentification.
As an abstract matter, Americans may worry at the specter of mass surveillance. But when a terrorist or other violent criminal is at large, most of us would regard facial recognition as a priceless asset that should be enlisted as quickly as possible.
It would be a mistake to give the government carte blanche with this innovation. But it would be equally unwise to deprive the public of the benefits it could provide.