Even if you’re sitting at home on your couch, there’s a chance you could be arrested for protesting.
How? If the police force in your area is using any kind of facial recognition software to identify protesters, it’s possible you could be misidentified as one.
Most facial recognition software was trained primarily on white male faces, experts told Digital Trends, which means the probability of misidentification is much higher for anyone who is not a white man.
“Anytime you do facial recognition, it’s a best guess. It’s a probability score,” said David Harding, chief technology officer for ImageWare Systems, a cybersecurity firm that works with law enforcement on facial recognition. “Anytime you’re in an area where they [law enforcement or the government] are using facial recognition, you have to worry about being falsely matched to someone. Or what’s even worse, someone being falsely matched to you.”
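To see what that “probability score” means in practice, consider a minimal sketch. Everything below is hypothetical and invented for illustration, not any vendor’s actual software: a system reduces each face to a numeric embedding, scores a captured face against a gallery of known faces, and reports anything above a tuned threshold as a match.

```python
import numpy as np

# Hypothetical illustration: a face "match" is a similarity score
# compared against a threshold, never a certainty.
rng = np.random.default_rng(42)

def cosine_similarity(a, b):
    # Higher score = the model thinks the two faces look more alike.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Gallery of known faces (e.g., mug shots) as 128-dim embeddings.
gallery = {name: rng.normal(size=128) for name in ("alice", "bob", "carol")}

# A probe face captured from footage: a noisy view of "bob".
probe = gallery["bob"] + rng.normal(scale=0.8, size=128)

MATCH_THRESHOLD = 0.5  # arbitrary; tuning it trades missed matches for false ones

for name, embedding in gallery.items():
    score = cosine_similarity(probe, embedding)
    verdict = "MATCH" if score >= MATCH_THRESHOLD else "no match"
    print(f"{name}: score={score:.2f} -> {verdict}")
```

Lower the threshold and the system catches more true matches, but it also falsely matches more innocent people; that trade-off is the risk Harding describes.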
Deployed against protesters
Facial recognition has been gaining in popularity among law enforcement.
In Minnesota, where demonstrators have flooded the streets for days protesting the killing of George Floyd by police, officers are still using the controversial facial recognition software Clearview AI, according to digital security firm Surfshark.
Clearview AI came under fire earlier this year for scraping people’s photos from social media in violation of those platforms’ terms of service. Several companies, including Twitter, sent the firm cease-and-desist letters.
Surfshark told Digital Trends that law enforcement in several other states, including New York, is also still using Clearview AI, and that police in Washington County, Minnesota, are using Amazon’s Rekognition software.
But in a statement to Digital Trends, the Minneapolis Police Department denied that it possessed any facial recognition technology.
A peer-reviewed study from the Massachusetts Institute of Technology found that Rekognition was far worse at classifying female and dark-skinned faces than other comparable services. The software misclassified women as men 19% of the time, the New York Times reported, and the error rate climbed when skin color was taken into account: 31% of dark-skinned women were labeled as men.
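A rough back-of-the-envelope calculation shows what those rates could mean at scale. The crowd sizes below are hypothetical; only the 19% and 31% figures come from the study as reported.

```python
# Hypothetical crowd, real error rates (per the MIT study as reported).
women_scanned = 1_000
dark_skinned_women_scanned = 500

misclassified_women = women_scanned * 0.19                      # 19% error rate
misclassified_dark_skinned = dark_skinned_women_scanned * 0.31  # 31% error rate

print(f"~{misclassified_women:.0f} of {women_scanned} women misclassified as men")
print(f"~{misclassified_dark_skinned:.0f} of {dark_skinned_women_scanned} "
      "dark-skinned women misclassified")
```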
“False results can incriminate the wrong people as FRT [facial recognition technology] is proven to be problematic while spotting criminals in a crowd,” said Gabrielle Hermier, Surfshark’s media officer.
A shaky system at best
A facial recognition system prone to false positives could cause innocent people to be arrested, according to Mutale Nkonde, a fellow at the Berkman Klein Center for Internet & Society at Harvard University and a non-resident fellow at the Digital Civil Society Lab at the Stanford Center on Philanthropy and Civil Society.
“Police will use the mug shots of people who have been committed for other crimes to train facial recognition, arguing that if you’ve committed one crime, then you’ve committed another,” Nkonde said. “First off, that’s unconstitutional. Second, that means that if you’ve been arrested for looting in the past, but haven’t looted recently, the police could now come arrest you for looting in May or June because your picture is in the system and it may have turned up a false positive.”
Harding said people who are not white and not male are “very much at risk” of misidentification. But he emphasized there is a big difference between facial recognition deployed as the sole tool in mass surveillance and law enforcement using the software in a controlled environment, alongside a mug shot, fingerprints, and other evidence, to find a specific suspect.
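Harding’s distinction can be made concrete with another hypothetical sketch (invented names, scores, and thresholds): one-to-one verification checks a single face against a known suspect with corroborating evidence in hand, while one-to-many search scores every face in a crowd, multiplying the chances of a false match.

```python
import numpy as np

rng = np.random.default_rng(1)

def similarity(a, b):
    # Cosine similarity between two face embeddings.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

suspect = rng.normal(size=128)                       # embedding from a mug shot
crowd = [rng.normal(size=128) for _ in range(500)]   # faces from camera footage

# 1:1 verification (controlled): one probe, one claimed identity,
# backed by fingerprints and other evidence.
probe = rng.normal(size=128)
print("verification score:", round(similarity(probe, suspect), 2))

# 1:N identification (mass surveillance): every crowd face is scored
# against the suspect. No one here actually is the suspect, yet a loose
# threshold can still flag several of them.
flagged = [i for i, face in enumerate(crowd) if similarity(face, suspect) > 0.2]
print("innocent crowd faces flagged:", len(flagged))
```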
“Even if it were 100% accurate, this isn’t compatible with a democratic society,” said Saira Hussain, a staff attorney with the Electronic Frontier Foundation. “It’s always a possibility that someone will be misidentified.”
Dangerous precedents
Critics say the case of Eric Loomis looms large over facial recognition use.
Loomis was sentenced to six years in prison after a 2013 arrest, due in large part to an assessment from a private security company’s software. But the company that wrote the algorithm kept the software proprietary, even as it was being used to help determine a defendant’s prison sentence, the New York Times reported at the time.
It’s a chilling precedent. Protesters and demonstrators have long feared police surveillance, in some cases for good reason. The Christian Science Monitor found evidence that Chicago police tracked people through their cell phones in 2014, during the Black Lives Matter protests that sprang up after the shooting of Michael Brown in Ferguson, Missouri.
It happened again in 2015, Hussain said, following the protests surrounding the death of Freddie Gray in Baltimore, also at the hands of police.
“Law enforcement used people’s social media posts as a tool to identify who was in a certain vicinity of the protest, identify the protesters, and then arrest them for unrelated charges,” Hussain said.
“People who are currently protesting are taking a stand on racial injustice. If they risk becoming subjects of state surveillance, we are leaning towards China, where [facial recognition technology] is a tool for authoritarian control,” Hermier wrote.