Facial recognition technology is here – and it isn’t always accurate
Law enforcement is increasingly using facial recognition technology to identify suspects who are caught on camera. It's as if there were a perpetual lineup that includes every American. If you have a photo on file with the government – and you almost certainly do – police and federal agencies may already have run it through an artificial intelligence (AI) program to compare it against images of criminal suspects.
This is happening now. In a recent survey that reached 42 of the country's 86 federal law enforcement agencies, at least 20 admitted to using facial recognition technology. Yet none of those agencies could point to adequate privacy and accuracy controls, or even to meaningful oversight of how their employees use the technology.
It's happening despite the fact that facial recognition technology has a long record of inaccuracy, and that its accuracy is even worse for people of color.
In 2018, the ACLU tested Amazon's facial recognition tool, Rekognition, on photographs of members of Congress. The tool incorrectly identified 28 members of Congress as people who had been arrested for crimes. The errors fell disproportionately on people of color: nearly 40% of Rekognition's false matches were members of color, even though people of color make up only about 20% of Congress.
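To see how lopsided those numbers are, here is a minimal back-of-the-envelope sketch of the arithmetic in Python. The per-group counts (11 of the 28 false matches involving members of color, in a 535-member Congress that is roughly 20% people of color) are assumptions inferred from the percentages above, not figures taken directly from the ACLU report:

```python
# Back-of-the-envelope disparity calculation for the ACLU's Rekognition test.
# Counts below are assumptions inferred from the reported percentages.

total_members = 535                            # members of Congress scanned
poc_members = round(0.20 * total_members)      # ~107 members of color (~20%)
false_matches = 28                             # total misidentifications
poc_false_matches = 11                         # ~39% of the false matches

# Per-group false-match rates
poc_rate = poc_false_matches / poc_members
white_rate = (false_matches - poc_false_matches) / (total_members - poc_members)

print(f"False-match rate, members of color: {poc_rate:.1%}")    # ~10.3%
print(f"False-match rate, white members:    {white_rate:.1%}")  # ~4.0%
print(f"Disparity ratio: {poc_rate / white_rate:.1f}x")         # ~2.6x
```

Under these assumptions, a member of color was roughly two and a half times as likely to be falsely matched as a white member.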
Congress is beginning to respond to these accuracy concerns. In July, four federal lawmakers introduced legislation to slow or halt federal law enforcement's adoption of facial recognition technology. Meanwhile, local governments such as King County, Washington, have banned police use of the technology altogether.
Tech companies that stand to profit from this technology are now lobbying for lax rules. Facial recognition is a powerful surveillance tool, and we need to ensure that Americans' constitutional rights are protected as well.