A recent Government Accountability Office (GAO) survey shows that at least 10 federal agencies have plans to expand their use of facial recognition technology over the next two years—a prospect that alarms privacy advocates who worry about a lack of oversight.
The GAO released the results of a survey of 24 federal agencies, finding that 18 of them use facial recognition technology. Fourteen of those agencies use the technology for routine activity, such as unlocking agency-issued smartphones, while six reported using facial recognition software for criminal investigations and five reported using it for surveillance, the Aug. 24 report found.
“For example, [U.S. Department of Health and Human Services] reported that it used an FRT system (AnyVision) to monitor its facilities by searching live camera feeds in real-time for individuals on watchlists or suspected of criminal activity, which reduces the need for security guards to memorize these individuals’ faces,” the GAO said. “This system automatically alerts personnel when an individual on a watchlist is present.”
According to the GAO, at least 10 government agencies plan to expand their use of facial recognition technology through 2023. To do so, many agencies are turning to the private sector.
For example, “[the] U.S. Air Force Office of Special Investigations reported it began an operational pilot using Clearview AI in June 2020, which supports the agency’s counterterrorism, counterintelligence, and criminal investigations,” the GAO said.
“The agency reported it already collects facial images with mobile devices to search national databases and plans to enhance searches by accessing Clearview AI’s large repository of facial images from open sources to search for matches.”
The GAO’s Aug. 24 report follows June research that focused specifically on law enforcement’s use of facial recognition technology. That earlier report revealed the vast troves of data held by federal law enforcement, including 836 million images held by the Department of Homeland Security alone.
The June report also revealed the lack of oversight regarding facial recognition technology. According to the report, 13 of the 20 federal law enforcement agencies that use the technology didn’t know which systems their own personnel were using.
“For example, when we requested information from one of the agencies about its use of non-federal systems, agency officials told us they had to poll field division personnel because the information was not maintained by the agency,” the report said.
“These agency officials also told us that the field division personnel had to work from their memory about their past use of non-federal systems and that they could not ensure we were provided comprehensive information about the agency’s use of non-federal systems.”
The lack of oversight of the government’s use of surveillance technology is an issue that has drawn the attention of lawmakers from both sides of the aisle. Democrats have largely focused on the racial disparities in the accuracy of facial recognition, while some Republicans have expressed concerns about domestic surveillance.
Michigan resident Robert Williams, a Black man who was wrongly arrested in January 2020 after Detroit police incorrectly identified him as a felon based on shoddy facial recognition technology, testified about such problems at a U.S. House Judiciary Committee hearing.
“Why is law enforcement even allowed to use such technology when it obviously doesn’t work?” Williams said to lawmakers July 13. “I get angry when I hear companies, politicians, and police talk about how this technology isn’t dangerous or flawed or say that they only use it as an investigative tool.
“If any of that was true, I wouldn’t have been arrested.”
Williams said he supports the Facial Recognition and Biometric Technology Moratorium Act, which would halt the use of facial recognition technology by federal agencies until Congress authorizes such use. Sen. Ed Markey (D-Mass.) reintroduced the legislation in June, but little action has been taken on the measure since.
With inaction at the federal level, states and localities have moved to curb the use of facial recognition technology on their own.
The state of Washington enacted a law in March 2020 that requires government agencies to obtain a warrant to run facial recognition scans. Local jurisdictions such as Oakland, San Francisco, and King County, Washington, have also banned government use of the technology.
Groups such as the American Civil Liberties Union (ACLU) support such efforts, arguing that the expansion of facial recognition technology must be halted until lawmakers can enact safeguards.
Others have cautioned against banning useful technology in the zeal to protect privacy.
“Critics miss the fact that the benefits of law enforcement use of facial recognition are well-proven—they are used today to help solve crimes, identify victims, and find witnesses—and most of the concerns about the technology remain hypothetical,” the Information Technology & Innovation Foundation, a largely pro-tech industry think tank, stated.
“In fact, critics of the technology almost always make a ‘slippery slope’ argument about the potential threat of expanding police surveillance, rather than pointing to specific instances of harm. Banning the technology now would do more harm than good.”