Not only did the algorithm falsely match 28 members of Congress to criminal mug shots, but the false matches were more common for congresspeople of color.
AI researchers have repeatedly found that machine learning software picks up biases in the data from which it learns. This bias has been demonstrated in text, images, and audio, and now the American Civil Liberties Union (ACLU) is bringing this message to US lawmakers.
A test by the civil-rights organization used Amazon Rekognition, the company’s machine-learning tool for facial recognition, to build a searchable database of 25,000 publicly available mugshots, then had the algorithm compare those mugshots against photos of members of the US Congress, according to a blog post from ACLU lawyer Jacob Snow. Not only did the algorithm falsely match 28 members of Congress to criminal mugshots, but the false matches were disproportionately common for congresspeople of color.
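For context, a test like this maps onto Rekognition’s standard face-search workflow: index reference photos into a collection, then query that collection with a probe image. The Python sketch below, using boto3 (the standard AWS SDK), shows that flow under stated assumptions; the collection name, filenames, and the 80% similarity threshold (Rekognition’s documented default, which the ACLU reportedly used) are illustrative, not the ACLU’s actual code.

```python
import boto3

# Illustrative sketch of a Rekognition face-search pipeline, assuming
# AWS credentials are configured. Names and paths are hypothetical.
rekognition = boto3.client("rekognition")
COLLECTION_ID = "mugshot-collection"  # hypothetical collection name

# 1. Create a face collection and index a mugshot into it.
#    In the ACLU's test, this step would be repeated for 25,000 images.
rekognition.create_collection(CollectionId=COLLECTION_ID)
with open("mugshot_0001.jpg", "rb") as f:  # placeholder filename
    rekognition.index_faces(
        CollectionId=COLLECTION_ID,
        Image={"Bytes": f.read()},
        ExternalImageId="mugshot_0001",  # label returned with any match
    )

# 2. Search the collection with a photo of a member of Congress.
#    FaceMatchThreshold=80 is Rekognition's default similarity cutoff.
with open("congress_member.jpg", "rb") as f:  # placeholder filename
    response = rekognition.search_faces_by_image(
        CollectionId=COLLECTION_ID,
        Image={"Bytes": f.read()},
        FaceMatchThreshold=80,
        MaxFaces=1,
    )

# 3. Any returned match above the threshold counts as a "hit" --
#    including, in the ACLU's test, the 28 false matches.
for match in response["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], match["Similarity"])
```

The threshold matters here: Amazon has said law enforcement should use a higher similarity cutoff than the default, a point of contention in the dispute over the ACLU’s results.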
The ACLU is using this demonstration to call for a moratorium on police use of facial-recognition technology, claiming that it’s both too biased and not accurate enough to be used effectively.
“If law enforcement is using Amazon Rekognition, it’s not hard to imagine a police officer getting a ‘match’ indicating that a person has a previous concealed-weapon arrest, biasing the officer before an encounter even begins,” Snow wrote. “Or an individual getting a knock on the door from law enforcement, and being questioned or having their home searched, based on a false identification.”