Machines are often thought to be above reproach when it comes to bias of any kind. They don’t have the same social hangups that encumber humans.
But is that actually true?
Congresswoman Alexandria Ocasio-Cortez used her five minutes during a House Oversight Committee hearing last week to call attention to biases built into certain technologies, particularly facial recognition.
Questioning Joy Buolamwini, founder of the Algorithmic Justice League, Ocasio-Cortez asked which demographic is primarily creating these algorithms, and whom the algorithms are designed to recognize.
After Buolamwini confirmed for Ocasio-Cortez that facial recognition algorithms are less reliable at identifying women, people of color, and transgender individuals, she pointed out that these algorithms are primarily developed by white, cisgender men and, as a result, identify that demographic most reliably.
“So, we have a technology that was created and designed by one demographic that is only mostly effective on that one demographic and they’re trying to sell it and impose it on the entirety of the country?”
“We have the pale male data sets being used as something universal when that’s actually not the case,” Buolamwini confirmed.
The use of facial recognition technology is growing rapidly, especially within law enforcement agencies like the FBI. Large tech companies are courting these agencies in a race to be the first to perfect the technology.
However, using facial recognition technology to identify potential criminals could lead to misidentification and wrongful imprisonment of members of marginalized communities, unless the underlying data sets are corrected to better represent people of color, trans people, and women.
People were cheering the Congresswoman’s line of questioning.