Researchers from MIT and Stanford University found that three different facial analysis programs demonstrate both gender and skin color biases. The full article will be presented at the Conference on Fairness, Accountability, and Transparency later this month.
Specifically, the team looked at the accuracy rates of facial recognition broken down by gender and race. “Researchers at a major U.S. technology company claimed an accuracy rate of more than 97 percent for a face-recognition system they’d designed. But the data set used to assess its performance was more than 77 percent male and more than 83 percent white.” This narrow test base results in a higher error rate for anyone who isn’t white or male.
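To see why a skewed benchmark can mask this, consider how overall accuracy is a sample-weighted average of per-group accuracy. The sketch below uses entirely made-up numbers (the group split loosely echoes the roughly 83-percent-white benchmark described above); only the arithmetic is the point.

```python
# Illustrative sketch with hypothetical numbers: a skewed benchmark can
# report high overall accuracy even when an underrepresented group fares
# far worse, because overall accuracy is weighted by group size.

def overall_accuracy(groups):
    """groups: list of (n_samples, per_group_accuracy) pairs."""
    total = sum(n for n, _ in groups)
    correct = sum(n * acc for n, acc in groups)
    return correct / total

# Hypothetical 1,000-image benchmark dominated by one demographic group.
benchmark = [
    (830, 0.99),  # majority group: near-perfect accuracy
    (170, 0.70),  # underrepresented group: much worse
]
print(f"overall: {overall_accuracy(benchmark):.1%}")  # prints "overall: 94.1%"
```

Despite a 30-percent error rate on the smaller group, the headline number still looks excellent, which is exactly the failure mode a demographically narrow test set invites.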
In order to test these systems, MIT researcher Joy Buolamwini collected over 1,200 images that contained a greater proportion of women and people of color and coded skin color based on the Fitzpatrick scale of skin tones, in consultation with a dermatologic surgeon. After this, Buolamwini tested the facial recognition systems with her new data set.
The results were stark in terms of gender classification. “For darker-skinned women . . . the error rates were 20.8 percent, 34.5 percent, and 34.7 percent,” the release says. “But with two of the systems, the error rates for the darkest-skinned women in the data set . . . were worse still: 46.5 percent and 46.8 percent. Essentially, for those women, the system might as well have been guessing gender at random.”
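The per-group error rates quoted above are simple to compute once predictions are tagged with a demographic group. The sketch below is not the researchers' code; the group names, labels, and counts are invented for illustration, chosen so the worse group's error sits near the article's roughly 46-percent figure, close to the 50 percent of random guessing on a binary label.

```python
# Hedged illustration: per-group error rates for a binary gender classifier.
# All records here are synthetic; only the bookkeeping mirrors the analysis
# described in the article.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples.
    Returns {group: fraction of misclassified samples in that group}."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        errors[group] += (truth != pred)
    return {g: errors[g] / totals[g] for g in totals}

# Synthetic data: 100 samples per group with fixed error counts.
records = (
    [("lighter-skinned men", "M", "M")] * 99
    + [("lighter-skinned men", "M", "F")] * 1     # 1% error
    + [("darkest-skinned women", "F", "M")] * 47  # 47% error
    + [("darkest-skinned women", "F", "F")] * 53
)
print(error_rates_by_group(records))
# prints {'lighter-skinned men': 0.01, 'darkest-skinned women': 0.47}
```

Breaking errors out this way, rather than reporting a single aggregate number, is what exposed the disparity in the first place.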
There have certainly been accusations of bias in tech algorithms previously, and it’s well known that facial recognition systems often do not work as well on darker skin tones. Even with that knowledge, these figures are staggering, and it’s important that companies that work on this kind of software take into account the breadth of diversity that exists in their user base, rather than limiting themselves to the white men who often dominate their workforces.