Joichi Ito, director of the MIT Media Lab, said during an artificial intelligence panel at the World Economic Forum Annual Meeting this week that the software’s apparent trouble recognizing diverse faces is likely because the engineers who build it, and the faces used to train it, are mostly white.
The issue goes back to the basics of artificial intelligence. Machine learning programs are built by teaching a computer with a set of data. In the case of facial recognition software, the computer is taught to recognize faces using a collection of photos, sometimes of the engineers themselves. Because the photos used to train the software include few minority faces, the program often has trouble picking out those faces, according to Ito.
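To illustrate the dynamic Ito describes, here is a minimal, hypothetical Python sketch, not any real vendor's system. It trains a simple classifier to separate synthetic "face" vectors from background clutter, using a training set that heavily overrepresents one group; the group sizes, feature dimensions, and scikit-learn model are all illustrative assumptions.

```python
# Hypothetical sketch: a detector trained mostly on one group performs worse
# on the group it rarely saw. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
DIM = 16

def sample(center, n):
    """Synthetic feature vectors for one population, clustered around `center`."""
    return rng.normal(loc=center, scale=1.0, size=(n, DIM))

group_a_center = np.full(DIM, 2.0)    # well-represented group
group_b_center = np.full(DIM, -2.0)   # underrepresented group
background_center = np.zeros(DIM)     # "not a face" clutter

# Training set: 900 group-A faces, only 100 group-B faces, 1000 non-faces.
X_train = np.vstack([sample(group_a_center, 900),
                     sample(group_b_center, 100),
                     sample(background_center, 1000)])
y_train = np.concatenate([np.ones(1000), np.zeros(1000)])  # 1 = face, 0 = not a face

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Detection rate on balanced held-out faces: the underrepresented group scores far lower.
for name, center in [("group A", group_a_center), ("group B", group_b_center)]:
    faces = sample(center, 500)
    print(name, "detection rate:", round(clf.predict(faces).mean(), 3))
```

In this toy setup the model fits whatever pattern dominates its training data, so faces that look like the rarely seen group are routinely missed, which is the same failure mode Ito attributes to real systems trained on unrepresentative photo sets.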
The issue spreads even further because many programmers don’t write their code from scratch. Software engineers reuse libraries, or prewritten code, across multiple programs. When that prewritten code is built on a set of photographs that favors one race over another, the bias is carried into every program that uses it.
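A hedged sketch of that reuse pattern: many applications simply load the same prewritten, pretrained detector rather than training their own. OpenCV's bundled Haar cascade is used here only as a familiar example, and the input image path is hypothetical; the point is that whatever biases were baked into the model file's original training images travel into every program that loads it.

```python
import cv2

# Prewritten detection code plus a model file trained by someone else, years ago.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("group_photo.jpg")            # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# The application author never sees the training data; they only call the library.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"detected {len(faces)} face(s)")
```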
Joy Buolamwini, a graduate researcher on the project, said during a TED talk that she had to use a white mask for her face to be picked up by facial-recognition systems, from a cheap webcam to a smart mirror and even a social robot being tested on the opposite side of the globe.
Beyond not being able to use a smart mirror or being misidentified on social media, bigger issues arise when law enforcement uses facial-recognition software, she says, such as when monitoring video feeds.
Bulamwini said a solution to the issue would be to simply train the facial-detection systems with a more diverse set of images. Developing diverse teams to work on the projects would also help create more immersive coding, she suggests, as well as auditing existing software to identify biases. At the end of her discussion in December, Bulamwini invited anyone interested in working to change bias in the code to join the Algorithmic Justice League.
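The kind of audit Buolamwini suggests can be sketched in a few lines: run an existing detector over a labeled, demographically balanced benchmark and compare detection rates per group. The benchmark format and the detect() function below are assumptions for illustration, not part of any published tool.

```python
from collections import defaultdict

def audit(detect, benchmark):
    """benchmark: iterable of (image, group_label) pairs where every image contains a face."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for image, group in benchmark:
        totals[group] += 1
        if detect(image):              # True if the detector found the face
            hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}

# Large gaps between groups' detection rates flag a bias worth fixing,
# for example by retraining on a more diverse image set.
# rates = audit(my_detector, my_benchmark)
```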