Facial recognition

Facial recognition software is useful for many things: unlocking your phone, helping people with autism recognise emotions, or enabling police to decide which individuals to stop and search on the street. However, facial recognition systems trained on biased data are likely to discriminate algorithmically against women and minorities.

Facial recognition systems are widely used by governments to identify crime suspects. Algorithms developed in the United States perform worse on women, dark-skinned individuals, and people aged between 18 and 30. That means people from these groups are more likely to be misidentified, and therefore stopped and searched, by police who use computer vision to flag potential criminal suspects (bearing in mind that racial police violence has resulted in numerous police killings).
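To make concrete what "performs worse" means here, below is a minimal sketch, with entirely hypothetical group labels and data, of the kind of disaggregated measurement audits rely on: the false match rate (how often the system wrongly links an innocent person to a watchlist entry), computed separately for each demographic group.

```python
# Minimal sketch with hypothetical data: disaggregating a face-matching system's
# false match rate per demographic group. A false match wrongly links an innocent
# person to a watchlist entry, which is what can trigger a wrongful stop.
from collections import defaultdict

# Each record: (demographic group, system reported a match, person really is the suspect).
# Group labels and outcomes are made up purely for illustration.
trials = [
    ("group_a", True,  False),   # false match
    ("group_a", False, False),
    ("group_b", False, False),
    ("group_b", True,  True),    # correct match
    # ... a real audit would run thousands of trials per group
]

stats = defaultdict(lambda: {"false_matches": 0, "non_suspects": 0})
for group, reported_match, is_suspect in trials:
    if not is_suspect:                       # only non-suspects can be falsely matched
        stats[group]["non_suspects"] += 1
        if reported_match:
            stats[group]["false_matches"] += 1

for group, s in sorted(stats.items()):
    fmr = s["false_matches"] / s["non_suspects"]
    print(f"{group}: false match rate = {fmr:.0%}")
```

If the false match rate turns out to be markedly higher for one group, its members are more likely to be wrongly flagged, and therefore wrongly stopped.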

Scholarly analysis of several commercially available facial recognition systems revealed that many (if not all) companies employ binary sex categories of male and female. Systems built on such a binary classification cannot capture the complexities of gender and will misclassify transgender and non-binary people. Besides making a person feel rejected or excluded, this can have practical consequences when facial recognition is applied in, say, airport security: the security procedures that follow a misclassification could easily cause someone to miss their flight.

Diversity in tech is imperative to mitigate algorithmic bias in computer vision systems, as well as bias in other applications like voice recognition systems. That is, diversity of software developers, who are aware of their own potential biases, and who use diverse datasets to train their algorithms. An intersectional lens should be applied when assessing the extent to which computer vision systems function. Want to know more about algorithmic discrimination? We recommend watching the documentary Coded Bias on Netflix.
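As a rough illustration of what an intersectional assessment looks like in practice, the sketch below (hypothetical attribute names and data) evaluates accuracy on the intersection of two demographic annotations rather than on each axis alone, which can reveal subgroups that a single overall accuracy figure hides.

```python
# Minimal sketch of an intersectional evaluation, assuming a labelled test set with
# per-image demographic annotations (attribute names and data are hypothetical).
from collections import defaultdict

# Each record: (gender annotation, skin type annotation, prediction was correct?)
test_results = [
    ("female", "darker",  False),
    ("female", "darker",  True),
    ("female", "lighter", True),
    ("male",   "darker",  True),
    ("male",   "lighter", True),
    # ... a real audit would use thousands of examples per subgroup
]

totals = defaultdict(lambda: [0, 0])          # (gender, skin type) -> [correct, total]
for gender, skin_type, correct in test_results:
    key = (gender, skin_type)                 # the intersection, not each axis alone
    totals[key][1] += 1
    if correct:
        totals[key][0] += 1

overall = sum(c for c, _ in totals.values()) / sum(t for _, t in totals.values())
print(f"overall accuracy: {overall:.0%}")     # a single headline number can look fine...

for (gender, skin_type), (correct, total) in sorted(totals.items()):
    print(f"{gender}/{skin_type}: {correct}/{total} = {correct / total:.0%}")
# ...while one intersectional subgroup (here, darker-skinned women) does far worse.
```

Reporting results per intersection rather than per attribute is what stops a healthy-looking aggregate figure from masking a subgroup on which the system barely works.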