Artificial intelligence

Artificial Intelligence (AI) systems are commonly assumed to counter human error and bias, but whether this is the case depends heavily on how the algorithm underpinning the AI system was built. If it is trained on data from a skewed sample, or coded without taking existing inequalities into account, its output will reproduce the same flaws.
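
To make the mechanism concrete, here is a minimal, hypothetical sketch (synthetic data, scikit-learn): two groups have identical underlying ability, but the historical decisions used as training labels held one group to a stricter standard, and a model trained on those labels reproduces the skew.

```python
# Hypothetical illustration: a model trained on skewed historical decisions
# reproduces the skew in its own predictions. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups (0 and 1) with the *same* underlying creditworthiness.
group = rng.integers(0, 2, size=n)
ability = rng.normal(0.0, 1.0, size=n)

# Historical approvals were biased: group 1 had to clear a higher bar.
historical_approval = (ability > np.where(group == 1, 0.5, 0.0)).astype(int)

# Train on the biased historical labels.
X = np.column_stack([ability, group])
model = LogisticRegression().fit(X, historical_approval)

# The learned policy approves group 1 less often, despite equal ability.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
```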

The truth is that algorithms often exhibit racial and gender bias, and the more widely algorithms are used in society, the greater the real-life consequences. This holds especially true for those who are already most vulnerable, as they tend to bear the costs of innovation while reaping fewer of its benefits.

Biased AI causes concrete harm. For instance, AI is used to evaluate applications for mortgages, business loans and other credit products, yet it discriminates against women whose incomes dip around their childbearing years, because limited childcare and the expectation that women do most unpaid domestic labour often pull mothers out of the workforce. This limits women's financial independence, even when they have a steady income or a solid business plan. Gender bias in AI also produces skewed predictions of recidivism rates, which limits our ability to design fair and effective policies aimed at preventing recidivism.

Very often, algorithms reproduce racial bias. In health care, when assessing patients' medical needs, AI often fails to refer Black patients who need extra care. Algorithms can be trained to recognise melanoma from images, but if the training set lacks diversity in skin colour, skin thickness or amount of hair, the resulting model will not detect skin cancer with equal accuracy across a diverse population. In education and recruitment assessments, AI favours high-income, white candidates, who are flagged as "more fit" because they resemble the data on which the algorithm was trained.
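
One practical consequence is that a single overall accuracy number can hide exactly this failure. The sketch below is a hypothetical audit on synthetic data: a classifier is trained on a dataset dominated by one subgroup, and its accuracy is then reported separately per subgroup, the kind of check a melanoma dataset lacking diverse skin tones would fail.

```python
# Hypothetical fairness audit: report accuracy per subgroup, not just overall.
# All data is synthetic; in practice the subgroups might be skin-tone categories.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_data(n, shift):
    """Synthetic feature vectors; `shift` mimics how the condition presents
    differently in this subgroup."""
    X = rng.normal(shift, 1.0, size=(n, 5))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)  # synthetic 'melanoma' label
    return X, y

# Training set dominated by subgroup A; subgroup B is barely represented.
Xa, ya = make_data(5000, shift=0.0)
Xb, yb = make_data(100, shift=1.5)
model = RandomForestClassifier(random_state=0).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Audit: accuracy reported separately for each subgroup on held-out data.
for name, shift in [("subgroup A", 0.0), ("subgroup B", 1.5)]:
    X_test, y_test = make_data(2000, shift)
    print(f"{name}: accuracy {accuracy_score(y_test, model.predict(X_test)):.2f}")
```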

To counteract the harmful consequences of biased AI, we must carefully examine the assumptions that underlie the data we feed into the system; diverse teams are generally better at doing so. It would also be wise not to rely too heavily on AI when the benefits are minor, or when those using it barely understand how the algorithm works. Judges are generally not experts in statistics, while data scientists may not be fully aware of civil rights issues, and this gap makes it hard to disentangle the causes of discriminatory outcomes in AI.
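
In practice, assessing the assumptions behind the data can start with very simple checks before any model is trained. The sketch below is purely illustrative, with hypothetical column names and toy values: it inspects how well each group is represented and how historical outcomes are distributed across groups, since large gaps there will be learned and reproduced.

```python
# Hypothetical pre-training audit with illustrative column names and toy data.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "M", "M", "F", "M", "M", "M", "F"],
    "approved": [0,   1,   1,   0,   1,   0,   1,   0],
})

# Representation: is any group scarce relative to the population the model will affect?
print(df["gender"].value_counts(normalize=True))

# Historical outcome rates: disparities here become the model's training signal.
print(df.groupby("gender")["approved"].mean())
```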