Does ‘AI’ stand for augmenting inequality in the era of COVID-19 healthcare?

Leslie, Mazumder, et al., BMJ (16 March 2021)

‘Artificial intelligence can help tackle the covid-19 pandemic, but bias and discrimination in its design and deployment risk exacerbating existing health inequity.’

Among the most damaging characteristics of the COVID-19 pandemic has been its disproportionate effect on disadvantaged communities. As the outbreak has spread globally, factors such as systemic racism, marginalisation, and structural inequality have created path dependencies that lead to poor health outcomes. These social determinants of infectious disease and vulnerability to disaster have converged on already disadvantaged communities, leaving them facing higher levels of economic instability, disease exposure, infection severity, and death.

Artificial intelligence (AI) technologies – quantitative models that make statistical inferences from large datasets – are an important part of the health informatics toolkit used to fight contagious disease. However, AI is well known to be susceptible to algorithmic biases that can entrench and augment existing inequality. Uncritically deploying AI in the fight against COVID-19 therefore risks amplifying the pandemic's adverse effects on vulnerable groups and exacerbating health inequity.
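One mechanism behind such algorithmic bias is sampling bias: when one population dominates the training data, a model fitted to maximise overall accuracy effectively fits the majority group and performs worse on the minority. The following is a minimal illustrative sketch, not taken from the article, using entirely synthetic data and a deliberately simple single-threshold "model"; the group sizes, biomarker distributions, and the 1.5-unit distribution shift between groups are all assumptions made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration (all values assumed): a disease is scored from a
# single biomarker whose distribution differs between two populations.
# Group A dominates the training sample; group B is underrepresented.
n_a, n_b = 900, 100

def simulate(n, shift):
    """Simulate n patients: half ill, biomarker ~ N(shift + 2*ill, 1)."""
    ill = rng.integers(0, 2, n).astype(bool)
    x = rng.normal(shift + 2.0 * ill, 1.0)
    return x, ill

x_a, y_a = simulate(n_a, 0.0)   # group A: healthy ~ N(0,1), ill ~ N(2,1)
x_b, y_b = simulate(n_b, 1.5)   # group B: healthy ~ N(1.5,1), ill ~ N(3.5,1)

x = np.concatenate([x_a, x_b])
y = np.concatenate([y_a, y_b])

# "Training": choose the single decision threshold that maximises overall
# accuracy on the pooled data. Because group A supplies 90% of the sample,
# the chosen threshold is tuned to group A's biomarker distribution.
candidates = np.linspace(x.min(), x.max(), 200)
accuracies = [((x > t) == y).mean() for t in candidates]
t_best = candidates[int(np.argmax(accuracies))]

# Evaluate the shared threshold per group: group B, whose healthy range
# sits above the majority-fit threshold, suffers more false positives.
acc_a = ((x_a > t_best) == y_a).mean()
acc_b = ((x_b > t_best) == y_b).mean()
print(f"threshold={t_best:.2f}  accuracy A={acc_a:.2f}  accuracy B={acc_b:.2f}")
```

The point of the sketch is structural rather than numerical: a model that is "accurate" in aggregate can still systematically misclassify an underrepresented group, which is exactly the failure mode that uncritical deployment risks entrenching.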