Using AI to fight COVID-19 may harm disadvantaged groups, experts say

Experts from the University of Cambridge warn that AI tools used in the pandemic response, trained on skewed medical data, can reflect and exacerbate biases against minorities and lower socio-economic groups

March 19, 2021 12:03 pm | Updated 12:08 pm IST


Over the past year, companies worldwide have devised ways to harness big data and machine learning (ML) in medicine. A model developed at the Massachusetts Institute of Technology (MIT) uses artificial intelligence (AI) to detect asymptomatic COVID-19 patients from coughs recorded on their smartphones. In South Korea, a company used cloud computing to scan chest X-rays and monitor infected patients.

AI and ML have been deployed extensively during the pandemic, for purposes ranging from data extraction to vaccine distribution. But experts from the University of Cambridge have raised questions about the ethical use of these technologies, warning that they tend to harm minorities and people of lower socio-economic status.

“Relaxing ethical requirements in a crisis could have unintended harmful consequences that last well beyond the life of the pandemic,” said Stephen Cave, Director of Cambridge’s Centre for the Future of Intelligence (CFI).

Clinical decisions informed by AI, such as predicting which patients will deteriorate and need ventilation, can be flawed because the underlying models are trained on biased data. These datasets and algorithms are inevitably skewed against groups that access health services less frequently, including minority ethnic communities and people of lower socio-economic status, the Cambridge team warned.

Another issue lies in how algorithms are used to allocate vaccines locally, nationally and globally. Last December, the algorithm behind Stanford Medical Center’s vaccination plan left out several young front-line workers.

“In many cases, AI plays a central role in determining who is best placed to survive the pandemic. In a health crisis of this magnitude, the stakes for fairness and equity are extremely high,” said Alexa Hagerty, research associate at the University of Cambridge.

The university’s researchers also highlighted discrimination built into AI tools that pick symptom profiles from medical records, reflecting and exacerbating biases against minorities.

Contact-tracing apps have also been criticised by several experts around the world, who say they exclude people who lack internet access or digital skills, and raise user-privacy concerns.

In India, biometric identity programmes can be linked to vaccine distribution, raising concerns about data privacy and security. Other vaccine-allocation algorithms, including some used by the COVAX alliance, are driven by privately owned AI. These private algorithms are like ‘black boxes’, Hagerty noted.
