Avoiding bias and increasing diversity in AI and health research – Part 1

11.05.2021

This article is part 1 of our bias in AI series, an update to the original article in our Biotech Review of the year – issue 8. Read part 2 here.

During the COVID-19 pandemic, the notion of different health outcomes for different populations has gained a higher profile in the public consciousness, particularly in light of the varying effect of COVID-19 on different community groups. Differing outcomes can arise for a variety of reasons, one of which is bias (whether conscious or unconscious) in the healthcare system. But surely this isn't something that needs to be considered in relation to AI in health research, as AI systems are inanimate and can't display human faults…right?

Can inanimate drugs and medical devices really be biased?

There is often a misconception that medical devices and AI systems can't produce biased results because they work on logic and process, rather than being tainted by flawed assumptions rooted in human error or prejudice. Ultimately, however, it is humans who design medical devices, and those devices are tested on datasets collected by humans. Similarly, algorithms are created by humans, and many processes forming part of an AI system rely on human input, for example the selection of the datasets on which the system is trained.

It is therefore possible for AI systems to produce unbalanced outputs with discriminatory effects, and – by extension – for medical devices to work more effectively for certain sectors of the population (namely those sectors whose data was used for training). A similar situation arises in drug discovery, where some clinical trials select participants only from a limited range of ethnic groups or backgrounds, so the effect of the drug on other social groups is never tested.

In the healthcare sector, fairness is particularly important, as biases can lead to significantly worse health outcomes for whole communities and, in some cases, potentially the difference between life and death.

The COVID-19 pandemic has shone a spotlight on the need for diverse data in medical research, as the apparently more severe effect of COVID-19 on individuals from black and Asian ethnic communities (the cause of which, at the time of writing, has not been ascertained) has emphasised the negative impact that can occur when sections of society aren't represented in research.

One of the themes of the ICO’s Guidance on AI and Data Protection[1] is the need to address the risk of bias and discrimination in AI systems, in particular to ensure compliance with the data protection principle of “fairness”.

What’s the challenge?

Where an AI system is trained on data relating only or mainly to one group of people, such as only men or only people from a white ethnic background, the system may not have enough data about other groups to identify the statistical relationships that predict outcomes for those groups. For example, heart disease risk factors tend to differ between the sexes[2], but most research on heart disease to date has focussed on middle-aged men, meaning an AI system trained on that data would be much better at identifying the risk factors for a man than for a woman; a woman's heart disease could therefore go undiagnosed for longer, putting her at higher risk. The same problem arises because, until recently, most medical research has focussed on symptoms and risk factors for white people with Western lifestyles[3].
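To illustrate the mechanism, the short sketch below is our own illustrative example (not drawn from the ICO guidance or any real clinical dataset). It trains a simple Python model, using numpy and scikit-learn, on synthetic data in which the relationship between risk factors and outcome differs between two groups; the make_group helper, the group sizes and the 95/5 split are all hypothetical choices made purely for illustration.

    # Illustrative sketch only (not from the article): how a model trained mostly on
    # one group can underperform for an under-represented group whose risk factors differ.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def make_group(n, weights):
        """Synthetic 'patients': two risk factors, outcome driven by group-specific weights."""
        X = rng.normal(size=(n, 2))
        logits = X @ weights
        y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)
        return X, y

    # Group A and group B have different relationships between risk factors and outcome.
    Xa, ya = make_group(5000, np.array([2.0, 0.2]))   # outcome mostly driven by factor 1
    Xb, yb = make_group(5000, np.array([0.2, 2.0]))   # outcome mostly driven by factor 2

    # Training data is dominated by group A (95%), mirroring a skewed research cohort.
    X_train = np.vstack([Xa[:4750], Xb[:250]])
    y_train = np.concatenate([ya[:4750], yb[:250]])

    model = LogisticRegression().fit(X_train, y_train)

    # Held-out evaluation for each group separately.
    print("Accuracy for group A:", accuracy_score(ya[4750:], model.predict(Xa[4750:])))
    print("Accuracy for group B:", accuracy_score(yb[250:], model.predict(Xb[250:])))

Run on this synthetic data, the model is typically highly accurate for the well-represented group and close to chance for the under-represented one – the same pattern described above for heart disease diagnosis.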

Why has this happened? Two theories were discussed at the most recent BIA Bioscience virtual conference[4]. The first is that there is not enough trust between some ethnic minority communities and the research and medical industries. This is thought to be partly due to a lack of people from certain communities working in those industries, as a result of discrimination and lack of opportunity, and partly due to past health inequalities leading those communities to doubt that research and Western medicine will work for their benefit. The second theory is that researchers simply haven't recognised the need to diversify the range of participants they include; the desire to control as many variables as possible in a trial also reduces the diversity of trial participants.

Bristows Life Science Summit 2021

Trust in AI in the healthcare sector is one of the themes we're going to debate at our next Summit in November '21.

Keep an eye on our events page for further details, and register your interest here.

 

Currently, the majority of genetic data for biomedical studies comes from the Western world, for example from Genomics England, although population studies and biobanks are slowly growing across the world and can be incorporated into research. Data silos can also be damaging to drug research: they lead to adverse effects for minority groups, as drugs simply aren't tested on members of their communities and/or symptoms aren't as readily recognised in those groups. Many drugs are therefore advertised to, and used on, a diverse population without having been tested on a similarly diverse population. This approach risks creating "second class" medical citizens.

Beyond clinical research, the ICO has recently focused on the risk of bias in AI systems and how to prevent that risk from materialising. The ICO lists five main contributing factors to bias in AI systems:

  • Training data may reflect past discrimination (e.g. where it was previously thought that people from a certain ethnic minority didn’t suffer from a particular illness because they didn’t present symptoms in the same way as white patients);
  • Prejudices or bias can occur in the way variables are measured, labelled or aggregated;
  • The developers may use biased cultural assumptions;
  • Objectives may be defined inappropriately, embedding assumptions about gender, race or other characteristics;
  • The model may be deployed in a way that introduces bias (e.g. through an interface that is not designed for accessibility, limiting the people who can properly interact with it).

Addressing these issues is important from both an efficacy and an ethical point of view. For example, only 7% of participants in the COVID-19 vaccine trials were from a BAME background[5], and yet that is a community which appears to be more substantially affected by the illness, so a significant opportunity to make sure the vaccine works for everyone was being missed. From a commercial point of view, drugs which can be scaled globally are more profitable, with large markets overseas in continents such as Africa and Asia. It therefore makes good business sense for pharmaceutical and medical device companies to test their products on patients from Asian, African and South American communities, so that they can be confident their drugs or devices will work across all continents, widening their customer base.

For detail on how these challenges can be overcome, see part 2 in our bias in AI series.

————
[1] https://ico.org.uk/for-organisations/guide-to-data-protection/key-data-protection-themes/guidance-on-ai-and-data-protection/what-do-we-need-to-do-to-ensure-lawfulness-fairness-and-transparency-in-ai-systems/#howshouldweaddress
[2] https://www.health.harvard.edu/heart-health/heart-attack-and-stroke-men-vs-women
[3] https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1001918
[4] https://www.bioindustry.org/event-listing/uk-bioscience-forum-2020.html
[5] https://www.nihr.ac.uk/news/people-from-black-asian-and-minority-ethnic-backgrounds-and-the-elderly-encouraged-to-participate-in-vital-covid-19-vaccine-studies/25870
