New York: Artificial Intelligence (AI) technology in the medical field has the potential to worsen health inequities, particularly for people with limited access to healthcare and those from vulnerable communities, according to a study.
AI can automate diagnoses, reduce physician workload, and even bring specialised healthcare to people in rural areas or developing countries.
However, when University of Maryland researchers analysed crowd-sourced datasets used to build AI algorithms from medical images, they found that most did not include patient demographics.
In the study, published in the journal Nature Medicine, the researchers also found that the algorithms were not evaluated for inherent biases. That means there is no way of knowing whether the images include representative samples of the population, such as Black, Asian, and Indigenous American patients.
According to the researchers, much of medicine is already fraught with partiality toward certain races, genders, ages, or sexual orientations.
Small biases in individual datasets can be greatly amplified when hundreds or thousands of them are combined in these algorithms.
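To see how the numbers compound, consider a minimal, hypothetical Python sketch; the population share, sampling skew, and dataset counts below are made-up illustrations, not figures from the study:

```python
from collections import Counter
import random

random.seed(0)

# Hypothetical illustration: each of 1,000 small datasets is drawn from a
# collection process that slightly under-represents one group (40 per cent
# sampled vs. an assumed 50 per cent population share). The skew looks
# minor in any single dataset, but is large in absolute terms once pooled.
POPULATION_SHARE = 0.50   # assumed true share of the group
SAMPLED_SHARE = 0.40      # assumed share in each collected dataset
DATASET_SIZE = 200
NUM_DATASETS = 1000

pooled = Counter()
for _ in range(NUM_DATASETS):
    for _ in range(DATASET_SIZE):
        group = "under_represented" if random.random() < SAMPLED_SHARE else "majority"
        pooled[group] += 1

total = sum(pooled.values())
expected = POPULATION_SHARE * total
actual = pooled["under_represented"]
print(f"Expected ~{expected:,.0f} samples from the group, got {actual:,} "
      f"(shortfall of {expected - actual:,.0f} training examples)")
```

A ten-point skew in each small dataset looks modest on its own, but pooled at this scale it translates into tens of thousands of missing training examples for the under-represented group.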
"These deep learning models can diagnose things physicians can't see, such as when a person might die or detect Alzheimer's disease seven years earlier than our known tests -- superhuman tasks," said Paul Yi, MD, Assistant Professor of Diagnostic Radiology and Nuclear Medicine at the varsity's School of Medicine. "Because these AI machine learning techniques are so good at finding needles in a haystack, they can also define sex, gender, and age, which means these models can then use those features to make biased decisions," Yi added.
"These deep learning models can diagnose things physicians can't see, such as when a person might die or detect Alzheimer's disease seven years earlier than our known tests -- superhuman tasks," said Paul Yi, MD, Assistant Professor of Diagnostic Radiology and Nuclear Medicine at the varsity's School of Medicine.
"Because these AI machine learning techniques are so good at finding needles in a haystack, they can also define sex, gender, and age, which means these models can then use those features to make biased decisions," Yi added.
Much of the data collected in large studies tends to come from people of means who have relatively easy access to healthcare, which means it tends to be skewed toward men rather than women, and toward white people rather than other races.
As a result, the data compiled into algorithms has the potential to slant outcomes worldwide.
For the study, the researchers chose to evaluate the datasets used in data science competitions in which computer scientists and physicians crowdsource data from around the world and try to develop the best, most accurate algorithm.
Specifically, the researchers investigated medical imaging algorithms, such as those that evaluate CT scans to diagnose brain tumours or blood clots in the lungs.
Of the 23 data competitions analysed, 61 per cent did not include demographic data such as age, sex, or race, and none evaluated for biases against underrepresented or disadvantaged groups.
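The first of those checks is straightforward to picture in code. The sketch below is a hypothetical Python audit, not the study's actual tooling; it simply asks whether a dataset's metadata file reports any demographic fields at all, and the file name and column names are assumptions:

```python
import csv

# Demographic fields the audit looks for (an assumed, illustrative list).
DEMOGRAPHIC_FIELDS = {"age", "sex", "race", "ethnicity"}

def audit_metadata(path: str) -> dict:
    """Report which demographic fields a dataset's metadata CSV includes."""
    with open(path, newline="") as f:
        header = next(csv.reader(f))          # read only the column names
    columns = {c.strip().lower() for c in header}
    found = DEMOGRAPHIC_FIELDS & columns
    return {
        "reported": sorted(found),
        "missing": sorted(DEMOGRAPHIC_FIELDS - columns),
        "any_demographics": bool(found),
    }

# Example usage with a hypothetical competition metadata file:
# print(audit_metadata("challenge_train_metadata.csv"))
```

By the study's count, an audit along these lines would come back empty for 61 per cent of the competitions examined.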
"We hope that by bringing awareness to this issue in these data competitions -- and if applied in an appropriate way -- that there is tremendous potential to solve these biases," said lead author Sean Garin, Program Coordinator at the varsity's Medical Intelligent Imaging (UM2ii) Center.