Melanie Mitchell, Computing Community Consortium (CCC) Council member and Professor at the Santa Fe Institute and Portland State University, was recently interviewed on the Medscape podcast Medicine and the Machine, in an episode titled ‘Can AI Exist in Medicine Without Human Oversight?’. The podcast, led by Medscape editor-in-chief Eric Topol and Stanford’s Abraham Verghese, explores critical questions about artificial intelligence’s (AI) impact on modern medicine.
While the episode acknowledged that AI has made great strides on narrow tasks over the past decade, it highlighted that the technology still lacks the ability to work autonomously in the field of medicine. Making this possible would require ‘transfer learning’, the term used in the field for the ability of an AI system to apply skills and knowledge across domains, beyond one particular task. Moreover, the deep learning techniques used to train these systems require large amounts of annotated data, something Topol points out is seriously lacking in the health industry. Along with the lack of data, other roadblocks to AI’s application in medicine are ethical implications and the replication crisis. These realities prevent the technology from currently performing without human supervision.
A large part of the discussion revolved around the ethical implications of bias in these types of systems. One example came from a study of an Optum algorithm, published in Science in October 2019.
“There was discrimination against Blacks because they weren’t using as many medical resources as Whites. [Black persons were getting low risk scores for many chronic conditions not because their risk was lower but because the Optum algorithm was basing its findings, in large part, on bills and insurance claims.]”
Eliminating bias in AI systems is a major research challenge at the moment, made particularly difficult by the lack of transparency in these systems. Bias can be introduced through the datasets the technology is trained on, or baked in through any biases the programmer holds. To eliminate the bias, you not only have to identify it, but also understand where it came from and how the technology reached the decision it did. Even if that is accomplished, there still has to be a way to test and certify these types of machines as trustworthy and unbiased.
In this sense, AI is not ready to act autonomously. But as Verghese points out, there are some instances in medicine where a machine’s lack of human intuition and emotion is beneficial, such as predicting a patient’s mortality. AI systems are often more accurate in predicting mortality timing and rates because they can look at the patient objectively, whereas a doctor might factor in the patient’s will to live and the doctor’s own intrinsic hope for the patient to survive.
Mitchell points out that humans and machines both have biases; working together, they can be better than either acting alone. While AI is not yet at the point of acting autonomously, it has made beneficial inroads into medicine over the past decade, such as natural language processing systems that transcribe medical records and an autonomous system that diagnoses diabetic retinopathy. Despite the roadblocks, Mitchell has high hopes for the future of AI in medicine over the next 10 years.
“I think we’ll get a lot more AI as assistance, to help physicians broaden their ability to care for people, and I’m quite optimistic about that.” – Melanie Mitchell
Mitchell has recently joined the newly created CCC task force on Responsible Computing. Similar to the dilemmas discussed in the podcast, the task force focuses on issues concerning privacy, ethics and overall responsible practices in computing. Check out the full interview here.