A PHS student asks whether artificial intelligence holds the potential to reduce disparities … or unintentionally increase them. It’s a difficult question that has come up in other contexts (e.g., surgical robots, computer-assisted detection in radiology, automated dispensing cabinets), and I would guess the answer comes down to whether the technology reduces costs by replacing skilled labor. If AI is cost-reducing, it may have a greater potential to reduce disparities (provided it is adopted) — but if it is cost-increasing (even if quality-improving), it is likely to be taken up mainly in areas with more generous insurance coverage.
Our student writes:
A.I. making a difference in cancer care
The artificial intelligence we see in daily life is just a fraction of its vast potential. Already it’s making strides in cancer care.
Recently, 60 Minutes aired a segment on the use of artificial intelligence in medicine, specifically the use of IBM’s Watson (yes, the same one that played Jeopardy! five years ago) in cancer treatment. The question I would like to pose concerns the ethics of using artificial intelligence in medicine. While this technology has the potential to shrink disparities (e.g., through its use in underserved populations), it also has the potential to widen them through unequal distribution (e.g., if it is only available in places that can afford its implementation). How can we ensure that this technology is used to shrink disparities rather than grow them? In addition, what other practical applications to population health could Watson have that would be beneficial? What other problems may arise?