Artificial intelligence in health care: impact on disparities?

A PHS student asks whether artificial intelligence holds the potential to reduce disparities … or unintentionally increase them. It’s a difficult question that has come up in other contexts (e.g., surgical robots, computer-assisted detection in radiology, automated dispensing cabinets), and I would guess the answer comes down to whether the technology reduces costs by replacing skilled labor. If AI is cost-reducing, it may have a greater potential to reduce disparities (if it is adopted); if it is cost-increasing but also quality-improving, it is likely to be taken up first in areas with more generous insurance coverage.

Our student writes:

A.I. making a difference in cancer care (www.cbsnews.com): “The artificial intelligence we see in daily life is just a fraction of its vast potential. Already it’s making strides in cancer care.”

Recently, 60 Minutes had a segment on the use of artificial intelligence in medicine, specifically the use of IBM’s Watson (yes, the same one that played Jeopardy! 5 years ago) in cancer treatment. The question I would like to pose relates to the ethics of using artificial intelligence in medicine. While this technology has the potential to shrink disparities (e.g., through its use in underserved populations), it also has the potential to grow disparities through the unequal distribution of its use (e.g., if it is only available in places that can afford its implementation). How can we ensure that this technology is used to shrink disparities rather than grow them? In addition, what other practical applications in population health could Watson have that would be beneficial? What other problems may arise?

6 thoughts on “Artificial intelligence in health care: impact on disparities?”

  1. I think this dilemma is relevant for any new medical technology, and it is perhaps epitomized by AI. Implementing something so advanced certainly has the chance of increasing existing disparities, as described in the post, and there will have to be a mindful, targeted effort to avoid this. So how should AI be provided to those who are most in need? In the 60 Minutes episode, the use of AI for analyzing vast amounts of data is described. Ideally, this process would be free and open-access for all clinics and healthcare workers, including those with under-served patient populations. I believe this could happen, but it will take quite a bit of time. The technology behind Watson and other AI is still so advanced that it will take a while for it to be fully adapted to a healthcare application and implemented at a mainstream level, let alone mass-produced at a rate that would allow it to be low-cost or free.

    This post reminded me of another new medical technology application I found fascinating: the use of drones to deliver medical samples or supplies (link below). Compared to AI, I think this technology has a bit more potential to be broadly useful in the short-term. For the applications described in this NPR article, drones can be used to quickly deliver precious samples for lab tests from patients to laboratories. The technology for drones to fly anywhere already exists – if a company or hospital were to employ them, there would be no new health disparities created in terms of the benefit provided to different patients.

    http://www.npr.org/sections/health-shots/2016/09/13/493289511/doctors-test-drones-to-speed-up-delivery-of-lab-tests

  2. Really insightful comments, Kaitlin! While watching the 60 Minutes clip, I think the biggest thing that kept crossing my mind was the trade-off between increasing efficiency and the potential to minimize the need for skilled labor. While replacing certain “jobs” with AI may have the capacity to increase efficiency, it may also replace full-time employment for skilled workers and therefore significantly increase disparities within a specific community. Particularly in under-served populations, where unemployment rates may already be high, the use of AI to increase services, reduce workload, or just generally increase efficiency may actually be a huge detriment by taking away potential job opportunities for residents of the community. Realistically, it’s important to consider the actual comparison between the cost of AI and the cost of salaries for skilled labor, but at this point it is probably more expensive to implement AI than to train and employ more skilled labor. This is an incredibly interesting topic, though, and gives some great food for thought!

  3. I really like this topic and find it very challenging. Artificial intelligence, as mentioned earlier, could potentially close disparities or cause them to grow even further. However, I think artificial intelligence could help close the gap more than widen it. If we tried to count how many individuals need certain surgeries or medical care but cannot afford them, we would never be able to stop counting. If artificial intelligence were used (i.e., machines for surgery that do not necessarily require a physician to perform the operation), this could potentially make surgeries more affordable and allow them to occur when needed. Oftentimes there is a shortage of doctors specialized in certain fields, which can cause backups and waiting periods for procedures that need to be done but are not necessarily life-threatening. However, I do see the other side of the argument, that artificial intelligence could take away jobs from nurses, doctors, PAs, etc. Yet I do not think artificial intelligence could ever substitute for the human brain, especially in trauma surgery or emergency medicine, where one has to diagnose or fix the medical issue in a matter of minutes even when the cause is unclear. I think artificial intelligence could only further help medicine by reducing costs for patients and by serving the many people who might unfortunately not have readily available access to healthcare (third-world countries, understaffed urgent care facilities, overcrowded hospitals, or large urban cities).

  4. While I agree that this technology has promise, and being able to process and apply this vast amount of information is wonderful, there are fundamental limits that need to be addressed. First, the AI will come to its own conclusions based on the rules it is given, and they may not be the conclusions you would expect. If the rules set by the programmers are too general or too specific, the AI can end up going down a logic path that is nowhere near the desired goal. Second is the data that gets put into the AI system. If the input is biased, the outcome will be biased, despite the best intentions of the programmers. Both the programming and the inputs will be influenced by the biases of people. The Beauty.AI contest demonstrates both of these issues. The idea was to have an impartial judge for a beauty contest, so the organizers used a computer as a judge. People from around the world sent in pictures for the contest, but out of the 44 winners almost all were white. This is because the data set used to teach the computer which traits to select for contained mostly pictures of white people. Thus the computer program was taught that light skin was attractive and built this into its judging criteria. If the journal articles that are given to Watson are based on biased research, a biased conclusion will be reached. Since many scientific studies and trials are done on white males, the results will be tailored to what works for white males. (A small simulated example of this mechanism is sketched at the end of this comment thread.)

  5. Considering the implications of implementing Watson (or any other similar AI) in medicine is a tricky topic. As noted by the participants of this thread, there’s potential for Watson to improve the quality of healthcare, even beyond the focus of treating cancer. But who will be able to access said advanced care? The transition to a new AI-based approach for disseminating and translating treatment procedures would not be an overnight event; it would be an era that demands time and proper context. Not all healthcare providers will be capable of making a smooth transition even if they were properly funded. Consider the amount of training that would be required to comprehend the strengths and weaknesses of this new instrumentation; how it would be applied; the accuracy and precision of the developers who designed it; how efficacious Watson is; etc.

    I would predict disparities across the healthcare providers that incorporate Watson into their treatment programs, based on the fact that not all healthcare providers are created equal. Although implementing Watson in cancer treatment has clear benefits in quality improvement, not all is sound with the Midas touch.

  6. My first reaction to considerations of artificial intelligence in any field, particularly medicine, is one of absolute intrigue. There are so many unanswered questions about its potential roles in the provision of everything from primary to sub-specialized care, and about the implications of eventually relying on this tool to substantiate, or dare I say supplant, our clinical judgment. This technology is improving at an incredible pace, as the 60 Minutes segment has shown, and although there is still a lot of work to do before this tool is implementable in practices across the country, there is a sense that it is only a matter of time before these obstructing kinks are worked out. To make a comparison, I believe the use of artificial intelligence in medicine today could be at a phase similar to where genomics was in the 1990s. We can imagine the utility of such a tool even though we don’t yet fully understand all of the complexity involved in interpreting its results. The previous responses to this post point out a couple of the unanswered questions we have about artificial intelligence in medicine, so allow me to pose yet another dilemma.
    To continue my comparison of artificial intelligence in medicine to that of genomics, I think it’s important to consider the ownership of such knowledge and technology. The Human Genome Project was largely funded by the NIH, and as such, the results and a lot of the progress made in this endeavor are publicly available. Artificial intelligence is proprietary technology, despite the fact that the knowledge it is accruing and consolidating into a manageable output comes from publicly funded research. So, just as drug companies set the prices of their proprietary drugs, who is to say that IBM won’t eventually price an individual’s exhaustive cancer research review at some prohibitively high level? Indeed, the company has invested billions of dollars into researching this technology and is entitled to recoup those costs in a capitalist economy. With that said, I don’t think it’s entirely appropriate to halt the progress made in medical artificial intelligence research (even if that were a possibility), but I think it’s important that we be mindful that the availability of this technology to the entire public is not simply a given.

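To make the bias mechanism described in comment 4 concrete, here is a minimal Python sketch of a toy diagnostic rule tuned on data dominated by one group. Everything in it is invented for illustration: the biomarker, the two groups, and the 95/5 training split are assumptions, not a real system or dataset.

```python
# A minimal sketch, assuming a made-up biomarker and two hypothetical groups:
# a decision rule tuned on a training set that is 95% group A can look accurate
# overall while systematically misclassifying group B.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, baseline):
    """Simulate one group: disease raises a biomarker by 2 units above the group's baseline."""
    disease = rng.integers(0, 2, size=n)                       # 0 = healthy, 1 = disease
    marker = rng.normal(loc=baseline + 2.0 * disease, scale=1.0, size=n)
    return marker, disease

# Biased training data: 95% group A (baseline 0), only 5% group B (baseline 2).
xa, ya = make_group(1900, baseline=0.0)
xb, yb = make_group(100, baseline=2.0)
x_train = np.concatenate([xa, xb])
y_train = np.concatenate([ya, yb])

# "Model": choose the single marker cutoff that maximizes accuracy on the training set.
cutoffs = np.linspace(x_train.min(), x_train.max(), 200)
accuracies = [np.mean((x_train > c).astype(int) == y_train) for c in cutoffs]
best_cutoff = cutoffs[int(np.argmax(accuracies))]

# Evaluate the same cutoff on fresh samples from each group.
for name, baseline in [("group A", 0.0), ("group B", 2.0)]:
    x_test, y_test = make_group(5000, baseline)
    acc = np.mean((x_test > best_cutoff).astype(int) == y_test)
    print(f"{name}: accuracy {acc:.2f} (cutoff {best_cutoff:.2f})")

# Typical result: roughly 0.8+ accuracy for group A but close to chance for group B,
# because the cutoff learned from A-dominated data flags most healthy members of B as diseased.
```

The point is only that the learned rule reflects whichever group dominates the training data; the same logic applies to a system like Watson if the literature it ingests is drawn mostly from studies of one population.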