
Would you trust a chatbot to provide an accurate medical diagnosis?
SCIENCE SPOTLIGHT
AI has already helped analyze medical images, detect drug interactions, and identify high-risk patients. But many people remain unsure about the next step: a diagnostic chatbot.
A qualitative study on the emergence of trust in diagnostic chatbots found that people see benefits in interacting with both their physician and AI technology [1]. The difference was that trust in a chatbot was something the patient actively chose to extend, while trust in their physician was affect-based. The study also found evidence that a chatbot’s communication competencies matter more for building trust than empathetic reactions. We seem to accept that AI can be useful for understanding and disseminating complex medical information, but its inability to “feel” human emotions may make it hard to trust.
Since that study, AI chatbots have gotten a reboot. In March 2023, GPT-4 entered the scene, and chatbots built on it can now provide more accurate diagnoses, save both patients’ and doctors’ time, and even offer more detailed explanations of a given disease or illness.
While GPT-4 is not trained specifically for health care or medical applications, it was trained on a tremendous amount of data from open sources on the Internet, including medical texts, research papers, health system websites, and openly available health information podcasts and videos [2]. GPT-4 has been successful at taking medical notes, providing follow-up suggestions to physicians, and catching its own “hallucinations,” or mistakes. When prompted to correct a mistake, the bot can avoid repeating it later in the conversation. It also contains real medical knowledge: it answers US Medical Licensing Exam questions correctly more than 90% of the time! That knowledge could be applied to consultation, diagnosis, education, and far more. It’s not ready for prime time in many cases, and we would definitely recommend sticking with your doctor for now, but how do we prepare for the future of AI in healthcare?
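For readers curious what plugging GPT-4 into a health workflow can look like mechanically, here is a minimal, hypothetical sketch of a patient-education query sent through the OpenAI Python API (the openai package, version 1.x, is assumed). The system prompt and its safeguards are our own illustrative assumptions, not part of any tool or study described above.

```python
# Hypothetical sketch: asking GPT-4 for a plain-language explanation of a
# condition. Assumes the `openai` package (>=1.0) and an API key available in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You explain medical information in plain language. "
                "You do not provide a diagnosis, and you remind the user "
                "to confirm everything with a clinician."
            ),
        },
        {
            "role": "user",
            "content": "Explain what type 2 diabetes is and how it is usually managed.",
        },
    ],
)

print(response.choices[0].message.content)
```

The reply is free text with no guarantee of accuracy, which is exactly why any such tool needs the clinician-in-the-loop framing discussed here.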
Some of the underlying hazards of, and mistrust in, using AI in healthcare center on demographic bias. Currently, race, sexuality, gender, and income influence how patients are treated, both consciously and subconsciously, by their healthcare providers. AI has the potential to exacerbate those inequities or to help resolve them, depending on how it is trained.
A 2019 study showed how a biased algorithm assigned Black patients the same risk ratings as white patients even though they were considerably sicker, because it was designed to predict patients’ past health care spending, information that reflects income rather than health [3]. However, AI can also be used to support healthcare equity: another study showed that an ethically trained algorithm reduced racial disparities in pain by a factor of 4.7 relative to standard assessments by human radiologists [4].
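To make that proxy-label problem concrete, the sketch below uses purely synthetic data (Python with NumPy and scikit-learn; every number is invented) to show how the same features can produce a biased risk score when trained against spending but not when trained against illness itself. It is a simplified illustration, not a reconstruction of the cited study’s actual model.

```python
# Toy, purely synthetic illustration of the proxy-label problem: if the
# training target is past spending rather than actual illness, the model
# inherits any access gap baked into spending. All numbers are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

illness = rng.normal(size=n)            # true illness burden (what we care about)
group = rng.integers(0, 2, size=n)      # 1 = group with less access to care
# Hypothetical access gap: group 1 spends less at the same illness level.
spending = illness - 0.8 * group + rng.normal(scale=0.5, size=n)

# Claims-style features: a noisy view of illness, plus the group indicator
# (included only so the relabeled model below can correct the gap).
X = np.column_stack([illness + rng.normal(scale=0.5, size=n), group])

# Same features, two different training targets.
score_spending = LinearRegression().fit(X, spending).predict(X)  # flawed proxy
score_illness = LinearRegression().fit(X, illness).predict(X)    # intended target

for name, score in [("spending-trained", score_spending),
                    ("illness-trained", score_illness)]:
    flagged = score > np.quantile(score, 0.9)  # "high risk" = top 10% of scores
    print(f"{name:16s} mean illness among flagged | group 0: "
          f"{illness[flagged & (group == 0)].mean():.2f}  group 1: "
          f"{illness[flagged & (group == 1)].mean():.2f}")
```

In this toy setup, patients from the lower-spending group have to be noticeably sicker than others before the spending-trained score flags them, while the illness-trained score flags comparably sick patients from both groups, mirroring the pattern the 2019 study describes.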
So, what’s next? The current capabilities of AI in healthcare are just the tip of the iceberg, and new challenges will come with each advancement. Physicians can begin to build their patients’ trust in diagnostic chatbots by helping to improve the accuracy of their outputs, nudging patients to use them, and providing a listening experience alongside the chatbot’s diagnosis. At the same time, both healthcare providers and researchers in the field should take care to prevent misdiagnosis caused by their own biases being imprinted onto the algorithms.
[Artwork: Intelligent Life, Exhibit 01. Lead artist: Midjourney]