Those who follow artificial intelligence (AI) in medicine know that IBM's efforts to use its Watson system in healthcare have been a mixed bag at best, and that many of the engineers working on the project have been laid off. What went wrong? Watson did so well on Jeopardy!
How Real Is Real?
One of IBM's initiatives, Watson Genomics, focused on using data from lab tests on patients' cells to recommend treatments, replacing the 10- to 15-doctor "tumor boards" that do this sort of work. Some aspects of that initiative went very well; another did not. It ran into real difficulties with patient data, so hypothetical data was used instead, together with Watson's huge intake of oncology textbooks and journal articles. That effort produced treatment recommendations that, in the real world, might have had fatal consequences.
And therein lies the rub. Real-world data is messy, and nothing guarantees that it is accurate. Hospitals are still oriented toward billing, not excellent outcomes. Even so, this is all the data we have; not using it to train AIs is not an option.
Current AI systems may use "deep learning" and other techniques to extract patterns from data; the data they use to discover those patterns is called the "training set." Once that work is done, the learned patterns are tested against other sets of data to see how well the AI performs. Part of what the Watson experience indicates is something AI researchers learned the hard way: it is very difficult to create artificial training sets that mirror the real world. Using actual data is much more effective.
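To make the terminology concrete, here is a minimal sketch of that train/test workflow using scikit-learn. The data is randomly generated and the features and label are purely illustrative stand-ins, not real clinical variables.

```python
# A "training set" is carved out of the available data, the model learns
# patterns from it, and a held-out test set measures how well those
# patterns generalize to data the model has never seen.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # stand-ins for lab values, vitals, etc.
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in outcome label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)        # learn from the training set
print("held-out accuracy:", model.score(X_test, y_test))  # evaluate on unseen data
```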
How Current Is Current?
The Watson experience points to another problem. Medical treatment is constantly advancing, and patient populations are changing (if nothing else, they are getting older), which raises the question of how the training set relates to current conditions. Experts in the field say that, so far, very little attention has been devoted to keeping these systems updated with new training data. That increases the risk that treatment recommendations will no longer reflect the best clinical judgment or the real-world results of new therapies.
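One minimal safeguard is to monitor whether incoming patient data still looks like the data the model was trained on, and to flag the model for retraining when it drifts. The sketch below uses a population stability index (PSI); the feature, the simulated data, and the 0.2 threshold are illustrative assumptions, not clinical standards.

```python
# Compare the distribution of one feature (here, patient age) between
# the original training data and recent data; a large population
# stability index (PSI) suggests the model should be retrained.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Larger PSI means the two samples differ more."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
train_age  = rng.normal(62, 10, 5000)   # ages seen at training time (illustrative)
recent_age = rng.normal(68, 11, 5000)   # the population has aged since then

psi = population_stability_index(train_age, recent_age)
if psi > 0.2:                           # 0.2 is a common rule-of-thumb cutoff
    print(f"PSI = {psi:.2f}: data has drifted, schedule retraining")
```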
Where Has AI Succeeded?
The success stories of AI applications in health care usually involve relatively simple questions ("Is this lump in this breast suspicious or not?") rather than complex ones such as, "What is the best cancer treatment for this tumor in this patient?"
AIs have proven better than human radiologists at detecting suspicious lesions on several kinds of X-rays. One reason: human eyes are in constant motion, while an AI can scan the X-ray pixel by pixel.
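As an illustration, here is a tiny convolutional classifier of the kind used for such "suspicious or not" reads, written in PyTorch. The architecture, layer sizes, and the random stand-in image are illustrative, not a production model.

```python
# A tiny convolutional classifier for a "suspicious / not suspicious"
# read of a grayscale X-ray. The convolutions visit every pixel
# neighborhood uniformly, unlike a human gaze.
import torch
import torch.nn as nn

class LesionClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # slides over every pixel
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 2)   # two classes: suspicious vs. not

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

xray = torch.randn(1, 1, 256, 256)               # random stand-in for one X-ray
print(LesionClassifier()(xray).softmax(dim=1))   # class probabilities
```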
What Is Decision Support?
For once, the name of a technology is not misleading: decision support systems act as inputs to medical decisions and, ideally, improve them. What kinds of decisions? Among them are:
- Which antibiotic should I use to cure the patient’s infection and not increase bacterial resistance?
- What test should I order next to confirm my working diagnosis?
- Which treatment option is the most effective and the cheapest?
- Can I safely discharge this patient? If not now, when?
- Should I have another radiologist look at this MRI?
Doctors face questions like these every day, and have to make decisions in real time, often without the luxury of contemplation or research. They also suffer from “cognitive overload.” Even with sub-sub-specialties, there is too much information for one doctor to carry in his or her head.
Decision support systems can handle huge amounts of data, process it in ways a single human never could, and do so without fatigue. The combination of a human doctor and an AI ought to be a winning one (provided, of course, that the AI is kept current and retrained when things change).
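To give a flavor of what one such rule might look like in code, here is a sketch of a drug-interaction check at order entry. The interaction table, field names, and alert text are hypothetical placeholders, not clinical guidance.

```python
# One decision-support rule: flag a possible drug interaction before a
# new medication order is signed.
from dataclasses import dataclass

INTERACTIONS = {
    frozenset({"warfarin", "ibuprofen"}): "increased bleeding risk",
}

@dataclass
class Order:
    patient_id: str
    drug: str

def check_new_order(order: Order, active_meds: list[str]) -> list[str]:
    """Return human-readable alerts for the ordering clinician."""
    alerts = []
    for med in active_meds:
        reason = INTERACTIONS.get(frozenset({order.drug, med}))
        if reason:
            alerts.append(f"{order.drug} + {med}: {reason}")
    return alerts

print(check_new_order(Order("p1", "ibuprofen"), ["warfarin", "metformin"]))
```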
What’s The Next Big Thing?
Current decision support systems are notorious for generating "alert fatigue": they hit clinicians with so many recommendations and warnings that clinicians tune them out. They are also poorly integrated into the clinical workflow and into electronic health record (EHR) operations.
The Holy Grail, of course, is decision support driven by the EHR, with recommendations keyed to what is happening to the patient in near real time.
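A minimal sketch of that idea, assuming a stream of EHR lab events and a severity threshold that keeps alert fatigue in check. The scoring rule, the 0-10 scale, and the cutoff are illustrative assumptions, not clinical guidance.

```python
# React to EHR lab events as they arrive, but only interrupt the
# clinician above a severity threshold, to limit alert fatigue.
from dataclasses import dataclass

ALERT_THRESHOLD = 7   # popups only for high-severity findings

@dataclass
class LabResult:
    patient_id: str
    test: str
    value: float

def severity(result: LabResult) -> int:
    """Toy rule; a real system would encode clinical guidelines."""
    if result.test == "potassium" and result.value >= 6.0:
        return 9   # dangerously high potassium
    if result.test == "potassium" and result.value > 5.0:
        return 4   # mildly elevated: chart it, don't interrupt
    return 0

def on_ehr_event(result: LabResult) -> None:
    score = severity(result)
    if score >= ALERT_THRESHOLD:
        print(f"ALERT ({score}/10): {result.test} = {result.value} "
              f"for patient {result.patient_id}")
    # lower-severity findings are logged to the chart instead

on_ehr_event(LabResult("p42", "potassium", 6.3))   # fires an alert
on_ehr_event(LabResult("p43", "potassium", 5.3))   # silently charted
```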
"Clinical pathways" are another area ripe for innovation. Every patient is unique, but the course of care is in some ways highly predictable. An AI has the potential to automate orders, verify from the EHR that appropriate care has been delivered, flag deviations from the ideal pathway, and recommend corrective actions.
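A toy sketch of the deviation-flagging piece, assuming a hypothetical pathway definition and a list of completed orders pulled from the EHR:

```python
# Compare orders recorded in the EHR against an "ideal" pathway and
# flag missing steps. The pathway and order names are hypothetical.
EXPECTED_PATHWAY = ["admit", "blood_cultures", "antibiotics", "reassess_48h"]

def flag_deviations(completed_steps: list[str]) -> list[str]:
    """Return expected steps not yet completed, in pathway order."""
    done = set(completed_steps)
    return [step for step in EXPECTED_PATHWAY if step not in done]

ehr_orders = ["admit", "antibiotics"]   # blood cultures were skipped
missing = flag_deviations(ehr_orders)
if missing:
    print("Pathway deviations; recommend corrective orders:", missing)
```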
The key focus of AI development in the future should be on "the human use of human beings": maximizing outcomes for the patient while reducing the burden on caregivers. That is the best-case scenario.