We are not ready for what is about to come.
It is not that healthcare will soon be run by a web of artificial intelligences (AIs)
that are smarter than humans. Such general AI does not appear to be anywhere near the horizon.
Rather, the narrow AI that we already have, with all its flaws and limitations, is
already good enough to transform much of what we do, if applied carefully.
Amara’s Law tells us that we tend to overestimate the impact of a technology in the
short run, but underestimate its impact in the long run [1]. There is no doubt that AI has gone through another boom cycle of inflated expectations,
and that some will be disappointed that promised breakthroughs have not materialized.
Yet, despite this, the next decade will see a steadily growing stream of AI applications
across healthcare. Many of these applications may initially be niche, but eventually
they will become mainstream, and in time they will lead to substantial change in the
business of healthcare. In twenty years’ time, there is every prospect that the changes
we find will be transformational.
Such transformation, however, comes with a price. For all the benefits that will come
through improved efficiency, safety, and clinical outcomes, there will be costs [2]. The nature of change is that it often seems to appear suddenly. While we are all
daily distracted trying to make our unyielding health system bend to our needs using
traditional approaches, disruptive change surprises because it comes from places we
least expected, and in ways we never quite imagined.
In linguistics, the Whorf hypothesis says that we can only imagine what we can speak
of [3]. Our cognition is limited by the concepts we have words for. It is much the same
in the world of health informatics. We have developed strict conceptual structures
that corral AI into solving classic pattern recognition tasks like diagnosis or treatment
recommendation. We think of AI automating image interpretation, or sifting electronic
health record data for personalized treatment recommendations. Most don’t often think
about AI automating foundational business processes. Yet AI is likely to be more disruptive
to clinical work in the short run than it will be to care delivery.
Digital scribes, for example, will steadily take on more of the clinical documentation
task [4]. Scribes are digital assistants that listen to clinical talk such as patient consultations.
They may undertake a range of tasks from simple transcription through to the summarization
of key speech elements into the electronic record, as well as providing information
retrieval and question-answering services. The promise of digital scribes is a reduction
in human documentation burden. The price for this help will be a re-engineering of
the clinical encounter. The technology to recognize and interpret clinical speech
from multiple speakers, and to transform that speech into accurate clinical summaries,
is not yet here. However, if humans are willing to change how they speak, for example
by giving an AI commands and hints, then much can be done today. It is easier for
a human to say “Scribe, I’d like to prescribe some medication” than for the AI to
be trained to accurately recognize whether the speech it is listening to is past history,
present history, or prescription talk.
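To make that asymmetry concrete, the sketch below (in Python, using hypothetical command phrases and note section names, and standing in for no particular scribe product) shows how speech prefixed by an explicit cue such as “Scribe, I’d like to prescribe…” can be routed with simple pattern matching, whereas unconstrained speech would need a trained classifier to decide whether it is past history, present history, or prescription talk.

```python
import re

# Hypothetical command phrases a clinician might use to cue the scribe.
# Routing on explicit cues is far simpler than training a model to infer
# the role of free-form speech.
COMMANDS = {
    r"\bscribe,?\s+(i'?d like to )?prescribe\b": "medications",
    r"\bscribe,?\s+past (medical )?history\b": "past_history",
    r"\bscribe,?\s+presenting complaint\b": "present_history",
}

def route_utterance(utterance: str, note: dict) -> None:
    """Append an utterance to the note section named by an explicit command,
    falling back to an 'unclassified' section when no cue is heard."""
    text = utterance.lower()
    for pattern, section in COMMANDS.items():
        if re.search(pattern, text):
            note.setdefault(section, []).append(utterance)
            return
    note.setdefault("unclassified", []).append(utterance)

note: dict = {}
route_utterance("Scribe, I'd like to prescribe some medication: amoxicillin 500 mg", note)
route_utterance("The pain started three days ago", note)
print(note)
```

A real scribe would no doubt combine such explicit cues with learned classifiers; the point of the example is only that asking humans to give the machine hints removes much of the hard recognition problem.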
The price for using a scribe might also be an even more obvious intrusion of technology
between patient and clinician, and new risks to patient privacy because speech data
contains even more private information than clinician-generated records. Clinicians
might simply trade today’s effort in creating records, where they have control over
content, for new work in reviewing and editing automated records, where content reflects
the design of the AI. There are also subtler risks. Automation bias might mean that
many clinicians cease to worry about what should go into a clinical document, and
simply accept whatever a machine has generated [5]. Given the widespread use of copy and paste in current-day electronic records [6], such an outcome seems a distinct possibility.
At this moment, narrow AI, predominantly in the form of deep learning, is making great
inroads into pattern recognition tasks such as diagnostic radiological image interpretation
[7]. The sheer volume of training data now available, along with access to cheap computational
resources, has allowed previously impractical neural network architectures to come
into their own. When a price for deep learning is discussed, it is often in terms
of the end of clinical professions such as radiology or dermatology [8]. Human expertise is to be rendered redundant by super-human automation.
The reality is much more nuanced. Firstly, there remain great challenges to generalizing
narrow AI methods. A well-trained deep network typically does better on data sets
that resemble its training population [9]. The appearance of unexpected new edge cases, or the implicit learning of features such
as clinical workflow or image quality [10], can degrade performance. One remedy for this limitation is transfer learning
[11], retraining an algorithm on new data taken from the local context in which it will
operate. So, just as we have seen with electronic records, the prospect of cheap and
generalizable technology might be a fantasy, and expensive system localization and
optimization may become the lived AI reality.
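As a rough illustration of what such localization might involve, the following sketch uses PyTorch and torchvision (an assumption; the text prescribes no toolkit) to freeze a network pretrained on some original population and retrain only its final layer on data from the deploying site, fed by a hypothetical local_loader and a two-class local labelling.

```python
import torch
import torch.nn as nn
from torchvision import models

# Minimal transfer-learning sketch: adapt a pretrained network to local data.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the generic feature extractor learned on the original population...
for param in model.parameters():
    param.requires_grad = False

# ...and replace the classification head so only it is retrained locally,
# here for a hypothetical two-class local imaging task.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def fine_tune(local_loader, epochs: int = 3) -> None:
    """Retrain the new head on data from the local context of deployment."""
    model.train()
    for _ in range(epochs):
        for images, labels in local_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```

Even this simple recipe presumes local data collection, labelling, and validation, which is exactly the expensive localization and optimization effort the paragraph above anticipates.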
Secondly, the radiological community has reacted early, and proactively, to these
challenges. Rather than resisting change, the world of radiology shows strong evidence
of actively embracing AI, and of understanding that change brings not just risks,
but opportunities. In the future,
radiologists might be freed from working in darkened reading rooms, and emerge to
become highly visible participants in clinical care. Indeed, the idea
of being an expert in just a single modality such as image interpretation may seem
quaint, as radiologists transform into diagnostic experts, integrating data from multiple
modalities from the genetic through to the radiologic.
The highly interconnected nature of healthcare means that changes in one part of the
system will require different changes elsewhere. Radiologists in many parts of the
world are paid for each image they read. With the arrival of cheap bulk AI image interpretation,
that payment model must change. The price of reading must surely drop, and expert
humans must instead be paid for the value they create, not the volume they process.
The same kind of business pressure is being felt in other clinical specialties. In
primary care, for example, the arrival of new, sometimes aggressive, players who base
their business model on AI patient triage and telemedicine is already problematic
[12, 13]. Patients might love the convenience of such services, especially when they are
technologically literate, young, and in good health, but they may not always be so
well served if they are older, or have complex comorbidities [14]. Thus, AI-based primary care services might end up caring for profitable low-cost
and low-risk patients, and leave the remainder to be managed by a financially diminished
existing primary care system. One remedy to such a risk is again to move away from
reimbursement for volume, to reimbursement for value. Indeed, value-based healthcare
might arrive not as the product of government policy, but as a necessary side effect
of AI automation.
There are thus early lessons in the different reactions to AI between primary care
and radiology. One sector is being caught by surprise and playing catch up to new
commercial realities that have come more quickly than expected; the other has begun
to reimagine itself in anticipation of becoming the ones that craft the new reality.
The price each sector pays is different. Proactive preparation requires investment
in reshaping workforce, and actively engaging with industry, consumers, and government.
It requires serious consideration of new safety and ethical risks [15]. In contrast, reactive resistance takes a toll on clinical professionals who rightly
wish to defend their patients’ interests, as much as their own right to have a stake
in them. Unexpected change may end up eroding or even destroying important parts of
the existing health system before there is a chance to modernize them.
So, the fate of medicine, and indeed of all healthcare, is to change [15]. As change makers go, AI is likely to be among the biggest we will see in our time.
Its tendrils will touch everything from basic biomedical discovery science through
to the way we each make our daily personal health decisions. For such change we must
expect to pay a price. What is paid, by whom, and who benefits, all depend very much
on how we engage with this profound act of reinvention. To fully engage brings promise
of the greatest reward. To not engage is to pay the highest price.