Abstract
Despite my personal belief in the benefits of artificial intelligence (AI), reading
Cathy O'Neil's book “Weapons of Math Destruction” left me unsettled.[1] She describes how flawed and unchecked algorithms are widely applied in areas that
affect us all: hiring, credit scoring, access to education, and insurance pricing.
In one example, a fixed percentage of teachers in a U.S. region was dismissed every
year based on biased and opaque algorithms. She concludes that such algorithms act
as “weapons of math destruction”: they perpetuate and amplify societal biases, operate
unethically, and harm vulnerable populations. What happens, then, when we apply these
algorithms to medicine? How do we know whether we are giving our
patients the correct diagnosis or prognosis? Are we still sure that patients are receiving
the appropriate treatment? Would we notice if the algorithms were geared more toward
the needs of companies (make a lot of money) or health insurance companies (spend
as little as possible)? In fact, evidence of bias and inequality of algorithms in
medicine is already available.[2] Due to these risks, some of my colleagues suggest that AI should be completely banned
from medicine.