The real danger today is not that
computers are smarter than us, but that we think computers are smarter than us
GARY SMITH
AUGUST 30, 2019
In 1997, Deep Blue
defeated Garry Kasparov, the reigning world chess champion. In 2011, Watson
defeated Ken Jennings and Brad Rutter, the world’s best Jeopardy players. In
2016, AlphaGo defeated Lee Sedol, one of the world’s strongest Go players, and in 2017 it beat Ke Jie, the world’s top-ranked player. Later in 2017, DeepMind unleashed AlphaZero, which trounced the world-champion computer programs at chess, Go, and shogi.
If humans are no longer worthy opponents, then perhaps computers have moved so far beyond us that we should rely on their superior intelligence to make our important decisions.
Nope.
Despite their freakish
skill at board games, computer algorithms do not possess anything resembling
human wisdom, common sense, or critical thinking. Deciding whether to accept a
job offer, sell a stock, or buy a house is very different from recognizing that
moving a bishop three spaces will checkmate an opponent. That is why it is
perilous to trust computer programs we don’t understand to make decisions for
us.
Consider the
challenges identified by Stanford computer science professor Terry Winograd, which have
come to be known as Winograd schemas. For example, what does
the word “it” refer to in this sentence?
I can’t cut that tree
down with that axe; it is too [thick/small].
If the bracketed word
is “thick,” then it refers to the tree; if the bracketed word is “small,” then
it refers to the axe. Sentences like these are understood immediately by humans
but are very difficult for computers because they do not have the real-world
experience to place words in context.
To paraphrase Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence: how can machines take over the world when they can’t even figure out what “it” refers to in a simple sentence?
When we see a tree, we
know it is a tree. We might compare it to other trees and think about the
similarities and differences between fruit trees and maple trees. We might
recollect the smells wafting from some trees. We would not be surprised to see
a squirrel run up a pine or a bird fly out of a dogwood. We might remember
planting a tree and watching it grow year by year. We might remember cutting
down a tree or watching a tree being cut down.
A computer does none
of this. It can spellcheck the word “tree,” count the number of times the word
is used in a story, and retrieve sentences that contain the word. But computers
do not understand what trees are in any relevant sense. They are like Nigel Richards, who
memorized the French Scrabble dictionary and has won the French-language
Scrabble World Championship twice, even though he doesn’t know the meaning of
the French words he spells.
To demonstrate the
dangers of relying on computer algorithms to make real-world decisions,
consider an investigation of risk factors for fatal heart attacks.
I made up some
household spending data for 1,000 imaginary people, of whom half had suffered
heart attacks and half had not. For each such person, I used a random number
generator to create fictitious data in 100 spending categories.
These data were
entirely random. There were no real people, no real spending, and no real heart
attacks. It was just a bunch of random numbers. But the thing about random
numbers is that coincidental patterns inevitably appear.
In 10 flips of a fair
coin, there is a 46% chance of a streak of four or more heads in a row or four
or more tails in a row. If that does not happen, heads and tails might
alternate several times in a row. Or there might be two heads and a tail,
followed by two more heads and a tail. In any event, some pattern will appear
and it will be absolutely meaningless.
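That 46% figure is easy to check by brute force. Here is a minimal Python sketch, offered purely as an illustration, that enumerates all 1,024 equally likely sequences of ten flips and counts those containing a run of four or more identical outcomes.

from itertools import product, groupby

# Every possible sequence of 10 fair-coin flips is equally likely.
sequences = list(product("HT", repeat=10))

# Count the sequences that contain a run of four or more identical outcomes.
with_long_run = sum(
    1 for seq in sequences
    if any(len(list(run)) >= 4 for _, run in groupby(seq))
)

print(with_long_run / len(sequences))  # 0.4648..., about 46%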
In the same way, some
coincidental patterns were bound to turn up in my random spending numbers. As
it turned out, by luck alone, the imaginary people who had not suffered heart
attacks “spent” more money on small appliances and also on household paper
products.
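For readers who want to see how easily such coincidences arise, here is a minimal Python sketch of the kind of simulation described above. The group sizes match the description, but the spending distribution and the scan for the largest group differences are illustrative assumptions, not the original analysis.

import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the "coincidences" are reproducible
n_people, n_categories = 1000, 100

# Purely random "spending"; the normal distribution is an arbitrary choice.
spending = rng.normal(loc=100, scale=20, size=(n_people, n_categories))
had_heart_attack = np.array([True] * 500 + [False] * 500)  # labels attached to noise

# Difference in average "spending" between the two groups, category by category.
diffs = spending[~had_heart_attack].mean(axis=0) - spending[had_heart_attack].mean(axis=0)

# The categories with the largest gaps look striking, yet every number is noise.
for category in np.argsort(diffs)[-3:][::-1]:
    print(f"category {category}: non-cases 'spent' {diffs[category]:+.2f} more on average")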
When we see these
results, we should scoff and recognize that the patterns are meaningless
coincidences. How could small appliances and household paper products prevent
heart attacks?
A computer, by contrast, would take the results seriously because it has no idea what heart attacks, small appliances, and household paper products are. And if the algorithm is hidden inside a black box, where we cannot see how the result was attained, we never get the chance to scoff.
Nonetheless, businesses and governments around the world now trust computers to make decisions based on coincidental statistical patterns just like these. One company, for example, decided that it would make more online sales if it changed the background color of the web page shown to British customers from blue to teal. Why? Because it had tried several different colors in nearly 100 countries, and any given color was certain to fare better in some countries than in others, even if random numbers had been analyzed instead of sales numbers. The change was made, and sales went down.
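The same multiple-comparisons trap is easy to reproduce. The sketch below feeds nothing but random numbers into a colors-by-countries comparison; the color names and sales figures are invented, yet some color is guaranteed to come out on top in any particular country, just as some color was bound to look best in Britain.

import numpy as np

rng = np.random.default_rng(1)
colors = ["blue", "teal", "green", "red", "orange"]  # invented for illustration
# Random "sales" for each color in each of 100 countries: nothing but noise.
sales = rng.normal(loc=1000, scale=50, size=(len(colors), 100))

britain = 0  # pretend column 0 is the British market
best = colors[int(np.argmax(sales[:, britain]))]
print(f"In this make-believe Britain, {best} produced the highest sales purely by chance.")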
Many marketing decisions, medical diagnoses, and stock trades are now made by computers. Loan applications and job applications are evaluated by computers. Election campaigns are run by computers, including Hillary Clinton’s disastrous 2016 presidential campaign. If the algorithms are hidden inside black boxes, with no human supervision, then it is up to the computers to decide whether the discovered patterns make sense, and they are utterly incapable of doing so because they do not understand anything about the real world.
Computers are not
intelligent in any meaningful sense of the word, and it is hazardous to rely on
them to make important decisions for us. The real danger today is not that
computers are smarter than us, but that we think computers are
smarter than us.