https://medium.com/mit-technology-review/is-ai-riding-a-one-trick-pony-b9ed5a261da0
Some passages from the article:
Neural nets are just thoughtless fuzzy pattern recognizers,
and as useful as fuzzy pattern recognizers can be — hence the rush to
integrate them into just about every kind of software — they represent, at
best, a limited brand of intelligence, one that is easily fooled. A deep neural
net that recognizes images can be totally stymied when you change a single
pixel, or add visual noise that’s imperceptible to a human. Indeed, almost as
often as we’re finding new ways to apply deep learning, we’re finding more of
its limits. Self-driving cars can fail to navigate conditions they’ve never
seen before. Machines have trouble parsing sentences that demand common-sense
understanding of how the world works.
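The point about a single pixel or imperceptible noise refers to adversarial examples. As a minimal sketch of how such a perturbation can be constructed, here is the fast gradient sign method applied to a stand-in PyTorch classifier with random weights; the model, the image, and the epsilon value are all illustrative, not taken from the article:

```python
# Sketch of an adversarial perturbation via the fast gradient sign method.
# The tiny classifier and its random weights are hypothetical stand-ins; with
# a trained classifier, a perturbation this small is often enough to change
# the prediction even though the two images look identical to a human.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "image classifier": a small convolutional net with random weights.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32)    # stand-in 32x32 RGB image
label = model(image).argmax(dim=1)  # whatever the net currently predicts

# One gradient step *up* the loss with respect to the pixels, with every
# pixel change bounded by +/- epsilon.
epsilon = 0.03
image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", label.item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
print("max pixel change:      ", (adversarial - image.detach()).abs().max().item())
```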
It can be hard to appreciate this from the outside, when all you see is one
great advance touted after another. But the latest sweep of progress in AI has
been less science than engineering, even tinkering. And though we’ve started
to get a better handle on what kinds of changes will improve deep-learning
systems, we’re still largely in the dark about how those systems work, or
whether they could ever add up to something as powerful as the human mind.
We make sense of new phenomena in terms of things we already understand. We
break a domain down into pieces and learn the pieces. Eyal is a mathematician
and computer programmer, and he thinks about tasks — like making a soufflé —
as really complex computer programs. But it’s not as if you learn to make a
soufflé by learning every one of the program’s zillion micro-instructions,
like “Rotate your elbow 30 degrees, then look down at the countertop, then
extend your pointer finger, then …” If you had to do that for every new task,
learning would be too hard, and you’d be stuck with what you already know.
Instead, we cast the program in terms of high-level steps, like “Whip the egg
whites,” which are themselves composed of subprograms, like “Crack the eggs”
and “Separate out the yolks.”
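A toy rendering of that point, with every function name invented for illustration: the "program" is written in terms of high-level steps, each built out of reusable subprograms, and the motor-level micro-instructions never appear at all.

```python
# The soufflé as a hierarchy of named steps rather than micro-instructions.
# "Rotate your elbow 30 degrees" lives somewhere below the lowest level we
# bother to name. All names here are illustrative, not from the article.

def crack_eggs(n):
    return [f"egg_{i}" for i in range(n)]

def separate_yolks(eggs):
    # Each egg splits into a white and a yolk; keep only the whites.
    return [f"white_of_{egg}" for egg in eggs]

def whip(whites):
    return f"stiff_peaks({', '.join(whites)})"

def whip_egg_whites(n_eggs):
    # A high-level step composed of smaller subprograms.
    return whip(separate_yolks(crack_eggs(n_eggs)))

def make_souffle():
    # The top-level "program" reads like a recipe, not like machine code.
    base = "cheese_base"
    foam = whip_egg_whites(4)
    return f"bake(fold({base}, {foam}))"

print(make_souffle())
```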
Computers don’t do this, and that is a big part of the
reason they’re dumb. To get a deep-learning system to recognize a hot dog, you
might have to feed it 40 million pictures of hot dogs. To get Susannah to
recognize a hot dog, you show her a hot dog. And before long she’ll have an
understanding of language that goes deeper than recognizing that certain words
often appear together. Unlike a computer, she’ll have a model in her mind about
how the whole world works. “It’s sort of incredible to me that people are
scared of computers taking jobs,” Eyal says. “It’s not that computers can’t
replace lawyers because lawyers do really complicated things. It’s because
lawyers read and talk to people. It’s not like we’re close. We’re so far.”
A real intelligence doesn’t break when you slightly
change the requirements of the problem it’s trying to solve. And the key part
of Eyal’s thesis was his demonstration, in principle, of how you might get a
computer to work that way: to fluidly apply what it already knows to new tasks,
to quickly bootstrap its way from knowing almost nothing about a new domain to
being an expert.
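As a toy sketch of that idea — not the thesis’s actual method, and with a library and tasks invented purely for illustration — a system that keeps subprograms learned on earlier tasks can describe a new task as a short composition of them instead of rebuilding it from primitive operations:

```python
# Toy illustration of reuse: once subprograms exist in a library, a "new task"
# costs only a few symbols to express. The routines and the task are made up.

# Subprograms learned on earlier tasks.
library = {
    "reverse": lambda s: s[::-1],
    "strip_vowels": lambda s: "".join(c for c in s if c.lower() not in "aeiou"),
    "shout": lambda s: s.upper() + "!",
}

def compose(*names):
    """Build a new program as a short pipeline of library routines."""
    def program(x):
        for name in names:
            x = library[name](x)
        return x
    return program

# The new task is just a short description in terms of what is already known.
new_task = compose("strip_vowels", "reverse", "shout")
print(new_task("hot dog"))  # -> "GD TH!"
```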