Reading Two Books, by William Wegman, 1971
The Limits of Explainability - Why Artificial Intelligence Needs To Learn How To Follow Its Gut
Mar 2018, Joi Ito, for WIRED
Researchers at MIT led by Josh Tenenbaum hypothesize that
our brains have what you might call an intuitive physics engine, a noisy
Newtonian system. And that system will need to be better understood if we are
to make the robots more like us. (Is that the point? I think sometimes it is,
yes.)
>>>
After reading this article, I'm not really sure what's meant by "giving AI its gut" (maybe it's because I've been listening to so many panel discussions on AI from people who are really specific about these things?).
I will assume that to give AI its gut means to give it the ability to solve problems intuitively. The argument I would use to express the difficulty of achieving this is the same one used to show that we can never create a robot that can smell.
To smell, by the full definition of what it means for a human to smell something, requires a lifetime of experience (at least a childhood's worth). We do not "know" that shit smells bad; we are taught, either actively or passively. And we recognize every smell against a massive memoryplex accumulated and constructed over our lifetime.
We also do not understand the way our olfactory circuitry works, not well enough to reverse-engineer it into a robot. For each person, the circuits work differently, or at least differently enough that we have no model for it.
Intuition is difficult to put into an algorithm for similar reasons. It is distilled from experience, from the rich, multi-sensory, emotionally laden, and socially mediated experience that comes with human existence. Writing a line of code for every 'experience' is ridiculous. But how do you get intuition without the experience? That's going to be a hard problem to solve, and there are many other problems that will need to be addressed before we get there.
The hardwiring problem is similar for olfaction and intuition: we don't know how intuition works in the brain. We don't know 'where' consciousness is in the brain. We don't have the models, so how could we design such a system?
I'm not saying this is the solution, just that it seems to be the obvious extension of the endeavor - to get robots that do all the things we do, we need those robots to first be born and to live. In a simulated world? Sure, maybe, if that comes first.
Post Script
On the topic of Artificial Intelligence, check out a
couple of these recent discussions:
Parliamentary Debate on the Future of Artificial
Intelligence
Hosted by Steven Pinker, 2017
2018 Isaac Asimov Memorial Debate - Artificial Intelligence
Hosted by Neil deGrasse Tyson, 2018