From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!wupost!darwin.sura.net!mlb.semi.harris.com!uflorida!mole.cis.ufl.edu!fred Mon Jan  6 10:30:13 EST 1992
Article 2460 of comp.ai.philosophy:
From: fred@mole.cis.ufl.edu (Fred Buhl)
Newsgroups: comp.ai.philosophy
Subject: Artificial Wisdom versus Artificial Intelligence
Message-ID: <33442@uflorida.cis.ufl.EDU>
Date: 31 Dec 91 20:08:11 GMT
Sender: news@uflorida.cis.ufl.EDU
Organization: UF CIS Dept.
Lines: 67

Here are a few ideas about "intelligence", "understanding", and
"meaning" that I'd love to hear your opinions on:

I think a useful distinction can be made between wisdom (to me, stored
knowledge) and intelligence (to me, the ability to learn).  Searle's
Chinese room is an example of wisdom without intelligence -- the room
can't learn *anything*.  The problem with most Turing-style tests is
that with enough wisdom stored away, the system can look as intelligent
as a human being, even though it may be totally unintelligent.
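To make the distinction concrete, here's a minimal sketch in Python
(my own toy illustration, not anything from Searle or the literature):
an agent whose "wisdom" is a canned rulebook can answer everything it
was built for, yet has no mechanism for learning anything new, while a
learner starts empty but can acquire wisdom.

```python
# A "wise" but unintelligent agent: all of its knowledge was dumped in
# by its creator, and there is no mechanism for adding more.
class ChineseRoom:
    def __init__(self, rulebook):
        self.rulebook = dict(rulebook)   # fixed at creation time

    def respond(self, symbol):
        # Pure lookup; the room never updates its rulebook.
        return self.rulebook.get(symbol, "???")


# An intelligent agent starts with no wisdom at all, but can learn.
class Learner:
    def __init__(self):
        self.rulebook = {}

    def respond(self, symbol):
        return self.rulebook.get(symbol, "???")

    def learn(self, symbol, reply):
        self.rulebook[symbol] = reply    # the crucial difference


room = ChineseRoom({"ni hao": "ni hao"})
print(room.respond("ni hao"))    # looks fluent...
print(room.respond("zaijian"))   # ...until it meets anything new

student = Learner()
print(student.respond("ni hao"))  # knows nothing yet
student.learn("ni hao", "ni hao")
print(student.respond("ni hao"))  # but can acquire the room's wisdom
```

With enough entries in the rulebook, the first agent passes any
fixed battery of questions; only probing it with something *new*
separates stored wisdom from intelligence.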

The situation of wisdom-without-intelligence doesn't occur in the
natural world, since wisdom must be learned in order to be acquired
(except for the wisdom known as instinct).  We tend to ascribe
intelligence to agents with a lot of wisdom because we assume the
agent acquired its wisdom through learning, rather than having it
dumped into its brain by a creator, as the Chinese Room's was.

I claim that the Chinese Room, even if it had human-equivalent
learning capabilities, could *never* discover the meaning of its
symbols.  Since it has only one input and one output channel (both
carrying the Chinese language), it has no way of associating the
symbols with anything, either internal or external to itself.

I'm reminded of the story of Helen Keller.  No progress was made
communicating with her until her teacher had the insight of
dragging her outside to the well, pumping water onto her hand, and
then finger-spelling the word "WATER" over and over.  This allowed
her to associate the symbol (the finger-spelling) with an object
(the water).  If Helen Keller had been a brain in a vat, with the
only connection to the outside world being an ASCII terminal, I
contend she would *never* learn to attach meanings to any symbols
presented, and therefore would never learn to communicate. 
Fortunately for her, she had other input channels to work with.

The Cyc project, being developed by Doug Lenat at the MCC in
Austin, seems to me to have the same sort of sensory limitations as
the Chinese Room.  To my (albeit limited) knowledge of Cyc, all of
its input is symbolic -- it knows relations between symbols, and
nothing else.  In other words, it might know that a "circle" differs
from a "square", that "barrels" and "tires" are examples of
"circles", and that "circles" are "round", but it couldn't draw a
circle if asked, and it couldn't recognize a circle if presented
with one.
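My (again, limited) picture of that kind of purely symbolic knowledge
can be sketched like so -- the relation names and facts below are my
own toy examples, not actual Cyc content:

```python
# A toy symbol-only knowledge base: every fact relates one symbol to
# another symbol.  Nothing here ever touches pixels, sounds, or
# geometry -- it is symbols all the way down.
facts = {
    ("circle", "differs-from", "square"),
    ("barrel", "example-of",  "circle"),
    ("tire",   "example-of",  "circle"),
    ("circle", "has-property", "round"),
}

def query(subject, relation):
    """Return every symbol the subject is linked to by `relation`."""
    return {obj for (s, r, obj) in facts if s == subject and r == relation}

print(query("circle", "has-property"))  # {'round'}
print(query("tire", "example-of"))      # {'circle'}

# Ask it to *recognize* a circle, though, and the gap shows: there is
# no percept to compare an input against, only more symbols.
def recognize(image):
    raise NotImplementedError("no sensory channel")
```

The `query` side works fine; the `recognize` side can't even be
started, which is exactly the limitation I mean.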

What these programs need in order to "understand" a language is other
forms of sensory input, to associate meanings with the symbols they're
manipulating, and other forms of output to demonstrate that
knowledge.  As a Strong AI adherent, I'd say that this is *all*
they need in order to understand.  To be "intelligent", they'd also
need to be able to learn new vocabulary.
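If that's right, the fix is to let a learner pair symbols with
percepts arriving on a second channel -- the water-on-the-hand moment
at the well.  A hypothetical sketch (the percept labels and the
co-occurrence-counting scheme are mine, made up for illustration):

```python
# Symbol grounding as cross-channel association: each time a symbol
# arrives together with a percept, strengthen the link between them.
from collections import defaultdict

class GroundedLearner:
    def __init__(self):
        # counts[symbol][percept] = number of co-occurrences
        self.counts = defaultdict(lambda: defaultdict(int))

    def experience(self, symbol, percept):
        self.counts[symbol][percept] += 1

    def meaning(self, symbol):
        """The percept most often paired with the symbol, if any."""
        paired = self.counts[symbol]
        return max(paired, key=paired.get) if paired else None


helen = GroundedLearner()
# The well scene: "WATER" finger-spelled while water hits the hand.
for _ in range(3):
    helen.experience("WATER", "wet-cold-flowing")
helen.experience("WATER", "teachers-hand")   # incidental noise

print(helen.meaning("WATER"))  # 'wet-cold-flowing'
print(helen.meaning("DOLL"))   # None: never grounded
```

A brain-in-a-vat with only an ASCII terminal never gets to call
`experience` with a second channel at all, which is why I contend it
could never attach meanings to its symbols.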

P.S.  All this was inspired by a rebroadcast of a 1987 _Horizon_
episode called "Thinking" on my local PBS affiliate -- starring
Minsky, Dreyfus, and Searle (complete with an enactment of the
Chinese Room), with the most one-sided narration I've *ever*
encountered.  (The show ended by baldly stating that we have free
will, and that therefore the brain is not a computer -- glad they
cleared that up).  Nothing new in the show to readers of this
group, but it's nice to see the people behind the posts.

---------------------------------------------------------------------------
Fred Buhl, Grad Student        A proud member of the Union of
UF Computer Science Dept.      Unconcerned Scientists.       
fred@reef.cis.ufl.edu          "Ants are smart.  _Really_ smart." 
---------------------------------------------------------------------------


