Newsgroups: comp.ai
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!gatech!newsfeed.internetmci.com!in2.uu.net!news.erinet.com!netcom.com!nagle
From: nagle@netcom.com (John Nagle)
Subject: Re: Minsky's Future of AI Technology, was: How is AI going?
Message-ID: <nagleDo1L3z.4wu@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <4hs8to$lsg@Mars.mcs.com>
Date: Sun, 10 Mar 1996 08:04:46 GMT
Lines: 39
Sender: nagle@netcom11.netcom.com

drt@MCS.COM (Donald Tveter) writes:
>>(This is Minsky, right?)
>>However, progress has been slow in other areas, for example, in the
>>field of understanding natural language.  This is because our
>>computers have no access to the meanings of most ordinary words and
>>phrases.  To see the problem, consider a word like "string" or
>>"rope."  No computer today has any way to understand what those
>>things mean.  For example, you can pull something with a string, but
>>you cannot push anything with it.  You can tie a package with string, or fly
>>a kite, but you cannot eat a string or make it into a balloon.  In a
>>few minutes, any young child could tell you a hundred ways to use a
>>string -- or not to use a string -- but no computer knows any of
>>this.  The same is true for ten thousand other common words.  Every
>>expert must know such things.

(And this is Tveter?)
>The killer assumption I see here is the idea that the world can be
>represented by symbols, structures of symbols and rules, the old PSSH
>assumption.  To understand a string or a rope you must have a visual
>processing sub-system connected to the main system.  A child learns
>about strings and ropes by playing with them and storing visual images
>of how they behaved in these experiments.  When a person needs to deal
>with strings and ropes in the future the images are recalled.  Reasoning
>is done with visual images, not symbolic structures.  (I don't claim that
>dealing with images will be easy, only that it is necessary.)

     This is on point.  Too much of AI still revolves around trying
to reduce problems to some semi-linguistic form, like predicate
calculus or rules, and crunching on that.  This doesn't work
on unstructured problems, and no amount of elaborating on that
approach seems to help much.  Drew McDermott's classic essay 
"Artificial Intelligence meets Natural Stupidity" is still relevant.
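     To make the "string" example concrete, here's a toy sketch of the
rule-base approach being criticized (the predicate names and facts are
invented for illustration, not from any real system):

```python
# A toy symbolic knowledge base about strings, in the spirit of the
# predicate-calculus / rules approach.  Every fact must be enumerated
# by hand as a (predicate, object) pair.
facts = {
    ("can_pull_with", "string"),
    ("can_tie_package_with", "string"),
    ("can_fly_kite_with", "string"),
}

def knows(predicate, obj):
    """Answer a query: true only if the exact fact was typed in."""
    return (predicate, obj) in facts

print(knows("can_pull_with", "string"))           # an enumerated fact
print(knows("can_push_with", "string"))           # false, as it should be...
print(knows("can_wrap_around_finger", "string"))  # ...but so is this one
```

The trouble is visible immediately: "false" and "never entered" are
indistinguishable, and the hundred uses a child could list all have to
be typed in one by one, with nothing grounding them in how string
actually behaves.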

     Personally, I think it's time to back off on human-level AI and
work on animal-level AI some more.  After all, we still can't even
build a decent lizard brain, even though we probably have enough
MIPS to do it now.

					John Nagle
