Newsgroups: comp.ai,comp.ai.nat-lang,sci.cognitive
Path: cantaloupe.srv.cs.cmu.edu!europa.chnt.gtegsc.com!news.mathworks.com!news.kei.com!bloom-beacon.mit.edu!news!minsky
From: minsky@media.mit.edu (Marvin Minsky)
Subject: Re: Announce: Book excerpt available
Message-ID: <1995Jul7.035916.14770@media.mit.edu>
Keywords: artificial intelligence
Sender: news@media.mit.edu (USENET News System)
Cc: minsky
Organization: MIT Media Laboratory
References: <jpeekDBBrx4.D8L@netcom.com>
Date: Fri, 7 Jul 1995 03:59:16 GMT
Lines: 46
Xref: glinda.oz.cs.cmu.edu comp.ai:31211 comp.ai.nat-lang:3572 sci.cognitive:8193

In article <jpeekDBBrx4.D8L@netcom.com> Sara Winge <sara@ora.com> writes:
>"The Future Does Not Compute," a new book from O'Reilly & Associates,
>examines the impact of computers on human consciousness and culture.
>Author Stephen L. Talbott addresses the nature of mechanical
>intelligence in Chapter 23, "Can We Transcend Computation?"

>...Once we have understood meaning in terms of
>Owen Barfield's "relation of polar contraries," we can see why every
>effort to give computers a "sense for meaning" is bound to fail--and
>also why we are continually deceived into thinking we have succeeded.
>The notion of polar contraries is here brought to bear for the first
>time upon the issues of cognitive science, and promises to cut through
>the confusions of many previously unresolvable debates.

This sounds to me like another book of weak arguments that purport to
show that computers cannot ever think.  However, there is a sense in
which I agree that attempting narrowly to formalize what we mean by
"meaning" is indeed bound to fail.   Somewhere or other, I once wrote:

"How could machines accomplish such things when philosophers have
struggled endlessly to understand what 'meaning' means?    My answer
is that those efforts failed because meaning is no single thing, nor
is understanding a single act.  Instead, the activities of human
thought engage an enormous society of different structures and
processes.  The secret of what X means to us lies in how our
representations of X connect to other things we know.  So if you
understand something only one way then you scarcely understand it at
all -- because when something goes wrong, you have nowhere to go.
That's where logical philosophy got stuck.  But computers can use
multiply-connected representations, so that when one approach fails
you can try another.  Like turning ideas around in your mind and
trying out alternative perspectives till you find one that works.  And
that's what we mean by thinking!"
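The "multiply-connected representations" idea in the passage above can
be sketched in code.  This is a toy illustration of my own, not
anything from the post or Talbott's book: one concept ("area") is
connected to several alternative representations, and when reasoning
with one representation fails, the program falls back to another.
All the names and data structures here are invented for the sketch.

```python
# Toy sketch: several representations of the same concept ("area"),
# tried in turn.  When one approach fails, another may still work --
# the point of multiply-connected representations.

def area_from_sides(shape):
    # Applies only if the shape is described by its side lengths.
    return shape["width"] * shape["height"]

def area_from_grid(shape):
    # Fallback: count the filled cells in an occupancy grid.
    return sum(sum(row) for row in shape["grid"])

# The concept "area" is connected to multiple representations.
AREA_STRATEGIES = [area_from_sides, area_from_grid]

def area(shape):
    """Try each connected representation until one succeeds."""
    for strategy in AREA_STRATEGIES:
        try:
            return strategy(shape)
        except KeyError:
            continue  # this representation doesn't fit; try another
    raise ValueError("no known representation of 'area' applies")

# A shape known only as a grid: the first strategy fails on it,
# but the second one succeeds.
blob = {"grid": [[1, 1, 0], [1, 1, 1]]}
```

Here `area(blob)` returns 5 by falling through to the grid-based
representation, while a shape with known sides uses the first one.
A system with only one of these strategies would have "nowhere to go"
when handed the other kind of description.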

I've seen variants of *this* idea discussed on the net by Aaron
Sloman, Scott Fahlman, Vince Kerchner, and several others.  If Talbott
is developing some view like this, I'd read the book.  But these days,
I don't usually read a recommended book unless (1) someone explains to
me at least one new good idea in it or (2) it's by someone who I
already know to have good ideas.  Can anyone explain a really good
idea from Talbott's book?


