From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue Mar 24 09:54:26 EST 1992
Article 4358 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Definition of understanding
Organization: Department of Psychology, University of Toronto
References: <1992Mar7.160942.22844@oracorp.com>
Message-ID: <1992Mar9.174443.8046@psych.toronto.edu>
Date: Mon, 9 Mar 1992 17:44:43 GMT

In article <1992Mar7.160942.22844@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:
>michael@psych.toronto.edu (Michael Gemar) writes:
>
>>>Chris, why must you always attack strawmen ? Why bother with easy
>>>questions ("does a system that shuffles symbols understand the
>>>symbols") when more interesting and difficult ones are around ("does a
>>>system that models its own symbol shuffling understand the symbols") ?
>
>> And how, pray tell, does the system "model its own symbol shuffling"
>> without simply shuffling symbols?  How does this avoid being a regress?
>
>> (As I've noted before, I believe that "self-modeling" explanations
>> are merely smoke and mirrors.)
>
>Michael, the problem of infinite regress I believe is real, and I
>think it applies just as well to *any* notion of understanding, not
>just to AI. How does it help if the brain uses electro-chemical
>reactions to cause its thinking? There are still just electro-chemical
>patterns in the brain, and they only relate to other electro-chemical
>patterns. How do they ever relate to the Real World?

Daryl, I think you have the argument wrong.  I take it as a *fact* about
us (or at least me) that we *are* able to connect symbols to the world
in such a way as to yield meaning (or semantics, or understanding, or
whatever).  I am certainly not committed to Searle's "story" of how
this is possible (I think he's absolutely wrong in this respect), but
to say we don't know how it *is* done is not the same thing as saying it
*isn't* done.  Sure, you can deny that humans actually have something
called "understanding" or "subjective experience" and thus counter 
Searle, but to do so is to avoid the question, not to answer it.  If
you *don't* think you have something called "understanding," fine.  But
then we can't talk...

>You can't argue against the possibility of artificial intelligence by
>using arguments that would apply to *any* intelligence. (Well, you can
>if you want to, but I don't see how it proves anything.) 
>
>In my opinion, the most that can be asked of an intelligent being
>(computer or human) is:
>
>1. Its internal processing produces the right relationships among its
>internal patterns.
>
>2. The being's relation to the world produces the right relationship between
>the internal patterns and the external world.

If this is all you want, then Searle shouldn't bother you, because
all you want is essentially a behaviouristic account of intelligence.  The  
Chinese Room is an attempt to show that such an account is insufficient,
in that it does not necessarily yield our subjective experience of intelligence.
If you don't include this subjective component, then everything is hunky-dory.
But many people believe subjective experience to be the hallmark of the mental.

>To the extent that this isn't sufficient for true semantics, mortal
>beings don't *have* true semantics.

Well, we certainly have *something*, which might as well be called "true
semantics," since the term was developed to describe features of our
world.  Once again, if you deny that humans have semantics, then the
problem goes away, much as if you deny that birds can fly, then most
of the difficulties in aeronautics disappear.  I myself am not happy
with such "solutions".

In the end what is needed is a satisfactory account of semantics or meaning.
Once we have this, we can then see if purely syntactic devices are the
kinds of things which can have these things.  Currently, however, all
we have are some preliminary attempts at accounting for meaning, and a
principled distinction between syntax and semantics.  

- michael
