From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rutgers!sun-barr!olivea!uunet!psinntp!scylla!daryl Tue Mar 24 09:58:07 EST 1992
Article 4673 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rutgers!sun-barr!olivea!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply I
Message-ID: <1992Mar23.153440.8464@oracorp.com>
Date: 23 Mar 92 15:34:40 GMT
Organization: ORA Corporation
Lines: 58

jeff@aiai.ed.ac.uk (Jeff Dalton) writes: (in reply to
zirdum@ccu.umanitoba.ca (Antun Zirdum))

>>In what way do people attach meaning to symbols, for instance, how
>>do you learn what a word means.

>We know humans can do it.  And there are arguments that machines
>can't (just by running the right program).  The correctness of
>those arguments does not depend on knowing how humans do it.

I think you are completely wrong about that. You want to impose an
extremely high standard for what it takes to have understanding. Since
the only examples we have of understanding beings are humans, I think
it is incumbent on you to demonstrate that your standards are not
overly high, that human beings would meet them. It isn't clear to me
that they would. I think your criteria for understanding are
impossible to meet, even by human beings, and it is only because you
refuse to discuss how humans are capable of understanding that you can
remain convinced that humans meet your criteria.

You repeatedly return to introspection as indisputable proof that
humans have some kind of understanding that is not available to symbol
crunchers. However, I think you are simply paying lip service to
introspection, and are not actually doing any. If you really try to
determine what is going on when you experience that you understand
something, does your introspection really tell you that there is
something other than correlation? Introspection certainly *doesn't*
tell you that your words having meaning in the sense of having unique
referents in the external world, since the external world is not
available to your introspection.

>>In other words, please explain to me in concrete terms what it is
>>that a machine is missing (but that people have) that enables them
>>to have knowledge of meaning!

>That is simply not necessary.

I believe it is necessary. I think that it is only possible for you to
hold your view that humans have something that computers lack because
you refuse to consider how humans are capable of understanding. In a
serious attempt to see how humans understand, I believe you would find
that humans have the same kinds of resources and limitations that
computers do.

>BTW, I have no interest whatsoever in proving that humans can
>understand.  If you're not willing to accept that they do, then
>there's no point in going further.

I disagree. I think that unless you are willing to look critically at
how humans can understand, there is no point in going further. I
believe that Dennett's book _Consciousness Explained_ is such a
serious attempt, and he finds that a lot of the problems that you see
with computer understanding are also problems for humans. We muddle
through with imperfect understanding, nevertheless.

Daryl McCullough
ORA Corp.
Ithaca, NY
