Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!asuvax!ukma!psuvax1!hsdndev!spdcc!dirtydog.ima.isc.com!ispd-newsserver!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply I
Message-ID: <1992Mar24.142705.345@oracorp.com>
Date: 24 Mar 92 14:27:05 GMT
Organization: ORA Corporation
Lines: 50

jeff@aiai.ed.ac.uk (Jeff Dalton) writes (in response to
orourke@sophia.smith.edu (Joseph O'Rourke)):

>>If you feel such discrimination is not a type of primitive meaning,
>>perhaps you should sketch the key requirements of what constitutes a
>>meaningful symbol in your theory of meaning.

>I don't have a theory of meaning and, as always, I reject the
>suggestion that the burden of proof should be on the "anti-AI" side to
>provide definitions.

I don't see that the burden of proof lies with the pro-AI side. The
argument that functionalism is sufficient for understanding is simply
that a system with the right functionality will have all the
properties that we are *certain* we want in a being that
understands. If you are going to say that the AI notion of meaning is
insufficient, then it seems to me that the burden of proof is on you
to say how. You don't have to have a formal definition, but it seems
to me that you need to have (a) a clear notion of what is missing in
computer understanding, and (b) an argument that it is not missing in
humans.

There are two things that I have heard for (a): computers supposedly
lack (1) qualia and (2) reference. Qualia deserve a thread of their
own, so I will restrict discussion here to everything else. As for
reference, I agree that the best that AI can do is to have the right
internal relationships between concepts and the right correlations
between internal concepts and the external world.

What about (b)? What kind of argument is there that humans go beyond
internal coherence and external correlation? The only argument anyone
ever gives is introspection: we *know* our thoughts have meaning. In
spite of the impression I may have given, I don't have anything
against arguments from introspection. However, I don't consider "It is
obvious that our thoughts have meaning" to be any kind of argument at
all. Introspection gives us *data* about the way our minds work; it
doesn't give us any conclusions. If you think that something or other follows
from introspection, I would like to hear how it follows. How do you
get from the raw data of senses and thoughts in our minds to the
conclusion that we have something computers lack?

I think that sometimes people confuse two things: incorrigible
beliefs and facts obtained by introspection. They are not the same
thing. While it may be true that some facts about the mind can be
obtained through introspection, it is not true that every unshakeable
belief about the mind is actually justified by introspection.

Daryl McCullough
ORA Corp.
Ithaca, NY
