From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Tue Apr  7 23:22:36 EDT 1992
Article 4752 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply I
Message-ID: <6528@skye.ed.ac.uk>
Date: 26 Mar 92 21:27:36 GMT
References: <1992Mar24.142705.345@oracorp.com>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 87

In article <1992Mar24.142705.345@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:
>jeff@aiai.ed.ac.uk (Jeff Dalton) writes (in response to
>orourke@sophia.smith.edu (Joseph O'Rourke)):
>
>>>If you feel such discrimination is not a type of primitive meaning,
>>>perhaps you should sketch the key requirements of what constitutes a
>>>meaningful symbol in your theory of meaning.
>
>>I don't have a theory of meaning and, as always, I reject the
>>suggestion that the burden of proof should be on the "anti-AI" side to
>>provide definitions.
>
>I don't see that the burden of proof lies with the pro-AI side. 

If someone (e.g., O'Rourke) wants to claim that X is a type of primitive
meaning, then they ought to be willing to say what it is they mean by
"primitive meaning", why we should consider it relevant to anything
else that might be called "meaning", and why X counts as an instance
of it.

Otherwise, we may as well all start playing this game.  For instance,
I could say: if you feel the shape of a grain of sand is not a type
of primitive meaning, perhaps you should sketch the key requirements
of what constitutes a meaningful symbol in your theory of meaning.
Or, better: if you think such discrimination _is_ a type of primitive
meaning, ...

>The
>argument that functionalism is sufficient for understanding is simply
>that a system with the right functionality will have all the
>properties that we are *certain* that we want in a being that
>understands. 

What do you mean by "functionality" and "functionalism"?  This is
a serious question, not a rehash of the "game" above.

Because, by "functionality" I think you mean something like "behavior"
(or behavioral capabilities), and by "functionalism" something about
mind being equivalent to implementing a program.  But "functionalism"
is not necessarily supported by arguments about the sufficiency of
behavior.  Indeed, suppose it's true that anything with the right
behavior has understanding.  That would hardly show that functionalism
-- as opposed to all other views of mind -- was right.

>If you are going to say that the AI notion of meaning is
>insufficient, then it seems to me that the burden of proof is on you
>to say how. 

What is "the AI notion of meaning"?  Is there really only one?
What would I have to have read in order to know about it?
And why is the burden of proof on me?  Is merely calling something
a theory of meaning enough to put the burden of proof on anyone
who says it's wrong?

But, whatever you think about that, please note that there's a
big difference between saying that the burden of proof is on those
who disagree with some AI theory to show it is wrong and saying
that those who disagree have to provide an alternative theory.
It was the latter I was objecting to in my reply to O'Rourke.

>You don't have to have a formal definition, but it seems
>to me that you need to have (a) a clear notion of what is missing in
>computers understanding, and (b) an argument that it is not missing in
>humans.
>
>There are two things that I have heard for (a), computers supposedly
>lack (1) qualia, and (2) reference. 

Why do you think it's necessary to prove that humans have qualia
or know that "trees" refers to trees?

It's always open to you to try to show that some argument against
computer understanding would also apply to humans.  What's wrong, I
think, is to try to get out of this by demanding that other people
prove the opposite.

>I think that sometimes people confuse two things: incorrigible
>beliefs, and facts obtained by introspection. 

Do you have some reason to think such a confusion is involved here?

You know, if you demand that the other side prove that
this or that is absolutely certain, they will always have to
disappoint you.  But perhaps we can get somewhere if the demands
are more reasonable.

-- jd