From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Tue May 12 15:49:44 EDT 1992
Article 5487 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Systems Reply I (repost perhaps)
Message-ID: <6684@skye.ed.ac.uk>
Date: 8 May 92 18:14:21 GMT
References: <1992Apr14.004021.3628@oracorp.com> <6640@skye.ed.ac.uk> <1992May5.195616.28038@gpu.utcs.utoronto.ca>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 53

In article <1992May5.195616.28038@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>In article <6640@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:

>I am quite sure that it would be impossible to say that a "bee" is too heavy
>or too fragile (to fly) without knowing how flying is achieved. For instance
>you could not discount an argument that a "bee" can not fly because it has 
>a wrong color till you knew that flying involves interaction with air and not
>with photons of a suitable wavelength.

Of course we can discount it.  It's not as if we know _nothing_.

Moreover, I don't know how bees manage to fly, but I can tell you
right now that certain kinds of fake bees won't work.  I can tell
you that because I know enough about the materials in the fake
bees.

>>Well, we already know it's impossible to show (1) to the satisfaction
>>of some people on the net, because in effect they want a solution to
>>the other minds problem.
>>
>Rightly so! After all, deciding if a computer has a mind is the other minds
>problem, is it not?

No.

>>In any case, there is an important aspect of what I've been saying
>>that you seem to be factoring out.  Even if it were necessary to 
>>show that human brains are capable of (say) understanding it would
>>not therefore be necessary to show _how_ brains accomplish this.
>>
>No, but to argue that another entity (a computer) does not understand, even
>though it has an identical behaviour (to humans), you have to be able to show 
>how understanding  arises in humans and then show that this mechanism is not 
>present in computers.

No I don't.  I can _conclude_ that it's not present.  Like this:

   1. Computers can't understand.
   2. Mechanism M is necessary for understanding.
   3. Therefore computers lack M.

All I have to add is an argument whose conclusion is (1).  And that's
exactly what Searle and others have provided.  Of course there might
be something wrong with those arguments so that they fail to show
(1).  If so, we can tell by looking at the arguments whose conclusion
is (1).  It's flaws in those arguments that make them wrong (if they
are wrong), not our incomplete knowledge of how humans work.

It seems to me that the anti-Searle side must be in pretty severe
difficulty if instead of pointing out flaws in Searle's reasoning
they have to try to get the other side to do all the work!

-- jd
