From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!pindor Tue May 12 15:49:08 EDT 1992
Article 5421 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Systems Reply I (repost perhaps)
Message-ID: <1992May5.195616.28038@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <1992Apr14.004021.3628@oracorp.com> <6640@skye.ed.ac.uk>
Date: Tue, 5 May 1992 19:56:16 GMT

In article <6640@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>
>Your analogy is wrong, but happens to illustrate my point about "how".
>The existence of bumblebees is sufficient.  It is not necessary to
>show _how_ bumblebees fly.
>
>A better analogy to Searle's arguments would be: I prove that merely
>having the right (bee-like) structure is not sufficient, because that
>structure could be realized in materials that would result in a "bee"
>that was too heavy (or, say, too fragile).  So I conclude that to fly
>the "bee" must use materials with equivalent physical properties (in
>certain respects) to those in actual bees.
>
I am quite sure that it would be impossible to say that a "bee" is too heavy
or too fragile (to fly) without knowing how flying is achieved. For instance,
you could not discount the argument that a "bee" cannot fly because it has
the wrong color until you knew that flying involves interaction with air, not
with photons of a suitable wavelength. After all, the fact that we do not see
blue bumblebees does not mean that a robot bee cannot fly because it is blue.
We have to know how bumblebees fly to know which properties are relevant.

>Well, we already know it's impossible to show (1) to the satisfaction
>of some people on the net, because in effect they want a solution to
>the other minds problem.
>
Rightly so! After all, deciding whether a computer has a mind is precisely
the other minds problem, is it not?
>
>In any case, there is an important aspect of what I've been saying
>that you seem to be factoring out.  Even if it were necessary to 
>show that human brains are capable of (say) understanding it would
>not therefore be necessary to show _how_ brains accomplish this.
>
No, but to argue that another entity (a computer) does not understand, even
though its behaviour is identical to a human's, you have to be able to show
how understanding arises in humans and then show that this mechanism is not
present in computers.

>-- jd


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca