From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!swrinde!gatech!bloom-beacon!eru.mt.luth.se!lunic!sunic2!mcsun!uknet!edcastle!aiai!jeff Tue May 12 15:48:36 EDT 1992
Article 5361 of comp.ai.philosophy:
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Systems Reply I (repost perhaps)
Message-ID: <6640@skye.ed.ac.uk>
Date: 1 May 92 18:30:06 GMT
References: <1992Apr14.004021.3628@oracorp.com>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 82

In article <1992Apr14.004021.3628@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:
>>> I think that you misunderstood my point: just because we cannot
>>> explain how it's done in humans, does that mean that it cannot be
>>> duplicated by a physical system?
>
>> No, but if we have good reasons to conclude that it can't
>> be duplicated by a computer, we still have these good reasons
>> even if we can't say how it's done in humans.
>
>> Unless you are willing to accept this point, there is no point in
>> continuing to discuss these matters with me, because I am never going
>> to agree that showing how it's done in humans is necessary.
>
>Well, I think that's a big mistake on your part. I believe that
>investigating how (and *if*) human brains escape these purported
>proofs of the impossibility of AI is crucial in making any progress.

Investigate it then!

>Whether or not AI is actually possible, I would say that these
>arguments (such as Searle's about syntax versus semantics, or Putnam's
>about cherries and cats) are pretty worthless unless we know how human
>brains escape from them. It is as if you proved the impossibility of
>robot bumblebees by proving that nothing that worked like a bumblebee
>could possibly fly. The existence of bumblebees would suggest that
>something is wrong with your proof.

Your analogy is wrong, but happens to illustrate my point about "how".
The existence of bumblebees is sufficient.  It is not necessary to
show _how_ bumblebees fly.

A better analogy to Searle's arguments would be: I prove that merely
having the right (bee-like) structure is not sufficient, because that
structure could be realized in materials that would result in a "bee"
that was too heavy (or, say, too fragile).  So I conclude that to fly
the "bee" must use materials with equivalent physical properties (in
certain respects) to those in actual bees.

>>>Now I am not saying that understanding has already
>>>been duplicated by machines, I am arguing that there
>>>is no reason that it cannot be!
>
>> Several people have offered reasons why it cannot be.
>> They may be wrong, but they're not wrong just because
>> they haven't said how humans do it. Address their
>> arguments rather than demanding that they do things
>> that are unnecessary!
>
>I disagree. I believe that most of the arguments for why computers
>can't understand actually make the conditions on understanding so
>difficult that *nothing* meets them, not even human beings.

Then you should try to show that the arguments make the conditions
too difficult, instead of saying the other side has to show the
arguments don't make the conditions too hard.

The former is a good faith attempt to get at the truth.

The latter is a debating tactic.

>If you claim that "Here is a property that human brains possess, but
>computers do not", then you have two obligations: (1) to show that
>human brains possess the property, and (2) to show that computers do
>not. If you have only argued for (2), then your argument is worthless
>(as an argument against AI).

Well, we already know it's impossible to show (1) to the satisfaction
of some people on the net, because in effect they want a solution to
the other minds problem.

Moreover, we can certainly consider (1) and (2) separately.  So let's
consider (2).  There are arguments that computers don't have the
required property.  Do those arguments fail to demonstrate that
their conclusion is true?  Or is the only thing wrong with them
that they do not also address (1)?

In any case, there is an important aspect of what I've been saying
that you seem to be factoring out.  Even if it were necessary to
show that human brains are capable of (say) understanding, it would
not therefore be necessary to show _how_ brains accomplish this.

-- jd
