From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!usc!sol.ctr.columbia.edu!lll-winken!iggy.GW.Vitalink.COM!psinntp!psinntp!scylla!daryl Thu Apr 16 11:34:44 EDT 1992
Article 5119 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!usc!sol.ctr.columbia.edu!lll-winken!iggy.GW.Vitalink.COM!psinntp!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Newsgroups: comp.ai.philosophy
Subject: Re: Systems Reply I (repost perhaps)
Message-ID: <1992Apr14.004021.3628@oracorp.com>
Date: 14 Apr 92 00:40:21 GMT
Organization: ORA Corporation
Lines: 48

jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
(in response to Antun Zirdum)

>> I think that you misunderstood my point. Just because we cannot
>> explain how it's done in humans, does that mean that it cannot be
>> duplicated by a physical system?

> No, but if we have good reasons to conclude that it can't
> be duplicated by a computer, we still have these good reasons
> even if we can't say how it's done in humans.

> Unless you are willing to accept this point, there is no point in
> continuing to discuss these matters with me, because I am never going
> to agree that showing how it's done in humans is necessary.

Well, I think that's a big mistake on your part. I believe that
investigating how (and *if*) human brains escape these purported
proofs of the impossibility of AI is crucial in making any progress.
Whether or not AI is actually possible, I would say that these
arguments (such as Searle's about syntax versus semantics, or Putnam's
about cherries and cats) are pretty worthless unless we know how human
brains escape from them. It is as if you proved the impossibility of
robot bumblebees by proving that nothing that worked like a bumblebee
could possibly fly. The existence of bumblebees would suggest that
something is wrong with your proof.

>>Now I am not saying that understanding has already
>>been duplicated by machines, I am arguing that there
>>is no reason that it cannot be!

> Several people have offered reasons why it cannot be.
> They may be wrong, but they're not wrong just because
> they haven't said how humans do it. Address their
> arguments rather than demanding that they do things
> that are unnecessary!

I disagree. I believe that most of the arguments for why computers
can't understand actually make the conditions on understanding so
difficult that *nothing* meets them, not even human beings. If you
claim that "Here is a property that human brains possess, but
computers do not", then you have two obligations: (1) to show that
human brains possess the property, and (2) to show that computers do
not. If you have only argued for (2), then your argument is worthless
(as an argument against AI).

Daryl McCullough
ORA Corp.
Ithaca, NY
