From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Tue May 12 15:49:48 EDT 1992
Article 5494 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Systems Reply I (repost perhaps)
Message-ID: <6688@skye.ed.ac.uk>
Date: 8 May 92 20:32:12 GMT
References: <1992Apr14.004021.3628@oracorp.com> <6640@skye.ed.ac.uk> <4@tdatirv.UUCP>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 224

In article <4@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>In article <6640@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>|>Whether or not AI is actually possible, I would say that these
>|>arguments (such as Searle's about syntax versus semantics, or Putnam's
>|>about cherries and cats) are pretty worthless unless we know how human
>|>brains escape from them. It is as if you proved the impossibility of
>|>robot bumblebees by proving that nothing that worked like a bumblebee
>|>could possibly fly. The existence of bumblebees would suggest that
>|>something is wrong with your proof.
>|
>|Your analogy is wrong, but happens to illustrate my point about "how".
>|The existence of bumblebees is sufficient.  It is not necessary to
>|show _how_ bumblebees fly.
>
>No, you must *also* show that the proposed robot bumblebee *differs* from
>the real one in some meaningful manner.

What?  In order to show that

  B. Anything that works like a bumblebee cannot fly.

is wrong, all I have to show is

  B-1. A bumblebee works like a bumblebee.
  B-2. A bumblebee can fly.

So (B) is wrong, and any argument against the possibility
of robot bumblebees that relies on (B) is also wrong.
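
To make the shape of this completely explicit, here is a minimal
sketch in Lean (every name in it -- Animal, bee, worksLikeBee,
canFly -- is made up purely for illustration): premises (B-1) and
(B-2) jointly contradict (B).

  -- Illustrative names only; nothing here comes from a library.
  variable (Animal : Type) (bee : Animal)
  variable (worksLikeBee canFly : Animal → Prop)

  -- (B), (B-1), and (B-2) taken together yield False, so (B)
  -- cannot be true in a world that contains a real bumblebee.
  theorem B_is_wrong
      (B  : ∀ x, worksLikeBee x → ¬ canFly x)
      (B1 : worksLikeBee bee)
      (B2 : canFly bee) : False :=
    B bee B1 B2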

>  *This* is where most anti-AI
>arguments fall down: they merely *assert* that humans are different; they
>do not *demonstrate* it.

They don't have to if we can agree that humans can understand,
are conscious, have minds, whatever.  If we can agree, we don't
need to demonstrate.

>The point of the analogy is that if the robot bumblebee works *the* *same*
>*way* as the real one, then the existence of the real one is sufficient to
>prove it will fly.  To claim that the robot bumblebee will *not* fly, you
>must show how it differs from the real one.

Look, suppose my argument for the conclusion

  R. Robot bumblebees cannot fly

is what you suggested in the article I quoted above, namely:

B/R-1. Anything that works like a bumblebee cannot fly.
  R-2. A robot bumblebee works like a bumblebee.

Then, as above, the existence of real bumblebees, plus the fact
that they work like a bumblebee and can fly, shows that this argument
for (R) does not work.

To show that this argument for (R) fails, it is not at all necessary
to show how robot bumblebees differ from real ones.  What matters is
that they're the _same_ as real ones in one key respect: they work
like a bumblebee.

So if I want to "claim that the robot bumblebee will *not* fly",
I need a _different argument_.
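
The same point in the same sketch (with a made-up robot added):
the argument is perfectly _valid_, but its first premise has just
been refuted, so it is unsound and supports (R) not at all.

  -- Continuing the illustrative sketch above.
  variable (robot : Animal)

  -- (B/R-1) and (R-2) really do entail (R): the argument is valid...
  theorem R_follows
      (BR1 : ∀ x, worksLikeBee x → ¬ canFly x)
      (R2  : worksLikeBee robot) : ¬ canFly robot :=
    BR1 robot R2
  -- ...but B_is_wrong shows (B/R-1) is false, so this valid
  -- argument establishes nothing about robot bumblebees.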

Now, as I said before about the AI arguments, if this argument
applies to bumblebees as well as robot bumblebees, then we can
tell it's wrong because, of course, bumblebees _can_ fly.

But in this case, all I have to do is show it applies to bumblebees.
I do not have to show how bumblebees manage to fly.

Likewise, if an argument against computer understanding applies to
humans, and we accept that humans can understand, we will conclude
that the argument is wrong.  But we should not conclude the argument
is wrong merely because no one has proved it doesn't apply to
humans.

>I am saying that the 'proofs' that computers cannot 'think' may be exactly
>like the purported 'proofs' that a bumblebee cannot fly in that the existence
>of something that meets all of the criteria of the proof (the human brain)
>nonetheless fails to conform to the conclusion thereof.
>
>So, the problem is to *demonstrate* that humans do *not* meet the criteria
>of the anti-AI 'proofs', because if they *do*, then the proofs are shown
>to be wrong directly.  It is only if the premises of the proofs do *not*
>apply to humans that they can even be *possibly* valid.

But of course if an argument against understanding applies to humans,
and we know humans do understand, then we should reject the argument.
I have never disputed this!

>|A better analogy to Searle's arguments would be: ...

>Sigh, that is almost meaningless.

So forget it.  I'm not trying to prove anything by analogy.

>Thus my answer to Searle would be essentially - what keeps computers
>from having these so-called 'causal' properties?  I see no reason why
>computers cannot have causality.

Have you read what Searle says about "causal powers"?

Searle does NOT argue thus:

  N-1. Certain causal powers are necessary for understanding.
  N-2. Computers lack them.
  N-3. Therefore computers cannot understand.

Moreover, it is NOT a question of whether computers have causality
or not.

>Now, show me a clear, unambiguous property that humans have that computers
>*cannot* have.  Please specify it in a sufficiently precise way that it can
>be independently verified.  Please cite the evidence it is based on,
>and the assumptions you used in evaluating that evidence.

Look, Searle says we are computers!  We are machines.  Machines
can understand.  Etc.

It's instantiating a program that fails to produce understanding,
according to Searle, not being a machine.

From time to time, I remember to insert a Searlish qualification
into "computers cannot understand" by adding "merely by instantiating
the right program".  But it's a pain to type, and after a while I
think it's safe to assume everyone remembers Searle making such
distinctions.  Perhaps I am wrong?

>|>I disagree. I believe that most of the arguments for why computers
>|>can't understand actually make the conditions on understanding so
>|>difficult that *nothing* meets them, not even human beings.
>|
>|Then you should try to show that the arguments make the conditions
>|too difficult, instead of saying the other side has to show the
>|arguments don't make the conditions too hard.
>
>The only two ways I know to do this are to build an artificial mind, or to
>show that humans fall within the set of entities covered by the arguments
>that computers cannot 'think'.
>
>Either way it is necessary to know how human minds work to make the
>demonstration complete.

(BTW, I think you should compare your approach in this exchange with
Dennett's in _Elbow Room_.  Like you, Dennett thinks that what Searle
says computers cannot have is something so difficult to have that not
even humans have it.)

Anyway, as shown in the bumblebee discussion, it is not necessary
in general to know _how_ something works.  Of course it might be
necessary for some particular arguments.  But then you should be
able to say what it is about those arguments that makes it
necessary.

>|The former is a good faith attempt to get at the truth.
>|
>|The latter is a debating tactic.
>
>NO, I am just trying to say 'wait and see, your debates and arguments
>are, in themselves, inconclusive'.  Since I can see a perfectly logical,
>internally consistent alternative point of view, the arguments are merely
>that, arguments, not true mathematical proofs.

If you demand mathematical proofs before you will accept a conclusion
then you're not going to accept many conclusions.

Instead of looking for proofs, let's look for "good reasons".
We conclude all kinds of things on the basis of good reasons
rather than proof.  (It should be clear that we don't want to
conclude on the basis of bad reasons and that we seldom have
proofs outside mathematics.)

>I am merely asking that you quit trying to say "it is impossible",
>when all you can really show is that it *may* be impossible.

Well, actually _my_ view is that the question of computer
understanding is an open one and that we will not be able to
settle it until we know more about (a) how programs with
interesting behavior work and (b) how humans work.

But I think this is true in a practical sense, not one of absolute
principle.  That is, I leave open the possibility that there can
be sufficiently good arguments against computer understanding
even if we haven't accomplished (a) or (b).

>|>If you claim that "Here is a property that human brains possess, but
>|>computers do not", then you have two obligations: (1) to show that
>|>human brains possess the property, and (2) to show that computers do
>|>not. If you have only argued for (2), then your argument is worthless
>|>(as an argument against AI).
>|
>|Well, we already know it's impossible to show (1) to the satisfaction
>|of some people on the net, because in effect they want a solution to
>|the other minds problem.
>
>Yes, I do.  Because that is the only way of truly knowing the answer.
>Anything else is just guessing.
>
>And guesses can *always* be wrong.

There are some things we just cannot know for sure.  After all,
(this will annoy Daryl McC.) we can't know for sure that my
coffee cup isn't the most intelligent being in the universe.
Yet we are right to conclude that it isn't.

Unfortunately, the existence of other minds seems to be one of the
things we cannot know for sure.

But if all you want to say is that we can never know for sure
that computers cannot understand, then ok, we can never know for
sure.  

>It is not so much that I require full understanding about how humans think,
>as I require *objective*, *confirmable* evidence that humans and computers
>differ in the relevant way.  

That is, instead of showing that certain arguments apply to humans
as well as machines, you plan to reject the arguments unless someone
can show they do not apply to humans.  Is that right?

I do not agree that the burden of proof should be allocated in
so one-sided a way.

-- jd


