From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima Tue May 12 15:49:02 EDT 1992
Article 5409 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Systems Reply I (repost perhaps)
Message-ID: <4@tdatirv.UUCP>
Date: 4 May 92 23:19:56 GMT
References: <1992Apr14.004021.3628@oracorp.com> <6640@skye.ed.ac.uk>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 162

In article <6640@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
|>Whether or not AI is actually possible, I would say that these
|>arguments (such as Searle's about syntax versus semantics, or Putnam's
|>about cherries and cats) are pretty worthless unless we know how human
|>brains escape from them. It is as if you proved the impossibility of
|>robot bumblebees by proving that nothing that worked like a bumblebee
|>could possibly fly. The existence of bumblebees would suggest that
|>something is wrong with your proof.
|
|Your analogy is wrong, but happens to illustrate my point about "how".
|The existence of bumblebees is sufficient.  It is not necessary to
|show _how_ bumblebees fly.

No, you must *also* show that the proposed robot bumblebee *differs* from
the real one in some meaningful way.  *This* is where most anti-AI
arguments fall down: they merely *assert* that humans are different; they
do not *demonstrate* it.

The point of the analogy is that if the robot bumblebee works *the* *same*
*way* as the real one, then the existence of the real one is sufficient to
prove it will fly.  To claim that the robot bumblebee will *not* fly you
must show how it differs from the real one.

I am saying that the 'proofs' that computers cannot 'think' may be exactly
like the purported 'proofs' that a bumblebee cannot fly, in that the existence
of something that meets all of the criteria of the proof (the human brain)
nonetheless fails to conform to the conclusion thereof.

So, the problem is to *demonstrate* that humans do *not* meet the criteria
of the anti-AI 'proofs', because if they *do*, then the proofs are shown
to be wrong directly.  It is only if the premises of the proofs do *not*
apply to humans that they can even be *possibly* valid.

|A better analogy to Searle's arguments would be: I prove that merely
|having the right (bee-like) structure is not sufficient, because that
|structure could be realized in materials that would result in a "bee"
|that was too heavy (or, say, too fragile).  So I conclude that to fly
|the "bee" must use materials with equivalent physical properties (in
|certain respects) to those in actual bees.

Sigh, that is almost meaningless.  What are the 'equivalent properties'?

In particular why does my robot bumblebee *not* have those properties?


You see, without specifying *what* those properties *are*, you cannot tell
whether a given implementation *has* them or not.

Thus my answer to Searle would be essentially - what keeps computers
from having these so-called 'causal' properties?  I see no reason why
computers cannot have causality.

Now, show me a clear, unambiguous property that humans have that computers
*cannot* have.  Please specify it in a sufficiently precise way that it can
be independently verified.  Please cite the evidence it is based on,
and the assumptions you used in evaluating that evidence.


So far I have not seen any purported properties of humans that meet the
above criteria.  Searle's (and Penrose's) properties are all too vague,
too imprecise, to be independently verified.

|>I disagree. I believe that most of the arguments for why computers
|>can't understand actually make the conditions on understanding so
|>difficult that *nothing* meets them, not even human beings.
|
|Then you should try to show that the arguments make the conditions
|too difficult, instead of saying the other side has to show the
|arguments don't make the conditions too hard.

The only two ways I know to do this are to build an artificial mind, or to
show that humans fall within the set of entities covered by the arguments
that computers cannot 'think'.

Either way it is necessary to know how human minds work to make the
demonstration complete.

|The former is a good faith attempt to get at the truth.
|
|The latter is a debating tactic.

NO, I am just trying to say 'wait and see, your debates and arguments
are, in themselves, inconclusive'.  Since I can see a perfectly logical,
internally consistent alternative point of view, the arguments are merely
that, arguments, not true mathematical proofs.

It is only when there is no possible alternative that arguments become proofs.

I am merely asking that you quit trying to say "it is impossible",
when all you can really show is that it *may* be impossible.


I am perfectly willing to work on this sort of research problem, if someone
is willing to provide me with the necessary funds (i.e. hire me to do it).

However, since I have to eat and sleep, I cannot actively pursue this
problem at this time.

|>If you claim that "Here is a property that human brains possess, but
|>computers do not", then you have two obligations: (1) to show that
|>human brains possess the property, and (2) to show that computers do
|>not. If you have only argued for (2), then your argument is worthless
|>(as an argument against AI).
|
|Well, we already know it's impossible to show (1) to the satisfaction
|of some people on the net, because in effect they want a solution to
|the other minds problem.

Yes, I do.  Because that is the only way of truly knowing the answer.
Anything else is just guessing.

And guesses can *always* be wrong.

There is just too much evidence that much of what we call our 'self' is
itself mainly a mental construct to take self-awareness at face value.

As long as it is feasible to consider that the human mind uses 'cybernetic'
means to create this construct, then it is reasonable to continue hoping
that AI *may* be possible.

|Moreover, we can certainly consider (1) and (2) separately.  So let's
|consider (2).  There are arguments that computers don't have the
|required property.  Do those arguments fail to demonstrate that
|their conclusion is true?  Or is the only thing wrong with them
|that they do not also address (1)?

The problem with them is that they are usually based on certain definitions,
or on certain aspects of self-awareness that are themselves not properly
demonstrated as reliable.

They are like the proofs that bumblebees cannot fly: they do not yet
properly distinguish computers from humans in a sufficiently *objective*
way for me to be sure they do *not* apply to humans.

And if they *do* apply to humans, then either humans do not think, or
the arguments are wrong, however compelling they may *seem*.


It is not so much that I require full understanding about how humans think,
as I require *objective*, *confirmable* evidence that humans and computers
differ in the relevant way.  Otherwise the bumblebee scenario remains
a real *possibility*.

|In any case, there is an important aspect of what I've been saying
|that you seem to be factoring out.  Even if it were necessary to 
|show that human brains are capable of (say) understanding it would
|not therefore be necessary to show _how_ brains accomplish this.

No, but it *is* necessary to show that computers do not possess whatever
faculty humans use to accomplish understanding.

How else do you propose to prove this?  As long as 'understanding' is,
itself, not understood, it is possible to come up with many different models
of it, some of which are computable, some of which are not.  Why should I
blindly accept the non-computable ones as true?  I do *not* blindly accept
the computable ones; I just use them as a reasonable starting place for
research (since they are amenable to testing in the near future; the others
are not, and may never be).
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)



