From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima Tue May 12 15:49:02 EDT 1992
Article 5410 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply I
Message-ID: <5@tdatirv.UUCP>
Date: 4 May 92 23:44:55 GMT
References: <1992Apr14.012458.7058@oracorp.com> <6641@skye.ed.ac.uk>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 161

In article <6641@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
|In article <1992Apr14.012458.7058@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:
|>>>I can only reiterate what I have said before.  If you wish to show that
|>>>computers lack something that humans possess, it seems to me that you
|>>>need to show (a) that computers lack it, and (b) that humans possess
|>>>it. If you only prove (a) then you have not proved your point.
|
|>I don't doubt human understanding. 
|
|>Nobody is disputing that humans understand,
|
|That should take care of (b).  Can we now look at the arguments
|on (a) without further demands that we must also show (b)?

No, the (b) that I am looking at is the *prior* assumption, that humans
have some mechanism other than 'syntax' involved in their cognitive
processing.

*That* is what I am disputing, or at least questioning.

It *may* be true, but it is not, by any means, properly demonstrated yet.

|>However, if you rephrased the question "Can computers
|>understand?" to be "Can computers do what we call 'understanding' when
|>done by humans?" then it becomes clearer that the answer must involve
|>comparing what humans do with what computers do.
|
|It has always been clear that we're comparing humans (who do
|such things as understand Chinese) with computers (which, if
|Searle is right, cannot).  So I don't think this rephrasing 
|has anything to recommend it.

It clarifies the issue.  It makes it clear what we are trying to measure.

Now Searle's argument comes down to showing that what the CR is doing
is different from what a human is doing.

I cannot see that he has ever done so; all he has ever done is appeal
to intuition.  Well, intuition may be a useful mechanism for making quick
decisions, but it is *far* from 100% reliable.

I require more than intuition for an impossibility proof.

Impossibility proofs have an uncanny way of coming unraveled with each new
scientific advance.

I am not even totally convinced that faster-than-light travel is impossible,
and that impossibility has *far* more evidence in its favor than Searle's
position does.  [Not logic, actual observational evidence.]

|>> Now, if an argument against computer understanding also applied to
|>> humans, I would regard that as reason to conclude the argument was
|>> wrong. But I'm certainly not going to conclude the argument is wrong
|>> just because no one has yet shown it doesn't apply to humans. Why
|>> should I?
|>
|>Why should you conclude that it is right?
|
|Well, roughly (I'm not demanding a mathematical proof), that its
|conclusions follow from its premises and that its premises are
|true.

But its premises are mostly 'true' only by assumption, or by intuition,
or by definition.  None of these are final.

It takes more than premises that *seem* true to really show anything;
you must show that the premises are true by means of observational evidence.

|>It is an incomplete argument, an argument with steps missing.
|
|I don't agree.

It fails to support its premises.

|> It doesn't show anything
|>until those steps are filled in. And the steps are not showing that
|>humans are capable of understanding, it is in showing that humans
|>have whatever it is claimed is necessary for intelligence.
|
|What?  Why the switch from "understanding" to "intelligence"?

O.K., 'have whatever it is that is claimed necessary for understanding'.

It is the same question.  Why do you assume that humans have these
undemonstrated capacities?  What evidence do you have for them?  (I am not
talking about the capacity for understanding, I am talking about the
capacities supposedly necessary for understanding.)

|Moreover, you seem to be assuming that the anti-AI arguments 
|all have the form: computers can't understand because they
|lack X (ie, "whatever is necessary").

I have seen no others.

|We will have an argument that looks like
|
|   <part 1>
|   therefore: if something lacks X, it lacks understanding
|   <part 2>
|   therefore computers lack X
|   therefore computers don't understand.
|
|Your complaint is _not_ that parts 1 and 2 fail to lead to their
|conclusions but rather that we need to show
|
|   humans have X

Or alternatively to show beyond possible doubt that part 1 is true.

Since the discovery that humans lack X would *disprove* part 1, then
as long as this remains possible, part 1 is in doubt.

This is the sticking point: 'humans don't have X' is *inconsistent* with
part 1, and there is as yet no evidence that it is false (that is, that
humans *do* have X).  Either truth value for that statement is consistent
with the currently available evidence.

|So let's further suppose you agree that we can accept
| 
|(H) humans understand

O.K.

|We can now reason
|
|   1. if something lacks X, it lacks understanding
|   2. humans understand
|   3. therefore humans have X.
|
|In short, the arguments in parts 1 & 2, together with (H), 
|give us (3), and we do not need to show (3) independently.

But what if humans actually do not have X?  Then the reasoning above, while
logically valid, is unsound: at least one of its premises must be false.
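
The inference quoted above is ordinary modus tollens, and as a piece of logic it is formally valid; the only place doubt can enter is the truth of the premises.  A minimal sketch in Lean 4 (the proposition names are placeholders of my choosing, not anything from the discussion):

```lean
-- Premise 1: if something lacks X, it lacks understanding.
-- Premise 2: humans understand.
-- Conclusion: humans do not lack X, i.e. humans have X.
variable (lacksX understands : Prop)

example (h1 : lacksX → ¬ understands) (h2 : understands) : ¬ lacksX :=
  fun hx => (h1 hx) h2   -- assuming lacksX yields ¬understands, contradicting h2
```

The checker accepting this shows only that the *form* is valid; it says nothing about whether premise 1 (or premise 2) is actually true of the world.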

I have seen too many 'proofs' of things that turned out to be false to
put much trust in a 'proof' without objective evidence to back it up.

|Your only way out of this is to argue that <part 1> necessarily
|must include "humans have X".  But that is simply false.

Is it though?  If <part 1> is assumed 'by definition', then it may
indeed entail/include "humans have X".  (And I place the syntax/semantics
arguments in this category - the truth of the corresponding part 1 is
a matter of definitions that in themselves entail #3 above).

Either you must find independent, observable evidence for <part 1>,
or you must show that it cannot be refuted by possible future observations.

I consider any proof relevant to the real world as merely a hypothesis
to be confirmed or disproved by future observations, *not* as a final
statement of truth.

It is only in the abstract realm of pure mathematics that proofs are final
in themselves.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)
