From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima Tue May 12 15:49:01 EDT 1992
Article 5408 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Systems Reply I (repost perhaps)
Keywords: AI Searle Dickhead Barf
Message-ID: <3@tdatirv.UUCP>
Date: 4 May 92 22:47:56 GMT
References: <1992Mar29.083336.6608@ccu.umanitoba.ca> <6589@skye.ed.ac.uk> <523@tdatirv.UUCP> <6638@skye.ed.ac.uk>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 82

In article <6638@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
|In article <523@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
|>So far no set of arguments against implementing
|>a brain in digital circuits has been such as to preclude its applicability
|>to humans as well.
|
|You must have been reading different arguments than I have.

No, but I find them all too speculative to be conclusive.

That is, they all rely on undemonstrated assumptions, so they do not *preclude*
*anything*.

|>It is because it is the only way of showing that the reasons you have advanced
|>do *not* apply to humans.
|
|I think you are confused.  Here's an example.
|
|Suppose we have an argument like this:
|
|   1. if P(x) then not can_do(x,task)
|   2. P(computers)
|   3. therefore not can_do(computers,task)
|
|To show the argument does not apply to humans, it suffices to show
|that P(humans) is false.

And that #1 is true.  *That* is usually the undemonstrated assumption in the
arguments such as the CR.

Just claiming that "obviously #1 is true" is not enough; you must provide
*evidence* that it is true - evidence based on *observations*
that are *repeatable* by any competent observer.

This has never been done.  In most cases the premise corresponding to #1 is
true for some set of definitions and false for others, and is thus not
intrinsically either true or false.  It is *arbitrary*.

Thus one may either conclude that computers cannot do 'task', *or* one may
conclude that the set of definitions used to formulate #1 is of little
utility.


Also, it has rarely been shown that P(humans) is false, only that some
interpretation of human behavior *suggests* it *may* be false.

To show P(humans) to be false you must show that the interpretation of
human behavior that entails its falsity is itself correct, preferably by observation.

In fact, P(humans)=false is often entailed by the same definitions that
make #1 true, and thus the two are equivalent (a tautology).  Thus little
is really proven except that it is logically consistent to say that humans
and computers are different.  This is a *far* cry from saying it is actually
true of the real world.

|To show that P(humans) is false it is not necessary to show _how_
|humans manage to do the task in question.

No, but it is necessary to show that it is false.

And in many cases the easiest way to do this is to show that humans perform
the 'task' in some set of ways.  That is, it is easier to work backwards from
demonstrating the conclusion to the premises than it is to demonstrate the
premises themselves.

|Moreover, if we can conclude
|
|   can_do(humans,task)
|
|we can reason thus:
|
|   1. if P(x) then not can_do(x,task)
|   2. can_do(humans,task)
|   3. therefore not P(humans)
|

Again this assumes #1, which is rarely truly *demonstrated* in a
conclusive way.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)
