From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!newshost.uwo.ca!torn.onet.on.ca!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!noose.ecn.purdue.edu!samsung!sdd.hp.com!cs.utexas.edu!uunet!tdat!swf Tue Jun  9 10:06:07 EDT 1992
Article 6017 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!newshost.uwo.ca!torn.onet.on.ca!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!noose.ecn.purdue.edu!samsung!sdd.hp.com!cs.utexas.edu!uunet!tdat!swf
From: swf@teradata.com (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply I
Message-ID: <455@tdat.teradata.COM>
Date: 1 Jun 92 19:49:29 GMT
References: <1992May18.120933.1683@oracorp.com> <6728@skye.ed.ac.uk>
Sender: news@tdat.teradata.COM
Reply-To: swf@tdat.teradata.com (Stanley Friesen)
Organization: NCR Teradata Database Business Unit
Lines: 81

In article <6728@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
|In article <1992May18.120933.1683@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:
|>The main argument that Searle gives against the possibility of Strong
|>AI is that
|>     1. Syntax is not sufficient for semantics. 
|>     2. Computers have only syntax.
|>     3. Human thoughts have semantics.
|>     4. Therefore, computers are not capable of producing human thoughts.
|>
|>I don't think that "syntax" means the same thing in lines 1 and 2, and
|>I don't think that "semantics" means the same thing in lines 1 and 3.
|
|Fine.  Show that they don't, or at least give us some reason to
|agree with you.

But, isn't Searle trying to give a *proof* (in the Aristotelian sense)
here?  If so, then for it to be a *valid* proof, all of the premises
*must* either be 'obviously true' or be proven themselves.  And, as long
as any reasonable person can question the premises, they are *not*
obviously true, and must be proven.  Thus, on Searle's *own* terms, he
has failed to actually prove anything, since his axioms are not clearly
true, and therefore do not qualify as axioms.
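The validity point can be made explicit.  Written out in standard form
(the sense-labels A, B, C here are mine, purely illustrative), Searle's
argument runs roughly:

     1'. Having syntax(A) is not sufficient for having semantics(A).
     2'. Computers have only syntax(B).
     3'. Human thoughts have semantics(C).
     4'. Therefore, computers cannot produce human thoughts.

The step from 1' and 2' to 4' goes through only if syntax(A) and
syntax(B) are the same notion, and semantics(A) and semantics(C) are
too.  If either identification fails, the argument has more than three
terms -- the classical four-term fallacy -- and is formally invalid,
which is exactly McCullough's suspicion.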

Now, this is where we get into needing to know how human understanding
works, as this is the only way I can see of resolving the question of
whether the two senses of 'syntax' and the two senses of 'semantics' are
the same or different.  For now, not knowing just *how* these terms
apply to humans, we simply *do* *not*, and *cannot*, know whether
Searle's axioms are true or false.

Thus, for now, the *only* meaningful statement we can make is 'we don't
know'.  And it is only through research that we can find the answer.

|This is almost completely wrong, and insulting besides.
|
|For instance, I am interested in how humans understand.  I just don't
|think we have to know this before we can reach any conclusions about
|computers.  I am happy to discuss the other minds problem, but do not
|think we have to be able to solve it before we can reach any
|conclusions about computers.

But determining whether a computer has a mind *is* the other minds problem.
There is *nothing* else involved.  To solve the other minds problem is
to solve the question of minds in computers, and I firmly believe that
if we solve it for computers we will have solved it universally (since
the implementation-level criteria we will have developed will apply equally
to any mind, manufactured or evolved, our own, or alien).

What you are being chided for is *exactly* this refusal to apply the
other-minds approach to computers while doing so easily with biological
systems.  It is *this* bias that we see in your writings.

|If you want to avoid any conclusions about computers, a good
|way to do it would be to say a number of very hard problems
|have to be solved first.  I would actually agree that we will
|be in a better position once we do know more.  I have even said
|it several times in articles that I suspect you have read.
|However, we can nonetheless look at arguments and see what
|conclusions we can reach on the basis of our current knowledge,
|and we can do this without having to do all this other work
|at the same time.

I *have* looked at Searle's arguments, and concluded that his premises
are *not* axiomatic, and that lacking further knowledge we cannot decide
the issue.  Thus, the issue is undecidable at our current level of
knowledge.

|BTW, when I've said in the past that it can matter "how it
|works" -- in computers and in humans -- all kinds of people
|have disagreed.  All that matters, they say, is the behavior.
|However it's accomplished, the behavior is enough.

Yes, some have said so.  Not me.  I have said that behavior is likely,
*in* *practice*, to be sufficient proof of mechanism to allow its use
as a 'de facto' standard until a better test is found.

All of the mechanisms that I would rule out for understanding (and such)
are ones that would be too expensive (in some sense) to build in practice.
-- 
sarima@teradata.com			(formerly tdatirv!sarima)
  or
Stanley.Friesen@ElSegundoCA.ncr.com


