From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!tdatirv!sarima Wed Feb 26 12:54:44 EST 1992
Article 4031 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Reference (was re: Multiple Personality Disorder and Strong AI)
Keywords: consciousness,functionalism,meaning
Message-ID: <449@tdatirv.UUCP>
Date: 25 Feb 92 17:42:04 GMT
References: <426@tdatirv.UUCP> <1992Feb19.173620.10529@psych.toronto.edu> <439@tdatirv.UUCP> <1992Feb23.000457.19378@psych.toronto.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 138

In article <1992Feb23.000457.19378@psych.toronto.edu> christo@psych.toronto.edu (Christopher Green) writes:
|>|1. Brains cause minds. Now, of course, that's really too crude....
|>|2. Syntax is not sufficient for semantics....a conceptual truth....
|>|3. Computer programs are entirely defined by their formal, or syntactical
|>|     structure....true by definition [of a computer program]
|>|4. Minds have mental contents; specifically, they have semantic contents....
|>|     just an obvious fact about the way minds work....
|>|
|>|Conclusion 4. For any artefact that we might build which had mental states
|>|              equivalent to human mental states, the implementation
|>|              of a computer program would not by itself be sufficient.
|>|              Rather, the artefact would have to have powers equivalent to 
|>|              the powers of the human brain.
|>|
|>
|>As I have already stated, I question assumptions 2 and 3.  
|
|I can't conceive of what you object to in 3. It doesn't need evidence.
|It's utterly analytic.

Hmm, well, perhaps I am misunderstanding what it claims, but by my
definition computers are capable of more than just syntactic manipulation.

I suppose if you limit your attention to *just* the running program, and
ignore I/O activity, you could say that the *programs* per se are purely
syntactic.  But this strikes me as being an almost meaningless statement,
since computer programs do not exist on their own; they always have some
external context.  It ignores the rest of the computer, and the capabilities
added thereby.

So, I suppose 3 might be strictly true, but rather irrelevant, due to
its excessively narrow scope.  (It ignores a large component of the
capabilities of computers.)
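
To make the I/O point concrete, here is a toy sketch (mine, in modern
Python; the device files are purely hypothetical stand-ins for whatever
hardware a real system would have).  The program text is nothing but
formal symbol manipulation, but what the running *system* does is fixed
by whatever its I/O channels happen to be wired to:

import time

SENSOR = "/dev/temp0"    # hypothetical sensor device
HEATER = "/dev/heater0"  # hypothetical actuator device

def read_sensor():
    # The program only ever sees a number; *what* that number measures
    # is fixed by the hardware behind the file, not by the program text.
    with open(SENSOR) as f:
        return float(f.read())

def set_heater(on):
    with open(HEATER, "w") as f:
        f.write("1" if on else "0")

while True:
    # A purely formal rule: compare a token against a threshold.
    set_heater(read_sensor() < 20.0)
    time.sleep(1)

The same formal rule keeps a room warm, or chills a vat, or does nothing
at all, depending entirely on the external context - which is exactly the
part of the computer that premise 3 leaves out.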

|As for 2, it is crucial. As Searle claims, syntax and semantics are
|conceptually distinct, viz., they mean different things.

But that's just the point: he *claims* it, he does not demonstrate it.

And they can mean different things and still be interconnected.  There is
this matter of emergent phenomena, which are prevalent in living systems
(life itself is an emergent phenomenon).  So, how does having a different
meaning prove that semantics cannot be an emergent phenomenon on a syntactic
base, just like life is an emergent phenomenon on a chemical base?
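
For a concrete (if toy) illustration of emergence from pure syntax,
consider Conway's Game of Life - my example, not Searle's.  The update
rule below is nothing but local bit-pushing over a grid, yet 'gliders',
coherent objects that persist and move, show up at a level of
description that the rule itself never mentions:

from collections import Counter

def step(cells):
    # cells is the set of live (x, y) coordinates.  Count, for every
    # grid cell, how many live neighbors it has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Purely syntactic rule: born with 3 neighbors, survive with 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
# After 4 steps the whole pattern reappears shifted diagonally by (1, 1):
assert g == {(x + 1, y + 1) for (x, y) in glider}

Nothing in step() mentions gliders; they exist only at a higher level of
description of the very same bits.  That is the shape of the claim:
semantics as a higher-level description of a syntactic substrate.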

Thus, for me to accept 2 I need a demonstration that semantics *cannot*
be an emergent phenomenon.  Since Searle has no idea how semantics arises
in humans, he cannot really rule out emergence.  And if it is emergent in
humans, why not in computers?

And if it is not so derived in humans, where does it come from?  Searle
fails to provide any useful alternative model for semantics; he just waves
his hand and says 'it is' (which is all his 'causal powers' really do).

|The question is
|whether semantics might be reduced to syntax. Many have tried. None have
|succeeded as yet. In the meantime, prudence would dictate construing
|them as different (unless one has a computationalist axe to grind).

I would say they are different even if semantics is based on syntax.

But it is only by coming up with a verifiable model of human semantics
that we can determine whether or not computers are capable of the same
mechanism.  Merely saying 'syntax and semantics are different, so computers
cannot have semantics' does not hold water.

I am *not* saying that pure, context-free, syntactic analysis can generate
semantics, but I am saying that there is no reason to *assume* that computers
*cannot* generate semantics in the same way as humans do.

There is no reason to assume they *can* either, but it is only by doing actual
research, in psychology, in neurology, in cognitive science, and in cybernetics
that we can have any hope of finding out.

Searle seems to be saying 'it is impossible, so let's give up'.  *That* is
what I object to.  His purely logical argument is not conclusive, because
it is based largely on ignorance, on a lack of knowledge of what semantics
is and how it works.

|>
|>Also, what does Conclusion 4 *mean*?
|>
|I'm at a loss. It's an English sentence (two, actually) with well-formed
|subjects and predicates. I'll attempt to explicate, though I feel as if
|I'm just paraphrasing. It means that it follows necessarily from the 
|premises that in anything we can build that has mental states (as we know
|them) there must be more than a computer program at work; this because
|mental states have semantic content and computer programs don't. Thus,
|by Leibniz's law, the two things cannot be equivalent.

But computers are more than just computer programs.  And it is not clear that
mental states are in any way different from computer states.  Only research
will tell us for sure.  Searle's assertions to the contrary are just that,
assertions, with no evidential basis.
|>
|>
|>What I was getting at was this:
|>
|>Given a construct that shows behavior indistinguishable from a human
|>then either:
|>
|>	A) it accomplishes this by an internal mechanism that is different
|>	than a human
|>OR
|>	B) it accomplishes this by the same internal mechanism as a human.
|>
|>In case B) the construct is, in my mind, *necessarily* intelligent, since
|>it is indistinguishable from a human functionally. 
|>
|>In case A) the question is still open. To decide case A) it is necessary
|>to have a clear idea of what *classes* of mechanisms count as intelligent
|>and which ones do not.  
|
|This is a common error. Searle is not determining whether it is intelligent
|(under some very broad construal of that term).  He is trying to find out
|if it understands Chinese in the same way that we do, a much more modest
|and manageable project.

So, reword A and B in terms of 'understanding Chinese' and the argument
still stands.

It still boils down to the same thing: what classes of mechanisms count
as 'understanding Chinese'?  Since we do not even know by what mechanism
*we* understand Chinese, we can hardly determine if a computer is capable
of using the same one or not.  And even if it does not use exactly the
same mechanism as we do, we must still answer the question of whether its
mechanism belongs to the same equivalence class.

Searle fails to provide a sufficiently tight definition of 'understand' to
allow us to judge equivalence classes for such mechanisms.  Without a
criterion for equivalence classes, his argument fails.

[Actually, I doubt that 'understand' is a *single* mental state, or even that
it is a separable subset of mental states - I suspect that it is deeply
intertwined with other aspects of cognition and memory].
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)


