From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!christo Wed Feb 26 12:53:44 EST 1992
Article 3937 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!christo
From: christo@psych.toronto.edu (Christopher Green)
Subject: Re: Reference (was re: Multiple Personality Disorder and Strong AI)
Message-ID: <1992Feb23.000457.19378@psych.toronto.edu>
Keywords: consciousness,functionalism,meaning
Organization: Department of Psychology, University of Toronto
References: <426@tdatirv.UUCP> <1992Feb19.173620.10529@psych.toronto.edu> <439@tdatirv.UUCP>
Date: Sun, 23 Feb 1992 00:04:57 GMT

In article <439@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>In article <1992Feb19.173620.10529@psych.toronto.edu> christo@psych.toronto.edu (Christopher Green) writes:
>|TO THE SOURCE!
>|_Minds, Brains, and Science_ pp. 39-40:
>|1. Brains cause minds. Now, of course, that's really too crude....
>|2. Syntax is not sufficient for semantics....a conceptual truth....
>|3. Computer programs are entirely defined by their formal, or syntactical
>|     structure....true by definition [of a computer program]
>|4. Minds have mental contents; specifically, they have semantic contents....
>|     just an obvious fact about the way minds work....
>|
>|Conclusion 4. For any artefact that we might build which had mental states
>|              equivalent to human mental states, the implementation
>|              of a computer program would not by itself be sufficient.
>|              Rather, the artefact would have to have powers equivalent to 
>|              the powers of the human brain.
>|
>|
>|Sounds like philosophy to me Stanley. Now could we please consider the
>|claims that are actually made?
>
>As I have already stated, I question assumptions 2 and 3.  

I can't conceive of what you object to in 3. It doesn't need evidence:
it's utterly analytic. Learning to program, even a little, should convince
you. As for 2, it is crucial. As Searle claims, syntax and semantics are
conceptually distinct, viz., they mean different things. The question is
whether semantics might be reduced to syntax. Many have tried; none have
succeeded as yet. In the meantime, prudence would dictate construing them
as distinct (unless one has a computationalist axe to grind).
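To make premise 3 concrete, here is a minimal sketch (the rule table and
token names are entirely made up, in the spirit of Searle's "squiggles and
squoggles"): the program rewrites symbol strings by formal rules alone, and
nothing in it refers to what, if anything, the symbols mean.

```python
# A purely syntactic symbol manipulator: it transforms input tokens
# by pattern-matching against a formal rule table. The rules and
# tokens are hypothetical; the program has no access to meanings.

RULES = {
    ("SQUIGGLE", "SQUOGGLE"): "BLOTCH",
    ("BLOTCH", "SQUIGGLE"): "SQUOGGLE",
}

def respond(tokens):
    """Rewrite a token sequence, substituting pairs found in RULES."""
    out = []
    i = 0
    while i < len(tokens):
        pair = tuple(tokens[i:i + 2])
        if pair in RULES:
            out.append(RULES[pair])  # formal substitution, no interpretation
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

print(respond(["SQUIGGLE", "SQUOGGLE", "SQUIGGLE"]))
```

The program is exhaustively characterized by its rule table and control
structure — its syntax — which is just what premise 3 asserts; whether any
such table could ever amount to *understanding* is precisely what premise 2
denies.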

>
>Also, what does Conclusion 4 *mean*?
>
I'm at a loss. It's an English sentence (two, actually) with well-formed
subjects and predicates. I'll attempt to explicate, though I feel as if
I'm just paraphrasing. It means that it follows necessarily from the
premises that in anything we can build that has mental states (as we know
them) there must be more than a computer program at work; this is because
mental states have semantic content and computer programs don't. Thus,
by Leibniz's law, the two things cannot be equivalent.
>
>
>What I was getting at was this:
>
>Given a construct that shows behavior indistinguishable from a human
>then either:
>
>	A) it accomplishes this by an internal mechanism that is different
>	than a human
>OR
>	B) it accomplishes this by the same internal mechanism as a human.
>
>In case B) the construct is, in my mind, *necessarily* intelligent, since
>it is indistinguishable from a human functionally.  (Since at this point
>denying the construct's intelligence is denying our own.)
>
>In case A) the question is still open. To decide case A) it is necessary
>to have a clear idea of what *classes* of mechanisms count as intelligent
>and which ones do not.  

This is a common error. Searle is not determining whether it is intelligent
(under some very broad construal of that term).  He is trying to find out
whether it understands Chinese in the same way that we do; a much more
modest and manageable project. By extension, this amounts to seeing whether
it has but one of the mental states we have. If it doesn't understand, then
there's one mental state we've got that it doesn't.

-- 
Christopher D. Green                christo@psych.toronto.edu
Psychology Department               cgreen@lake.scar.utoronto.ca
University of Toronto
---------------------


