From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!olivea!uunet!tdatirv!sarima Wed Feb 26 12:53:22 EST 1992
Article 3904 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!olivea!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Reference (was re: Multiple Personality Disorder and Strong AI)
Keywords: consciousness,functionalism,meaning
Message-ID: <439@tdatirv.UUCP>
Date: 20 Feb 92 17:34:42 GMT
References: <418@tdatirv.UUCP> <1992Feb16.185120.9182@psych.toronto.edu> <426@tdatirv.UUCP> <1992Feb19.173620.10529@psych.toronto.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 60

In article <1992Feb19.173620.10529@psych.toronto.edu> christo@psych.toronto.edu (Christopher Green) writes:
|TO THE SOURCE!
|_Minds, Brains, and Science_ pp. 39-40:
|1. Brains cause minds. Now, of course, that's really too crude....
|2. Syntax is not sufficient for semantics....a conceptual truth....
|3. Computer programs are entirely defined by their formal, or syntactical
|     structure....true by definition [of a computer program]
|4. Minds have mental contents; specifically, they have semantic contents....
|     just an obvious fact about the way minds work....
|
|Conclusion 4. For any artefact that we might build which had mental states
|              equivalent to human mental states, the implementation
|              of a computer program would not by itself be sufficient.
|              Rather, the artefact would have to have powers equivalent to 
|              the powers of the human brain.
|
|
|Sounds like philosophy to me Stanley. Now could we please consider the
|claims that are actually made?

As I have already stated, I question assumptions 2 and 3.  I find that
there is no *evidence* to back them up; they are purely assumptions.
Therefore, Searle's argument relies on unvalidated assumptions and proves
nothing.

Also, what does Conclusion 4 *mean*?

What I was getting at was this:

Given a construct that shows behavior indistinguishable from a human's,
then either:

	A) it accomplishes this by an internal mechanism different from
	a human's,
OR
	B) it accomplishes this by the same internal mechanism as a human.

In case B) the construct is, in my mind, *necessarily* intelligent, since
it is functionally indistinguishable from a human.  (At this point,
denying the construct's intelligence is denying our own.)

In case A) the question is still open.  To decide case A) it is necessary
to have a clear idea of which *classes* of mechanisms count as intelligent
and which do not.  I had assumed that Searle's Chinese Room was intended
to be a case of type A.  If it is not, then he is just blowing steam, and
is not worth even talking about.
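
To make the type-A picture concrete, here is a toy sketch (in Python) of
the sort of purely formal rule-following I take the Room to involve.  The
rule table and names are my own illustration, not anything Searle
specifies -- his thought experiment assumes a complete rulebook, which no
toy table provides:

# A purely syntactic symbol shuffler.  RULEBOOK is hypothetical and
# purely illustrative; it pairs input shapes with output shapes.
RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",        # "How are you?" -> "Fine, thanks."
    "你叫什么名字?": "我叫王明.",      # "What's your name?" -> "I'm Wang Ming."
}

def room(symbols):
    # No understanding anywhere: match the shape of the input against
    # the table and copy out the paired shape, just as the person in
    # the Room matches squiggles against rulebook entries.
    return RULEBOOK.get(symbols, "请再说一遍.")    # "Please repeat that."

print(room("你好吗?"))    # a fluent-looking reply, with zero semantics

The program's behavior is fixed entirely by its formal structure (premise
3), yet whether mechanisms of this *class* could ever count as intelligent
is exactly the open question of case A.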

Now, assuming case A), I want Searle to provide observational evidence for
his basic assumptions, and a usable criterion for determining the presence
of 'causal powers'.  This would give me what I need to determine whether
a computer is capable of intelligence by his definition.  Right now I
cannot evaluate it, because I do not know the scope of system types he is
including in his various categories.  He may be defining computer and
computer program more narrowly than I do.  He may be defining
intelligence in a different way than I do.  I cannot tell.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)


