From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue Mar 24 09:54:40 EST 1992
Article 4377 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Reference (was re: Multiple Personality Disorder and Strong AI)
Organization: Department of Psychology, University of Toronto
References: <1992Feb25.182526.12698@oracorp.com> <1992Mar05.182904.13232@icmv>
Message-ID: <1992Mar10.172406.8416@psych.toronto.edu>
Date: Tue, 10 Mar 1992 17:24:06 GMT

In article <1992Mar05.182904.13232@icmv> degroff@tricorder.IntelliCorp.COM (Les Degroff) writes:

>  In this simple example we still have distinct, predictable syntax but
>if we ran it, it builds a potentially unique history. A weakness in
>believing that the "system" of the Chinese room "understands Chinese" is
>that human understanding is a "Long History" effect with a large
>sensory/memory binding set beyond the "token/symbol bindings". I in
>part believe that a "human mind" that was confined and limited to
>learning the rules of a token language would not be intelligent or
>human as a result of the "unreality of its input" and "context problem".

The "unreality of its input" and "context problem" are red herrings.
Encoding all the various types of inputs and contexts that you
want, and adding some historicity, still does not change the fact that
all manipulations remain syntactic.  See the "Robot Reply" in Searle's
original article for a discussion of this.
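To make the point concrete, here is a toy sketch (the tokens and rules
are invented for illustration, not from either post): a rule-following
responder that accumulates a history, so each run's state is potentially
unique, yet every step is still a lookup on symbol shapes alone — no
step ever consults what the tokens mean.

```python
# Toy "Chinese room": purely syntactic rule application.
# RULES maps input token strings to output token strings; the system
# matches shapes, never meanings.
RULES = {
    "ni hao": "ni hao ma",   # hypothetical token pairs for illustration
    "xie xie": "bu ke qi",
}

def respond(token, history):
    """Apply a rule to a token and record it in the running history."""
    history.append(token)          # the "Long History" accumulates here
    return RULES.get(token, "?")   # pure symbol-shape matching

history = []
reply = respond("ni hao", history)
print(reply)      # a lookup result, produced with no grasp of meaning
print(history)    # state differs across runs, but stays syntactic
```

However long `history` grows, it only ever feeds further lookups of the
same kind — which is the reply's point that historicity does not turn
syntax into semantics.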

- michael
