From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!batcomputer!cornell!rochester!cantaloupe.srv.cs.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+ Sun Dec  1 13:05:29 EST 1991
Article 1633 of comp.ai.philosophy:
Organization: Cntr for Design of Educational Computing, Carnegie Mellon, Pittsburgh, PA
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!batcomputer!cornell!rochester!cantaloupe.srv.cs.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+
Newsgroups: comp.ai.philosophy
Message-ID: <AdAd1qK00Uh7M2NfoH@andrew.cmu.edu>
Date: Tue, 26 Nov 1991 13:20:38 -0500 
From: Franklin Boyle <fb0m+@andrew.cmu.edu>
Subject: Re: Searle (was .....)

Max Webb writes:

>> 2) If the computer, in the course of its operation, developed its
>>   own representation of the environment (many programs do this - I
>>   have written one, it is no great feat) and achieved complex goals 
>>   using the representation, then (in the context of the behavior
>>   of the system) it is clear that there are features in the representation
>>   that represent features in the outside world. It is also clear that
>>   it is the functioning of the system as a whole that makes it possible
>>   for us to talk about the 'meaning' of an internal symbol to the
>>   system as a whole.

Interesting. How does the system build this representation such that
the representing entities are causal within the system?  That is, where
are the matchers (i.e., programs) that enable them to be causal so that
the system can achieve its complex goals?  Did you already have
routines in there that "knew" what they were looking for in order to
build the representation, or did it all happen spontaneously, driven by
the forms of the signals coming from the objects and structural
relationships in the environment?  It seems to me that unless it was
the latter, this is no different, with respect to an intrinsic capacity
for reference, from hand-coding the representation.  Sure, for us there
is certainly a correlation between the representing entities and things
in the environment, but that correlation is ours as observers; it does
not by itself give the symbols any intrinsic reference for the system.
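
To make the contrast concrete, here is a toy sketch of my own (it is
not a claim about how your program actually works): on the one hand a
hand-coded matcher, where the programmer has already decided what the
symbol stands for, and on the other a routine whose categories are
shaped only by the statistics of the incoming signal.

    import random

    # Hand-coded case: the routine already "knows" what it is looking for.
    def handcoded_matcher(sensor_reading):
        # The threshold and the label 'OBSTACLE' are supplied by the
        # programmer; the correlation with the world is ours, not the
        # system's.
        return "OBSTACLE" if sensor_reading > 0.5 else "CLEAR"

    # Signal-driven case: categories emerge from regularities in the input.
    def build_categories(readings, k=2, iters=20):
        # Naive 1-D k-means: the only thing shaping the category
        # boundaries is the distribution of the signals themselves.
        centers = random.sample(readings, k)
        for _ in range(iters):
            groups = [[] for _ in range(k)]
            for r in readings:
                nearest = min(range(k), key=lambda i: abs(r - centers[i]))
                groups[nearest].append(r)
            centers = [sum(g) / len(g) if g else centers[i]
                       for i, g in enumerate(groups)]
        return sorted(centers)

    if __name__ == "__main__":
        random.seed(0)
        # Simulated environment: two kinds of objects, two bands of signal.
        readings = [random.gauss(0.2, 0.05) for _ in range(50)] + \
                   [random.gauss(0.8, 0.05) for _ in range(50)]
        print(handcoded_matcher(0.9))      # meaning fixed by the programmer
        print(build_categories(readings))  # boundaries fixed by the data

Of course, even in the second case the number of categories and the
distance measure are supplied in advance by the programmer, which is
exactly the sort of worry I have in mind.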

Also, Haugeland reaches a conclusion about semantics similar to yours
("Artificial Intelligence and the Western Mind", 1989 -- sorry, I
don't have the full reference at hand).  His argument, though, depends
on the fact that the number of consistent interpretations of a
particular symbol system is highly constrained.  Unfortunately, this
still does not solve the problem of intrinsic meaning.

-Frank Boyle