Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!swrinde!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle (was .....)
Message-ID: <290@tdatirv.UUCP>
Date: 27 Nov 91 20:39:39 GMT
References: <AdAd1qK00Uh7M2NfoH@andrew.cmu.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 35

In article <AdAd1qK00Uh7M2NfoH@andrew.cmu.edu> fb0m+@andrew.cmu.edu (Franklin Boyle) writes:
|Interesting. How does the system build this representation such that 
|the representing entities are causal within the system?  That is, where
|are the matchers (i.e. programs) that enable them to be causal so that
|the system can achieve its complex goals?  Did
|you already have routines in there that "knew" what they
|were looking for in order to build the representation, or did this
|all happen spontaneously, driven by the forms of the signals from the 
|objects and structural relationships in the environment?  Seems to me
|that unless it was the latter this is no different, with respect to an
|intrinsic capacity for reference, than hand coding the representation.

Why should this be that important?  Unless you accept the 'tabula rasa'
model of human development, then even we are born with already existing
routines that 'know' what to 'look' for.  And that is indeed the view
most current neurologists and psychologists accept.  *We* are preprogrammed
to find meaning, so why should it matter if the computer is also?

It is just that the preset programming is open-ended: it is capable of
building a complex mental representation system that far exceeds its
original basis.
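
As a toy sketch of what I mean (the detectors and the association rule
below are my own invented example, in Python, not anything proposed in
this thread): the 'innate' part is a small, fixed set of feature
detectors, while the learned vocabulary of composite concepts is
open-ended and appears nowhere in the innate set.

from itertools import combinations

# Innate, hand-coded routines that 'know' what to 'look' for.
INNATE_DETECTORS = {
    "edge":   lambda scene: "edge" in scene,
    "motion": lambda scene: "motion" in scene,
    "red":    lambda scene: "red" in scene,
}

def detect(scene):
    """Apply every innate detector to a scene (a set of raw tokens)."""
    return {name for name, det in INNATE_DETECTORS.items() if det(scene)}

def learn_concepts(scenes, threshold=2):
    """Coin a composite concept for any pair of innate features that
    co-occurs at least `threshold` times.  Nothing in INNATE_DETECTORS
    lists these concepts; they are built from experience."""
    counts = {}
    for scene in scenes:
        for pair in combinations(sorted(detect(scene)), 2):
            counts[pair] = counts.get(pair, 0) + 1
    return {"+".join(pair) for pair, n in counts.items() if n >= threshold}

experience = [{"red", "motion"}, {"red", "motion", "edge"}, {"edge"}]
print(learn_concepts(experience))   # {'motion+red'}: a concept no detector encodes

The point is only that 'preprogrammed' and 'open-ended' are compatible:
the detectors never change, yet the concept vocabulary grows without
bound as experience accumulates.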

|Also, Haugeland draws a similar conclusion about semantics as you do
|("Artificial Intelligence and the Western Mind", 1989 -- sorry
|I don't have the full reference at hand).  His, though, depends on
|the fact that the number of consistent interpretations of particular
|symbol systems is highly constrained.  Unfortunately, this still does not
|solve the problem of intrinsic meaning.

Intrinsic meaning?????  What is that?  I know of no examples of any such
thing, even in humans.  All human symbol systems (a.k.a. languages) are
purely arbitrary, and *learned*.  There is nothing *intrinsic* about them.
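
To put that arbitrariness concretely (again a toy Python example of my
own, with made-up tokens and referents): nothing about a token
constrains what it refers to; a learner simply acquires whatever
binding its environment happens to supply.

# Two conventions bind different, equally arbitrary tokens
# to the same referents.
english = {"dog": "CANINE", "water": "H2O"}
french  = {"chien": "CANINE", "eau": "H2O"}

def learn(pairings):
    # Acquire whatever token -> referent bindings the environment provides.
    return {token: referent for token, referent in pairings}

# Learners raised on different conventions end up referring
# to the very same things.
assert learn(english.items())["dog"] == learn(french.items())["chien"]
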
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)


