From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usenet.coe.montana.edu!milton!forbis Mon Dec 16 11:01:26 EST 1991
Article 2072 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usenet.coe.montana.edu!milton!forbis
From: forbis@milton.u.washington.edu (Gary Forbis)
Subject: Re: Searle, again
Message-ID: <1991Dec12.203551.16694@milton.u.washington.edu>
Organization: University of Washington, Seattle
References: <4dFul6O00Uh7A37HQ7@andrew.cmu.edu>
Date: Thu, 12 Dec 1991 20:35:51 GMT

In article <4dFul6O00Uh7A37HQ7@andrew.cmu.edu> fb0m+@andrew.cmu.edu (Franklin Boyle) writes:
>There are two reasons algorithms, as manipulations of symbols, do not
>refer to objects or states of affairs in the world: the first is that
>the symbols physically do not in any way resemble the things to which 
>we hold them to refer.  The second is that even if they did resemble
>the structures of external objects, the physical process of pattern 
>matching does not transmit physical structure, hence, inputs to the
>system (say from a camera) can only *trigger* further processing.

Surely you don't mean this?

I use the symbol "house" to refer to a house, yet the symbol "house" does not
physically resemble a house.
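
To put the same point in computational terms (a toy sketch of my own, in
Python, with all the names made up): inside a program the token that stands
for a house is completely arbitrary, and nothing about how the program uses
it depends on its resembling a house.

# Toy illustration: the token standing for a house is arbitrary.
# Swapping it for any other token changes nothing about how the
# program uses it, so resemblance to the referent plays no role.

facts_about_houses = ["has a roof", "people live in it"]

bindings = {
    "house": facts_about_houses,   # the conventional English token
    "xyzzy": facts_about_houses,   # an arbitrary token works just as well
}

def recall(token):
    # Matching the input token against stored tokens merely selects
    # the associated data; no structure of any actual house is
    # transmitted by the match itself.
    return bindings.get(token, [])

print(recall("house"))   # ['has a roof', 'people live in it']
print(recall("xyzzy"))   # same output from a token bearing no resemblance at all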

The second sentence is totally opaque to me.  What do you suppose is going on
in one's mind when one sees a house?
