From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!mips!dimacs.rutgers.edu!rutgers!rochester!cantaloupe.srv.cs.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+ Mon Dec 16 11:00:38 EST 1991
Article 1986 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!mips!dimacs.rutgers.edu!rutgers!rochester!cantaloupe.srv.cs.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+
From: fb0m+@andrew.cmu.edu (Franklin Boyle)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle, again
Message-ID: <wdEw6pa00iUzE2j7Qr@andrew.cmu.edu>
Date: 9 Dec 91 19:19:17 GMT
Organization: Cntr for Design of Educational Computing, Carnegie Mellon, Pittsburgh, PA
Lines: 19

Mark Rosenfelder writes:

> Well, the Room (or any AI) could have representations of the same kind
> of information--in fact, it had better, if it is to be considered
> intelligent.
> The fact that the Room's ability to deal with the external world is 
> ultimately a bunch of computer instructions is no more significant than
> the fact that our ability to do the same thing is ultimately a bunch of
> neurochemistry.

I think you're comparing two different levels here.  Computer instructions
are at the level of patterns or extended structures, which are informational.
Neurochemistry is a bit below this.  A more apt comparison would be with,
for example, the retinotopically mapped response to a visual stimulus in
the visual cortex.

As for its being "no more significant", I very much doubt it.

-Frank
