From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!cs.utexas.edu!csc.ti.com!tilde.csc.ti.com!fstop.csc.ti.com!ra!rowlands Mon Dec 16 11:01:33 EST 1991
Article 2085 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!cs.utexas.edu!csc.ti.com!tilde.csc.ti.com!fstop.csc.ti.com!ra!rowlands
From: rowlands@ra.csc.ti.com (Jon Rowlands)
Subject: Re: Searle and the Chinese Room
Message-ID: <1991Dec13.041902.28104@csc.ti.com>
Sender: rowlands@ra (Jon Rowlands)
Nntp-Posting-Host: ra
Organization: Texas Instruments SPDC
Date: Fri, 13 Dec 1991 04:19:02 GMT

In article 2994 of comp.ai.philosophy chalmers@bronze.ucs.indiana.edu
	(David Chalmers) writes:
> From Searle, "Minds and Brains without Programs", in (Blakemore/Greenfield,
> eds.) _Mindwaves_, p. 231.  (I've changed the order of the axioms, but
> that's all.)
> 
> (1) Programs are defined purely formally, or syntactically.
> (2) Minds have mental contents; specifically, they have semantic contents.
> (3) Syntax is not sufficient for semantics.
> (4) Therefore instantiating a program is never sufficient by itself for
>     having a mind.

Axiom 2 in this argument seems far from obvious. It is necessarily based
on introspection, i.e. on what my own mind believes about itself. When we
sense our own thoughts, we don't feel the patterns of neurons firing,
chemicals squirting and gears clunking. We feel just the thoughts, the
underlying processes remaining hidden ( :( ). If we *were* just pushing
symbols around in our brains, like our hapless pal in the Chinese room,
would we know it?

It seems to me possible that semantics are not inherent in ANY system.
They can be *said* to be present when there is a rich enough mapping between
features of a system and features of the "real world". Our own minds form
features that map to the real world, because that is the purpose for which
our brains evolved. Similarly, it seems reasonable that we form features,
which we call our thoughts and memories, that map to the introspectible
processes in our brains. But they are no more the processes themselves than
our concept of a chair is a "real" chair, nor are they necessarily any more
accurate as descriptions.

In short, I believe the statement that our minds have "semantic content" to
be circular and meaningless (how's that for irony?) and maybe a little smug.
Searle is just defining "semantic content" to be whatever he observes in his
own mind. Any simulation of his mind would observe the same thing in itself,
although, interestingly, it would probably deny it of Searle.

Thanks for listening.

Jon
-- 
  _  ,                                               _
 / `- \ Jon Rowlands         phone: 1-214-995-3436 _| "--_ People say I sound
 \_--_/ rowlands@ra.csc.ti.com fax: 1-214-995-0304 \_  __/ like a corporation,
     ~ `-> Texas Instruments CSC, Dallas, TX <-?!-'  \_|  but I ain't no body.