From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!cannelloni.cis.ohio-state.edu!chandra Thu Jan 16 17:21:36 EST 1992
Article 2704 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!cannelloni.cis.ohio-state.edu!chandra
From: chandra@cannelloni.cis.ohio-state.edu (B Chandrasekaran)
Subject: Re: Semantics of thoughts
In-Reply-To: yee@envy.cs.umass.edu's message of 14 Jan 92 15:58:04 GMT
Message-ID: <CHANDRA.92Jan14132526@cannelloni.cis.ohio-state.edu>
Originator: chandra@cannelloni.cis.ohio-state.edu
Sender: news@cis.ohio-state.edu (NETnews        )
Organization: Ohio State Computer Science
References: <41719@dime.cs.umass.edu>
Date: Tue, 14 Jan 1992 18:25:26 GMT
Lines: 43

Richard Yee (yee@cs.umass.edu) says:

The difference between formal and semantic processing is profound: In
the former case, basic symbols cannot *represent* (literally
re-present) anything... to the processor.  In the latter case, they
can.  Given a symbol, two semantic processors must agree as to its
formal properties, but they may differ, to a greater or lesser extent,
as to its associations or content.  In semantically processing true
re-presentations (as contrasted with formal tokens), each step holds
the possibility of interpreting the basic symbols---using them to form
connections with subjective information.  This can yield inferences not
derivable solely from the intrinsic properties of the manipulated
symbols.  The point is that such interpretations and inferences are
available *within the processor itself*.  A formal symbol processor has
no such leverage with regard to its basic symbols: all additional
interpretation and inferencing, i.e., all additional *semantics*, must
lie elsewhere (e.g., in an external agent's use of "wishful mnemonics"
:-)

Me: The above seems on the right track to me, and it is also related
to the idea that conceptual symbols are anchored on perceptual
"symbols" which are in fact processed by a special purpose (i.e.,
modality-specific) machine, which is symbolic in many interesting
senses of the term, but not in the sense of a UTM.  Hari Narayanan and
I have been working on an image-representation and manipulation
approach along the above lines, which resolves the paradox about
whether reasoning with images is done "propositionally" or in some
"picture-like" fashion.  The bottom line for the discussion on the
Chinese Room is that we can think of an alternative to Searle's rule
book, an alternative in which most of the intermediate symbol
structures have a natural interpretation as images corresponding to
intermediate semantic structures that are built as the sentence is
being understood.  

Having said this, I should hasten to add that I am not entirely sure
that this eliminates the need for nonperceptual anchoring of
some symbols.  That is, I am not sure it can be argued that *all*
concepts are anchored only on perceptual and sensory-motor symbols.
It is possible that "noumenal" perceptions will remain which will
be outside the sensory and motor domains, such perceptions requiring
additional "semantics" which we don't know how to give computers (yet).
