From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!qt.cs.utexas.edu!cs.utexas.edu!rutgers!hsdndev!husc-news.harvard.edu!zariski!zeleny Tue Nov 26 12:32:51 EST 1991
Article 1623 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca rec.arts.books:10700 sci.philosophy.tech:1142 comp.ai.philosophy:1623
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!qt.cs.utexas.edu!cs.utexas.edu!rutgers!hsdndev!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: rec.arts.books,sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Searle
Summary: reductive materialism fails again
Message-ID: <1991Nov26.105451.5918@husc3.harvard.edu>
Date: 26 Nov 91 15:54:49 GMT
References: <MATT.91Nov24000158@physics.berkeley.edu> <1991Nov24.195230.5843@husc3.harvard.edu> <1991Nov26.011950.1658@hilbert.cyprs.rain.com>
Followup-To: sci.philosophy.tech,comp.ai.philosophy
Organization: Dept. of Math, Harvard Univ.
Lines: 122
Nntp-Posting-Host: zariski.harvard.edu

(I've removed rec.arts.books from the followup line)

In article <1991Nov26.011950.1658@hilbert.cyprs.rain.com> 
max@hilbert.cyprs.rain.com (Max Webb) writes:

>In article <1991Nov24.195230.5843@husc3.harvard.edu> 
>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

MZ:
>>A symbol is an iconic or a substitutive sign, something that stands for
>>something else.  A C function is a symbol standing for an assembly language
>>algorithm, and, eventually, for a sequence of machine language instructions,
>>in virtue of your system's compilers.  Pray tell, what part of the computer
>>hardware or software could make it stand for something outside the machine,
>>as signs used by humans stand for things in virtue of their meanings?

MW:
>1) a C function does NOT denote machine language instructions. Otherwise
>   the concept of multiply-targeted C compilers would have no meaning.

Please don't put words in my mouth: I never said that it did.  Also note
that the Scott-Strachey conception of denotation, in spite of coming out of
the same Frege-Church school as my argument, doesn't apply here; if you
want to get a clue of what does, take a look at Girard's "Proofs and Types"
(Cambridge Tracts in Theoretical Computer Science 7).

MW:
>   You are discussing the semantics of programming languages, which you
>   (apparently) have never studied. How *ignorant* of you (to paraphrase
>   your insult of another poster).

Feel free to jump to any sort of conclusions you wish; at the same time, if
you wish to educate yourself on the difference between what passes for
semantics of programming languages, and genuine formal semantics, see the
book referenced above.

MW:
>2) If the computer, in the course of its operation, developed its
>   own representation of the environment (many programs do this - I
>   have written one, it is no great feat) and achieved complex goals 
>   using the representation, then (in the context of the behavior
>   of the system) it is clear that there are features in the representation
>   that represent features in the outside world. It is also clear that
>   it is the functioning of the system as a whole that makes it possible
>   for us to talk about the 'meaning' of an internal symbol to the
>   system as a whole.

You are assuming a ready-made notion of representation (which, of course,
arises only in virtue of your interpretation); this clearly won't do, as my
challenge was to give a description of a representational function in terms
of not what is clear to you, but of what is inherent in the construction of
the machine.  Can you say "jumping to conclusions"?

MW:
>Also, let's try this on humans and see if it is a fair question there:

MZ:
>>compilers.  Pray tell, what part of the human hardware or software could
>>make [sign] stand for something outside the human,...?

MW:
>Answer: there is no part of the human hardware that you can look at
>and say, because of this, 'gabi' maps to "late evening".
>You would (if you didn't already know Tagalog) have to analyse the
>behavior (including speech acts) of the human to determine that. Here,
>as with the machine, it is the behavior of the machine that supplies
>the context within which the phrase "meaning of internal representations"
>itself has meaning. (can you say "distributed representation"?)

It is precisely because, unlike you, I am not limited to the mechanistic
view of the human mind that I can give a successful account of abstract mental
structures.  You, on the other hand, in virtue of your claim of being able
to build a machine that represents its environment, appear to champion,
however unwittingly, reductive materialism.  So the burden of proof falls
squarely on your frail shoulders.  The generous lad that I am, I'll provide
you with an escape hatch: feel free to claim that the requisite abstract
semantical structures arise spontaneously from the machine's behavior,
unembodied in the physical stratum of its construction.  For this is, in
effect, what you are claiming above.

MW:
>Your question is not fair, because it assumes that all features of
>the functioning of a machine must each be represented by some separate
>bit of hardware. Not true. I can call the ability to pursue your
>desires diligently in the face of opposition "will" - that doesn't mean that
>there is a "will center" in the brain, which, when destroyed renders
>the person an obedient zombie. "Will" is a feature of the system's behavior
>as a whole, and attempts to find it by analyzing individual neuronal
>synapses, or even sections of cortex, would be very silly. Ever heard
>of "reification"? Methinks you take your rhetoric way too seriously.

The dual-aspect theory of will, which in no way relies on assuming the
existence of a "will center" in the brain, is rather well-developed, thanks
to Brian O'Shaughnessy; I refer you to his book "The Will", published by
Cambridge University Press, for further information; the dual-aspect
semantic theory is what I am proposing at this time.  Finally, a word on the
subject of ignorance, which you so conveniently raised above.  Note that we
are conducting this discussion in two philosophy groups; try to appreciate
that the discipline of philosophy is in no way limited to the sort of
crass, unreflective, inarticulate materialism that you are pushing as your
party line.  Furthermore, there are people that articulate your platform
fare more eloquently than you do.  So go back and read the Churchlands, or
some other such trash, and come back prepared to argue, rather than
cavalierly dismiss the opposing views, simply because they contradict the
prejudices you accumulated in the course of your engineering experience.

>	Max

'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: Qu'est-ce qui est bien?  Qu'est-ce qui est laid?         Harvard   :
: Qu'est-ce qui est grand, fort, faible...                 doesn't   :
: Connais pas! Connais pas!                                 think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139                                     :
: (617) 661-8151                                                     :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'