From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcrware!grayhawk!siproj Tue Feb 11 15:25:08 EST 1992
Article 3531 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:3531 sci.philosophy.tech:2071
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcrware!grayhawk!siproj
From: siproj@grayhawk.rent.com (D. R. Arthur)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Is understanding algorithmic?
Message-ID: <1992Feb6.031947.1170@grayhawk.rent.com>
Date: 6 Feb 92 03:19:47 GMT
References: <6523@pkmab.se> <1992Jan28.122457.8161@husc3.harvard.edu> <6537@pkmab.se>
Organization: grayhawk; Des Moines, Iowa public access unix; 515/277-6753
Lines: 55

In article <6537@pkmab.se> ske@pkmab.se (Kristoffer Eriksson) writes:
>In article <1992Jan28.122457.8161@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:
>
>>on one hand ... purely denotative signs ... one would be forced to postulate
>>a causal relation in virtue of which these signs denote, stipulating that
>>this causal relation is itself entirely immanent in nervous activity, in
>>direct contradiction to the fact that our language, allegedly founded
>>solely on such nervous activity, has no trouble referring to objects and
>>phenomena that occur outside of the latter.
>
>>On the other hand ... connotative signs, ... Now, as I have argued elsewhere
>>on the Putnam thread, it's well known that intensions, once admitted, bring
>>in a transfinite hierarchy thereof; in other words, on the connotative theory,
>>reference depends on the grasp of (and, under the reductive materialist
>>assumption, physical embodiment of) meanings, which depend on meanings of
>>meanings, which in turn depend on meanings of meanings of meanings, and so
>>on.
>
>I assume this is meant to say that a finite state automaton can not possibly
>refer, and that the brain is not a finite state automaton, and that this is
>the conclusion that proves that the machines in Mark's example do not refer?

First of all, let's look at the finite state automaton itself: essentially
we are discussing a stream of state entries.  An entry can be quantitative
or qualitative.  The assumption that the states are purely quantitative is
limiting and basically untrue.  So a state is either a qualitative
assertion or a quantitative relationship.
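
To make that concrete, here is a minimal sketch in C (all the names and
values are invented for illustration) of a state that carries either a
qualitative assertion or a quantitative relationship, rather than a bare
number:

    #include <stdio.h>

    enum state_kind { QUALITATIVE, QUANTITATIVE };

    struct state {
        enum state_kind kind;
        union {
            const char *assertion;            /* a qualitative assertion   */
            struct { double a, b; } relation; /* a quantitative relation   */
        } content;
    };

    int main(void)
    {
        struct state s1, s2;

        s1.kind = QUALITATIVE;
        s1.content.assertion = "edge present in visual field";

        s2.kind = QUANTITATIVE;
        s2.content.relation.a = 0.7;  /* e.g. firing rate of input A */
        s2.content.relation.b = 0.3;  /* e.g. firing rate of input B */

        if (s1.kind == QUALITATIVE)
            printf("state asserts: %s\n", s1.content.assertion);
        if (s2.kind == QUANTITATIVE)
            printf("state relates: %g vs %g\n",
                   s2.content.relation.a, s2.content.relation.b);
        return 0;
    }

The point of the tagged union is just that the machine's states need not
be bare numbers; each one can carry an assertion or a relationship.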

So far this is well within the scope of our understanding of the
biochemical nature of the cerebral cortex.

So to refer, a state must assert a relationship... this is definitely the
case for neurons connected to a multitude of other neurons through many,
many synapses, which are the junctions of these assertions.  A finite state
exists, and in this context it also refers.  What's the problem with this?
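
As a rough sketch of that picture, assuming an invented weight vector and
firing threshold (a simple threshold unit, which is only one way to read
the neuron-as-automaton idea), a two-state neuron might look like this:

    #include <stdio.h>

    #define N_SYNAPSES 4

    enum neuron_state { QUIET, FIRING };

    /* One step of the automaton: sum the weighted synaptic inputs (a
     * quantitative relationship) and compare against the threshold (a
     * qualitative assertion: "enough input arrived"). */
    static enum neuron_state step(const double w[], const double in[],
                                  double threshold)
    {
        double sum = 0.0;
        int i;

        for (i = 0; i < N_SYNAPSES; i++)
            sum += w[i] * in[i];
        return (sum >= threshold) ? FIRING : QUIET;
    }

    int main(void)
    {
        /* Weights and inputs invented for illustration only. */
        double w[N_SYNAPSES]  = { 0.5, -0.2, 0.8, 0.1 };
        double in[N_SYNAPSES] = { 1.0,  1.0, 1.0, 0.0 };
        enum neuron_state s = step(w, in, 1.0);

        printf("neuron is %s\n", s == FIRING ? "FIRING" : "QUIET");
        return 0;
    }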

The algorithmic nature of mind is not as limiting as some cognoscenti
claim; they lead the sheep of the AI groups astray by presenting their own
limited assertions as though they were fact.  In my example, it is
debatable just how far a neuron goes toward being a finite state automaton,
but it is hard to refute that its states are qualitative assertions or
quantitative relationships that lead to further references, and that the
resulting hierarchy of references is definitively limited in scope by the
degree of cross-connection typical of neurons in the frontal lobes of
higher-order primates.

Please point out where I may need to clarify the above; to me, many parts
of the puzzle are just too obvious and therefore get passed over.



-- 
 /----------------------------/
 / Chronocide - Euthanasia    /   siproj@grayhawk.rent.com
 /             for boredom?   /   Is that European or African?
 /----------------------------/


