From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!wupost!darwin.sura.net!Sirius.dfn.de!math.fu-berlin.de!news.netmbx.de!unido!mcsun!news.funet.fi!sunic!seunet!kullmar!pkmab!ske Wed Feb  5 11:56:47 EST 1992
Article 3456 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:3456 sci.philosophy.tech:2032
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!wupost!darwin.sura.net!Sirius.dfn.de!math.fu-berlin.de!news.netmbx.de!unido!mcsun!news.funet.fi!sunic!seunet!kullmar!pkmab!ske
From: ske@pkmab.se (Kristoffer Eriksson)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Is understanding algorithmic?
Message-ID: <6537@pkmab.se>
Date: 1 Feb 92 05:54:35 GMT
References: <1992Jan26.014607.8073@husc3.harvard.edu> <6523@pkmab.se> <1992Jan28.122457.8161@husc3.harvard.edu>
Organization: Peridot Konsult i Mellansverige AB, Oerebro, Sweden
Lines: 55

In article <1992Jan28.122457.8161@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:
>>This is merely an assertion, not the application of a proof that Mark
>>Corscadden asked for.
>
> Well, here goes.

Thank you. I have not been following this thread until now.

> *aliquid stat pro aliquo*.

I don't know Latin. English took the place of Latin long before I started
school.

>on one hand ... purely denotative signs ... one would be forced to postulate
>a causal relation in virtue of which these signs denote, stipulating that
>this causal relation is itself entirely immanent in nervous activity, in
>direct contradiction to the fact that our language, allegedly founded
>solely on such nervous activity, has no trouble referring to objects and
>phenomena that occur outside of the latter.

>On the other hand ... connotative signs, ... Now, as I have argued elsewhere
>on the Putnam thread, it's well known that intensions, once admitted, bring
>in a transfinite hierarchy thereof; in other words, on the connotative theory,
>reference depends on the grasp of (and, under the reductive materialist
>assumption, physical embodiment of) meanings, which depend on meanings of
>meanings, which in turn depend on meanings of meanings of meanings, and so
>on.

You consider two ways of viewing signs, and for each you conclude that it
doesn't work. It appears to me that you have not ruled out one simple
solution that immediately suggested itself to me: use them both! They seem
to complement each other, such that each takes care of the other one's
problems, at least the ones you presented.

Suppose the meaning of connotative signs eventually terminates in denotative
signs, which in turn stand in a causal relation to attributes of the external
world through our senses. Through our senses, we can have denotative signs
referring to simple attributes of the external world, such as colors, shapes,
sounds, and touch, which are well suited as a foundation for connotative
signs (thereby solving the problems of connotative signs). Connotative signs
can then, in essence, use these denotative signs to describe any imaginable
kind of referent, whether ever directly experienced or not (thereby solving
the problems of denotative signs).
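The two-layer picture above can be sketched as a toy structure (all names
and the resolution procedure are hypothetical illustrations of my proposal,
nothing from the thread): connotative signs are defined in terms of other
signs, and every chain of definitions is assumed to bottom out in denotative
signs tied causally to simple sensory attributes.

```python
# Denotative signs: grounded directly in simple sensory attributes.
DENOTATIVE = {"red", "round", "sweet"}

# Connotative signs: defined by reference to other signs, denotative
# or connotative. (Hypothetical example entries.)
CONNOTATIVE = {
    "apple": ["red", "round", "sweet"],
    "fruit": ["apple"],  # a connotative sign built on another one
}

def grounding(sign, seen=None):
    """Resolve a sign to the set of denotative signs it ultimately
    rests on. The recursion terminates because, on the hypothesis in
    the text, every definitional chain ends in a denotative sign."""
    if seen is None:
        seen = set()
    if sign in seen:          # guard against circular definitions
        return set()
    seen.add(sign)
    if sign in DENOTATIVE:
        return {sign}
    base = set()
    for part in CONNOTATIVE.get(sign, []):
        base |= grounding(part, seen)
    return base

print(grounding("fruit"))  # resolves through "apple" to sensory primitives
```

The point of the sketch is only that the regress of meanings stops: every
lookup ends in the grounded base set instead of meanings of meanings of
meanings, and so on.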

>... On the assumption that the brain, like a computer, is a finite state
>automaton, this amounts to a reductio ad absurdum of materialist semantics.

I assume this is meant to say that a finite state automaton cannot possibly
refer, that the brain therefore is not a finite state automaton, and that this
is the conclusion that proves that the machines in Mark's example do not refer?

-- 
Kristoffer Eriksson, Peridot Konsult AB, Hagagatan 6, S-703 40 Oerebro, Sweden
Phone: +46 19-13 03 60  !  e-mail: ske@pkmab.se
Fax:   +46 19-11 51 03  !  or ...!{uunet,mcsun}!mail.swip.net!kullmar!pkmab!ske
