From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!samsung!emory!gwinnett!depsych!rc Fri Jan 31 10:27:08 EST 1992
Article 3280 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!samsung!emory!gwinnett!depsych!rc
From: rc@depsych.Gwinnett.COM (Richard Carlson)
Newsgroups: comp.ai.philosophy
Subject: Finitude and Foundations
Message-ID: <D0gBFB2w164w@depsych.Gwinnett.COM>
Date: 29 Jan 92 15:08:48 GMT
Lines: 54

Mikhail Zeleny writes:
>On the other hand, should one assume that neural pulses are
>connotative signs, which refer by virtue of expressing an
>intensional meaning, then such meanings, by the above observation,
>must be entirely captured in the physical states of the brain.
>Now, as I have argued elsewhere on the Putnam thread, it's well
>known that intensions, once admitted, bring in a transfinite
>hierarchy thereof; in other words, on the connotative theory,
>reference depends on the grasp of (and, under the reductive
>materialist assumption, physical embodiment of) meanings, which
>depend on meanings of meanings, which in turn depend on meanings
>of meanings of meanings, and so on.  For at each intensional level
>it is reasonable to interpret the concept as yet another sign,
>asking what is the factor in virtue of which it succeeds in
>referring to an object; in other words, it does us no good to
>argue that in practice a brain or a computer only uses a finite
>initial segment of the intensional hierarchy, for the question of
>the nature of reference will only reappear on the highest admitted
>level thereof.  On the assumption that the brain, like a computer,
>is a finite state automaton, this amounts to a reductio ad
>absurdum of materialist semantics.  Moreover, as is well-known,
>classical model-theoretic semantics is incapable of fully
>characterizing reference, and ipso facto it is incapable of
>sufficiently constraining any derived operational criteria that
>purport to implement the AI notion of success of reference.

This sounds like a description of semiotics from a
post-structuralist point of view.  Wasn't it the unending nature
of the chain of signifieds ("connotations" or semantic elements or
"semes") which led to the view of semantics as inherently
non-foundational?  But "unending" doesn't necessarily imply
"infinite."  The chains of connotations are presumed to be
circular, referring back to clusters of semes which share basic
semantic similarities.  So all constructs have, for example, some
semes (connotations, whatever) referring to how Good vs. Bad they
are, or how Strong vs. Weak they are, or how Active vs. Passive
they are, and so on.
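The circularity claimed above can be sketched concretely (a toy
model, not any actual semiotic formalism; all names and mappings
here are invented for illustration): represent each construct as
an entry whose connotations eventually point back into a shared
cluster of basic semes, so following the chain is "unending" yet
confined to a finite cycle rather than an infinite regress.

```python
# Hypothetical sketch: connotation chains that loop back into a
# cluster of basic semes, so "unending" traversal stays within a
# finite, circular structure.  All entries are invented examples.

# Each construct maps to the semes (connotations) it evokes.
connotations = {
    "hero":   ["good", "strong", "active"],
    "good":   ["strong"],    # basic semes connote one another...
    "strong": ["active"],
    "active": ["good"],      # ...closing the circle
}

def follow_chain(construct, steps):
    """Follow the first connotation at each step; the path is
    unending but revisits a finite cycle of basic semes."""
    path = [construct]
    for _ in range(steps):
        path.append(connotations[path[-1]][0])
    return path

print(follow_chain("hero", 6))
# -> ['hero', 'good', 'strong', 'active', 'good', 'strong', 'active']
```

The point of the sketch is only that a finite structure can
support chains of arbitrary length, which is what the circular
view of connotation requires.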

Why does this *post*-structuralist (non-foundationalism puts the
"post" into post-structuralist) view fail along with the
model-theoretic semantics to which it is almost a classic
antithesis?  If AI theorists started playing around with these
post-structuralist and semiotic notions, it seems to me that their
natural ingenuity would quickly operationalize them -- with things
like branching tree models (except that the "tree" would look as
if its branches were bending over and touching the tree's roots)
or even using masses of naive respondent raters, the way Osgood
got his measurements of semantic space.
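Osgood's measurement technique, at least, is straightforward to
operationalize: have naive respondents rate a concept on bipolar
scales (good-bad, strong-weak, active-passive) and average the
ratings to place the concept in a low-dimensional semantic space.
A toy sketch, with all ratings invented for illustration:

```python
# Toy sketch of an Osgood-style semantic differential: raters score
# a concept on bipolar 7-point scales (-3..+3), and averaging places
# the concept in a three-dimensional semantic space corresponding to
# Evaluation, Potency, and Activity.  All ratings are invented.

SCALES = ("good-bad", "strong-weak", "active-passive")

def semantic_position(ratings):
    """Average per-scale ratings (each -3..+3) across raters."""
    n = len(ratings)
    return {scale: round(sum(r[i] for r in ratings) / n, 2)
            for i, scale in enumerate(SCALES)}

# Three hypothetical raters scoring one concept:
raters = [(3, 1, 0), (2, 2, 1), (3, 0, -1)]
print(semantic_position(raters))
# -> {'good-bad': 2.67, 'strong-weak': 1.0, 'active-passive': 0.0}
```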

--
Richard Carlson        |    rc@depsych.gwinnett.COM
Midtown Medical Center |    {rutgers,ogicse,gatech}!emory!gwinnett!depsych!rc
Atlanta, Georgia       |
(404) 881-6877         |
