From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!uwm.edu!linac!att!princeton!phoenix.Princeton.EDU!harnad Mon May 25 14:07:31 EDT 1992
Article 5881 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!uwm.edu!linac!att!princeton!phoenix.Princeton.EDU!harnad
From: harnad@phoenix.Princeton.EDU (Stevan Harnad)
Subject: Can Abstract Categories Be Grounded?
Message-ID: <1992May24.172516.15226@Princeton.EDU>
Summary: The alleged "vanishing intersections" problem
Originator: news@ernie.Princeton.EDU
Keywords: horse, stripes, zebra, goodness, truth, beauty
Sender: news@Princeton.EDU (USENET News System)
Nntp-Posting-Host: phoenix.princeton.edu
Organization: Princeton University
Date: Sun, 24 May 1992 17:25:16 GMT
Lines: 188

There has been discussion about whether/how a category such as
"bachelor" can be grounded, and there has been some reference to
philosophical objections to sensory grounding.

The following passage is quoted from: 

Harnad, S. (1992) Connecting Object to Symbol in Modeling Cognition.
In: A. Clarke and  R. Lutz (Eds) Connectionism in Context. Springer Verlag.
[whole article retrievable by anonymous ftp from host princeton.edu
directory pub/harnad filename harnad92.symbol.object]

-------------------------------------------------------
Philosophical Objections to Bottom-Up Grounding of Concrete and
Abstract Categories

Now philosophers are fond of raising 300-year-old objections to this
kind of bottom-up proposal. It is supposed to be doomed to failure for
the same reason that the entire empiricist program of grounding
thinking in sense experience failed -- because, in a nutshell, most
abstract categories (e.g., goodness, truth, beauty, even games) do not
have any shared invariants, sensory or otherwise. Moreover, a zebra is
not a striped horse! [The striped-horse example was used in this paper
and in Harnad 1990.]

Perhaps this is not the place to fight this particular battle, but let
it be noted that the feasibility of grounding symbols in robotic
capacities has never really been tested. Philosophers have
concluded that sensory grounding was a dead end from the vantage point
of their armchairs, based on introspecting about the definitions and
sensory properties of abstract categories. Wittgenstein (1953), for
example, concluded that, because he could find no common properties
among games, such invariants did not exist, and that we must
therefore categorize games on the basis of vague "family resemblances."

The picture is quite different if one adopts a roboticist's stance
(and, paradoxically, this can already be discerned from the armchair),
for the roboticist asks: What is it that people can actually sort and
label, reliably and "correctly," as "games" and "nongames," and how
might they be accomplishing that? We can already eliminate the cases
that people cannot sort, or cannot agree upon. We can forget about what
a game is "really," "sub specie aeternitatis": A roboticist is just
modeling performance capacity, not ontology. But among those cases that
people can and do sort and label reliably and "correctly," the
roboticist is quite justified in assuming that either the success is
grounded directly in sensory invariants (as in the hypothetical case of
"horse") or it is recursively grounded in labels that are grounded in
labels, etc., that are directly grounded (as in the case of "zebra").
Otherwise the robot's success in sorting and labeling would be
completely inexplicable -- for it certainly could not be hanging from a
skyhook of ungrounded symbolic representations.

FOOTNOTE
   The anti-empiricist objections can be summarized as follows: For most
   categories, necessary and sufficient conditions for category
   membership, and especially sensory ones, simply do not exist. The
   evidence for this is that we are not aware of using any, and when we
   think about what they might be, we can't think of any. In addition,
   categories are often graded or fuzzy, membership being either a matter
   of degree (Zadeh 1965) or even uncertain or arbitrary in some cases.
   Sensory invariants are even less likely to exist: The intersection of
   all the properties of the sensory projections of the members of the
   category "good" is surely empty. Moreover, sensory appearances are
   often deceiving, and rarely if ever decisive: A painted horse that
   looks just like a zebra is still not a zebra. The roboticist's reply is
   that introspection is unlikely to reveal the mechanisms underlying our
   robotic and cognitive capacities, otherwise the empirical task would be
   much easier. Disjunctive, negative, conditional, relational, polyadic,
   and even constructive invariants (in which the input must undergo
   considerable processing to extract the information inherent in it) are
   just as viable, and sensory-based, as the simple, monadic, conjunctive
   ones that introspection usually looks for. There are graded categories
   like "big," in which membership is relative and a matter of degree, but
   there are also all-or-none categories like "bird," for which invariants
   exist. There may be cases of "bird" we're not sure about, but we're not
   answerable to God's omniscience about what's what, only to the
   consequences of miscategorization insofar as they exist and matter to
   us. And it's our successful categorization performance that a robotic
   model must be able to capture -- including our capacity to revise our
   provisional, approximate category invariants in the face of error. As
   to goodness, truth and beauty: There is no reason to doubt that --
   insofar as they are objective rather than subjective categories -- they
   too are up there somewhere, firmly grounded in the zebra hierarchy,
   just as the "peekaboo unicorn" is: The peekaboo unicorn is "a horse
   with a horn that vanishes without a trace whenever senses or measuring
   instruments are trained on it." Unverifiable in principle, this
   category is nevertheless as firmly grounded (and meaningful) as "zebra"
   -- as long as "horse," "horn," "vanish," "trace," "senses" and
   "measuring instrument" are grounded. And we could identify its members
   on first encounter -- if we ever could encounter them -- as surely as
   we could identify a zebra. The case of the painted horse and of
   goodness, truth and beauty is left to the reader as an exercise in
   exploring the recursive possibilities of grounded symbols.
END FOOTNOTE
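The recursive grounding scheme described above -- labels grounded in
labels that are ultimately grounded directly in sensory invariants --
can be sketched as a toy program. The two-table data structure and all
of the symbol names below are illustrative assumptions, not anything
from the paper:

```python
# Toy sketch of recursive symbol grounding. A symbol is either
# grounded directly (stands for a sensory invariant detector) or
# defined in terms of other symbols; it counts as grounded iff its
# definition bottoms out entirely in directly grounded symbols.
# All names here are illustrative, not from the original paper.

DIRECT = {"horse", "stripes", "horn", "vanish"}  # directly sensory-grounded

DEFINED = {
    "zebra": ["horse", "stripes"],              # "zebra" = striped horse
    "unicorn": ["horse", "horn"],
    "peekaboo-unicorn": ["unicorn", "vanish"],  # unverifiable, yet grounded
}

def is_grounded(symbol, seen=None):
    """True iff the symbol recursively bottoms out in DIRECT."""
    seen = set() if seen is None else seen
    if symbol in seen:            # guard against circular definitions
        return False
    if symbol in DIRECT:
        return True
    parts = DEFINED.get(symbol)
    if parts is None:             # an ungrounded "skyhook" symbol
        return False
    return all(is_grounded(p, seen | {symbol}) for p in parts)
```

On this sketch the peekaboo unicorn is grounded, just as the footnote
argues: every symbol in its definitional chain reaches a direct
detector, even though no member will ever be observed.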

So, on the assumption that the viability of this bottom-up robotic
grounding scheme is an empirical question rather than an a priori one
that has already been decided, let us examine more closely how it might
be implemented and tested: The crucial component that is still missing
is the learning mechanism that will find the invariants in the sensory
projections of objects that will allow the robot to identify what
category they belong to. Here is a function for which neural nets are a
natural candidate. Whether or not they are brainlike, whether or not
they are symbolic, and whether or not they have the power to do other
things entirely on their own, neural nets seem well-suited to the task
of sensory category learning. Whether they will have sufficient
learning power to accomplish human-scale category learning is of course
likewise an empirical question, but this certainly seems worth
exploring. [end of quotation from this chapter]
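As a deliberately minimal illustration of the kind of sensory category
learning the passage describes, here is a toy perceptron that learns
an all-or-none category from labeled "sensory projection" vectors. The
feature set, the category, and the training data are all invented for
illustration; nothing here is from the paper:

```python
# Minimal sketch: a single-layer perceptron finding a sensory
# invariant from labeled examples. The hidden invariant here is
# conjunctive (mane AND stripes), but the learner is shown only
# labeled feature vectors, never the rule itself.

def train_perceptron(samples, epochs=50, lr=0.1):
    """samples: list of (feature_vector, label) pairs, label 0 or 1."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# features: (has_mane, has_stripes, has_horn); category: mane AND stripes
data = [((1, 1, 0), 1), ((1, 0, 0), 0), ((0, 1, 0), 0),
        ((0, 0, 1), 0), ((1, 1, 1), 1), ((0, 0, 0), 0)]
w, b = train_perceptron(data)
```

A conjunctive invariant is linearly separable, so this toy learner
converges; human-scale category learning would of course demand far
more powerful nets, which is exactly the empirical question at issue.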

Harnad, S. (1990) The Symbol Grounding Problem.
Physica D 42: 335-346.

Wittgenstein, L. (1953) Philosophical investigations. New York: Macmillan.

Zadeh, L. A. (1965) Fuzzy sets.  Information & Control 8: 338-353.

---------------------------------------------------
The following passage is quoted from: 

Harnad, S. (1987) The induction and representation of categories.
In: Harnad, S. (ed.) Categorical Perception: The Groundwork of
Cognition. New York: Cambridge University Press.
[retrievable by anonymous ftp from host princeton.edu directory
pub/harnad filename harnad87.categorization]

FOOTNOTE:
   It seems to be a point of logic rather than one of theoretical
   preference that if a categorizer is able to perform error-free
   categorization then that performance must be based on detecting and
   using some set of features that is SUFFICIENT to serve as a basis for
   the successful categorization (though not necessarily "necessary" or
   exhaustive, for, especially with underdetermination, there might be
   other features that would suffice too).
   
   The putative alternatives to the "classical"
   necessary/sufficient-features approach to categorization -- originating
   with Rosch (Rosch & Lloyd 1978) and attributed to Wittgenstein (1953)
   -- seem to be based on confusions among the following additional (and
   independent) factors:
   
   (i) Some categorization is not all-or-none; there may be no "X's," just
   things that are X to greater or lesser degrees (e.g., the category "big").
   
   (ii) Some categorization performance may not be reliable; subjects may
   sometimes miscategorize, or there may be some instances whose
   membership is uncertain or graded or probabilistic (e.g., the category
   "guilty").
   
   (iii) The subject may not be aware of the features he is using; the
   ones he verbalizes may indeed be neither necessary nor sufficient, but
   then they're not the ones he's using.
   
   (iv) There is an element of arbitrariness in what one does and does not
   choose to call a "feature" (as opposed to a "metafeature"); there is no
   logical or practical reason why features cannot be disjunctive,
   negative, conditional, relational, polyadic or probabilistic -- or even
   derivable only by complex computational, constructive, algorithmic,
   propositional or "model-driven" processes -- as long as they are
   grounded in reliable, detectable invariant properties of the instances
   being categorized and they are sufficient to subserve successful
   categorization.
   
   Hence, at least insofar as our reliable, overlearned, all-or-none,
   BOUNDED categories are concerned -- and these are the categories (e.g.,
   "bird" and "pet") that tend to be used in the experiments stimulated by
Rosch's work -- both the existence and the use of (singly) sufficient
(and disjunctively necessary) sets of features seem inescapable. The
   origin of the putative alternatives to this -- non-necessary/sufficient
   "prototypes" and "family resemblances" -- seems to be attributable to a
   focus on typicality judgments and reaction times rather than
   categorization per se, together with a reliance on the subject's (and
   perhaps the experimenter's) introspections as to the basis for the
   categorization. The real basis for categorization can only be found by
   inference, as tested by models that attempt to generate reliable
   categorization performance when confronted with the same instances that
   subjects can categorize successfully.
END FOOTNOTE
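Point (iv) of the footnote can be illustrated with toy detectors whose
invariants are partly negative and partly relational, rather than
simple conjunctions of positive monadic sensory properties -- the
painted-horse case from the first passage above. The feature names
below are hypothetical, chosen only for the example:

```python
# Toy illustration (not from the paper) of footnote point (iv):
# a category invariant may be negative, conditional, or relational,
# yet still be a reliably detectable, sufficient basis for
# all-or-none categorization. Feature names are hypothetical.

def is_zebra(instance):
    # Conjunctive sensory part: horse-shaped and striped.
    looks_zebra_like = instance["horse_shaped"] and instance["striped"]
    # Negative/conditional part: a painted horse that looks just
    # like a zebra is still not a zebra.
    return looks_zebra_like and not instance["stripes_painted"]

def is_bigger(a, b):
    # Relational (polyadic) invariant: membership in "bigger than"
    # depends on comparing two instances, not on either one alone.
    return a["size"] > b["size"]
```

Both detectors are grounded in detectable properties of the instances
and are sufficient for reliable sorting, even though neither is the
simple conjunctive feature list introspection usually looks for.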

Wittgenstein, L. (1953) Philosophical investigations. New York: Macmillan.

Rosch, E. & Lloyd, B. B. (1978) "Cognition and categorization."
Hillsdale, NJ: Erlbaum Associates.


-- 
Stevan Harnad  Department of Psychology  Princeton University
harnad@clarity.princeton.edu / harnad@pucc.bitnet / srh@flash.bellcore.com 
harnad@learning.siemens.com / harnad@elbereth.rutgers.edu / (609)-921-7771


