From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!think.com!ames!ncar!noao!amethyst!organpipe.uug.arizona.edu!NSMA.AriZonA.EdU!bill Thu Apr 16 11:34:46 EDT 1992
Article 5122 of comp.ai.philosophy:
From: bill@NSMA.AriZonA.EdU (Bill Skaggs)
Newsgroups: comp.ai.philosophy
Subject: Re: Categories: bounded or graded?
Message-ID: <1992Apr16.005316.27101@organpipe.uug.arizona.edu>
Date: 16 Apr 92 00:53:16 GMT
References: <1992Apr14.143822.10246@psych.toronto.edu> <1992Apr15.010721.17700@organpipe.uug.arizona.edu> <1992Apr15.172904.3372@spss.com>
Sender: news@organpipe.uug.arizona.edu
Reply-To: bill@NSMA.AriZonA.EdU (Bill Skaggs)
Organization: Center for Neural Systems, Memory, and Aging
Lines: 75

In article <1992Apr15.172904.3372@spss.com> 
markrose@spss.com (Mark Rosenfelder) writes:
>
>I have a lot of sympathy with the idea of prototypes, but I think they
>lead to nasty problems too.  
>
>The basic problem is, there needs to be something besides the prototype
>standing behind a word.  At the very least you need a radius: how far
>from the prototype can an object be and still be an instance of the
>category?  The radius must be different for (say) "terrier",
>"dog", and "animal."

  Yes, this is true.  But the problem is actually not so nasty as it
might seem.  Competitive learning, a very simple neural-network
learning scheme, naturally develops radial categories.  The radius
is set by the number of neurons in the network.  On the other hand,
schemes that develop categories based on invariant features are
generally a great deal more complicated, and are always limited in
the sorts of features they can recognize.
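A minimal sketch of competitive learning may make this concrete.  The
data, learning rate, and epoch count below are illustrative assumptions,
not anything from the article; the point is only that each unit's weight
vector drifts toward the inputs it wins, ending up as a prototype, and
that the number of units fixes how finely the space is carved up:

```python
import numpy as np

def competitive_learning(data, n_units, lr=0.1, epochs=20, seed=0):
    """Hard winner-take-all competitive learning.  Each unit's weight
    vector is nudged toward every input it wins, so the weights settle
    near cluster centers and act as category prototypes."""
    rng = np.random.default_rng(seed)
    # Initialize prototypes at randomly chosen data points.
    w = data[rng.choice(len(data), n_units, replace=False)].copy()
    for _ in range(epochs):
        for x in rng.permutation(data):
            winner = np.argmin(np.linalg.norm(w - x, axis=1))
            w[winner] += lr * (x - w[winner])  # move winner toward input
    return w

# Two well-separated clusters; with two units, each prototype settles
# near one cluster center.
rng = np.random.default_rng(1)
a = rng.normal([0.0, 0.0], 0.1, (50, 2))
b = rng.normal([5.0, 5.0], 0.1, (50, 2))
protos = competitive_learning(np.vstack([a, b]), n_units=2)
print(protos)
```

With more units the same data would be split into more, smaller radial
categories, which is the sense in which the network size sets the radius.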

>If you picture the possible referents of a word as a fuzzy cloud surrounding
>the prototype, it seems clear (to me, at least) that the cloud is not always
>spherical-- it can be quite convoluted.  For instance, a stool looks more
>like the prototypical "chair" than many real chairs.  The problem of
>defining the boundary of the cloud begins to resemble the traditional one
>of defining features.

  Right.  There are different kinds of concepts.  The kind that
children form automatically, without feedback, when they are
learning language, are radial concepts, which as you say can
be thought of as fuzzy clouds surrounding prototypes.  Their
great advantage is that they can be communicated non-linguistically,
by pointing at the prototypes.  But their big disadvantage is that
they lack precision.  The other kind of concept is more like a
bag -- a container with a definite boundary and clearly defined
interior and exterior.  This kind allows much more precision,
but it is correspondingly much more difficult to communicate
or learn.
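The contrast between the two kinds of concept can be sketched in a few
lines.  The features, thresholds, and the "stool" example here are my
own illustrative assumptions; the point is that the radial category is
graded around a prototype while the bag-like one has a sharp rule:

```python
import numpy as np

# Hypothetical feature vector: [seat height (m), number of legs, has back]
chair_prototype = np.array([1.0, 4.0, 1.0])

def radial_member(x, prototype, radius):
    """Radial concept: inside the fuzzy cloud around the prototype?"""
    return np.linalg.norm(x - prototype) <= radius

def bag_member(x):
    """Bag-like concept: a definite boundary, given by an explicit rule."""
    seat_height, legs, has_back = x
    return 0.3 <= seat_height <= 1.5 and legs >= 3 and has_back >= 1

# A stool: close to the chair prototype in appearance, but fails the rule.
stool = np.array([0.9, 3.0, 0.0])
print(radial_member(stool, chair_prototype, radius=2.0))  # near the prototype
print(bag_member(stool))                                  # no back, so excluded
```

The radial test can be taught by pointing at prototypical chairs; the
bag test has to be stated, which is exactly the communication asymmetry
described above.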

>Purpose is relevant to the meaning of many words.  A "chair" is above all 
>something you can sit on, and that's not a direct physical feature of the 
>prototypical referent(s).  
>
  Right.  Physical appearance is not the only dimension along
which similarity is measured.  In fact, mapping out the dimensions
of perceived similarity is probably one of the deepest and most
important problems of cognitive science.
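One way to see why the choice of dimensions is the hard part: the very
same two objects can switch rank order of similarity depending on which
dimensions get weight.  The features and numbers below are invented
purely for illustration:

```python
import numpy as np

# Hypothetical features: [visual resemblance to a chair, sit-on-ability]
items = {
    "stool":     np.array([0.8, 1.0]),
    "sculpture": np.array([0.9, 0.0]),  # looks chair-like, can't be sat on
}
chair = np.array([1.0, 1.0])

def similarity(x, proto, weights):
    """Negative weighted distance: which dimensions matter, and how
    much, is itself the open question."""
    return -np.sqrt(np.sum(weights * (x - proto) ** 2))

appearance_only = np.array([1.0, 0.0])
with_purpose    = np.array([0.5, 0.5])

# By appearance alone the sculpture is the more chair-like of the two;
# once purpose is weighted in, the stool wins.
print(similarity(items["sculpture"], chair, appearance_only))
print(similarity(items["stool"], chair, appearance_only))
print(similarity(items["sculpture"], chair, with_purpose))
print(similarity(items["stool"], chair, with_purpose))
```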

>Ostensive definitions are useful for some words (e.g. "dog"), but for others
>they are simply annoying.  If you ask someone what "opaque" means, and they
>point to their shoes and say "This is," that's not very helpful.  If providing
>a prototype is not enough, however, then meaning involves more than prototypes.
>
  "Opaque" is an adjective, not a noun, so its prototype is not
a thing.  For me the protoype of "opaque" is an image of a person
looking toward some unspecified object but unable to see it because
some big black thing is in the way.

  Intensional definitions are certainly useful in many ways, but since
radial categories are the only kind that can be learned non-linguistically,
intensional definitions must always ground out in ostensive definitions.

>I do see prototypes as part of cognition; I think they explain some of the
>flexibility of language, and also some of the sense of depth there is to
>human knowledge-- the sheer mass of information we have about the words
>we know.

  I see radial categories as more than this.  I believe one of
the most important morals of the past two decades of AI work is
that flexible intelligence cannot be based on intensional categories.
The learning problem is much too difficult.  Radial categories
are far easier to learn, and virtually all of the Connectionist
effort is concentrated on them.

	-- Bill


