From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!tdatirv!sarima Wed Apr 22 12:04:14 EDT 1992
Article 5164 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Categories: bounded or graded?
Message-ID: <537@tdatirv.UUCP>
Date: 19 Apr 92 22:17:17 GMT
References: <1992Apr14.143822.10246@psych.toronto.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 49

In article <1992Apr14.143822.10246@psych.toronto.edu> christo@psych.toronto.edu (Christopher Green) writes:
|
|From: Stevan Harnad
|
|We disagree even more on categories. I think the Roschian view you
|describe is all wrong, and that the "classical" view -- that categories
|have invariant features that allow us to categorize in the all-or-none
|way we clearly do -- is completely correct. ...

|reaction times and typicality judgments. The performance capacity
|at issue is our capacity to learn to sort and label things as we do,...
|not the metaphysical status of the "correctness" (just its relation
|to the Skinnerian consequences of MIScategorization), and certainly
|not how we happen to think we do it. ...
|Incorrect. I focus on categorical (all-or-none) categories because I
|think they, rather than graded categories, form the core of our
|conceptual repertoire as well as its foundations (grounding).

Ah, so you are mainly talking about our *internal* *mental* states
when you talk about all-or-none categorization.

You may well be right about that - humans do have an almost absolute
tendency to see things in sharp, well-defined categories.


My point of view was from the perspective of scientific research.
There we cannot allow our own internal prejudices to have the final
say; we must be prepared to find that reality is different from how we
tend to view it.   And this is indeed so: graded categories are the
more common *external* reality, at least as far as biological and
geological entities are concerned.
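The contrast between all-or-none and graded membership can be sketched
in a few lines of Python (a toy illustration of my own, with a made-up
category "tall" and an arbitrary feature threshold - not anything drawn
from the research under discussion):

```python
def crisp_tall(height_cm: float) -> bool:
    """Classical view: an invariant feature boundary gives 0/1 membership."""
    return height_cm >= 180.0  # threshold chosen arbitrarily for illustration

def graded_tall(height_cm: float) -> float:
    """Graded view: membership rises smoothly from 0 to 1 over a range."""
    lo, hi = 160.0, 200.0  # arbitrary illustrative range
    if height_cm <= lo:
        return 0.0
    if height_cm >= hi:
        return 1.0
    return (height_cm - lo) / (hi - lo)

for h in (150, 170, 180, 190, 210):
    print(h, crisp_tall(h), round(graded_tall(h), 2))
```

The crisp predicate jumps from False to True at the boundary, while the
graded one assigns borderline cases an intermediate degree - which is
the sort of structure graded external categories seem to have.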

Now, AI is concerned with *studying* minds (at least that is part of it),
and as a scientific study it is necessary to recognize the limitations
of any given modelling system. (The original context of this discussion,
as I remember it, was the mind/not-mind problem - treated as an
*external* reality - so it is the external reality, not the preferred
internal model, that matters.)


It seems part of the problem is a matter of different discourse realms.
In talking about AI and the philosophy of AI we need to refer to entities
like 'minds' and 'beliefs' in a theoretical/philosophical context.  In
actually implementing a mind we need to deal with the internal
representations of these things.  The two areas may require different
approaches.

-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)