From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!convex!constellation!a.cs.okstate.edu!onstott Tue Feb 11 15:25:36 EST 1992
Article 3571 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:3571 sci.philosophy.tech:2089
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!convex!constellation!a.cs.okstate.edu!onstott
From: onstott@a.cs.okstate.edu (ONSTOTT CHARLES OR)
Subject: Re: Projected Intentionality
References: <1992Feb6.032659.11087@a.cs.okstate.edu> <1992Feb06.194644.132242@cs.cmu.edu>
Message-ID: <1992Feb7.065934.28703@a.cs.okstate.edu>
Organization: Oklahoma State University, Computer Science, Stillwater
Date: Fri, 7 Feb 92 06:59:34 GMT

In article <1992Feb06.194644.132242@cs.cmu.edu> tp0x+@cs.cmu.edu (Thomas Price) writes:
>In article <1992Feb6.032659.11087@a.cs.okstate.edu> onstott@a.cs.okstate.edu (ONSTOTT CHARLES OR) writes:
>> I have termed this [deleted] sort of thing, elsewhere, 
>> "projected intentionality" where 
>>an intentional agent projects "intentionality" into an inanimate
>>object.  
>
>Alternatively, the person who is "projecting intentionality" could simply
>be trying to change the meaning of the word "intentionality" -- or, if
>you prefer to say it this way, trying to change the scope of the concept 
>"intentionality."

This is a valid response, particularly if you are a materialist.  Of course,
I haven't taken the time to explain in what ways I feel that changing the
scope of intentionality is fallacious.  I will be posting another article
relatively shortly providing you with more support.
>
>>I think this is indicative of the anthropomorphic nature of
>>a lot of epistemological approaches.  
>
>Would I be out of line if I asked you to justify the tacit assumption that
>debate isn't about the scopes of the concepts involved but rather presupposes
>them to be static throughout the debate? 

Would I be out of line to ask you for evidence that these concepts are static?
As I mentioned above, I will provide more support for this in a forthcoming
article.  I think that your reactions are right on the mark; and believe
it or not, I have thought them through.
>
>>Particularly those who try to
>>determine animal intelligence and animal language capability; all
>>the while having stated that humans are the only things capable
>>of language and intelligence.  These definitions usually mean "something
>>that looks like human behavior and is different than animal behavior; so
>>much so that it might not be behavior but something else more mysterious
>>."  Anyway, along the same line an individual who wants to claim that
>>a thermostat has beliefs is falling into the problem of
>>projected intentionality.  
>
>Charles, what do you think of the role of metaphor in thought? Could you relate
>it by way of explanation to what is problematic for you in the "anthropomorphic
>nature of a lot of epistemological approaches"?

Indeed, I agree that most epistemological approaches, particularly
strong AI approaches, are PURELY METAPHORICAL.  Because of this I find a
contradiction.  If strong AI is supposed to be essentially scientific,
as Allen Newell tries to treat it, then why use metaphor in explanation?
Isn't this a departure from hard scientific approaches?  I thought
metaphor was only used to help students learning a new field understand
what was going on; a metaphor isn't a direct analogy.  The metaphor,
as I will explain in my forthcoming article, can lead to certain fallacies
such as the fallacy of the undistributed middle.  This is the fallacy that
I find Allen Newell making all over the place, not to mention many others
using similar approaches.
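
  To make the undistributed middle concrete, here is a small formal sketch
(my own illustration, in the Lean proof assistant; the predicate names
"thinks", "processes symbols", and "is a computer" are my hypothetical
gloss, not anyone's actual argument).  "All A are B; all C are B;
therefore all C are A" fails in a two-element model:

```lean
-- Hypothetical two-element model refuting the undistributed middle.
inductive Thing | brain | computer

open Thing

def B : Thing → Prop := fun _ => True          -- "processes symbols"
def A : Thing → Prop := fun t => t = brain     -- "thinks"
def C : Thing → Prop := fun t => t = computer  -- "is a computer"

-- Both premises hold: all A are B, and all C are B...
example : ∀ t, A t → B t := fun _ _ => trivial
example : ∀ t, C t → B t := fun _ _ => trivial

-- ...yet the conclusion "all C are A" fails: the computer is a
-- counterexample, since `computer = brain` is unprovable.
example : ¬ (∀ t, C t → A t) :=
  fun h => nomatch (h computer rfl)
```

Sharing a middle term (B) gives no license to identify the two outer
terms; that is exactly the slip a metaphor invites.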

  For example, in a book entitled "Language and Species" the author, Derek
Bickerton, tries to argue that language is separate from the communication
of lower species by virtue of a "communication-purpose" argument.  But he
uses a notion of language to show that animals don't have it.  The problem
here is that either the conclusion is uninteresting, or it assumes what it
is trying to prove.  The assumption that he takes is that language is a
human species-specific phenomenon, different from anything animals do.  Then,
by using the definition of language as human-specific, he "proves"
that animals are not language users.  Language, in his case, is anthropomorphic,
and so he remains caught in the anthropomorphism (if I may) without coming
to any interesting conclusions.  Then he relates this to evolution--oy vey.
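
  The circularity can be put formally (again a Lean sketch of my own; the
names `isHuman` and `hasLanguage` are my illustration of the structure I am
attributing to the argument, not Bickerton's text).  Once "has language" is
DEFINED to entail "is human," the claim that non-humans lack language
follows by contraposition alone, with no empirical content:

```lean
-- Hypothetical formalization of the definitional circularity.
variable (Agent : Type) (isHuman hasLanguage : Agent → Prop)

-- The definitional assumption: having language entails being human.
variable (defn : ∀ a, hasLanguage a → isHuman a)

-- The "result" is then immediate: any non-human agent lacks language.
example (a : Agent) (h : ¬ isHuman a) : ¬ hasLanguage a :=
  fun hl => h (defn a hl)
```

The one-line proof is the point: the "discovery" about animals was already
packed into the definition.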

BCnya,

------------------------------------------------------------------------
Charles O. Onstott, III                  P.O. Box 2386
Undergraduate in Philosophy              Stillwater, Ok  74076
Oklahoma State University                onstott@a.cs.okstate.edu


"The most abstract system of philosophy is, in its method and purpose, 
nothing more than an extremely ingenious combination of natural sounds."
                                              -- Carl G. Jung

-----------------------------------------------------------------------
