Newsgroups: comp.ai.philosophy,comp.ai,sci.psychology.theory
Path: cantaloupe.srv.cs.cmu.edu!nntp.club.cc.cmu.edu!miner.usbm.gov!news.er.usgs.gov!jobone!news2.acs.oakland.edu!cloudbreak.rs.itd.umich.edu!newsxfer3.itd.umich.edu!newsfeed.internetmci.com!newsfeed.direct.ca!nntp.portal.ca!news.bc.net!torn!tortoise.oise.on.ca!tortoise!dyeo
From: David Yeo <dyeo@tortoise>
Subject: Re: AI's Misconceptions & The Appropriate Role of Psychology? 
In-Reply-To: <855259788snz@longley.demon.co.uk> 
Content-Type: TEXT/PLAIN; charset=US-ASCII
Message-ID: <Pine.SOL.3.91.970206184449.16269B-100000@tortoise>
Sender: news@oise.on.ca
Nntp-Posting-Host: tortoise
Organization: Ontario Institute for Studies in Education, Toronto
Mime-Version: 1.0
Date: Fri, 7 Feb 1997 13:10:44 GMT
Lines: 30
Xref: glinda.oz.cs.cmu.edu comp.ai.philosophy:51824 comp.ai:44043 sci.psychology.theory:6132

On Thu, 6 Feb 1997, David Longley wrote:

> What  I am saying is that there is a problem giving anything  the 
> status  of  existence  unless  one can give it  the  value  of  a 
> variable.  In contexts where this is not possible, one  literally 
> doesn't know what one is talking about, (nor can anyone else). As 
> a consequence, rational analysis and prediction (and predication) 
> is impossible.

As I have alluded to in previous postings, this is where your sanitized
utopian rationalism invariably gets dirty.  For Quine makes it clear that
theory touches the "real world" only at the edges. 

The value one gives to a variable is a function of the values given to
other variables in the system.  As such, the value of a variable is only
one of a set of plausible values it can validly assume when the entire
"web of belief" is considered.  This is not only the basis of Quine's
indeterminacy of translation hypothesis; it is, I think, in large part the
rationale underlying the notion of situated cognition.  Thus even if it
were possible (which, given the number of propositions involved, it is not)
to apply your rationalist philosophy, different instantiations would lead
to different (even contradictory) conclusions.

In short, the best one can hope for in your rationalist utopia is internal
consistency within a highly restricted closed system of beliefs. 
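To make the underdetermination point concrete, here is a toy sketch (my own construction, not a formalism found in Quine or in Longley's post): read the "web of belief" as a small constraint network, where each "belief variable" can take several values and only whole assignments satisfying every constraint are admissible. The variable names and constraints below are invented purely for illustration.

```python
# Toy constraint network standing in for a "web of belief".
# Hypothetical belief variables A, B, C; constraints tie each
# variable's value to the others'.  The point: more than one
# globally consistent assignment survives, so the value of any
# single variable is underdetermined by the web taken alone.

from itertools import product

domains = {
    "A": [True, False],
    "B": [True, False],
    "C": [True, False],
}

constraints = [
    lambda v: v["A"] or v["B"],        # at least one of A, B holds
    lambda v: (not v["B"]) or v["C"],  # B implies C
]

names = list(domains)
admissible = [
    assignment
    for values in product(*(domains[n] for n in names))
    if all(c(assignment := dict(zip(names, values))) for c in constraints)
]

for assignment in admissible:
    print(assignment)
# Several assignments satisfy every constraint at once: fixing the
# web's constraints still leaves each variable's value open.
```

Even this tiny closed system admits four mutually incompatible total assignments; internal consistency alone does not pin down the value of any one variable.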

Cheers,

- David Yeo (Applied Cognitive Science, University of Toronto)

