From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle Thu Feb 20 15:21:26 EST 1992
Article 3800 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle
From: cpshelle@logos.waterloo.edu (cameron shelley)
Subject: Re: Strong AI and Panpsychism
Message-ID: <1992Feb17.163838.5550@watdragon.waterloo.edu>
Sender: news@watdragon.waterloo.edu (USENET News System)
Organization: Evil Designs Inc.
References: <1992Feb16.182212.7126@psych.toronto.edu>
Date: Mon, 17 Feb 1992 16:38:38 GMT
Lines: 43

michael@psych.toronto.edu (Michael Gemar) writes:
> In article <1992Feb14.152243.6535@watdragon.waterloo.edu> cpshelle@logos.waterloo.edu (cameron shelley) writes:
[...]
> >All I can add here is that the sort of work I referred to above takes
> >belief to exist a priori, and generally models it by various
> >truth-functional modal logics.  Recurring problems with this model,
> >such as requiring agents to hold the same truth-value for logically
> >equivalent beliefs, seem counter-intuitive.  My suspicion is that,
> >eventually, such accounts of belief will fail for this sort of 
> >reason.  However, I don't have a better model to suggest at the
> >moment.
> >
> >Is this the sort of thing you had in mind?
> 
> It appears to be a framework of a response to my concerns, although I'd
> like to see it fleshed out more before I commit myself, since it seems
> somewhat vague.

My account is vague because I'm not conversant in the fine points of these
theories, and because I doubt that the current work in language planning
fully encompasses the notion of belief.  In the general case (within this
area), a belief is any representation that can be assigned a truth value.
Examples would be the game-theoretic semantics of Hintikka, and the
autoepistemic logic discussed by Appelt.  (I don't have the exact 
references at hand, but FELIX should be able to run with this, sorry).
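To make that concrete, here is a rough sketch in Python of what "belief
as a truth-valued representation" amounts to.  The names are mine and
purely illustrative; this is not Hintikka's or Appelt's formalism:

class Agent:
    """On this view, an agent's beliefs are just a table mapping
    representations (here, strings) to assigned truth values."""
    def __init__(self):
        self.beliefs = {}

    def believe(self, proposition, truth_value=True):
        self.beliefs[proposition] = truth_value

    def holds(self, proposition):
        # True/False if the agent has assigned a value; None if the
        # proposition simply isn't represented at all.
        return self.beliefs.get(proposition)

a = Agent()
a.believe("it is raining")
a.believe("the match is wet", False)
print(a.holds("it is raining"))   # True
print(a.holds("pigs fly"))        # None -- no belief either way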

Interpreting truth strictly within the framework of logic has its
difficulties, such as the consistency problem I mentioned above, and
the fact that it encourages `run-away' (pointless) inference.  Resulting
systems behave kind of like logical positivists gone mad.  A better
model would, I think, treat truth more generally as an `agreement' with
experience: the "will to truth", if you like.  This might also provide
a handle on how beliefs can be formed.
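To see the equivalence problem in miniature, consider this toy sketch
(again mine, not anyone's published system; double negation stands in
for any equivalence-preserving rewrite):

def equivalents(p, depth=3):
    # Each double negation yields a logically equivalent but
    # syntactically distinct formula; in principle the chain is endless.
    forms = [p]
    for _ in range(depth):
        p = "not(not(%s))" % p
        forms.append(p)
    return forms

beliefs = set()
beliefs.update(equivalents("it is raining"))
print(len(beliefs))   # 4 -- and unbounded as depth grows

A truth-functional account obliges the agent to hold the same truth
value for every member of that set; real believers plainly don't
entertain them all, and a system that tries to generate them is doing
exactly the `run-away' inference I mean.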

I can't say I'm committed to this notion either, since it would take
many years to work out properly, but what the hell?
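For what it's worth, here is one naive way the `agreement' idea might be
cashed out: a belief carries a degree of agreement with experience
rather than a fixed truth value.  Names and numbers are mine, purely for
illustration:

class GradedBelief:
    """Truth as accumulated agreement with experience, on a 0..1
    scale, instead of a single logical truth value."""
    def __init__(self, proposition):
        self.proposition = proposition
        self.agreed = 0
        self.trials = 0

    def observe(self, agrees):
        # Record one experience that agrees (True) or conflicts (False).
        self.trials += 1
        if agrees:
            self.agreed += 1

    def agreement(self):
        # With no experience yet, stay agnostic at 0.5.
        if self.trials == 0:
            return 0.5
        return self.agreed / float(self.trials)

b = GradedBelief("bread nourishes")
for outcome in (True, True, True, False, True):
    b.observe(outcome)
print(b.agreement())   # 0.8

Belief *formation* then has a natural reading: adopt a representation as
a belief once its agreement with experience passes some threshold.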

				Cam
--
      Cameron Shelley        | "Proof, n.  Evidence having a shade more of
cpshelle@logos.waterloo.edu  |  plausibility than of unlikelihood.  The
    Davis Centre Rm 2136     |  testimony of two credible witnesses as
 Phone (519) 885-1211 x3390  |  opposed to that of only one."   Ambrose Bierce


