Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle
From: cpshelle@logos.waterloo.edu (cameron shelley)
Subject: Re: Strong AI and Panpsychism
Message-ID: <1992Feb14.152243.6535@watdragon.waterloo.edu>
Sender: news@watdragon.waterloo.edu (USENET News System)
Organization: Evil Designs Inc.
References: <1992Feb13.234630.1092@psych.toronto.edu>
Date: Fri, 14 Feb 1992 15:22:43 GMT
Lines: 57

michael@psych.toronto.edu (Michael Gemar) writes:
> In article <6182@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
[...]
> >Moreover, they don't have to be able to say exactly what 
> >functional organization is right in order to be able to
> >rule out extremes.  There's nothing in this to rule out
> >*only* table lookup.  The idea is: table lookup is wrong;
> >not: only table lookup is wrong.
> 
> But *why* is table lookup an "extreme"?   This is the problem that I have.
> There seems to be some sort of implicit assumption about what kind
> of functional organization is required to generate belief.  What I
> am trying to uncover is what that assumption is.  If there *isn't* one
> any deeper than "well, it's just *obvious* that table lookup can't
> generate beliefs," then you are merely ruling them out ad hoc.       

I suspect that simple table lookup is just an approximation (of
degree N, let's say) to a more elaborate, functional account.  In a
parallel case, there was a paper in last year's ACL proceedings (which
I don't have handy) about `unrolling' context-free grammars into
regular grammars.  The effect was essentially to approximate the
context-free grammar by encoding performance limits directly.  Thus,
the `unrolled' regular grammars are not equivalent to the context-free
ones (of course), but they parse a `reasonable' subset of the same
strings.
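
To make the parallel concrete, here is a toy sketch in Python (my own
illustration, not the paper's actual method) of how a depth bound
turns a context-free recogniser into a regular approximation of it:

def cfg_accepts(s):
    # Unbounded recogniser for the context-free language a^n b^n
    # (n >= 1): this stands in for the full `competence' grammar.
    if s == "ab":
        return True
    return (len(s) > 2 and s[0] == "a" and s[-1] == "b"
            and cfg_accepts(s[1:-1]))

def unrolled_accepts(s, depth):
    # Depth-bounded `unrolled' version: encoding the performance
    # limit directly yields a regular approximation.
    if s == "ab":
        return True
    if depth == 0:
        return False
    return (len(s) > 2 and s[0] == "a" and s[-1] == "b"
            and unrolled_accepts(s[1:-1], depth - 1))

print(cfg_accepts("aaabbb"), unrolled_accepts("aaabbb", 3))
# -> True True
print(cfg_accepts("aaaaabbbbb"), unrolled_accepts("aaaaabbbbb", 3))
# -> True False

The bounded version accepts a^k b^k only up to a bounded k, a finite
(hence regular) subset of the context-free language a^n b^n; raising
the bound tightens the approximation without ever reaching
equivalence.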

A brute-force table lookup scheme would have the same attributes: it
would approximate the complexity it emulates better and better as its
size N approaches infinity.  But a table where N -> infinity can, I
think, be considered an asymptotic, or extreme, case.  The more
general case would be to leave belief maintenance as a component of
`competence', as opposed to importing it piecemeal into a
`performance' model.  At least this is the trend, as I see it, coming
into natural language planning work (or at least I hope so, as I'm
arguing for it in my thesis :-).
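
In the same toy setting (assuming the cfg_accepts defined above), a
degree-N lookup table can be built by running the competence model
over every string up to a bounded length and memorising the answers:

from itertools import product

def build_table(n):
    # Precompute answers for every string over {a,b} up to length
    # 2*n, using the competence model cfg_accepts as the oracle.
    table = {}
    for length in range(1, 2 * n + 1):
        for chars in product("ab", repeat=length):
            s = "".join(chars)
            table[s] = cfg_accepts(s)
    return table

table = build_table(3)
print(table["aaabbb"])                    # True: inside the horizon
print(table.get("aaaaabbbbb", "absent"))  # 'absent': beyond degree 3

The table reproduces the competence model exactly within its horizon
and is silent beyond it; only as N -> infinity does the approximation
become exact, which is why I'd call it the asymptotic case.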

> I recognize that functionalism may not have an exact answer as to what
> beliefs are.  But it certainly must have some idea of what they *aren't*.
> What I want to know is what that idea is...

All I can add here is that the sort of work I referred to above takes
belief to exist a priori, and generally models it by various
truth-functional modal logics.  This model has recurring problems:
for instance, it requires agents to hold the same truth-value for all
logically equivalent beliefs, which seems counter-intuitive.  My
suspicion is that, eventually, such accounts of belief will fail for
this sort of reason.  However, I don't have a better model to suggest
at the moment.
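
For concreteness, here is a minimal possible-worlds sketch in Python
(again my own toy, not any particular published system) of why such a
logic forces agreement on logically equivalent beliefs:

from itertools import product

ATOMS = ("p", "q")
WORLDS = [dict(zip(ATOMS, vals))
          for vals in product([True, False], repeat=len(ATOMS))]

def believes(possible_worlds, formula):
    # Kripke-style belief: the agent believes a formula iff it is
    # true in every world the agent considers possible.
    return all(formula(w) for w in possible_worlds)

# Suppose the agent considers only the p-worlds possible.
possible = [w for w in WORLDS if w["p"]]

def f1(w):
    return w["p"] or w["q"]                  # p v q

def f2(w):
    return not (not w["p"] and not w["q"])   # ~(~p & ~q): equivalent

print(believes(possible, f1))   # True
print(believes(possible, f2))   # True: forced to agree with f1

Since belief here is defined purely by truth conditions over the
possible worlds, f1 and f2 cannot come apart, no matter how
differently a real agent might treat the two formulations.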

Is this the sort of thing you had in mind?

				Cam
--
      Cameron Shelley        | "Proof, n.  Evidence having a shade more of
cpshelle@logos.waterloo.edu  |  plausibility than of unlikelihood.  The
    Davis Centre Rm 2136     |  testimony of two credible witnesses as
 Phone (519) 885-1211 x3390  |	opposed to that of one."    Ambrose Bierce