From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Thu Feb 20 15:21:13 EST 1992
Article 3779 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and Panpsychism
Message-ID: <1992Feb16.182212.7126@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Feb13.234630.1092@psych.toronto.edu> <1992Feb14.152243.6535@watdragon.waterloo.edu>
Date: Sun, 16 Feb 1992 18:22:12 GMT

In article <1992Feb14.152243.6535@watdragon.waterloo.edu> cpshelle@logos.waterloo.edu (cameron shelley) writes:
>michael@psych.toronto.edu (Michael Gemar) writes:
>> In article <6182@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>[...]
>> >Moreover, they don't have to be able to say exactly what 
>> >functional organization is right in order to be able to
>> >rule out extremes.  There's nothing in this to rule out
>> >*only* table lookup.  The idea is: table lookup is wrong;
>> >not: only table lookup is wrong.
>> 
>> But *why* is table lookup an "extreme"?   This is the problem that I have.
>> There seems to be some sort of implicit assumption about what kind
>> of functional organization is required to generate belief.  What I
>> am trying to uncover is what that assumption is.  If there *isn't* one
>> any deeper than "well, it's just *obvious* that table lookup can't
>> generate beliefs," then you are merely ruling them out ad hoc.       
>
>I suspect that simple table lookup is an approximation (of degree N,
>let's say) to a more elaborate, functional account.  In a parallel
>case, there was a paper in last year's ACL proceedings (which I don't
>have handy) about `unrolling' context-free grammars into regular
>grammars.  The effect was essentially to approximate the context-free
>grammar by encoding performance limits directly.  Thus, the `unrolled'
>regular grammars are not equivalent to the context-free ones (of
>course), but they accept a `reasonable' subset of the same strings.
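
Just to check that I follow the unrolling idea, here is a rough toy
sketch of what I take it to mean (my own made-up example, not the ACL
paper's method): impose a hard depth bound on a centre-embedding rule,
so that the bounded grammar covers only a regular subset of the
original language.

    # Toy sketch only.  The rule  S -> 'a' S 'b' | 'ab'  generates
    # a^n b^n, which is not regular; bounding the recursion at
    # max_depth yields a finite (hence regular) approximation, i.e.
    # a performance limit encoded directly.

    def bounded_recognizer(s, max_depth=3):
        """Accept a^n b^n only for 1 <= n <= max_depth."""
        n = 0
        while n < len(s) and s[n] == 'a':
            n += 1
        return 1 <= n <= max_depth and s[n:] == 'b' * n

    for test in ['ab', 'aabb', 'aaabbb', 'aaaabbbb', 'aab']:
        print(test, bounded_recognizer(test))

The bounded version agrees with the full grammar on short strings and
simply gives up past the bound, which is (as I read it) the sense in
which it is only an approximation.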
>
>A brute-force, table-lookup scheme would have the same attributes: it
>would approximate the complexity it emulates better and better as its
>size N approaches infinity.  But a table for which N -> infinity can,
>I think, be considered an asymptotic, or extreme, case.  The more
>general approach would be to leave belief maintenance as a component
>of `competence', as opposed to importing it piecemeal into a
>`performance' model.  At least this is the trend, as I see it, coming
>into natural language planning work (or at least I hope so, as I'm
>arguing for it in my thesis :-).
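
The table-vs-competence contrast can be made concrete with a trivial
example of my own (nothing from the post above): a finite lookup table
agrees with the general rule inside its bound N and simply runs out
beyond it.

    # The `competence': a rule defined for all inputs.
    def rule_add(x, y):
        return x + y

    # The `performance' table: finite, of degree N.
    N = 100
    table_add = {(x, y): x + y for x in range(N) for y in range(N)}

    print(rule_add(7, 5), table_add[(7, 5)])   # agree inside the bound
    print(table_add.get((N, 1)))               # None: the table runs out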
>
>> I recognize that functionalism may not have an exact answer as to what
>> beliefs are.  But it certainly must have some idea of what they *aren't*.
>> What I want to know is what that idea is...
>
>All I can add here is that the sort of work I referred to above takes
>belief to exist a priori, and generally models it with various
>truth-functional modal logics.  This model has recurring problems;
>for instance, it requires agents to hold the same truth-value for all
>logically equivalent beliefs, which seems counter-intuitive.  My
>suspicion is that, eventually, such accounts of belief will fail for
>this sort of reason.  However, I don't have a better model to suggest
>at the moment.
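
For what it's worth, the logical-equivalence worry is easy to
illustrate with a toy example (mine, not anything from the work cited):
a truth-functional treatment cannot distinguish logically equivalent
sentences, so an agent modelled that way is forced to `believe' every
reformulation of anything it believes.

    # p -> q and its contrapositive ~q -> ~p are truth-functionally
    # indistinguishable, so a truth-functional belief model must give
    # them the same value, even though a real agent might assent to
    # one formulation and balk at the other.

    from itertools import product

    def equivalent(f, g, nvars=2):
        """True iff f and g agree on every truth assignment."""
        return all(f(*v) == g(*v)
                   for v in product([True, False], repeat=nvars))

    implies   = lambda p, q: (not p) or q              # p -> q
    contrapos = lambda p, q: (not (not q)) or (not p)  # ~q -> ~p

    print(equivalent(implies, contrapos))   # True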
>
>Is this the sort of thing you had in mind?


It looks like the framework of a response to my concerns, though it is
still somewhat vague; I'd like to see it fleshed out more before I
commit myself.

- michael



