From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aisb!jeff Tue Feb 11 15:26:09 EST 1992
Article 3624 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aisb!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Strong AI and Panpsychism
Message-ID: <1992Feb10.213422.4256@aisb.ed.ac.uk>
Date: 10 Feb 92 21:34:22 GMT
References: <1992Feb6.051835.21146@bronze.ucs.indiana.edu> <1992Feb6.185713.11504@psych.toronto.edu> <1992Feb6.222128.18717@bronze.ucs.indiana.edu>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 52

In article <1992Feb6.222128.18717@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <1992Feb6.185713.11504@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>
>>Well, I'm still a bit confused.  Are you an instrumentalist in
>>*practice*, but not in *theory*?  If so, what reason do you give for
>>saying that a humungous lookup table, which produces the right
>>behaviour, *doesn't* have beliefs.  If the answer is something like
>>"it doesn't have the appropriate functional relations", then do you 
>>have a working definition of what these functions are that isn't
>>simply motivated by ruling out lookup tables?          

It is not necessary to have a worked-out definition to know that
some things do not qualify. 

>       In that paper I make a brief
>attempt at characterizing the physical/functional explanation of why we
>say the things we do about qualia (not only why we say we have them,
>but why they seem so strange and mysterious), and come to the
>conclusion that it's because of the fact that later processing in the
>brain only has access to *informational* states in earlier processing,
>i.e. only to certain raw differences in state that make a causal
>difference, rather than to underlying physical states, or to distal
>causes in the environment; because of this, the system is simply thrown
>into various different states without having an explanation as to why.
>When you ask the system how it makes a distinction, e.g. between
>differently-coloured objects, it doesn't have any good answer available
>apart from something along the lines of "they're just different,
>qualitatively".

This sounds reasonable to me, despite my disagreement with DC about
qualia in, say, thermostats.  But note that this says qualia involve
a lack of access to certain things, not that whenever there is a lack
of access there are qualia.

>You can run an explanation like that without invoking *actual* qualia
>at all.  But then if one believes in actual qualia, it seems
>necessary to at least say that the properties of qualia cohere with the
>explanatory basis of why we say/think we have qualia, as otherwise
>the things we say wouldn't reflect the properties of qualia at all.

That also sounds reasonable.

>That's another reason why I was led to the view that the basis of
>qualia is information-processing, and in fact that one might expect
>qualia to arise from even the simplest kinds of information-processing.

This seems wrong.  Even if the basis were info-processing, that
would not make me expect that qualia would arise from even the
simplest kinds of information-processing.  I'd have to have some
additional reason to suppose that even the simplest would do.

-- jd
