From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!usc!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Fri Jan 31 10:26:41 EST 1992
Article 3232 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!usc!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Jan29.001107.20084@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Jan23.015152.510@psych.toronto.edu> <1992Jan23.214130.27931@bronze.ucs.indiana.edu> <1992Jan26.174822.12526@psych.toronto.edu>
Date: Wed, 29 Jan 92 00:11:07 GMT
Lines: 99

In article <1992Jan26.174822.12526@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:

>As I point out to Drew McDermott, I'm not so sure that such systems *would* be
>so improbable, given the enormous number of descriptions under which a given
>physical system can be described, and the infinite possibilities for grouping
>matter into different groups.

I think you're just not reckoning with how complex the functional
organization would have to be.  Remember, it wouldn't just have to
reproduce the actual sequence of states that the program goes through;
it would have to get all the counterfactual structure right too -- i.e.
*if* this part were to be in such-and-such a different state, then the
rest of the system would change in such-and-such a way, and so on for
every possible state.  In any case, this discussion is pretty pointless
without hard figures for complexity, but I see no reason to believe
it remotely plausible that this complexity could arise randomly and
instantaneously in nature.
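[A toy illustration of the point -- my sketch, not Chalmers' -- in which
a "functional organization" is a complete transition function over all
states, while a running system only ever traverses one trajectory.
The machine and its names are made up for the example.]

```python
# Toy example (illustration only): a 3-state machine, given as a full
# transition table: transition[state][input] -> next state.
transition = {
    "A": {0: "A", 1: "B"},
    "B": {0: "C", 1: "A"},
    "C": {0: "C", 1: "B"},
}

def run(start, inputs):
    """The actual trajectory: the one path the system really takes."""
    state, trace = start, [start]
    for i in inputs:
        state = transition[state][i]
        trace.append(state)
    return trace

# The actual run exercises only a few transitions...
print(run("A", [1, 1, 0]))   # ['A', 'B', 'A', 'A']

# ...but *implementing* the organization means matching all
# 3 states x 2 inputs = 6 counterfactual transitions, not just
# the 3 that happened to occur on this run.
print(sum(len(row) for row in transition.values()))  # 6
```

A random lump of matter that happened to mirror one trajectory would
still fail to implement the organization unless every one of those
unvisited transitions also came out right.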

>>On the other hand, I don't think that panpsychism is so unreasonable.  I
>>think it's quite likely that thermostats have conscious states, if only
>>of a very limited kind.
>
>Well, I must admire you for your commitment to the consequences of your 
>position, if nothing else.  I would be very interested in knowing if this is
>a general view, if other strong-AI supporters apart from you and McCarthy
>actually believe this.

No, it's a very uncommon view as far as I can tell.  Note that it's a 
different claim to McCarthy's -- he thinks that thermostats have
beliefs, whereas I think that they have conscious experiences.  I'm
making no claim about cognition or intelligence, just one about
consciousness.  (If you asked me about thermostat cognition, I'd
say that it's fair to say that thermostats have the capacity to
represent, but only in a limited way that's probably not enough to
warrant the full ascription of belief.)

>However, this commitment seems to me to lead to *practical* panpsychism, which
>you seem to think unlikely, and not just the theoretical possibility of
>panpsychism, which you obviously support.

Well, let's distinguish the variety of panpsychism which claims
that there are intelligent cognitive processes everywhere -- which
I don't accept -- from the one that says there are conscious
experiences everywhere.  I accept the latter, but with the caveat
that these conscious experiences are extremely simple, not 
remotely close to the complex consciousness possessed by humans.

>A system such as a thermostat is
>very simple, and I do not at all find it hard to believe that various         
>arrangements of matter in the natural world have the same functional
>arrangement.  For that matter, various systems in the human body have such
>functional arrangements.  Does your immune system have beliefs?  Does your
>liver have conscious states, if only of a limited kind?   

I think that wherever there is information processing, there are qualia
(the philosophers' technical term for conscious experiences, and
probably a better term to use, as it carries fewer connotations of
intelligence).  Where there is complex information processing (as in
a human mind), there are complex qualia.  But insofar as the liver
and the immune system process information, there are qualia there.

>I am quite willing to admit that this reductio argumentation is not in and
>of itself a logical argument against Strong AI.  I do believe that such a 
>view renders concepts such as "consciousness", "thought", and "mind" virtually
>meaningless, however.  It is *this* aspect of Strong AI which I find the
>most worrisome.

I don't accept that this renders "consciousness" meaningless.  I should
stress that this isn't a reductive analysis of consciousness, along
the lines that e.g. Dennett might give -- i.e. "look, consciousness
is such a trivial thing that even thermostats might possess it".  Rather,
I'm starting from a strong realist position about consciousness, 
taking it to refer to the really mysterious part of the mind -- the
subjectivity, the "what it is like to be", in Nagel's phrase -- and
then saying that it may well turn out that surprisingly and
counterintuitively, even thermostats possess this to a very limited
extent.  

If you want to talk about what makes humans and thermostats different,
you can still talk about cognitive processes, or intelligence; or
you can talk about complex consciousness, which thermostats certainly
don't have (they have at most three distinguishable conscious states).
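[For concreteness, here is my own sketch -- not from the article -- of
how a thermostat's functional organization collapses to roughly three
distinguishable states; the function name and the hysteresis band are
hypothetical choices for the illustration.]

```python
# Hypothetical sketch: a thermostat's state space reduced to three
# functionally distinguishable states.
def thermostat_state(temp, setpoint, band=1.0):
    """Map a temperature reading to one of three states."""
    if temp < setpoint - band:
        return "too_cold"   # heater on
    if temp > setpoint + band:
        return "too_hot"    # heater off
    return "in_range"       # no action

# Sweep a range of readings: only three states ever show up.
states = {thermostat_state(t, 20.0) for t in range(10, 31)}
print(sorted(states))  # ['in_range', 'too_cold', 'too_hot']
```

However fine-grained the physics underneath, nothing in the device's
behavior distinguishes more than these three conditions, which is the
sense in which its functional (and, on this view, conscious) repertoire
is so limited.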

I don't think this is an immediate consequence of strong AI; certainly
most strong AI supporters wouldn't accept this.  I do think that it
follows as a plausible conclusion from certain considerations and
arguments about the relationship of consciousness to its functional
embodiment, but I don't want to get into these here.  I also find
that after living with the idea for a while, it's not such a
counterintuitive idea.  I don't see any reason to deny consciousness
to the information-processing in dogs, birds, or flies, although
the consciousness gets progressively simpler as the processing gets
simpler.  Ascribing very limited conscious states to thermostats
seems to be a natural extension.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
