From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Fri Jan 31 10:26:55 EST 1992
Article 3257 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Jan29.164812.2514@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Jan23.214130.27931@bronze.ucs.indiana.edu> <1992Jan26.174822.12526@psych.toronto.edu> <1992Jan29.001107.20084@bronze.ucs.indiana.edu>
Date: Wed, 29 Jan 1992 16:48:12 GMT

Well, it appears that David has out-panpsychismed me...

In article <1992Jan29.001107.20084@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <1992Jan26.174822.12526@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>
>>As I point out to Drew McDermott, I'm not so sure that such systems *would* be
>>so improbable, given the enormous number of descriptions under which a given
>>physical system can be described, and the infinite possibilities for grouping
>>matter into different groups.
>
>I think you're just not really considering how complex the functional
>organization would have to be.  Remember, it wouldn't just have to
>reproduce the actual function that goes on in a program, but it
>would have to have all the counterfactual function right too -- i.e.
>*if* this part were to be in such-and-such a different state, then the
>rest of the system would change in such-and-such a way, etc.  In any
>case, this discussion is pretty pointless without hard figures for
>complexity, but I don't see any reason to believe that it's remotely
>plausible that this complexity could arise randomly and instantaneously
>in nature.

Yes, I agree that without hard figures it is difficult to judge.  I'm
willing for now to drop the plausibility argument, and just argue
possibility.

>>>On the other hand, I don't think that panpsychism is so unreasonable.  I
>>>think it's quite likely that thermostats have conscious states, if only
>>>of a very limited kind.
>>
>>Well, I must admire you for your commitment to the consequences of your 
>>position, if nothing else.  I would be very interested in knowing if this is
>>a general view, if other strong-AI supporters apart from you and McCarthy
>>actually believe this.
>
>No, it's a very uncommon view as far as I can tell.  Note that it's a 
>different claim to McCarthy's -- he thinks that thermostats have
>beliefs, whereas I think that they have conscious experiences.  

I would be interested in hearing you spell out what you see as the
consequences of this difference.  As I interpret it, "belief" requires
consciousness, and conscious experience would at least imply some
beliefs.

> I'm
>making no claim about cognition or intelligence, just one about
>consciousness.  (If you asked me about thermostat cognition, I'd
>say that it's fair to say that thermostats have the capacity to
>represent, but only in a limited way that's probably not enough to
>warrant the full ascription of belief.)
>
>>However, this commitment seems to me to lead to *practical* panpsychism, which
>>you seem to think unlikely, and not just the theoretical possibility of
>>panpsychism, which you obviously support.
>
>Well, let's distinguish the variety of panpsychism which claims
>that there are intelligent cognitive processes everywhere -- which
>I don't accept -- from the one that says there are conscious
>experiences everywhere.  I accept the latter, but with the caveat
>that these conscious experiences are extremely simple, not 
>remotely close to the complex consciousness possessed by humans.
>
>>A system such as a thermostat is
>>very simple, and I do not at all find it hard to believe that various         
>>arrangements of matter in the natural world have the same functional
>>arrangement.  For that matter, various systems in the human body have such
>>functional arrangements.  Does your immune system have beliefs?  Does your
>>liver have conscious states, if only of a limited kind?   
>
>I think that wherever there is information processing, there are qualia
>(the philosophers' technical term for conscious experiences, and
>probably a better term to use, as it carries fewer connotations of
>intelligence).  Where there is complex information processing (as in
>a human mind), there are complex qualia.  But insofar as the liver
>and the immune system process information, there are qualia there.

I would be *very* interested in even the framework of an explanation of
how information processing produces qualia, since this would be a
solution to a problem for which even Fodor admits Functionalism has no
answer.  As far as I can see, there could easily be information
processing without qualia.  Do you have an argument, or is this merely
an act of faith?

>>I am quite willing to admit that this reductio argumentation is not in and
>>of itself a logical argument against Strong AI.  I do believe that such a 
>>view renders concepts such as "consciousness", "thought", and "mind" virtually
>>meaningless, however.  It is *this* aspect of Strong AI which I find the
>>most worrisome.
>
>I don't accept that this renders "consciousness" meaningless.  I should
>stress that this isn't a reductive analysis of consciousness, along
>the lines that e.g. Dennett might give -- i.e. "look, consciousness
>is such a trivial thing that even thermostats might possess it".  Rather,
>I'm starting from a strong realist position about consciousness, 
>taking it to refer to the really mysterious part of the mind -- the
>subjectivity, the "what it is like to be", in Nagel's phrase -- and
>then saying that it may well turn out that surprisingly and
>counterintuitively, even thermostats possess this to a very limited
>extent.  

Well, if saying that thermostats are conscious doesn't render the notion
empty, I don't know what does...


But seriously, would you also accept that rocks have consciousness?  
After all, they behave much like a thermostat: if the temperature
goes up, they change state (expand); if the temperature goes down,
they change to a different state (contract); and if the temperature
stays the same, they stay in the same state.  Note that, at the 
very least, this is what the main "information processing" component of
the thermostat (usually a bimetallic strip) does.  If this is the case,
then *every* hunk of matter is *at least* as "conscious" as a thermostat, 
and has *at least* the same "information processing" capacity.   Now
we *definitely* have panpsychism!
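To make the analogy concrete, the entire "information processing" being
attributed here can be written down as a trivial three-state mapping (a
sketch only; the function and state names are mine, not anything from the
thermostat literature):

```python
def next_state(current_temp, new_temp):
    """Return the state a bimetallic strip -- or a rock -- enters
    when the temperature changes.  Three distinguishable states,
    and nothing more."""
    if new_temp > current_temp:
        return "expanded"
    elif new_temp < current_temp:
        return "contracted"
    else:
        return "unchanged"

# Both a thermostat's sensing element and a rock realize this same mapping:
print(next_state(20, 25))  # expanded
print(next_state(20, 15))  # contracted
print(next_state(20, 20))  # unchanged
```

The point of the sketch is just that nothing in the mapping distinguishes
the thermostat's strip from any other hunk of thermally expanding matter.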

>If you want to talk about what makes humans and thermostats different,
>you can still talk about cognitive processes, or intelligence; or
>you can talk about complex consciousness, which thermostats certainly
>don't have (they have at most three distinguishable conscious states).
>
>I don't think this is an immediate consequence of strong AI; certainly
>most strong AI supporters wouldn't accept this.  I do think that it
>follows as a plausible conclusion from certain considerations and
>arguments about the relationship of consciousness to its functional
>embodiment, but I don't want to get into these here.  I also find
>that after living with the idea for a while, it's not such a
>counterintuitive idea.  I don't see any reason to deny consciousness
>to the information-processing in dogs, birds, or flies, although
>the consciousness gets progressively simpler as the processing gets
>simpler.  Ascribing very limited conscious states to thermostats
>seems to be a natural extension.

Well, David, I think you win the Panpsychism Award.  Honestly, I don't see
how this notion leaves us with any utility for mental terms.  I would be
interested in reading your view on this. 

- michael
