From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!sdd.hp.com!news.cs.indiana.edu!bronze!chalmers Wed Feb  5 11:56:22 EST 1992
Article 3414 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!sdd.hp.com!news.cs.indiana.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Strong AI and Panpsychism
Message-ID: <1992Feb2.210341.4666@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Feb1.235203.28395@psych.toronto.edu> <1992Feb2.050613.28988@bronze.ucs.indiana.edu> <1992Feb2.191322.23599@psych.toronto.edu>
Date: Sun, 2 Feb 92 21:03:41 GMT
Lines: 35

In article <1992Feb2.191322.23599@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:

>I agree with the last sentence.  However, I still think that most
>researchers in the AI field subscribe to "Strong-AI" and not "Weak-AI",
>that when the human functions are modeled, phenomenal states are
>actually produced.  As far as AI work *explaining* phenomenal states, I
>don't think that that is its goal - rather, its philosophical *assumption*
>is that reproducing the psychological states reproduces the phenomenal
>states.  

That's more or less what I said.  I agree with the strong AI claim
that getting the psychology right will also get the phenomenology
right; I just don't think that this explains phenomenal states, nor
need such explanation be a goal of AI.

>In general, I agree that AI in general does not attempt to *explain*
>qualia.  But I do think that its claim to, *in principle*, produce
>minds with qualia (consciousness) is the most interesting feature
>of the position, both to its proponents and critics.  Otherwise, it
>would be just as controversial as weather modeling. 

Again, I tend to agree with this (though computational reproduction
of behaviour is not exactly uncontroversial); the point, again, is that
production is different from explanation.  My original point was more
or less that AI/cogsci is concerned with getting the psychology right.
The question of whether, once one has got the psychology right, one has
got the phenomenology right isn't really in the domain of AI/cogsci at
all, as it's not the kind of thing that can be settled by empirical
studies or computational models.  Of course, that doesn't mean that
AI practitioners can't or shouldn't have opinions about it.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."