From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Mon Mar  9 18:34:25 EST 1992
Article 4185 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Definition of understanding
Message-ID: <1992Mar2.031342.27459@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Feb25.175012.8924@oracorp.com> <1992Feb28.022105.28548@bronze.ucs.indiana.edu> <1992Feb28.165550.13014@psych.toronto.edu>
Date: Mon, 2 Mar 92 03:13:42 GMT
Lines: 72

In article <1992Feb28.165550.13014@psych.toronto.edu> christo@psych.toronto.edu (Christopher Green) writes:

>But Dave, surely "maybe things get radically different when they get
>really complex" is just no argument at all.

Of course it's not much of an argument, but then it's responding to
something that's not an argument, but an appeal to intuition.  As I
said in the last post, the points about complexity are just an attempt
at weakening the force of those intuitions.  Presumably most people
think that 10 neurons can't produce consciousness, but that a billion
might.  So it's not implausible that complexity might make a difference
when we're talking about symbols rather than neurons.

Moving to meta-discussion for a moment:

Anyone who's had much involvement with it knows that the Chinese-room
arguments always come down to "The system [or virtual person]
understands" on one side, versus "That's ridiculous!" on the other,
and that further progress is usually very difficult to make.
I suggest a moratorium on any discussions that simply tread over
this old ground, as the last couple of weeks have.  It's amazing
how these Chinese-room discussions drag things down to the lowest
common denominator.  After a month or two of moderately interesting
discussions, treating complex topics in a reasonably sophisticated
way, the group becomes trivial and deathly dull in a flash.  It's
not unlike yelling "fire!" in a crowded lecture hall.

If the Chinese room is to be discussed at all, I suggest that
discussants be required to give some (hopefully novel) *argument* in
support of their position, over and above the usual vehement
assertions.  (And over and above (1) "But the system *behaves* right"
on the pro-AI side, or (2) "But the person has *memorized* the other
system" on the anti-AI side.)  I gave such an argument a while ago
with the "fading qualia" thought-experiment (it should be clear
enough how this applies to the memorization case).  If there are
substantive arguments on the anti-AI side, I'd like to see them.

>As I suggested earlier,
>try the same project with an artifical language of, say, five symbols
>and three rules. Then try it with ten symbols and six rules. Try it
>with, say, all of the propositional calculus. That's a pretty complex
>artificial language that is syntactically specified. Is there any hint
>that something mysterious is going on; that consciousness might slowly
>be welling up of its own volition (pun intended)? No. Not one iota.

Well, this doesn't prove much.  Presumably the standard view in AI
is that any system constructable from ten symbols/six rules, or
even the whole propositional calculus, is far too simple to
support any consciousness, whether implemented on a normal computer
or by memorization.
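
For concreteness, the kind of project Green describes can be sketched in a few lines of modern code.  This is a toy illustration only: the five symbols and three string-rewrite rules below are invented for the example, not taken from either post.

```python
# A purely syntactic system: five symbols, three rewrite rules.
# Everything here is shape-shuffling with no interpretation attached,
# which is exactly the point of Green's exercise.

SYMBOLS = {"A", "B", "C", "D", "E"}

# Three rules, each mapping a substring to a replacement.
RULES = [
    ("AB", "C"),
    ("CD", "E"),
    ("EE", "A"),
]

def step(s):
    """Apply the first rule whose left-hand side occurs in s."""
    for lhs, rhs in RULES:
        if lhs in s:
            return s.replace(lhs, rhs, 1)
    return s  # no rule applies: s is already in normal form

def normalize(s, max_steps=100):
    """Rewrite repeatedly until no rule applies (with a step bound)."""
    for _ in range(max_steps):
        nxt = step(s)
        if nxt == s:
            return s
        s = nxt
    return s

# e.g. "ABD" -> "CD" -> "E", and then no rule applies.
```

One can carry out every derivation this system permits by hand and, as Green says, find no hint of anything mental; the disputed question is only whether that observation scales to systems with vastly more symbols and rules.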

Of course, if you ask me, liberal about qualia as I am, I think
that such systems do possess limited consciousness.  But as is the
case with all consciousnesses apart from our own, you can't
know about these directly, so empirical evidence doesn't count for
much.

Finally, these discussions would be much improved if people eschewed
the use of the word "understanding", which only seems to cause
problems because of its ambiguity.  Even anti-AI people can concede
that all "understanding" comes down to is (1) a certain functional
capacity, e.g. for appropriate behaviour, and (2) a certain
phenomenology, i.e. associated conscious experiences.  In the
Chinese-room case the former is not in dispute, so the dispute is
obviously about the latter.  If people stuck to talking about
consciousness or qualia, instead of "understanding", things might be
much clearer, and there would be fewer irrelevant arguments.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
