From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!elroy.jpl.nasa.gov!usc!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Tue Mar 24 09:55:04 EST 1992
Article 4404 of comp.ai.philosophy:
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply I
Message-ID: <1992Mar11.211030.357@bronze.ucs.indiana.edu>
Date: 11 Mar 92 21:10:30 GMT
References: <1992Mar9.171606.6886@psych.toronto.edu> <6374@skye.ed.ac.uk> <1992Mar11.201637.21875@psych.toronto.edu>
Organization: Indiana University
Lines: 68

In article <1992Mar11.201637.21875@psych.toronto.edu> christo@psych.toronto.edu (Christopher Green) writes:

>Ah, once again I leap into the breach! Jeff, you're absolutely right. If
>your phraseology convinces the hordes that have been unconvinced by me,
>then terrific. It is very straightforward. The symbols (by definition!)
>have no meanings. That's what makes the Chinese room qualify as a Turing
>Machine. It doesn't matter how many symbols you have. It doesn't matter how 
>complex the rules are. It doesn't matter if you stick the machine inside
>a robot. It doesn't matter what else you do. If the symbols acquire 
>explicit referents (which, BTW, is only PART of meaning), you no longer
>have a Turing machine.

This is wonderful!  Who ever thought the argument would be so easy?

But to come back down to earth: (1) It's ridiculous to say that the
possession of meaning by a machine's symbols automatically implies
that the machine isn't a TM (by definition?).  (2) As I've said any
number of times, there's no reason why the symbols manipulated
inside the TM (i.e. the computational tokens) must themselves be
the bearers of meanings.  The bearers of meaning may be much more
complex, high-order structures (think e.g. of patterns of activity
in a connectionist network).  To presume otherwise is to commit
a conflation of representations with computational tokens under
the ambiguous word "symbol".
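The token/representation distinction above can be made concrete with a toy
sketch (my own illustration, not anything from the thread): in a distributed
scheme, a concept corresponds to a pattern of activity across many units, and
no individual unit -- no computational token -- stands for anything on its own.

```python
# Toy distributed representation: a "concept" is a pattern over 6 units.
# The individual unit values are the computational tokens; none of them
# carries meaning by itself.  Only whole patterns are candidate
# bearers of content.
concepts = {
    "dog":   (1, 1, 0, 0, 1, 0),
    "cat":   (1, 0, 1, 0, 0, 1),
    "truck": (0, 0, 0, 1, 1, 1),
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def decode(pattern):
    # The pattern's "content" is whichever stored pattern it best matches.
    return max(concepts, key=lambda name: dot(pattern, concepts[name]))

# A degraded copy of the "dog" pattern still decodes correctly ...
noisy_dog = (1, 1, 0, 0, 0.6, 0)
assert decode(noisy_dog) == "dog"
# ... but any single unit taken alone denotes nothing: the first unit
# has value 1 in both the "dog" and "cat" patterns.
```

The point of the sketch is just that "symbol" is ambiguous between the tokens
(the unit values) and the representations (the patterns); the tokens being
meaningless leaves the patterns free to be the meaningful items.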

>Now, keeping that constraint in mind, consider
>predicate calculus. It's pretty complex. It's infinite, in fact. No 
>matter how you execute it, however, the symbols have no meanings until
>you explicitly give them some. Dave Chalmers says that predicate calculus
>isn't complex enough; that maybe if you get even more complex, understanding,
>consciousness, and maybe even qualia result. Okay. Let's try 15th order
>modal predicate calculus -- a formal language even MORE complex
>than any ordinary language I can think of (proof: you can derive things
>in 15th order modal predicate calculus that you can't derive in, say,
>English, or Chinese). The terms STILL have no meaning. In fact, they're
>defined (just as in any Turing machine) not to have meaning. If there's
>no meaning, then there can't be understanding. Whew.

Chris, are you on acid today?  Let's see: (1) I didn't say anything
about "understanding" or "semantics"; I was only talking about
consciousness.  To my mind semantics is a quite separate issue, and
much simpler.  (2) 15th-order predicate calculus will suffer from the
same kinds of problems as 1st-order predicate calculus.  Why limit
the kinds of mechanisms used in cognition to those that mirror, or
straightforwardly extend, the basic structure of language?  Why stick
to systems whose entire raison d'etre is to provide a framework for
deduction, when so little human reasoning is deductive?  To presume
that AI is limited to this kind of method is to set up a straw
person.  (3) I haven't said anything about whether a predicate-calculus-based
system could have its own semantics, but if pressed I'd say
probably, especially if it were causally hooked up in the right way to
the external world -- though not the rich kind of semantics that humans
have, given their much richer cognitive mechanisms.  In any case, the
important condition for the CR is consciousness.  (4) "The terms STILL have
no meaning" once again commits the conflation that I mentioned
above.
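To make the "framework for deduction" point vivid, here is a purely formal
illustration of my own (not from either poster): a proof engine that closes a
set of strings under modus ponens by pattern-matching alone.  The tokens "P",
"Q", "R" are never assigned referents anywhere in the machinery; derivation
proceeds entirely by shape.

```python
# A minimal uninterpreted proof engine: modus ponens as pure symbol
# manipulation.  A conditional is the tuple ("if", X, Y); from X and
# ("if", X, Y) we may add Y.  No symbol is ever interpreted.
def modus_ponens(theorems):
    """Return the closure of `theorems` under modus ponens."""
    theorems = set(theorems)
    changed = True
    while changed:
        changed = False
        for t in list(theorems):
            if isinstance(t, tuple) and t[0] == "if" and t[1] in theorems:
                if t[2] not in theorems:
                    theorems.add(t[2])
                    changed = True
    return theorems

derived = modus_ponens({"P", ("if", "P", "Q"), ("if", "Q", "R")})
assert "R" in derived  # "R" is derived, though nothing interprets it
```

Whether such shape-driven shuffling could ever add up to semantics is exactly
what is at issue; the sketch only shows what "uninterpreted derivation" means.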

Anyone who finds it bizarre that symbol-manipulation could produce
(a) meaning (b) consciousness has to face up to the fact that it's equally
bizarre that neuron-firing could produce meaning or consciousness.
Yet it does.  So arguments from counter-intuitiveness carry very little
weight, as we already know that something very counter-intuitive is
going on.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."


