From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!christo Tue Mar 24 09:55:27 EST 1992
Article 4437 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!christo
From: christo@psych.toronto.edu (Christopher Green)
Subject: Re: The Systems Reply I
Organization: Department of Psychology, University of Toronto
References: <6374@skye.ed.ac.uk> <1992Mar11.201637.21875@psych.toronto.edu> <1992Mar11.211030.357@bronze.ucs.indiana.edu>
Message-ID: <1992Mar12.213046.7088@psych.toronto.edu>
Date: Thu, 12 Mar 1992 21:30:46 GMT

In article <1992Mar11.211030.357@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <1992Mar11.201637.21875@psych.toronto.edu> christo@psych.toronto.edu (Christopher Green) writes:
>
>But to come back down to earth: (1) It's ridiculous to say that the
>possession of meaning by a machine's symbols automatically implies
>that the machine isn't a TM (by definition?).  

I was sloppy. Let me rephrase: if there is any reference to the meanings
of the strings in the rules that the system executes, then it
is not a Turing Machine.
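The distinction can be made concrete with a toy sketch (my own illustration, not from the thread): a Turing machine's transition table mentions only which symbol appears and what state the machine is in, never what any symbol denotes.

```python
# Minimal Turing-machine sketch. The rules are purely syntactic:
# they match on symbol identity and state, not on meaning.

def run_tm(tape, rules, state="q0", accept="halt", blank="_"):
    """Execute rules of the form (state, symbol) -> (new_state, write, move)."""
    cells = dict(enumerate(tape))
    head = 0
    while state != accept:
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example: flip every bit. Nothing here "knows" that 0 and 1 are numbers.
rules = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}
print(run_tm("0110", rules))  # 1001_
```

A rule like ("q0", "understands-Chinese") -> ... would still match on the string's shape alone; as soon as a rule's applicability depended on what the string means, the device would no longer be a Turing machine in this sense.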

>(2) As I've said any
>number of times, there's no reasons why the symbols manipulated
>inside the TM (i.e. the computational tokens) must themselves be
>the bearers of meanings.  

You mean apart from the fact that the Chinese characters themselves must
be interpreted if the "system" is to understand Chinese like a native?
I think you're out to lunch here, Dave. Higher orders don't buy diddly here.
>
>>Let's try 15th order
>>modal predicate calculus -- a formal language even MORE complex
>>than any ordinary language I can think of (proof: you can derive things
>>in 15th order modal predicate calculus that you can't derive in, say,
>>English, or Chinese). The terms STILL have no meaning. In fact, they're
>>defined (just as in any Turing machine) not to have meaning. If there's
>>no meaning, then there can't be understanding. Whew.
>
>Chris, are you on acid today?  

Not for years now.

>15th order predicate calculus will suffer from the
>same kinds of problems as 1st-order predicate calculus.  Why limit
>the kinds of mechanisms used in cognition to those that mirror, or
>straightforwardly extend, the basic structure of language?  Why stick
>to systems whose entire raison-d'etre is to provide a framework for
>deduction, when so little human reasoning is deductive?  To presume
>that AI is limited to this kind of method is to set up a straw
>person.  

I disagree. If this person's straw, it's because the person IS straw.
AI's limited to formal systems. Perhaps sad, but true.

>Anyone who finds it bizarre that symbol-manipulation could produce
>(a) meaning (b) consciousness has to face up to the fact that it's equally
>bizarre that neuron-firing could produce meaning or consciousness.

Bizarre, perhaps, but we have iron-clad evidence that it does (or, at least,
that it's involved). We have no such evidence for symbol-manipulation.
Just a lot of claims and redefinitions of everyday words.

Okay, now I'll stop.
-- 
Christopher D. Green                christo@psych.toronto.edu
Psychology Department               cgreen@lake.scar.utoronto.ca
University of Toronto