From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!mips!spool.mu.edu!yale.edu!yale!mintaka.lcs.mit.edu!spdcc!dirtydog.ima.isc.com!ispd-newsserver!psinntp!scylla!daryl Mon Mar  9 18:34:04 EST 1992
Article 4152 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!mips!spool.mu.edu!yale.edu!yale!mintaka.lcs.mit.edu!spdcc!dirtydog.ima.isc.com!ispd-newsserver!psinntp!scylla!daryl
From: daryl@oracorp.com
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of understanding
Message-ID: <1992Feb28.211025.26278@oracorp.com>
Date: 28 Feb 92 21:10:25 GMT
Article-I.D.: oracorp.1992Feb28.211025.26278
Organization: ORA Corporation
Lines: 59

christo@psych.toronto.edu (Christopher Green) writes:
(in response to Paul Barton-Davis)

>> This shows that you don't fully grasp the Systems Reply. When you
>> address the question "do you understand chinese" to the man who has
>> learnt the rules, what are you addressing ? You claim that the system
>> is a part of him, but in what way ?

>More obscurantism from the artificial intelligentsia. In the very
>simple and obvious sense that there is no system at all apart from the
>activity of his own mind.

Like Paul said, you don't fully grasp the Systems Reply.
Why don't you try to understand it before dismissing it as
"obscurantism"? What is the point of such name-calling?

> If you really want your argument to rely wholly on the very dubious
> assumption that there are, somehow, two minds running around inside
> the man's head, feel free,

Two points: (A) I don't find the idea dubious at all. (B) That there
may be more than one mind inside one physical body is not an
assumption of Strong AI, it is a *consequence*. The Strong AI position
is that mind is computational. From this assumption it follows
directly that any number of minds can occupy one body, just as any
number of programs can run on one computer.

> but the utter tendentiousness of the claim is patently obvious to
> everyone not committed a priori to the belief that computers JUST
> GOTTA have minds.

Chris, I suppose you realize that you have crossed the line from
reasoned argumentation to simply insulting those who think differently
from you. To give a counter-example to your statement, I don't have
any particular commitment to AI; I am not involved in any AI research,
I have no dreams of owning a sentient computer, I don't care whether
computers have minds or not. The only reason I am interested in the
computational theory of mind is that it is the most reasonable
explanation I have heard for the way *human* minds work.

> In short, it's nothing short of an ad hoc shoring up of a failing
> research program, strictly in the sense outlined by Lakatos a
> quarter-century ago. It has all the symptoms: the claim has no
> empirical consequences whatsoever, and it complicates matters to no
> end apart from salvaging a flagging hypothesis.

The question of empirical consequences cuts both ways: if you claim
that two systems can have identical outward behavior, but one
understands while the other doesn't, then that claim has no empirical
consequences.

I don't understand why you (or Searle) bother to argue about Strong
AI. You don't take it seriously enough to even bother understanding
its assumptions, and yet, for some reason, you take it seriously
enough to publish supposedly reasoned articles refuting it.

Daryl McCullough
ORA Corp.
Ithaca, NY
