From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!psinntp!sunic!seunet!kullmar!pkmab!ske Tue Mar 24 09:57:38 EST 1992
Article 4625 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!psinntp!sunic!seunet!kullmar!pkmab!ske
From: ske@pkmab.se (Kristoffer Eriksson)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply (was: Definition of understanding)
Message-ID: <6706@pkmab.se>
Date: 18 Mar 92 21:46:23 GMT
References: <1992Feb24.223405.28054@psych.toronto.edu> <1992Feb25.011840.24663@beaver.cs.washington.edu> <1992Mar16.230945.3769@psych.toronto.edu> <6388@skye.ed.ac.uk> <1992Feb25.184610.5199@psych.toronto.edu>
Organization: Peridot Konsult i Mellansverige AB, Oerebro, Sweden
Lines: 353


  I am reposting this. I think it didn't reach outside Europe, due to some
  failure on the way. It also seems fairly clear from the arguments that are
  still being used that most people have not read it (although I won't
  claim to have said anything very revolutionary). Some of the points seem
  to have been independently rediscovered, though. If anyone saw it the
  first time, tell me; otherwise I may repost more lost articles.

Original-Message-ID: <6628@pkmab.se>
Original-Date: 29 Feb 92 12:29:25 GMT
Original-Summary: An attempt to move the discussion a bit forward, also introducing some aspects that no one has noticed yet


OVERVIEW:

1. SYSTEMS AND EMPIRICAL THEORIES
2. MULTIPLE LEVELS OF THEORIES OR SYSTEMS
3. SYSTEMS IN THE CHINESE ROOM
4. WAYS TO HAVE MORE THAN ONE MIND
5. AFTERWORD

Some parties want to discount the Systems Reply. I argue that they do so
too soon.

I also try to show that several of the defenders of the Systems Reply
adopt too limited a picture of how the Systems Reply can be exemplified,
restricting themselves to only one of several possibilities.

Also bear in mind that my primary purpose is to put down the Chinese Room,
and in particular the internalized Chinese Room, as an argument against
Strong AI. Only after having achieved that might I agree to discuss
the correctness of Strong AI in itself, although perhaps with some of
the same arguments. For now, all I have to do is to show that the Chinese
Room argument is not safe against the Systems Reply. Furthermore, lest
anyone be confused, I don't do even that in this one article. Here I
just concentrate on the non-arguments against the Systems Reply
that are preventing further progress.

And I'm sorry that I have to waste a screenful of text just to explain
what this article is about, but judging from the debate this far, I'm
sure someone (or most) would attribute some other purpose to me if I
didn't.


In article <1992Feb25.184610.5199@psych.toronto.edu> christo@psych.toronto.edu (Christopher Green) writes:
>In article <1992Feb25.011840.24663@beaver.cs.washington.edu> pauld@cs.washington.edu (Paul Barton-Davis) writes:
>>
>>This shows that you don't fully grasp the Systems Reply. When you
>>address the question "do you understand chinese" to the man who has
>>learnt the rules, what are you addressing ? You claim that the system
>>is a part of him, but in what way ? 
>
>More obscuratism from the artificial intelligentisa. In the very simple and
>obvious sense that there is no system at all apart from the activity of
>his own mind.

>If you really want your argument to rely wholly on the very dubious 
>assumption that there are, somehow, two minds running around inside
>the man's head, feel free, but the utter tendentiousness of the claim
>is patently obvious to everyone not committed a priori to the belief
>that computers JUST GOTTA have minds.

You obviously still have no grasp of the Systems Reply.
You are dismissing the core of the Systems Reply out of hand, without
showing any sign of understanding it at all. You insultingly call it
"obscuratism", "utter tendentiousness", "patently obvious" and "ad hoc",
without even realising that there is a serious argument buried in it.
I find that quite presumptuous. How about trying to understand the argument
before dismissing it? Read on.

1. SYSTEMS AND EMPIRICAL THEORIES

> In short, its nothing short of an ad hoc shoring up of a failing research
>program strictly in the sense outline by Lakatos a quarter-century ago.
>It has all the symptoms: the claim has no empirical consequences whatsoever,
>and it complicates matters to no end apart from salvaging a flagging
>hypothesis.

You finish off by comparing it to adding non-observables to a physical
theory, in order to save it from observations contradicting it. I suppose
you mean that talking about "two minds in one head" (not a totally accurate
characterization of the Systems Reply) is to add ad hoc non-observable
entities to save the "theory" of Strong AI.

But it is not an ad hoc addition. It is not an addition at all. It is a
central part of the Systems Reply to view each "system" by itself, and
arranging for the Chinese Room to be executed inside someone's head instead
of in an actual room containing that head should not make any difference
to the number of logical systems that are involved.

Apparently, your claim is that the Chinese Room vanishes as a system of
its own when executed in someone's head, even though it goes through exactly
the same execution steps.

Do you hold the converse view too, that the man vanishes as an individual
when he is working inside the room, leaving only the room as a system?

I think that what matters in the Systems Reply is systems defined by their
logical function rather than by physical boundaries, and there are ways by
which one system can provide the platform for executing another logically
distinct system without actually *becoming* that other system.

Just take a look at multitasking computers. They execute several different
tasks at once, which may perform completely distinct functions, and may or
may not interact with each other. One task may be a word processing "system",
and another one an accounting "system". A third task is the operating system,
which administrates the other tasks. Surely you don't consider them
indistinguishable, even though there are absolutely no physical (or rather,
spatial) boundaries that separate them?

Perhaps the word "system" can be a bit misleading, but I don't think it is
unreasonable. Personally, I might say "process" instead, drawing attention
to the activity of execution rather than to the hardware.

Of course, the computer hardware can also count as a system here, independent
of the tasks it is executing. The interesting part is to note that one
system, the hardware, is executing other logical systems, the tasks.
Furthermore, the tasks may themselves be interpreters, executing yet further
systems.
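
To make the analogy concrete, here is a minimal sketch in Python (my own
illustration; the task names are invented). One "hardware" process runs two
logically distinct tasks by interleaving their steps, with a small scheduler
playing the role of the operating system. There is no spatial boundary
between the tasks, yet each remains a coherent system of its own:

    def word_processor():
        # One logical "system": builds up a document.
        document = []
        for word in ["hello", "world"]:
            document.append(word)
            yield "word processor: document is now %s" % document

    def accounting():
        # A completely distinct logical "system": tracks a balance.
        balance = 0
        for amount in [100, -40]:
            balance += amount
            yield "accounting: balance is now %d" % balance

    def scheduler(tasks):
        # The "operating system": administrates the other tasks,
        # giving each one step at a time (round-robin).
        while tasks:
            task = tasks.pop(0)
            try:
                print(next(task))
                tasks.append(task)   # not finished: back in the queue
            except StopIteration:
                pass                 # finished: drop the task

    scheduler([word_processor(), accounting()])

All three systems execute in one and the same address space; what separates
them is logical function, not location.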

Do you want to say that the idea of separating these logical systems "inside"
the computer hardware has "no empirical consequences whatsoever", as you do
in the same situation with a human as the hardware?

Well, in that case you are right! The partitioning into logical systems has
no consequences, a priori, at all. It is perfectly possible to describe the
hardware only, and to predict all empirical effects from that base alone, if
we are going to talk about predictive theories here. You can also view the
entire world that way, concentrating only on quark interactions to predict
everything. However, if you do it that way, you lose much of the natural
structure of what is happening, and much of the ability to really grasp
what is happening in meaningful chunks, and you face a computational
problem in actually calculating a prediction based on all those low-level
details.

Grouping all of reality, including computational entities, into larger or
smaller entities is something we do all the time, to cope with the immense
detail of the world. I don't see why that should be particularly
questionable in the case of what is going on inside a head. Provided, of
course, that one does not group together entities that don't actually
influence each other, or separate parts that do influence each other, and
provided that one derives the correct properties for them. We put them into
theories that let us make convenient predictions about them. There are many
ways they may have empirical consequences, but in the particular case where
you compare a theory that ascribes one single mind to the head with one that
ascribes two (and similar cases), it is fully possible to get
the same predictions both ways. However, one of them probably better
reflects the natural structure of that head, and therefore will be much
more convenient.

2. MULTIPLE LEVELS OF THEORIES OR SYSTEMS

It is important to note that we often distinguish different "levels" in
the reality we want to describe, and for good reasons. On one level, we
describe reality as quantum particles. On another level we describe the same
reality as atoms, molecules, and chemical reactions. On yet another level we
have biology. On another one, we may reach psychology. In principle, it
should be possible to derive the higher levels from the lower ones. It
would not be practical to junk all but the lowest level, though, since that
level does not deal directly with the questions we want to ask about the
other levels. The higher levels provide shortcuts to those answers, and can
also be investigated independently.

I want to claim that this is but another face of what I've already said
about systems that execute other systems and so on. (And I mention that
only in order to put the discussion about systems into perspective.)

I hope I have by now provided some justification for talking about systems,
and about systems within systems, without losing you all on the way. This is
getting a bit unstructured, so I think I'll stop here, instead of adding
more confusion, as far as that objective goes.

3. SYSTEMS IN THE CHINESE ROOM

The way I want to apply this to the Chinese Room man is as follows. If he has
consciously memorized the rule book of the Chinese Room, and is carrying
it out when he receives a piece of Chinese, and the rules are similar to
a program, referring to internal variables and such used during the
computation, which mean nothing special to the man, then what is happening
is completely analogous to a computer (the brain) running an interpreter
(the man's mind, system 1), which is interpreting the rules in the rule book,
carrying out their instructions step by step. The rules specify another
system, system 2, and the process of executing them constitutes that
system, quite independently of whether they are executed in someone's
mind or on other hardware. This second system provides for itself some
kind of understanding of Chinese, possibly complete with some kind of
mind of its own. Or, if that is too controversial, at least it provides
something that to the external world acts exactly as if it did just that.
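
As a minimal sketch of this interpreter picture (entirely my own
illustration; the rules, state names and romanized Chinese below are invented
stand-ins, not anything from Searle's paper): the host loop, system 1,
blindly applies whatever rule matches, while system 2 is constituted by the
rules being carried out:

    # A toy "rule book": (current state, input symbol) -> (new state,
    # output symbol). The states s0, s1, ... mean nothing to the host.
    RULES = {
        ("s0", "ni hao"): ("s1", "ni hao"),
        ("s1", "ni dong zhongwen ma"): ("s2", "wo dong"),
    }

    def host(state, symbol):
        # System 1: executes the matching rule, understanding nothing.
        # It sees only meaningless tokens; any "understanding" lives in
        # what the rules compute, not in this loop.
        return RULES.get((state, symbol), (state, "..."))

    state = "s0"
    for question in ["ni hao", "ni dong zhongwen ma"]:
        state, answer = host(state, question)
        print(question, "->", answer)

Asking the host function whether it understands Chinese makes no sense; the
rule set as a whole is the thing that answers.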

Whatever understanding the second system has, it keeps to itself, not
available by introspection to the man (unless he has been provided with
some way to translate the state of that system into something
comprehensible). The man can only view the internal state of the second
system (its variables and the current rule), which he is keeping on behalf
of the second system, and which is meaningless to him, so he does not
experience himself as understanding any of the Chinese. I think
everyone agrees that far.

The second system, though, acts, or guides the man to act, as if it/he does
understand Chinese. I think everyone agrees on that too.

In particular, shouldn't a Chinese question given to the man (and the second
system), asking whether the system understands Chinese (and experiences
itself as understanding it), be answered in the affirmative? Otherwise there
seems to be some flaw in the rules of the Room.

If, as I have argued, it is reasonable to identify this second system inside
the man, then I can't see any other conclusion than that that system does
understand Chinese, while the man on his own does not.

If one does not single out a second system inside the man, then the facts
still are that the man acts externally as if he does understand Chinese,
but says that he does not when asked in English, and one can then
reasonably say that the man is in a sense mistaken when answering in
English. He is correct in that he does not experience himself as
understanding Chinese, but he is wrong in that, when confronted with
Chinese, he actually exhibits understanding, as seen by external observers,
and even confirms that he does understand when asked in Chinese; much like
a split personality. As far as I can see, this should hold true whether you
believe in multiple systems in a head or not. But I think it is much more
comprehensible to simply say that the man, by following the rules, is
creating a second system that does understand what he himself does not
understand.

Perhaps that second system is really conscious in its own right (as long as
the man keeps it running), and does have real experiences, or perhaps it
does not and just manages to produce the same external behaviour anyway,
through the help of the man; but I cannot see that that question can be
answered by asking the man in English whether or not he experiences
understanding of Chinese (or that the man himself can determine it by
asking himself that question). I think it can be determined only by
examining the second system against whatever criteria one might have for
conscious systems. And that is further complicated if Searle simply does
not accept the second system itself testifying that it does understand; and
Searle of course cannot check it by introspection, since one cannot
introspect on the second system if one is oneself the first system.

(Personally, I think such a system, executed in someone's mind, might be
at least as handicapped by not having senses of its own as a computer
implementation would be, or even more so. Also, it would be handicapped by
low speed, and so on.)

When the Chinese Room man is used as a counter-argument against the Systems
Reply, I am intrigued by why one would think that the fact that the man
does not experience understanding himself would in any way contradict
the Systems Reply. That seems to me to be a completely unfounded assumption
about the Systems Reply. The Systems Reply says that the Chinese Room
system understands, not the man, and it of course defines systems in
relation to the programs they are executing, not the hardware that is
executing them.

Of course, I'm sure that the pro-Searle side for some reason views it as
self-evident that only the man's own awareness counts, but that is not
sufficient to use against the Systems Reply. Thus the two sides in
general fail to communicate with each other.

I want to point out that the Systems Reply does not depend on any
misunderstanding of what "understanding" means, although I see that several
debaters on the Systems Reply side do try to complicate the issue of
understanding, as alleged by the pro-Searle side. For my part, I have
concentrated on the simple experience of understanding throughout this
argument. The only complication (which is big enough) is the question of
*what* it is that is having the experience of understanding, as per the above.

(There is, though, some confusion if you try to solve the problem without
accepting all of the Systems Reply.)

4. WAYS TO HAVE MORE THAN ONE MIND

Now I want to discuss the different ways by which a head could contain more
than one mind, because there is more than one such way; a fact that no one
seems to have noticed in the previous discussion.

4.1

What I have used throughout my argument this far is the interpretive
approach. I hope you know what interpretation means in connection with
computers. This approach is the one that really uses the idea of systems
(or theories) on different levels, where each level explains the next higher
level. In this case, the man really has only one mind on the normal level
of minds in humans (that is, nothing strange is going on there), just above
the neuron level. There are no partitions of his brain that belong to some
other mind. Instead, the Chinese Room "mind" is created by the man's own
mind consciously following (executing) the rules, one by one, which he has
memorized, and keeping track of that mind's current state (as represented
according to the rules, not as experienced by the Chinese Room mind itself),
and performing input/output for it.

The interpretive approach seems to me to be the intended one for Searle's
argument, but I could be suffering from incomplete information.

With this approach, I don't see that it really should make any difference
whether the man keeps track of the Chinese Room state in his head, or on
paper, or by pulling levers in a physical Room. It is still the man who
drives the execution, using a set of incomprehensible variables. Also, the
place where the rules are found shouldn't make that much of a difference
either, as long as they are used as a long list of explicitly stated
rules. If the man learns the rules in such a way that he performs
them automatically, without examining the explicit rules, then we start
reaching the next approach. But I wouldn't call that "memorizing".

4.2

Then there is what one could call the compilative approach, which strangely
seems to be the one everyone else has tried to explain this far. In this
approach, there actually are multiple more or less independent minds in
the same brain (or the brain acts as if there are). How this would come
about in practice, I don't know, but after all, it does seem to happen in
mentally ill people. I don't find it entirely unreasonable, though,
that the brain (that is, parts of it) could be trained to automatically
perform the equivalents of the operations that the Chinese Room rules
specify, without conscious intervention (of the original consciousness), just
as one can automate most mental operations that one performs often enough.
The rules would be internalized on the same level as the man's ordinary mind:
a kind of automatic compilation.
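
As a loose computational analogy (again my own illustration, nothing from
the original argument): interpretation consults the explicit rules at every
step, while compilation turns them once into a directly executable
procedure, after which the written rules are never examined again, just as
an internalized skill runs without conscious rule-following:

    RULES = {"ni hao": "ni hao", "zaijian": "zaijian"}

    def interpret(symbol):
        # Interpretive approach: the explicit rule book is consulted
        # on every single step.
        return RULES[symbol]

    def compile_rules(rules):
        # Compilative approach: bake the rules into a procedure once.
        table = dict(rules)        # private copy, fixed at "compile time"
        def compiled(symbol):
            return table[symbol]
        return compiled

    respond = compile_rules(RULES)
    RULES["ni hao"] = "???"        # later changes to the rule book...
    print(interpret("ni hao"))     # ...affect the interpreter ("???"),
    print(respond("ni hao"))       # ...but not the compiled skill ("ni hao")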

In this case, the Chinese Room understanding and consciousness might be as
genuine as the man's English counterparts (or perhaps not), which may be the
reason why this is the version that has been touted in the discussion this
far as a counter-argument to Searle. It may be somewhat more direct in
trying to disprove Searle than the interpretive approach, but at the same
time it does not use all of the Systems Reply.

4.3

Then I can envisage a third approach too, where the Chinese Room rules are
formulated in such a way that they don't work with variables that are
meaningless to the man, but instead connect to the basics of the man's own
mind. A rule could for instance instruct the man to raise his own anxiety
level, calm down, become hungry, think of sunny days, modify his confidence
in some particular belief, construct some particular belief, and so on,
instead of instructing him to raise some variable that, unbeknownst to him,
represents an anxiety level in a second system, and so on.
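
Sketched in the same toy style (every "mental primitive" below is of course
hypothetical, invented only to show the contrast with 4.1): a rule under
this third approach does not update an opaque variable on behalf of a second
system, it invokes the host mind's own operations directly, so there is only
one mind being modified:

    class HostMind:
        # Stand-in for the man's own mind, with hypothetical primitives.
        def __init__(self):
            self.anxiety = 0
            self.beliefs = set()

        def raise_anxiety(self):
            self.anxiety += 1      # the rule acts on the host's own state

        def construct_belief(self, belief):
            self.beliefs.add(belief)

    # Approach 4.1: raise a variable that, unbeknownst to the host,
    # represents anxiety in a second system.
    second_system_state = {"var_17": 0}
    second_system_state["var_17"] += 1

    # Approach 4.3: the rule addresses the host's own mind directly;
    # whatever is experienced is experienced by the one mind there is.
    mind = HostMind()
    mind.raise_anxiety()
    mind.construct_belief("sunny days are pleasant")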

This way there would be only one mind in total, and it would hopefully
actually experience that which the Chinese Room is supposed to experience,
perhaps even understanding. Probably a side-effect in this case would be
that the man would actually soon learn Chinese. I don't think this is how
the Chinese Room rules are supposed to work, though, and I won't try to
determine whether the man would answer yes or no when asked in English
whether he understands Chinese.

5. AFTERWORD

Probably I haven't managed to convert any Searle proponents, but I hope
this provides (or did, the first time) a little more understanding of
what you are up against, and that it provides a base that you can question
and let me clarify further.

-- 
Kristoffer Eriksson, Peridot Konsult AB, Hagagatan 6, S-703 40 Oerebro, Sweden
Phone: +46 19-13 03 60  !  e-mail: ske@pkmab.se
Fax:   +46 19-11 51 03  !  or ...!{uunet,mcsun}!mail.swip.net!kullmar!pkmab!ske


