Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!umn.edu!spool.mu.edu!agate!soda.berkeley.edu!gwh
From: gwh@soda.berkeley.edu (George William Herbert)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding: Real vs. Virtual (formerly "on meaning")
Summary: The "Central Self" concept flawed
Keywords: Central Self LONG
Message-ID: <vf3kbINN5fo@agate.berkeley.edu>
Date: 21 May 92 02:58:19 GMT
Article-I.D.: agate.vf3kbINN5fo
References: <1992May20.191738.18644@mp.cs.niu.edu> <1992May20.221931.20652@news.media.mit.edu> <RYAN.92May20195459@eas.gatech.edu>
Sender: gwh@soda.berkeley.edu (George William Herbert)
Reply-To: gwh@lurnix.com
Organization: U.C. Berkeley CS Undergraduate Association
Lines: 134
NNTP-Posting-Host: soda.berkeley.edu

In article <RYAN.92May20195459@eas.gatech.edu> ryan@eas.gatech.edu (Ryan Mulderig) writes:
>>>> On 20 May 92 22:19:31 GMT, minsky@media.mit.edu (Marvin Minsky) said:
>[...]
>MM> Right on.  And we can go a step further; the idea of the "brain" as a
>MM> unit is equally defective.  Each part of your brain is immersed in a
>MM> virtual reality, whose attributes are computed by another computer
>MM> called "the rest of the brain and the rest of the world".  Really
>MM> guys.  Are you ever going to question the fatal assumption that fouls
>MM> the history of philosophy: the idea of a Single Central Self, which
>MM> "means" and "understands" and looks out through its eyes and "sees"
>MM> the world?  Gosh, I'm tired of complaining about this.
>
>I think considering the conception of the "Single Central Self" to be
>a fatal assumption fouling the history of philosophy is unwarranted.
>The conception of a self is part of what separates higher order
>animals from lower order animals.  I do not believe that an amoeba has
>a conception of itself as separate from its environment.  I think my
>dog does, and I am sure I do.  Without this conception of a central
>self we would have no basis under which to face the universe as a
>person.  I do not subscribe to the mind/body duality that has caused
>some problems in western philosophy (but also allowed many advances),
>but I do hold that my body/mind system is independent in some respects
>from the surrounding universe, if only by virtue of my conception that
>there is a separation.  While that may sound as if I am begging the
>question, if I can consider myself (whatever "consider" and "myself"
>may actually be taken to mean) and act on this consideration, then
>that can be argued to be necessary and sufficient for me to be
>considered separate from my environment.

Immediately after Ryan posted this, he and I had a loooong and amusing
discussion about it on ICB.  I'll try to summarize my arguments, some
of his, and where we went with it all.  Any errors I make in
transcribing his intentions are mine (I didn't log the discussion).
	I'm going the long way around because I think that the discussion
shed some valuable light on the preconceptions and mindsets of both
sides.

	The summary of his initial position was that it's reasonable to
believe that the "central self" or "I", as a person might see themselves,
is an accurate representation of reality.

	I started to rebut this by arguing that a number of things that
a human being perceives its own "central self" or "I" as doing are in
fact done by sections of the brain operating in relative isolation,
almost as black boxes.  For instance, visual perception is mostly handled
at an automatic level by the visual cortex without interaction with
the rest of the brain.  Language is processed in several interconnected
but distinct areas, again fairly separate from the rest of the brain.
	I then took this argument to the conclusion that since
you can demonstrate independent units at a lower level than the "central
self", the concept of the "central self" as a tool for modeling or
understanding the mind as a whole is limited.
	His initial counter to this was that it was simply untrue: a
person "sees" someone and reacts to them, and sees themselves as a
coherent whole.  He argued that a collection of independently operating
entities could combine to produce a single unified self; indeed,
that's what people see themselves as.
	I countered that this was untrue, that many things people
think they're doing in their "central self" are handled in a black-box
manner by some part of the brain w/o interacting with other parts;
for instance, visual processing recognizes a face from the input from
the optic nerve and passes on the message "That's Joe's Face" to
the rest of the brain.  You or I think we recognize Joe, but in truth
it's done with little or no conscious input in the back of the brain.
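	To make the black-box point concrete, here's a minimal sketch of
the kind of message-passing I mean (in Python; the names and structure
are purely my own invention, not a model of real neuroanatomy).  The
"rest of the brain" only ever sees the symbolic label, never the raw
input or the recognition machinery:

    class VisualCortex:
        def __init__(self, known_faces):
            # maps raw input patterns to names; a stand-in for the
            # real (unknown) recognition machinery
            self.known_faces = known_faces

        def process(self, optic_nerve_input):
            # raw data in, symbolic message out -- the black box
            name = self.known_faces.get(optic_nerve_input)
            return "That's %s's face" % name if name else "unfamiliar face"

    class RestOfBrain:
        def react(self, message):
            # only the symbolic message ever arrives here
            print("conscious level receives:", message)

    cortex = VisualCortex({"pattern-42": "Joe"})
    RestOfBrain().react(cortex.process("pattern-42"))
    # -> conscious level receives: That's Joe's face
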
	After random arguing back and forth, I suggested a thought
experiment of a parrot with kluged-in interfaces to human language
centers and visual cortex.  I pointed out that those items would
continue to parse language and identify Joe, though the parrot would
continue to respond to and treat people like a parrot would.  He
complained that the parrot wouldn't know what to do with the inputs.
	He continued to push the point that it's valid to treat even
a system like that as a collective whole.  As an example, he pulled
out a computer running a lisp interpreter.  It doesn't matter what's
below the interpreter, as long as it runs.
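	A quick toy of what I take that to mean (my own example, in
Python rather than lisp): the same trivial "interpreter" runs unchanged
over two completely different machines, so nothing above it can tell,
or needs to care, what's below:

    class SiliconBackend:
        def add(self, a, b):
            return a + b                           # native arithmetic

    class SimulatedBackend:
        def add(self, a, b):
            # the same operation built from bare logic ops, as if we
            # were simulating the gates (non-negative ints only)
            while b:
                a, b = a ^ b, (a & b) << 1
            return a

    def evaluate(expr, backend):
        # a toy prefix-expression interpreter: ('+', 1, 2) -> 3
        op, a, b = expr
        if op == '+':
            return backend.add(a, b)
        raise ValueError("unknown op: %r" % (op,))

    for machine in (SiliconBackend(), SimulatedBackend()):
        print(evaluate(('+', 40, 2), machine))     # both print 42
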
	This moved towards an argument over at what level it is really
significant to look at a system's behaviour.  His position was that
he can use a lisp interpreter just fine w/o understanding how the gates
work on the chips in the machine.  We slowly argued around what level
of understanding of a system is significant, which took about an hour,
and came to some interesting resolutions.
	My final position was that AI research trying to emulate human
behaviours had to know the intermediate levels of operation of the mind/
brain/whatever, not just the bottom ones, to put together a machine we
could call intelligent.  Since we know how a human acts at the highest
level and at the neurobiology level, we need to understand the middle
levels (how large neural nets work and how people are wired) to emulate
people.
	His final position was that we don't need to understand the
intermediate levels; we're well on our way to solving a lot of the
problems that I "black boxed" early on (visual processing, language
processing), and if we string all these solutions together with a central
processor that "learns", we have a valid solution to the machine
intelligence problem.  He rightly pointed out that I'd been focusing
too hard on human-emulating solutions.
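	As I understood it, his architecture comes down to something
like the sketch below (again my own toy rendering in Python; the module
functions and the learner are pure invention, not anyone's real system):
solved black-box modules emit symbolic percepts, and a central component
learns only how to map combinations of percepts to responses, knowing
nothing about the modules' internals:

    def vision(stimulus):
        return "face:" + stimulus.get("image", "?")        # solved black box

    def language(stimulus):
        return "parse:" + stimulus.get("utterance", "?")   # solved black box

    class CentralLearner:
        # associates percept-combinations with responses; never looks
        # inside the modules that produced the percepts
        def __init__(self):
            self.associations = {}

        def respond(self, percepts):
            return self.associations.get(tuple(sorted(percepts)), "explore")

        def reinforce(self, percepts, response):
            self.associations[tuple(sorted(percepts))] = response

    learner = CentralLearner()
    stimulus = {"image": "Joe", "utterance": "hello"}
    percepts = [m(stimulus) for m in (vision, language)]
    print(learner.respond(percepts))         # -> explore (nothing learned)
    learner.reinforce(percepts, "greet Joe")
    print(learner.respond(percepts))         # -> greet Joe
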
	However, I think my basic position still stands.  At best
we will never understand humans without filling in the intermediate
levels of understanding of our own operation.  At worst, it's possible
that we won't be able to tie the individual elements together well
enough, or build a central learning program, without emulating how
people already do it.
	From that viewpoint, we have to know how the middle levels
of the operation work, and if we're striving to understand those levels
we can't fall into the trap of presuming the top level's view of things
is right.
	Of course, he's right also, in that from his viewpoint it's
likely that the intermediate levels won't be necessary to further AI,
and that we can treat the human perception of how we operate as valid.
I agree that's true; I don't talk to people trying to communicate
with their language center... "I" am talking to "them".
	I also believe in the limitations of my own world view in
that regard 8-)  Just because I act that way doesn't mean that it
explains what's really going on, or that I should believe I'm right
just because that's what I see.
	If he's right and we can do AI by looking at the problems from
the top down, his view of the central self problem remains valid.
If not, then we'll have to explore further and recognize the limitations
of that theory.  It will never be "correct", in that it glosses over
the actual details, but it will remain valid for explaining behaviour
to a certain extent in any case.

	I don't think that either of us basically changed our minds,
but I do feel that we both expanded them a bit.  It helps to argue in
realtime with someone every now and then to sharpen your perception of
the real issues involved, which were not what I thought they were when
I started the discussion.

PS: I did tell him that it was generally a bad idea to argue with Minsky,
mostly because Minsky knows these arguments backwards and forwards 8-)
He didn't believe me when I told him.  I am wondering if he will in
a short while 8-)

-george william herbert
gwh@lurnix.com  gwh@soda.berkeley.edu  gwh@ocf.berkeley.edu  gwh@gnu.ai.mit.edu