Newsgroups: comp.ai.philosophy,comp.ai,comp.robotics
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!gyro
From: gyro@netcom.com (Scott L. Burson)
Subject: Minsky's new article
Message-ID: <gyroCyw3Jx.8sn@netcom.com>
Organization: Netcom Online Communications Services (408-241-9760 login: guest)
References: <39eaqk$nn9@cantaloupe.srv.cs.cmu.edu> <gyroCysG7u.8Hs@netcom.com> <HPM.94Nov5101751@cart.frc.ri.cmu.edu>
Date: Mon, 7 Nov 1994 08:57:33 GMT
Lines: 120
Xref: glinda.oz.cs.cmu.edu comp.ai.philosophy:21738 comp.ai:25005 comp.robotics:15094

In article <HPM.94Nov5101751@cart.frc.ri.cmu.edu> hpm@cs.cmu.edu writes:
>
>It is equally reasonable to extrapolate that the physical laws that
>explain what we understand about the workings of life and mind extend
>to the parts we haven't managed to examine yet.  If so, then mind can
>be simulated on computers, and fully intelligent and super-intelligent
>machines are possible.  Then we can expect that humans will be
>surpassed in mind by robots as they have in strength by power
>machinery and in speed by cars and planes.  That is a reasonable
>extrapolation, if not unassailable.  Additional factors could spoil
>the simple analysis.  (But I bet not)

Well, to that I would say this.  "Mind" is not a unidimensional thing.
Computers long ago surpassed humans in the ability to perform rapid
arithmetic, for instance, yet they have not begun to challenge humans' ability
to discern subtle patterns, to take a single but immense example.  Will they?
I don't know.  But the years I've spent studying AI and psychology have left me
a very great respect for the brain.  I will not be surprised if computers
somehow never manage to bridge that particular gap.  -- Which is not to say
there's no point in trying.

>>>quote by Vannevar Bush on the impossibility of ICBMs...
>> this quote is about a supposed technological impossibility, not a
>> physical one (physics being the relevant science in this case).
>
>You think artificial intelligences are physically impossible?
>If not, what's your point?

I meant to be referring to the idea of replacing one's brain with a computer.

But since I'm on the topic, I should show the rest of my hand.  Although I
think computers will someday do some very impressive things, I don't think
they will ever be conscious beings like people, with desires, imagination,
emotions.

To me this is so obvious as hardly to need stating, yet I run across many
people -- Marvin, I believe, among them -- who see no reason that these things
should not be possible.  Well, they're welcome to their opinion, but to me
it's obvious that consciousness is not a physical but a metaphysical property.
Marvin says in his article that arguments for the existence of a metaphysical
dimension to consciousness are circular, but I think there's a perfectly good
argument: experiences (qualia, as the philosophers say) are not physical
things.

It's an old argument, but for those who aren't familiar with it I'll give a
brief example.  I look at my screen right now, and there's a little yellow
rectangle that represents the cursor.  Now I know that yellow light has
certain physical properties, and that it stimulates the visual receptors in my
eyes in a certain way, and the receptors in turn send certain signals along
the optic nerve to the visual cortex, etc.; but nowhere in that purely
physical description of a sequence of events is there room for the
*experience* of yellowness.

To me, it's that simple.  I know there are people who have heard that argument
and don't accept it, but I don't know what to say to them.  I have the
impression, without having read Penrose's books, that he is in part trying to
answer various objections that might be raised to that argument, but I don't
know how successful he is, and anyway it's the kind of question that people
seem rarely to be persuaded on in either direction.

Anyway, just to finish sketching out the picture I have -- as I said, I think
computers will become far more useful, responsive, and easy to control than
they are now.  I expect they will someday be able to handle complex
sensorimotor tasks like walking and driving cars, and to accept commands by
voice.  But I don't think they will ever be able to decide on their own what
they should do.  I think that Clarke and Kubrick in _2001_ tapped into a very
profound truth: if a machine is placed in charge of anything, it will screw
up.

>>I think the opposite point is actually much easier to make.  Anyone who uses
>>computers in 1994 simply has to be impressed with their profound, infuriating
>>stupidity.
>
>As anyone in the 1920s would be impressed by the ludicrous
>ineffectiveness of rockets as transportation of any kind, never mind
>space travel.  At that time they had risen no more than a few hundred
>feet, at best.  More often they made big, unexpected explosions at the
>launch pad.

So?

You keep presenting arguments of the form "skeptics have been wrong in the
past, and so they are wrong now".  But of course this doesn't begin to follow.

There are also plenty of examples of things that people have thought were
right around the corner for years now, that still haven't happened.  Many of
them are in the medical field: a cure for cancer, for instance.  This is
relevant: even now, we don't begin to understand the human body, brain included.

>>But replacing the brain with a piece of artificial hardware?  No.  Not soon,
>>not ever, and I think it *does* sound silly.  Whether such articles as
>>Marvin's really have negative consequences for AI funding I have no idea, but
>>I think it's a valid concern for Sean to be raising.
>
>Like I said, the idea breaches a mental dichotomy, which is why you
>feel that gut reaction.  If you were smart, you would stand back from
>your emotional response and look at the idea dispassionately.  There
>is nothing known that physically makes it impossible.  Then you slowly
>train yourself to get used to it emotionally.  Then you teach others.
>That's exactly what the early space pioneers did for their equally
>disturbing idea.  Don't be a wimp, ruled by stone-age mental blocks.

The irony here is that I think it is you who are ruled by the mental blocks of
an obsolete paradigm.  The only way I can imagine that it would be plausible
to you that the human brain could be replaced by a machine would be if you
believe that we *are* machines -- just wet ones rather than silicon-based.  In
fact, other people I've talked to about this sort of thing, who hold views
that seem similar to yours, have told me that they agree that we are,
fundamentally, machines.

And to my way of thinking, the only way anyone could think of him/herself as a
machine is to have a world view that says that *everything* is a machine;
without that axiom, thinking of oneself as a machine would be very difficult
for any number of reasons, of which the "qualia" argument given above is but a
single example.  But it is the nature of a world view that it does not admit
contrary evidence; and there are a lot of people in this world who have grown
up with the mechanistic Cartesian world view, and who are very attached to it.

-- Scott

