Newsgroups: sci.skeptic,alt.consciousness,comp.ai.philosophy,sci.philosophy.meta,rec.arts.books
Path: cantaloupe.srv.cs.cmu.edu!nntp.club.cc.cmu.edu!miner.usbm.gov!rsg1.er.usgs.gov!jobone!newsxfer.itd.umich.edu!europa.eng.gtefsd.com!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Penrose and Searle (was Re: Roger Penrose's fixed ideas)
Message-ID: <jqbD02vM6.B1@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <CzsHMy.B9n@gpu.utcc.utoronto.ca> <CzuAD4.4K6@cogsci.ed.ac.uk> <CzzuEu.F48@gpu.utcc.utoronto.ca> <D01LqA.I9q@cogsci.ed.ac.uk>
Distribution: inet
Date: Wed, 30 Nov 1994 11:22:54 GMT
Lines: 105
Xref: glinda.oz.cs.cmu.edu sci.skeptic:96661 comp.ai.philosophy:22889 sci.philosophy.meta:15143

In article <D01LqA.I9q@cogsci.ed.ac.uk>,
Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>In article <CzzuEu.F48@gpu.utcc.utoronto.ca> pindor@gpu.utcc.utoronto.ca (Andrzej Pindor) writes:
>>In article <CzuAD4.4K6@cogsci.ed.ac.uk>,
>>Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>>>If the teletype TT can determine whether something is conscious
>>>or not, what is it about the teletype TT that does the trick?
>>>
>>The same thing which you use to detect that other human beings have mental
>>life. Or are you deciding about it on the basis of their facial contortions,
>>body language etc? Don't you make conclusions about people's mental life
>>on the basis of letters, for instance?
>
>Rather than trying to make out that I'm evil, why don't you
>say what it is that you think is important in the teletype
>TT?

This sort of thing is one of the reasons it is so hard to have a conversation
with you, Jeff.  Rather than answer Socratic questions that might lead you
somewhere, you resist them and go off in some other direction, or just repeat
yourself.  It is just like when I asked whether you understand the is/ought
dichotomy: instead of responding, you commented that my claim that it is the
threat of punishment that makes it unreasonable to kill people is "an
interesting ethical position".  If you *were* to consider the is/ought
dichotomy (also known as the fact/value dichotomy), you would realize that I
was distinguishing *reason* (which deals with facts) from *ethics*, or moral
choices (which deal with values).  Even if you disagree with the is/ought
dichotomy, you would at least have understood that *I* was considering it,
and would not have so blatantly missed my point.

In this case, you make a fool out of yourself by falsely accusing Andrzej of
"making out that I'm evil"; what is evil about deciding whether people are
conscious or have mental life based upon facial contortions or body language?
We certainly make *some* judgements based on those without being evil.  Just
how *do* you determine whether others, such as Andrzej or me, have a mental
life?  That was his question.  Why do you refuse to answer it?  If you say,
"why, I watch for facial contortions", then Andrzej knows that he has to
carefully narrow his inquiry to those individuals whose faces you can't see
(in fact, he already did that by referring to letters; why do you so
stubbornly resist answering his question?).  Perhaps you have some other
answer, and he would have to respond to it.  But the answer he expects, the
good faith one that will allow the dialog to continue, is "I read what they
write, and make judgements based upon it; I put them through a Turing Test of
sorts".  For instance, you have judged me to be arrogant.  You have done that
solely from reading texts, not from observing body language, or reading my
printout, or examining my internal structure, or checking to see whether I
have an "internal dialog" (whatever that means) or checking to see whether I'm
conscious (I'm pretty sure you said, roundabout, that one of the ways, aside
from the TT, to determine whether a machine is conscious, is to look and see
whether it is conscious).  Doesn't the claim that I'm arrogant imply some sort
of cohesive mental life, a personality, a self with awareness and some ego to
respond to an apparent criticism?  So how did you do it?  How did you come to
this conclusion?  What magic property of texts allows us to come to such
conclusions simply by reading them?  And do we really have to answer that
question before accepting the fact that we can and *do*?

>>>If I recall correctly, the TTT is still confined to externally visible
>>>behavior.  That is, it doesn't include anything about internal workings.
>>
>>Because we judge other people without going into their internal workings. If
>>you say you are not prejudiced, why are you making the test for AI tougher?
>
>Now what are you suggesting I'm prejudiced against?  Machines?

"He called me prejudiced!  Prejudiced against machines!  How dare he do that!
I'm not evil!  I'm not prejudiced!  Arrf!  Arrf!"

I suppose it's a lot easier to get worked up into a rabid froth than to
actually answer the question.

Of course, we are all prejudiced against machines, because they have yet to
prove themselves.  People generally display intellect, machines generally do
not.  Whenever we have intelligent conversations, it almost always turns out
to be a human at the other end, and in those rare cases, such as Eliza or
chess-playing programs, where it isn't, it has been easy to see how we were
misled by staying within the very narrow limitations of those programs.  So of
course we are prejudiced.  But we shouldn't let that leak into our measurements
and methods.

>I want the test for AI to be as good a test as we can devise.
>The best test, not the easiest one.

Again you don't answer the question; why make the test harder than the one you
use for humans?  A program with the physical limitations of both Hawking and
Keller and the intellect of Quayle would still be impressive, would it not?
But you seem to want to look at its listing.  Why?  What will that tell you?
What is there to be found there that indicates intelligence or consciousness?
The variable names?  What if the program wins the Obfuscated C Contest?  Does
that affect whether it is conscious?  We can look at programs to see whether
they do the right thing when we know what algorithm is necessary, but what
algorithm is necessary for consciousness?  That only works for very localized,
pure problems.  After 26 years of systems programming, I have learned
(over and over) that the final proof is in the pudding.  You also want to look
for "internal dialog".  What is that?  Logging intermediate results to a file?
Subvocalization?  (I'm sure the engineers can add it if you need it.)  From
the speculations of Dennett and Dawkins, one might conclude that "internal
dialog" is an unnecessary artifact of evolution, one of those things that a
non-blind watchmaker never would have included.  Why do you want to require
it?  What good is it?  What has it got to do with consciousness?  It may not
even be nearly as universal in humans as you imagine.  I find that the more
familiar I am with a subject, the more confident I am about it, the less such
dialog occurs.  I usually think "what am I going to say next?" when I'm
prevaricating.  Is that what you want from AI?
-- 
<J Q B>
