Newsgroups: sci.skeptic,alt.consciousness,comp.ai.philosophy,sci.philosophy.meta,rec.arts.books
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!pipex!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Penrose and Searle (was Re: Roger Penrose's fixed ideas)
Message-ID: <D0Cv18.14B@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute-alter.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <CzzuEu.F48@gpu.utcc.utoronto.ca> <D01LqA.I9q@cogsci.ed.ac.uk> <jqbD02vM6.B1@netcom.com>
Distribution: inet
Date: Mon, 5 Dec 1994 20:46:20 GMT
Lines: 197
Xref: glinda.oz.cs.cmu.edu sci.skeptic:97207 comp.ai.philosophy:23183 sci.philosophy.meta:15313

In article <jqbD02vM6.B1@netcom.com> jqb@netcom.com (Jim Balter) writes:
>In article <D01LqA.I9q@cogsci.ed.ac.uk>,
>Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>>In article <CzzuEu.F48@gpu.utcc.utoronto.ca> pindor@gpu.utcc.utoronto.ca (Andrzej Pindor) writes:
>>>In article <CzuAD4.4K6@cogsci.ed.ac.uk>,
>>>Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>>>>If the teletype TT can determine whether something is conscious
>>>>or not, what is it about the teletype TT that does the trick?
>>>>
>>>The same thing which you use to detect that other human beings have mental
>>>life. Or are you deciding about it on the basis of their facial contortions,
>>>body language etc? Don't you make conclusions about people's mental life
>>>on the basis of letters, for instance?
>>
>>Rather than trying to make out that I'm evil, why don't you
>>say what it is that you think is important in the teletype
>>TT?
>
>This sort of thing is one of the reasons it is so hard to have a conversation
>with you, Jeff.

Out of a long article, you pick a few things that you can easily
make out to be ridiculous.  You make no attempt to find anything
of value in what I say or even to find a way to interpret it as
anything other than silly or extreme.

Now, is it surprising that we can't have a conversation?  I find it
hard to believe it's even your *intention* to have a conversation.

>  Rather than answer Socratic questions that might lead you
>somewhere, you resist them and go off in some other direction, or just repeat
>yourself, just as when I ask you whether you understand the is/ought 
>dichotomy, and instead of responding, you comment that my claim that it
>is the threat of punishment that makes it unreasonable to kill people
>is "an interesting ethical position".  If you *were* to consider the 
>is/ought dichotomy (also known as the fact/value dichotomy), you would
>realize that I was distinguishing *reason* (which deals with facts) 
>from *ethics* or moral choices (which deal with values).  Even if you
>disagree with the is/ought dichotomy, you would have at least understood
>that *I* was considering it, and would not have so blatantly
>missed my point.

Why do you assume it's I who have to be led somewhere rather than,
say, you?

Your whole attitude -- with is/ought, etc -- seems arrogant and
patronizing.  And it continues here.  You assume I missed your
point, but in fact I could see what you were getting at.

>In this case, you make a fool out of yourself by falsely accusing Andrzej of
>"making out that I'm evil";

It was hyperbole.  I had in mind things like this:

  This particular evidence was picked by Turing, so as to isolate ourselves
  from human biases. Otherwise, you would be suggesting that classifying
  someone as 'conscious' depends on what he/she looks like, whether he/she
  has acceptable body language etc. This brings out in force a multitude of
  cultural biases (if someone is black, can he/she be really conscious? Or
  makes totally inappropriate gestures and body sounds? How about severely
  deformed humans?).

But those are not the only possibilities, and I have never
suggested any such criteria.

>Just how *do* you determine whether others, such as Andrzej or I,
>have a mental life?  That was his question. 

It is possible, you know, to ask questions without suggesting what
the answer must be.

BTW, I have answered that question many times.  I decide on the basis
of behavior and similarity of physical mechanisms (chiefly the brain
and nervous system).  But I'm entirely willing to accept that other
kinds of entities could be conscious.

> Why do you refuse to answer it?

I don't refuse to answer it.

>   But the answer he expects, the
>good faith one that will allow the dialog to continue, is "I read what they
>write, and make judgements based upon it; I put them through a Turing Test of
>sorts".  

Is that what he's expecting?  He expects me to agree with the TT?  But
then he thinks the alternative is "suggesting that classifying someone
as 'conscious' depends on what he/she looks like, whether he/she has
acceptable body language etc. This brings out in force a multitude of
cultural biases..."

>For instance, you have judged me to be arrogant.  You have done that
>solely from reading texts,

But I've also judged that you're a human, not a machine, an alien,
or a dog.  When only humans are likely sources of some text, it's
possible to reach many conclusions that might not otherwise be
available.

>                            not from observing body language, or reading my
>printout, or examining my internal structure, or checking to see whether I
>have an "internal dialog" (whatever that means)

It means such things as thinking to oneself "what an idiot that
Dalton is."

> Doesn't the claim that I'm arrogant imply some sort
>of cohesive mental life, a personality, a self with awareness and some ego to
>respond to an apparent criticism?  So how did you do it?  How did you come to
>this conclusion?  What magic property of texts allows us to come to such
>conclusions simply by reading them?  And do we really have to answer that
>question before accepting the fact that we can and *do*?

Now you touch on an interesting question here.  At least I find it so.
Of course we have to strip off the tendentious "magic".  But what is
it about your output, as opposed to (say) sentences selected at random,
that shows you're self-aware?  Also, how is it that one can argue that
certain animals are self-aware even though they cannot pass the TT?
What criteria are we using in that case?  Are they better or worse
than the verbal criteria of the TT?

But before any of that can be discussed, it seems that I have to
affirm the TT.  Why is that?

>Of course, we are all prejudiced against machines, because they have yet to
>prove themselves.  People generally display intellect, machines generally do
>not.  Whenever we have intelligent conversations, it almost always turns out
>to be a human at the other end, and in those rare cases such as Eliza and chess
>playing programs where it isn't, it has been easy to see how we have been
>misled by not escaping the very narrow limitations of those programs.  So of
>course we are prejudiced.  But we shouldn't let that leak into our 
>measurements and methods.

So where, according to you, do I let it leak in?

>>I want the test for AI to be as good a test as we can devise.
>>The best test, not the easiest one.
>
>Again you don't answer the question; why make the test harder than the one you
>use for humans?  

But I've answered this again and again!  In virtually every discussion
of the TT someone makes this point.  We use the TT on humans!  How can
you require a harder test for machines?  And so on, again and again.
Does no one ever get tired of this?  I certainly am.  That's why I'd
like to move on.

>A program with the physical limitations of both Hawking and
>Keller and the intellect of Quayle would still be impressive, would
>it not?

Yes, very impressive.  Of course, I have never said anything to
the contrary.

>But you seem to want to look at its listing.  Why?  What will that tell you?

Something about how it works.

>What is there to be found there that indicates intelligence or consciousness?

That's not something anyone yet knows enough to answer.  Indeed,
everything about "how it works" may turn out to be irrelevant in the
end.

But, to return to the terminator example, suppose we look at how
the terminator is constructed and programmed.  Perhaps we can find
out that the input to certain decisions goes through the terminator's
visual system.  Why is that kind of conclusion ruled out?

>The variable names?  What if the program wins the Obfuscated C contest? 

If all you want to do is make fun, why do you reply to me at all?

>     You also want to look
>for "internal dialog".  What is that?  Logging intermediate results to a file?
>Subvocalization?  (I'm sure the engineers can add it if you need it.)

I'm happy for them to add it.  My point about internal dialogue is
that a teletype TT might not be able to determine whether certain
of these things were happening or not.

> From
>the speculations of Dennett and Hawkins, one might conclude that "internal
>dialog" is an unnecessary artifact of evolution, one of those things that a
>non-blind watchmaker never would have included.  Why do you want to require
>it?  

I don't require it.  It's just an example of something that might
or might not be present and might be detected by some tests and not
by others.

>What good is it?  What has it got to do with consciousness?  It may not
>even be nearly as universal in humans as you imagine.

You know hardly anything about me, so don't think you know what I
imagine.

-- jd
