Newsgroups: sci.skeptic,alt.consciousness,comp.ai.philosophy,sci.philosophy.meta,rec.arts.books
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!casaba.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Penrose and Searle (was Re: Roger Penrose's fixed ideas)
Message-ID: <jqbD0Ds24.GHo@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <CzzuEu.F48@gpu.utcc.utoronto.ca> <D01LqA.I9q@cogsci.ed.ac.uk> <jqbD02vM6.B1@netcom.com> <D0Cv18.14B@cogsci.ed.ac.uk>
Distribution: inet
Date: Tue, 6 Dec 1994 08:39:40 GMT
Lines: 482
Xref: glinda.oz.cs.cmu.edu sci.skeptic:97271 comp.ai.philosophy:23233 sci.philosophy.meta:15338

In article <D0Cv18.14B@cogsci.ed.ac.uk>,
Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>In article <jqbD02vM6.B1@netcom.com> jqb@netcom.com (Jim Balter) writes:
>>In article <D01LqA.I9q@cogsci.ed.ac.uk>,
>>Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>>>In article <CzzuEu.F48@gpu.utcc.utoronto.ca> pindor@gpu.utcc.utoronto.ca (Andrzej Pindor) writes:
>>>>In article <CzuAD4.4K6@cogsci.ed.ac.uk>,
>>>>Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>>>>>If the teletype TT can determine whether something is conscious
>>>>>or not, what is it about the teletype TT that does the trick?
>>>>>
>>>>The same thing which you use to detect that other human beings have mental
>>>>life. Or are deciding about it on the basis of their facial contortions,
>>>>body language etc? Dont't you make conclusions about people's mental life
>>>>on the basis of letters, for instance?
>>>
>>>Rather than trying to make out that I'm evil, why don't you
>>>say what it is that you think is important in the teletype
>>>TT?
>>
>>This sort of thing is one of the reasons it is so hard to have a conversation
>>with you, Jeff.
>
>Out of a long article, you pick a few things that you can easily
>make out to be ridiculous.  You make no attempt to finf anything
>of value in what I say or even to find a way to interpret it as
>anything other than silly or extreme.

I pick out those things that are problematic.  Do you really want me to spend
a lot of time posting "Hey, Jeff, that made a lot of sense and was a valuable
contribution to the health of the net?"  Forget it, it won't happen.
I pick out those things that illustrate the points I want to make.  Why should
it be otherwise?  As for your "no attempt" claim, that is simply a lie.
You may prefer the term "hyperbole"; I don't.

>Now, is it surprising that we can't have a conversation?  I find it
>hard to believe it's even your *intention* to have a conversation.

Given these sorts of moronic and childish responses, no, I don't think
we can have a conversation.  As I just said, "This sort of thing is one of
the reasons it is so hard to have a conversation with you, Jeff."  In the
posting you are responding to, I *tried* to have a conversation with you.
But you simply ridicule it and then repeat the same exact sins.

>>  Rather than answer Socratic questions that might lead you
>>somewhere, you resist them and go off in some other direction, or just repeat
>>yourself, just as when I ask you whether you understand the is/ought 
>>dichotomy, and instead of responding, you comment that my claim that it
>>is the threat of punishment that makes it unreasonable to kill people
>>is "an interesting ethical position".  If you *were* to consider the 
>>is/ought dichotomy (also known as the fact/value dichotomy), you would
>>realize that I was distinguishing *reason* (which deals with facts) 
>>from *ethics* or moral choices (which deal with values).  Even if you
>>disagree with the is/ought dichotomy, you would have at least understood
>>that *I* was considering it, and would not have so blatantly
>>missed my point.
>
>Why do you assume it's I who have to be led somewhere rather than,
>say, you?

So much for trying to have a conversation.  I assume no such thing, Jeff.  You
are welcome to take the Socratic role when you think it appropriate.  Many
intellectuals take such a role as a method of discourse (you know, like, having
a conversation).  Presumably we both *can* (as opposed to *have to*) be led
places.  It is you who are arrogant to imply that you need not or should not
be so led. The question is, why do you resist?  A question you have just
begged.  It seems to me like bad faith.

>Your whole attitude -- with is/ought, etc -- seems arrogant and
>patronizing.  And it continues here.  You assume I missed your
>point, but in fact I could see what you were getting at.

I did not assume it, I concluded it, because anything else seems inconsistent
with your posting.  You fail to respond to my statement with any substance;
you merely throw ad hominems at me about being arrogant, patronizing,
having attitude, etc.  So much for having a conversation.  If you were
interested in a conversation, you might have explained how a response of "that's
an interesting ethical position" to my statement about reasonableness is
consistent with having seen what I was getting at.  Certainly you can admit
that it *seems* at odds with that, and therefore bears some need for
explanation.  It appears to me that you either didn't really see my point,
or pretended that you didn't.  Either seems like bad faith.  There may be
other possibilities, but I just don't see them.

>>In this case, you make a fool out of yourself by falsely accusing Andrzej of
>>"making out that I'm evil";
>
>It was hyperbole.  I had in mind things like this:
>
>  This particular evidence was picked by Turing, so as to isolate ourselves
>  from human biases. Otherwise, you would be suggesting that classifying
>  someone as 'conscious' depends on what he/she looks like, whetehr he/she
>  has acceptable body laqnguage etc. This brings out in force a mutlitude of
>  cultural biases (if someone is black, can he/she be really conscious? Or
>  makes totally inappropriate gestures and body sounds? How about severely
>  deformed humans?).
>
>But those are not the only possibilities, and I have never
>suggested any such criteria.

Then you could just say so, and specify what the other possibilities are.  If
this is merely hyperbole, it is hyperbole that makes it difficult to carry on
a conversation.  Note that it certainly stopped Andrzej in his tracks.
There are people who do use these sorts of criteria, and it is really rather
arrogant to suggest that we should all know you don't use them, merely
because you have never claimed to, and that to imply you might is to call
you evil.

>>Just how *do* you determine whether others, such as Andrzej or I,
>>have a mental life?  That was his question. 
>
>It is possible, you know, to ask questions without suggesting what
>the answer must be.

Yes, I suppose it is.  So what?  Does the fact that someone does suggest the
answer require that we start throwing around ad hominems and false
accusations?  If you don't like the fact that the answer was suggested, why
not just ignore the suggestion?  Or answer the question, and *then* ask that
the answer not be suggested.  Though why you should be so sensitive to certain
rhetorical techniques when you employ so much innuendo and hyperbole and ad
hominems yourself baffles me.

>BTW, I have answered that question many times.  I decide on the basis
>of behavior

What sort of behavior?  How does it compare to the TT?

> and similarity of physical mecahnisms (chiefly the brain
>nervous system).

How does that apply to me or Andrzej?  In the context of this conversation,
that method cannot be used.

>But I'm entirely willing to accept that other kinds
>of entities could be conscious.

Nice to know, but not relevant to the issue at hand.

>> Why do you refuse to answer it?
>
>I don't refuse to answer it.

Pardon me, perhaps you answered it once upon a time.  I suppose I should
start keeping track of every question I have answered during my years on the
net, and be sure to answer each one no more than once.

What I meant was, why did you refuse to answer it *this time*, in *this
context*, if you had a desire to have a conversation?  Not doing so is
one of the reasons the conversation (with Andrzej) ended.

>>   But the answer he expects, the
>>good faith one that will allow the dialog to continue, is "I read what they
>>write, and make judgements based upon it; I put them through a Turing Test of
>>sorts".  
>
>Is that what he's expecting?  He expects me to agree with the TT?

He's expecting you to tell the truth, Jeff.  What is the truth about
whether "you make conclusions about people's mental life on the basis of
letters" and how you do so?  Is it not a TT of sorts?  Tell the truth, Jeff.

Your position seems to be "I don't agree with the TT.  No way in the world
will I say something that would indicate that I agree with the TT, no matter
how squirrelly I have to be to avoid it."

What is "the TT", Jeff?  That is not a proposition, not something with which
one can agree or disagree.  Such abuse of the language makes it difficult to
know what you mean.

>But
>then he thinks the alternative is "suggesting that classifying someone
>as 'conscious' depends on what he/she looks like, whetehr he/she as
>acceptable body laqnguage etc. This brings out in force a mutlitude of
>cultural biases..."

Perhaps he does.  So what?  Perhaps he has omitted the possibility of being
so squirrelly as to avoid an answer.  How does one judge the mental life of
letter writers?  Not by listings.  Not by similarity of physical
mechanisms (unless one simply *assumes* that all letter writers are human,
which is the ultimate question begging).  You said earlier "behavior".
Just what sort of behavior is that, Jeff?

>>For instance, you have judged me to be arrogant.  You have done that
>>solely from reading texts,
>
>But I've also judged that you're a human, not a machine, and alien,
>or a dog.  When only humans are likely sources of some text, it's
>possible to reach many conclusions that might otherwise be available.

But they are conclusions about mental life, are they not?  So you do use
a TT of sorts to make such judgements when humans are the only sources of
texts, do you not?  This appears to be "agree[ing] with the TT", in the
sense used above.  Andrzej asked you about letter writers.  These presumably
were human letter writers.  So you finally gave the "good faith" answer
that was expected, the one that would allow the dialog to continue.
Inadvertently perhaps.  In another context perhaps.

Some people have asserted that there are several AI programs active on the
net.  Now, assume for a moment that this is true.  Even pretend that we have
reached the state of the art where AI programs that can pass TT are possible.
That rules out a 100% certainty that I am human, which seems to be your only
basis for judging that I am, at least as far as you just articulated.  Now, do
you suddenly want to change the criteria you use to determine whether someone
is arrogant, or not?  After all, your old criteria, as you articulate them,
applied to these new circumstances, might lead you to conclude that some AI
program is "arrogant", based solely on texts.  That seems to be something you
want to avoid.

>>                            not from observing body language, or reading my
>>printout, or examining my internal structure, or checking to see whether I
>>have an "internal dialog" (whatever that means)
>
>It means such things as thinking to oneself "what an idiot that
>Dalton is."

You certainly seem to enjoy begging the question.  What does "thinking to
oneself" mean?

BTW, Jeff, if that thought occurs to me, is that my fault?  What might I do
to prevent such thoughts (other than not read your postings)?

>> Doesn't the claim that I'm arrogant imply some sort
>>of cohesive mental life, a personality, a self with awareness and some ego to
>>respond to an apparent criticism?  So how did you do it?  How did you come to
>>this conclusion?  What magic property of texts allows us to come to such
>>conclusions simply by reading them?  And do we really have to answer that
>>question before accepting the fact that we can and *do*?
>
>Now you touch on an interesting question here.  At least I find it so.
>Of course we have to strip off the tendentious "magic".  But what is
>is about your output, as opposed to (say) sentences selected at random,
>that shows you're self-aware?

It's interesting that you label "magic" as tendentious, since *I* don't think
it's magic; it strikes me that those who have a problem with the TT treat texts
as magic.  But the TT is simply an inductive process like any other.
If we see one text that is consistent with our model, however vague and fuzzy,
of how a self-aware being might act, we might think that it is coincidence,
or a program with a small repertoire of such texts in its database.  But if
we see quite a few, it becomes more likely that what is producing them satisfies
our model, especially if they are responses to questions carefully framed to
test this.  As we receive more and more such texts, it becomes more and more
likely that the entity is "self-aware", whatever-in-the-heck that might mean.
Responses that claim a self and talk about that self in accurate terms are
pretty powerful.  "But is it *really self-aware*?" can be answered by a) there
are the usual inherent limits of certainty of inductive processes and b) tell
me more about what you mean by that, and then we can ask more probing questions.
The appropriate responses are unlikely to be produced "at random".  My
bottom-line answer to your question (I'm willing to answer yours) is
"appropriate content".  Hopefully I have explained "appropriate" well enough
above to avoid an accusation of question begging.
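The inductive process described above can be sketched numerically.  This is a
toy model only: the prior and likelihood figures are invented for illustration,
and nothing here comes from Turing; it just shows how confidence accumulates
over repeated "appropriate" responses.

```python
def update_belief(prior, likelihood_ratio):
    """One Bayesian update in odds form.

    prior: current probability that the entity is self-aware.
    likelihood_ratio: how much more likely this response is from a
    self-aware entity than from, say, a canned-response program.
    """
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

# Start skeptical: perhaps it is a program with a small repertoire.
belief = 0.01

# Suppose each appropriate answer to a carefully framed probing question
# is five times likelier from a self-aware entity than from a lookup table.
for _ in range(10):
    belief = update_belief(belief, 5.0)

print(round(belief, 4))  # → 1.0 (certainty approaches, never reaches, 1)
```

The point mirrors the text: no single exchange settles anything, but the usual
inherent limits of inductive certainty are the only limits in play.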

>Also, how is it that one can argue that
>certain animals are self-aware even though that cannot pass the TT?

Perhaps someone somewhere has claimed that, if something cannot pass the TT,
then it must not be self-aware/intelligent/conscious/understanding etc., but
that does not seem like a defensible position, and I certainly haven't seen it
put forth in c.a.p.  Be careful not to confuse statements with their
converses.  I might claim that something that passes a sufficiently probing TT
is almost certainly self-aware, but I certainly would never claim that
something that fails to pass a sufficiently probing TT is almost certainly
not self-aware.  After all, some self-aware entities are not even attached to
teletypes.  TT is an excellent probe for *suitable entities*.  However, clearly,
if intelligence is defined in terms of problem solving abilities, and those
problems are not restricted to the linguistic realm, there can be intelligent
entities that cannot pass the TT, but I don't see why that means we should
"bag TT" or "disagree with TT" or fall into any other such absolute.  We should
can "passing the TT" as "the definition of intelligence", but that appears to
me to be a strawman.  (Maybe Andrzej actually holds that position, because he
does sometimes use the word "define", but I don't think that's his actual
position.  Perhap he can clarify.)

>What criteria are we using in that case?  Are they better or worse
>than the verbal criteria of the TT?

Apparently people use behavioral reactions to mirror images as criteria,
but given the controversy, these criteria don't seem as reliable as verbal
criteria; I suppose that makes them "worse".  But I am not familiar with the
details, so I'm the wrong one to ask.

>But before any of that can be discussed, it seems that I have to
>affirm the TT.  Why is that?

I have no idea what "affirm the TT" means.  But I do expect you to affirm that
linguistic expressions (texts) can be and often are used to judge such issues
as self-awareness, intelligence, understanding, and mental life.

>>Of course, we are all prejudiced against machines, because they have yet to
>>prove themselves.  People generally display intellect, machines generally do
>>not.  Whenever we have intelligent conversations, it almost always turns out
>>to be a human at the other end, and in those rare cases such as Eliza as chess
>>playing programs where it isn't, it has been easy to see how we have been
>>misled by not escaping the very narrow limitations of those programs.  So of
>>course we are prejudiced.  But we shouldn't let that leak into our 
>>measurements and methods.
>
>So where, according to you, do I let it leak in?

When you apply different criteria to humans and machines.

>>>I want the test for AI to be as good a test as we can devise.
>>>The best test, not the easiest one.
>>
>>Again you don't answer the question; why make the test harder than the one you
>>use for humans?  
>
>But I've answered this again and again!  In virtually every discussion
>of the TT someone makes this point.  We use the TT on humans!  How can
>you require a harder test for machines?  And so on, again and again.
>Does no one ever get tired of this?  I sure am.  That's why I'd like
>to move on.

I don't remember seeing your answer; I certainly don't know what it is.
Certainly not a consistent one I can keep track of.  You say things about
"best test" and "additional criteria".  But then you say things like
"I don't accept TT" and it's a good thing if lots of people on c.a.p
don't defend TT.  This seems to argue that TT not only isn't a good test,
it isn't a test at all.  By my model of intelligence, as fuzzy as it is,
TT seems like a good test in that we can make ourselves arbitrarily certain
that something that *does* manage to pass it is intelligent.  If you respond
"but some intelligent things might not pass it" I'd say, yup, that's a
problem.  What do you propose?  How does incessant naysaying of the TT
get you your better test?  You keep saying "maybe there's something else".
Is that interesting?  Only if you can hint at what those somethings are.
But then you mention listings and looking for "internal dialog", without
explaining how those help answer the question, or why they are necessary.
Then you say they aren't necessary, that you don't challenge programs with
those, but that if I present you with a TT-passing program, *then* you will
tell me what challenges you propose.  It's like trying to grab onto Jello.
You say you want to move on.  To where?  You spoke of leading and being led.
Lead on.  Please. 

>>A program with the physical limitations of both Hawking and
>>Keller and the intellect of Quayle would still be impressive, would
>>it not?
>
>Yes, very impressive.  Of course, I have never said anything to
>the contrary.

How would you know that it had the intellect of Quayle, and thus is impressive,
without examining the texts it produces?  Just on my say-so?  I didn't say
anything about its behavior or its physical mechanisms, did I?  I believe
that the sum of your statements is indeed "contrary".

>>But you seem to want to look at its listing.  Why?  What will that tell you?
>
>Something about how it works.

But, but, but ... again you beg the question.  We are talking about
tests of intelligence.  We all want to know how things work.  Gee whiz.
How does knowing how it works *tell us whether it is intelligent*, when
we don't have a model of what sort of workings produce intelligence?

>>What is there to be found there that indicates intelligence or consciousness?
>
>That's not something anyone yet knows enough to answer.  Indeed,
>everything about "how it works" may turn out to be irrelevant in the
>end.

Then why do you propose looking at listings as a potential test of intelligence?
If you cannot say what is to be found there that is relevant, then how is
this any better a place to look than in my mailbox?  You must have *some*
reason to think that listings might tell us something about intelligence.
Can you try to articulate it?  I can imagine all sorts of listings that
are obviously not intelligent (particularly, short ones with analyzable
behavior), but among those that are potentially intelligent, how can we
distinguish which are and which are not?  I pointed out that there are
configurations very similar to Searle that aren't conscious, and you actually
questioned this (this did astound me and triggered something like the internal
dialog you ascribe to me above).  The obvious ones are dead or in comas.
Among those configurations much like Searle that are conscious, there are
many that have slightly different beliefs, memories, or thought processes.
Without an incredibly detailed model of exactly what brain layouts do or do not
have an intelligent result, how could we possibly determine which of these
Searles is intelligent simply by examining the brain?  Without a similar
model for programs, how could we possibly determine which programs result in
intelligence simply by examining their listings?  I can't even tell which
listings yield functional operating systems without executing them and
observing the results, and I have a damn good model for that.  I don't think
that intelligence is the right sort of thing to look at listings for; it's
too damn dynamic.  Why do you think it is the right sort of thing, or even
might be the right sort of thing?  Lead on!

>But, to return to the terminator example, suppose we look at how
>the terminator is constructed and programmed.  Perhaps we can find
>out that the input to certain decisions goes through the terminator's
>visual system.  Why is that kind of conclusion ruled out?

That kind of conclusion is not ruled out (if I understand what you are
referring to by "that"; I sometimes find your use of pronouns rather
confusing), but it is nothing like whether it is intelligent, any more than
observing that some people have functional optic nerves tells us whether they
are intelligent.  Also, this is the converse of what we are actually
interested in.  Suppose that the terminator is constructed this way.  So what?
This tells us how it is constructed, not whether it is intelligent.  Suppose
that we determine by some other means (the TT?) that it is intelligent.  Can
we then conclude that anything constructed this way is intelligent?  Doesn't
seem that way to me.  Perhaps we just stumble upon a terminator, find it
acting intelligently (TT?  general behavior?), and look to see how it is
programmed.  But surely *someone* programmed it, and thus we (the AI
community) don't need to look inside because we already have a model of how to
build intelligent robots.  Unless we evolved it through genetic algorithms or
somesuch, and want to look inside the ones that work (according to some
measure other than the insides) to see what produces working robots.  But this
doesn't tell us what it is about the insides that makes it work.  We need to
already know that.  We need a model.

>>The variable names?  What if the program wins the Obfuscatory C contest? 
>
>If all you want to do is make fun, why do you reply to me at all?

I'm sorry you cannot tell the difference between important questions and fun.
Understanding my point requires a little thinking.  (Perhaps later you will
claim that you really did understand my point after all.)  Those are serious
questions.  If, for example, intelligent programs are evolved, looking at them
may be no better than looking at Obfuscatory C programs.  You are
used to listings of programs with meaningful variable names.  I've spent
enough time looking at generated code to know how hard it is to understand
even simple programs that have machine-generated variable names.  Just what
sort of programs do you expect to produce TT passers?  It is possible that TT
passers will require multiple levels of complex preprocessing, where the output
of each level is highly encoded tables.  Just try looking at the final listing
to determine "how it works".
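To make the point concrete, here is a small illustration.  Both functions
below are invented for this example; the second imitates the sort of names
and structure a code generator might emit.  They behave identically on every
input, but only the first announces its intent in the listing.

```python
# Hand-written: the purpose is legible from the listing alone.
def is_palindrome(word):
    return word == word[::-1]

# Hypothetical machine-generated equivalent: behaviorally identical,
# but nothing in the listing tells you what it computes.
def f_0x3a(v0):
    v1 = 0
    v2 = len(v0) - 1
    while v1 < v2:
        if v0[v1] != v0[v2]:
            return False
        v1 += 1
        v2 -= 1
    return True
```

The only practical way to discover that these two listings do the same thing
is to execute them (or construct a proof of equivalence), which is exactly the
trouble with reading a listing to determine "how it works".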

That's the long form of what I meant by "What if the program wins the
Obfuscatory C contest?".  Fun indeed.  Of course, I spent the time to type that
simply because I want to ridicule and pick on poor little Jeff.  Because I
"make no attempt to finf [sic] anything of value in what [you] say or even to
find a way to interpret it as anything other than silly or extreme.".  As
opposed to what you do.  Fun indeed.

>>     You also want to look
>>for "internal dialog".  What is that?  Logging intermediate results to a file?
>>Subvocalization?  (I'm sure the engineers can add it if you need it.)
>
>I'm happy for them to add it.

Is that a joke?  I wouldn't be very happy to have engineers incorporate
unnecessary mechanisms.

>My point about internal dialogue is
>that a teletype TT might not be able to determine whether certain
>of these things were happening or not.

And why is that important?  In order for it to be important, you first
have to indicate why it is relevant to intelligence.  A teletype TT might not be
able to determine whether the subject has black hair, but do we care?  It seems
a little bit "evil" (your hyperbolic meaning; oh, I have to tread so carefully)
to bring up the issue of internal dialog.

>> From
>>the speculations of Dennett and Hawkins, one might conclude that "internal
>>dialog" is an unnecessary artifact of evolution, one of those things that a
>>non-blind watchmaker never would have included.  Why do you want to require
>>it?  
>
>I don't require it.  It's just an example of something that might
>or might not be present and might be detected by some tests and not
>by others.

But what relevance does it have to intelligence?  What sort of tests?
What does its presence or absence tell us, other than that it is present or
absent?  If you integrate the presence or absence of internal dialog into
your "best test of AI", it sure *sounds* like a requirement.  You play such
a cat and mouse game.  If you want to lead, lead damn it.  Explain to us
why you think "internal dialogs" are an issue that we should even consider
attending to.

>>What good is it?  What has it got to do with consciousness?  It may not
>>even be nearly as universal in humans as you imagine.
>
>You know hardly anything about me, so don't think you know what I
>imagine.

Jeff, you can be such a <insulting term of your choice>.  Rather than
answering the relevant questions, you indulge in belligerent huffiness.  What
you imagine about internal dialog is somewhat evidenced by what you say about
it.  You appear to assume that I have such internal dialog, and gave a
possible example above.  I do not know what you imagine, and never said I did.
I can only guess.  Based upon your texts.
-- 
<J Q B>
