Newsgroups: sci.skeptic,alt.consciousness,comp.ai.philosophy,sci.philosophy.meta,rec.arts.books
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!pipex!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Penrose and Searle (was Re: Roger Penrose's fixed ideas)
Message-ID: <D01LqA.I9q@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute-alter.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <CzsHMy.B9n@gpu.utcc.utoronto.ca> <CzuAD4.4K6@cogsci.ed.ac.uk> <CzzuEu.F48@gpu.utcc.utoronto.ca>
Distribution: inet
Date: Tue, 29 Nov 1994 18:51:45 GMT
Lines: 219
Xref: glinda.oz.cs.cmu.edu sci.skeptic:96615 comp.ai.philosophy:22835 sci.philosophy.meta:15122

In article <CzzuEu.F48@gpu.utcc.utoronto.ca> pindor@gpu.utcc.utoronto.ca (Andrzej Pindor) writes:
>In article <CzuAD4.4K6@cogsci.ed.ac.uk>,
>Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>..........
>>I'm not so sure the TT is all that's available.  We at least
>>have access to more information than is revealed by the ordinary,
>>teletype TT.  So why is one particular kind of evidence picked
>>out as the bird in the hand?  (Which version of the TT do you
>>say is the bird, BTW, or is it such tests in general?)
>>
>This particular evidence was picked by Turing, so as to isolate ourselves
>from human biases. Otherwise, you would be suggesting that classifying
>someone as 'conscious' depends on what he/she looks like, whether he/she
>has acceptable body language etc.  This brings out in force a multitude of
>cultural biases (if someone is black, can he/she be really conscious? Or
>makes totally inappropriate gestures and body sounds? How about severely
>deformed humans?). 

Have I ever suggested that such criteria be used?  BTW, this is the
kind of thing I have in mind when I say the TT is fiercely defended.
It looks like you might be trying to tie me to racism and other nasty
prejudices.  Perhaps that's not your intention, but if it's not I wish
you'd make that clear.

Sure, there's a danger of bias.  There's also a danger of bias
in the teletype-based test (as you point out) though a reduced one.
But that doesn't mean we can't or shouldn't look at how things
work.  Moreover, we know that the biases you list should be rejected 
as criteria of consciousness, and that knowledge will inform
our selection of non-TT criteria.

My usual example of a possibility is two programs that both generate
TT-passing behavior but differ internally so that e.g. one generates
consciousness while the other doesn't, or, say, one conducts an
internal dialogue while the other doesn't.

Someone could, I suppose, object to a bias against certain algorithms
(table lookup, perhaps).  But I am supposing that we would have good
reasons for our conclusions.  We wouldn't just say "it's inconceivable
that a table lookup program could be conscious".  And the same applies
in other cases.  I'm not suggesting that we go with appearances or
give in to prejudices.  Indeed, if it's impossible to have good reasons
for anything but TT-criteria, that's fine with me, provided that the
impossibility is shown.

Finally, the issue of bias can be moved back a step.  I assume the
concern is that we might wrongly undervalue some entity because
it fails some test.  But it might also be wrong to undervalue
something just because it's not conscious.

>       Obviously Turing was very
>conscious of how much people are influenced by (even subconscious) emotional
>biases in supposedly `objective scientific' pursuits. I do not think you are
>taking this into account enough.

Well, I disagree.

>>Indeed, I'd like to see more discussion of (particulars of) the TT
>>rather than less.
>>
>To be able to discuss particulars of the TT one has to have more precise
>definitions of what one wants to establish. Since you are refusing to discuss
>definitions (of consciousness, understanding, etc), you are in no position 
>to ask for particulars of TT.

I'm asking for a discussion, not for particulars.  I would have
thought both "sides" would be interested in determining what was
significant in the TT and in what (if any) other criteria it
might be useful to consider.

In any case, what you've just offered is not an argument that the
TT is correct.  I think it's clear that we do not yet know whether
the TT is correct or not.  Nor are we yet in a position to give
precise definitions of consciousness, understanding, etc.  If
we must decide about some entity, and the TT is the best we can
do at that time, then that's fine with me.  But since we seem
to agree on that point, we can set it aside.  

I'm happy BTW to discuss definitions, but not to assume the entire
burden of proof.  In any case, I can't do the next 100 years of
neuroscience, AI, etc before my next e-mail message.

>>I mean such things as whether something is conscious, whether it
>>has an internal dialogue (e.g. like when I think to myself "what
>>should I say next?"), and what emotions it has (or can have).
>>
>Weren't we talking about giving TT to some external (to you, me and others)
>entity? 

Yes.

>What you are talking about above cannot be established from outside, so
>it is useless and a waste of time in discussing if and how the TT is appropriate.

How do you know it can't be established from outside?  We may never
know for sure, but there are all kinds of things we can't know for
sure.  That doesn't stop us from drawing conclusions that we regard
as reliable in practice.

Besides, if you take the view that consciousness and so forth
cannot be established from outside, that rules out the TT as a
test for such things.  It hadn't seemed that that was your position,
but I may have misunderstood you.

>>Here's another example.  In the 1st Terminator film, the terminator
>>sometimes gets an internal visual display of options for what to say.
>>Now, most of us at least aren't wired up so that that kind of thing
>>goes through our visual system in that way.  So this is a difference
>>between our mental life and his.  Moreover, it's likely that the
>>difference is because of some hardware/software differences between
>>us and terminators; and it may be that some differences of that sort 
>>(I don't say between us and terminators specifically) cannot be detected
>>by certain behavioral tests.  (Maybe, for instance, the teletype TT 
>>isn't enough.)
>>
>Could people know from outside (except movie audience) that the Terminator
>gets this visual display? 

Why not?  It looks to me like something we might well find out by
seeing how the terminator is wired up and programmed.

>If not, it is completely irrelevant.

I intended it as an example of something we might be able to find
out but perhaps not by certain tests of externally visible behavior.
(E.g. maybe it cannot be detected by the teletype TT.)

But if it's something we can't find out at all, that's also
an interesting conclusion, though not one I'm yet prepared
to make.  

> The example
>also shows the dangers of using human biases if 'something' is not 
>sufficiently like us. If you want to make part of the definition of consciousness
>to be "what humans have", that's fine, but then you might have to accept
>for instance a definition of intelligence as determined by an IQ test constructed
>with a specific cultural bias.

How does it show these things?  It wasn't an example of intelligence
or not, nor of consciousness or not.

>>If I were faced with some TTT- (or even TT-) passing entity, I'd
>>look to see what birds were to hand, sure.  Indeed, this kind of
>>issue comes up right now, in a somewhat different form, when it
>>comes to animal consciousness.  However, if we have an explanation
>>that does not involve consciousness, it may make sense to prefer
>>it.  Part of the debate about animal consciousness takes that form.
>>(See, for instance, Kennedy, _The New Anthropomorphism_.)
>>
>No one however, as far as I know, suggests making decision on basis of
>physical differences between human and ape's brains.

My point here was a different one, namely that when explanations
that do not involve consciousness are available it may make sense
to prefer them.

>>If the teletype TT can determine whether something is conscious
>>or not, what is it about the teletype TT that does the trick?
>>
>The same thing which you use to detect that other human beings have mental
>life. Or are deciding about it on the basis of their facial contortions,
>body language etc? Don't you make conclusions about people's mental life
>on the basis of letters, for instance?

Rather than trying to make out that I'm evil, why don't you
say what it is that you think is important in the teletype
TT?

>>If I recall correctly, the TTT is still confined to externally visible
>>behavior.  That is, it doesn't include anything about internal workings.
>
>Because we judge other people without going into their internal workings. If
>you say you are not prejudiced, why are you making the test for AI tougher?

Now what are you suggesting I'm prejudiced against?  Machines?

I want the test for AI to be as good a test as we can devise.
The best test, not the easiest one.

>>It seems to me that at present we are on our strongest ground when dealing
>>with entities that are most similar to us: animals, especially
>>mammals, and better yet primates.  This is a different approach
>
>Then dolphins have no chance, they shouldn't even apply, right?

Why are you taking such an extreme view of what I said?

I didn't say they had no chance, only that at present our conclusions
are on firmer ground when considering animals more similar to us.
I fully expect that we'll be able to do better in the future.

>>Animals don't (so far) pass the TT or the TTT, but I think that *at
>>present* at least physical similarity (to humans) gives us stronger
>>grounds than if we stick to externally visible behavior.  This is not
>
>Since we do not have any scientific grounds to claim a connection between 
>physical similarity and presence (or not) of consciousness, you are clearly 
>accepting a role of cultural biases - not a very scientific approach :-(.

We know, though perhaps not absolutely and for sure because of the
other-minds problem, that humans are conscious; and we have good
reasons to suppose that consciousness is realized in the brain.  
And we may be finding out what some of the relevant features of the
brain and nervous system are.  It's not unreasonable to suppose 
that animals that are similar physically may also be similar mentally.
(Behavioral evidence can also be considered, of course.)

This is not in any way an argument that other sorts of entities
cannot be conscious or similar to us in other aspects of mental
life.  But we know less about those entities (which we've never
seen) than we do about animals.

>Fine. However, as far as I can see, your reluctance to throw the towel in is
>based on emotional grounds, and not any empirical evidence. 

Throw in the towel and admit what?  That we already know the TT
is sufficient?  Why should I admit something that's not the case?

-- jd
