From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Tue May 12 15:49:50 EDT 1992
Article 5498 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Systems Reply I (repost perhaps)
Message-ID: <6691@skye.ed.ac.uk>
Date: 8 May 92 21:50:08 GMT
References: <1992May1.185606.31991@mp.cs.niu.edu> <6648@skye.ed.ac.uk> <1992May4.181702.13708@mp.cs.niu.edu>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 160

In article <1992May4.181702.13708@mp.cs.niu.edu> rickert@mp.cs.niu.edu (Neil Rickert) writes:
>
> Sigh.
>
> Why is it that people will not read what I say, but instead insist on
>putting words in my mouth, and then going ahead and criticizing their own
>invention which they have incorrectly imputed to me?

Why is it that people leap to the conclusion that other people have
not read what they wrote?  Or is this accusation just a bit of net.rhetoric?

BTW we agree about some things (see the end), so I'm not sure
why we're disagreeing about some other things.  Your message came
into an exchange between me and Antun Zirdum, and I may have
assumed you agreed with more of his position than is in fact
the case.

>In article <6648@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>>In article <1992May1.185606.31991@mp.cs.niu.edu> rickert@mp.cs.niu.edu (Neil Rickert) writes:
>>>
>>>  Perhaps you should tell us what you think constitutes "thought" or what
>>>would be "thought" in a computer.
>>
>>I'm sorry, but I'm not going to play the definition game.  If you're
>>interested what "thought" should mean, you might try looking in the
>>philosophical literature.
>
>  There is an example.  I did not ask for a definition.  I know how well
>you like to refuse giving definitions.  I didn't ask 'what "thought" should
>mean' but only asked for an informal idea of what you would consider to
>be "thought" in a computer.

That looked a lot like asking for a definition to me.

>  Face it.  Your "I'm not going to play the definition game" is nothing
>but a deliberate obfuscation to conceal the fact that you don't have
>the faintest idea what you are talking about.

If you think that, why not just ignore my articles?

>Prove me wrong by discussing what would constitute computer thought.  Maybe
>the discussion will be enlightening.

It isn't an issue on which I have anything I want to say.

If you think it's important, it's always open to you to say
something.  And if you don't want to say anything either, we
can just drop it.

>>             Since many things can be done w/o thought, why can't
>>passing the Turing Test be one of them?
>
>  Do we have to make the Turing Test into a religious war?  The Turing Test
>is just an ad hoc test, since at the time Turing proposed it (and for
>that matter, at the present time) nothing better was available.  Wouldn't
>it be better to defer judgement on this until something actually passes
>a full Turing Test?

I'd be glad to.  I think we'll be in a much better position to
answer questions about computer understanding once we have some
programs that appear to produce it and can see how they work.

>>                       What I had in mind was that a human could think
>>A-0 while producing behavior B, or could think A-1 while producing B,
>>and so on, where the A-i are thoughts (or multi-thoughts).  That is,
>>you can't tell what someone is thinking by how they behave.
>
>  I don't see where computers need be different in this respect.  There
>are many different ways to produce the same output in a computer.

I haven't said computers are different in this respect.

>>>>(Suppose a computer had been turned off and when booted claimed
>>>>to have been thinking all the while.  Would you believe it?
>>>>Are you convinced that its behavior would have to show it
>>>>had been turned off?)

>>In any case, how about answering the questions?  Would you believe it?
>>Are you convinced that its behavior would have to show it had been
>>turned off?
>
>  I did answer the question.

Questions.

Not directly, and from what you did say I couldn't tell how you'd
answer them.  For instance, you decided to change the example by
adding a disk replacement.

>I will comment in more detail.  If a computer were turned off, then on reboot
>claimed to have been thinking all the time, including the time it was
>turned off, and if there had been no infusion of external data (a disk
>transfusion), I would probably treat this as confusion. [...]
>                             On the other hand, if the computer
>claimed to be thinking the whole time only in the sense that it was
>completely unaware of the time gap, I would treat that as quite
>unsurprising.

So I take it the answer is that you would not believe it.

That is, you have some better evidence than its behavior.

>  Would the computer's behavior show that it had been turned off?  Only in
>the sense that it would not be aware of events that occurred while it was
>turned off.  This need not be much different from a person who went into
>a brief coma, then on recovery was unaware that there had been any
>interruption of consciousness.

So the answer is that its behavior would not show it had been
turned off (instead of, say, just not paying attention -- why
invoke something as drastic as a coma?).

So again the behavior does not show what really happened.

>>For instance, to use your Chess example, are you sure you can distinguish
>>between a player who looks at the board a while and forms a judgement
>>w/o thought and one who looks at the board for the same time while
>>thinking?
>
>  Of course you can't.  This is why the Turing Test is not allowed to be
>a limited test.  Everything must be allowed.  Almost anything can be
>faked by a human, or by a machine.  But the more extensive the test, the
>lower the probability that it could be all faked.
>
>  When a computer first passes an extensive Turing Test, you can bet that
>lots of people will go through its programming with a fine tooth comb to
>see whether this was fakery or real intelligence.

I am very glad to hear you say that, because that's one of my
main disagreements with people on the net.  That is, I think
it can matter how the program works, while they think it cannot.

>>What I am suggesting is that (1) it may turn out that as a matter
>>of fact humans produce certain behavior (eg, passing the TT) with
>>the aid of thought while ai programs produce similar behavior w/o
>>the aid of thought; and (2) merely looking at the i/o behavior
>>is not enough to tell what's going on inside.
>
>  Then let's stop the silly arguments, and wait till a computer passes
>the TT.  Then let's look inside and see whether it was really faking or
>not.

I agree that we should look inside.  I would also say that we may
not be in a position to reach conclusions about machine understanding
until we have some programs to consider.

But I don't think we have to give up all other attempts to answer the
question in the meantime.  It's an interesting philosophical problem,
or at least I think it is.

>  My main reason for doubting that it would be faked is that to
>successfully fake an extensive TT would require a computer program of
>unimaginable combinatorial complexity, and I consider that unlikely in
>the extreme.

Maybe.  Or maybe following rules (as we're supposed to imagine
in the Chinese Room) will count as "faking".  This may become
clearer once we know more about programs, more about humans, etc.

-- jd
