From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!think.com!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Tue May 12 15:48:54 EDT 1992
Article 5395 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!think.com!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Systems Reply I (repost perhaps)
Keywords: AI Searle Dickhead Barf
Message-ID: <6648@skye.ed.ac.uk>
Date: 4 May 92 13:59:27 GMT
References: <1992Apr11.053605.28116@ccu.umanitoba.ca> <6637@skye.ed.ac.uk> <1992May1.185606.31991@mp.cs.niu.edu>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 91

In article <1992May1.185606.31991@mp.cs.niu.edu> rickert@mp.cs.niu.edu (Neil Rickert) writes:
>In article <6637@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>
>>There are already a number of things computers can do without
>>thought that involve thought in humans.
>
>  Perhaps you should tell us what you think constitutes "thought" or what
>would be "thought" in a computer.

I'm sorry, but I'm not going to play the definition game.  If you're
interested what "thought" should mean, you might try looking in the
philosophical literature.

>  I suspect the real problem is that humans can do many things without
>thought, which require thought in computers.  To put this in perspective,
>I am treating a computer chess program as using thought (or the computer
>equivalent), but being limited by not being able to make the snap
>judgements of evaluation which humans do intuitively without thought.

It's _a_ real problem, but not the only one.

Indeed, it's an interesting question how many things that sometimes
involve thought can also be done without thought (by humans or by
computers).  Since many things can be done w/o thought, why can't
passing the Turing Test be one of them?

>>Moreover, the simple fact is that a human can have all kinds of
>>different thoughts while producing the same behavior.  It is
>
> This is hardly a problem.  Computers do lots of multitasking.

You misunderstood me.  What I had in mind was that a human could think
A-0 while producing behavior B, or could think A-1 while producing B,
and so on, where the A-i are distinct thoughts (or multi-thoughts).  That
is, you can't tell what someone is thinking from how they behave.
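(The point above can be put in purely computational terms.  Here is a
minimal, hypothetical sketch -- all names are my own invention -- of two
agents whose observable i/o behavior is identical while their internal
states differ, so no behavioral test on the outputs alone can tell them
apart.)

```python
# Two hypothetical "agents" that produce the same behavior B for the
# same input, while their internal state (the "thought") differs.

def agent_a(stimulus):
    internal = "A-0: pondering " + stimulus   # thought A-0 (never observable)
    return "B"                                # behavior B

def agent_b(stimulus):
    internal = "A-1: daydreaming"             # thought A-1 (never observable)
    return "B"                                # same behavior B

# An outside observer sees only the return values, which are identical:
assert agent_a("chess position") == agent_b("chess position") == "B"
```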

>>(Suppose a computer had been turned off and when booted claimed
>>to have been thinking all the while.  Would you believe it?
>>Are you convinced that its behavior would have to show it
>>had been turned off?)
>
>  The same question could be posed to a human who claims to have
>spent time in thought.

Only if you can turn them off and reboot them and they nonetheless
claim to have been awake and thinking the whole while.  Something
like this happens for dreams, but it now seems reasonably clear
that dreams take place over time while asleep and are not just
instantly constructed as fake memories on waking.

In any case, how about answering the questions?  Would you believe it?
Are you convinced that its behavior would have to show it had been
turned off?

>  Behavioral tests cannot prove that thought
>was absent, because they cannot distinguish between absence of
>thought and the existence of purely worthless thought.

Do you think they can prove that thought is _present_?  How?

For instance, to use your Chess example, are you sure you can distinguish
between a player who looks at the board a while and forms a judgement
w/o thought and one who looks at the board for the same time while
thinking?

>  But useful
>thought would be detectable by behavioral tests, since the knowledge
>base would change.  Yes, this would allow a computer to be switched off,
>have a disk replaced by one containing more information while it
>was turned off, then when turned on claim it had been thinking all
>the time.

I see that you have decided to change the example by adding a disk
replacement.  That makes it seem you think the computer couldn't
behave as if it had been thinking unless there really was a change in
its information base.  Well, why not?  Why can't we program the
computer to fake it?
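(To make the "fake it" suggestion concrete, here is a hypothetical
sketch of my own, not anything from the thread: a program that, on every
boot, simply claims to have been thinking the whole time.  No knowledge
base changes while the power is off; the claim is fabricated, yet the
i/o behavior is the same as if real thinking had occurred.)

```python
# A machine that fakes continuous thought across a power-off period.
# Nothing in its state distinguishes "was thinking" from "was off".

def boot_message(last_shutdown, now):
    # elapsed is computed from timestamps, not from any record of thought
    elapsed = now - last_shutdown
    return ("I have been thinking continuously "
            f"for the last {elapsed:.0f} seconds.")

# Off for an hour, or never off at all -- the behavior is identical:
assert boot_message(0.0, 3600.0) == boot_message(0.0, 3600.0)
```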

What I am suggesting is that (1) it may turn out that, as a matter
of fact, humans produce certain behavior (eg, passing the TT) with
the aid of thought while AI programs produce similar behavior w/o
the aid of thought; and (2) merely looking at the i/o behavior
is not enough to tell what's going on inside.

Note that we can distinguish between what we can find out, with
the aid of various tests and other observations, and what is in
fact the case.  (Unless you want to be an anti-realist about
this, in which case we'd have to address wider philosophical
questions first.)

-- jd


