Newsgroups: comp.ai.games
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!news.ultranet.com!news.sprintlink.net!howland.reston.ans.net!Germany.EU.net!EU.net!sun4nl!hermes.bouw.tno.nl!usenet
From: sst@bouw.tno.nl (Tako Schotanus)
Subject: Re: The turing test
Message-ID: <1995Mar22.101521.26528@hermes.bouw.tno.nl>
Sender: usenet@hermes.bouw.tno.nl (USEnet Postmaster id)
Nntp-Posting-Host: ruudnix
Organization: TNO Bouw
X-Newsreader: WinVN 0.92.6+
References: <3k9p54$j05@newssvr.cacd.rockwell.com> <3kd50e$sp9@newsbf02.news.aol.com> <3kkdf3$dab@newssvr.cacd.rockwell.com>
Date: Wed, 22 Mar 1995 10:15:21 GMT
Lines: 89

In article <3kkdf3$dab@newssvr.cacd.rockwell.com>, csmccue@cacd.rockwell.com (Craig S. Mc Cue) says:
>
>In article <3kd50e$sp9@newsbf02.news.aol.com>, mickwest@aol.com says...
>>
>>csmccue@cacd.rockwell.com (Craig S. Mc Cue) wrote:

[Stuff deleted]

>>what you watched on tv last night, what pastrami tastes like, how to hang
>>wallpaper, current diplomatic and social problems, programming problems,
>>math problems, art appreciation, freeway driving stratergies, tales from
>>you childhood, responses to cross-examination, etc..
>
>To which the computer could correctly respond "I didn't watch TV last night. 
>I have never eaten pastrami. I have never hung wallpaper. etc" Some Amish 
>and Hasidic Jews have never driven freeways, visited art museums, watched TV 
>or hung wallpaper or have eaten pastrami -- does this make them 
>non-intelligent? Computers with VERY LARGE databases could come up with 
>responses to each of these questions. Does that make them intelligent? No, 
>it only makes them a huge filing cabinet of stock responses to every 
>possible experience of the human condition. 
>

I agree; AI is not here to make a simulation of the human mind. (To do
that really well, I think the program would have to experience life;
you'd have to "raise" it the way you raise a child.)

But your VERY LARGE DB of responses sounds like the usual Chinese Room
argument, with which I disagree. To restate the experiment:

You put a person (who speaks no Chinese) in a room that has a terminal
connected to the outside world. Questions are put to him/her in
Chinese. The one asking the questions has to find out whether the person
in the room speaks Chinese.

The twist is that the person in the room has been given a book with ALL
possible questions somebody could ask and the corresponding answers
to those questions.
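In programming terms, the "book" is just a giant lookup table. A toy sketch (the question/answer pairs here are made up, and a real "book" would of course have to be impossibly large):

```python
# Toy "Chinese Room": the operator needs no understanding at all,
# only a (here absurdly small) lookup table of question -> answer.
BOOK = {
    "ni hao ma?": "wo hen hao, xiexie.",    # "How are you?" -> "Fine, thanks."
    "ni chi fan le ma?": "chi le, ni ne?",  # "Have you eaten?" -> "Yes, and you?"
}

def room_reply(question: str) -> str:
    """Look the question up verbatim; no comprehension involved."""
    # Fallback answer: "Please say that again."
    return BOOK.get(question.strip().lower(), "qing zai shuo yi bian.")

print(room_reply("Ni hao ma?"))  # answered purely by table lookup
```

The point of the thought experiment is precisely that this mechanism, scaled up far enough, would be indistinguishable from understanding to the questioner outside.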

Some people say that the person inside the room will appear to be
speaking Chinese but won't really UNDERSTAND Chinese; therefore
a computer APPEARING to be intelligent won't really BE intelligent.

To me this is reasoning ad absurdum, because:
- first, a book like that, containing ALL questions and answers, would
  have to be a metaphysical entity: a META book;
- second, even if we grant for argument's sake that such a book exists,
  it would have to adapt itself to the situation, because giving
  the same answer every time would be a good marker that you're not
  really talking to a human/something intelligent;
- third, and most importantly: imagine that the above requirements are
  met and you're talking and talking and talking (in Chinese) to
  somebody in a room, and after hours and maybe days of conversation
  about any subject imaginable you finally decide: "Hey! That was a
  nice LOOOONG talk! I think I like that person in the room." What
  difference would it make to YOU that the person in the room doesn't
  understand Chinese? NONE whatsoever!! To YOU the person in the room
  DOES speak Chinese, and to YOU there's nothing to prove otherwise.
  So the whole question of whether the person(+book) understands
  Chinese is moot. You can compare it with the models used in physics:
  a model doesn't have to be an exact replica of reality, it just has
  to be a good reflection of it.

>>
>>Computers, not having human style mind, can only simulate human
>>intelligence.
>>
>>If a computer appears to be intelligent, then it is intelligent (though
>>not human), at least to the person observing it. 
>
>If an F/A-18 1553 simulator appears to be flying to the subsystems it is 
>talking to, then it IS flying (something wrong here).

No, there isn't. If the subsystems are the important factor here, and
the only thing that matters to them is the data they get from the
simulator, then to them the simulator IS flying (or better: the fact
that it isn't even really flying is totally irrelevant).

Compare: humans asking an "intelligent" computer a question want an
"intelligent" answer. The computer is the simulator here: it simulates
(human) intelligence. To the humans, whether the computer is
or isn't intelligent is totally irrelevant IF the computer gives them
the data they want (the intelligent answer)!
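In software terms this is just interface equivalence: if the consumer only ever sees the data, it cannot tell the real thing from the simulation. A minimal sketch (all names and values are made up for illustration, not taken from any actual F/A-18 or 1553 interface):

```python
# Two sources with the same interface. The consuming "subsystem" only
# looks at the data, so by construction it cannot tell them apart.
class RealSensor:
    def altitude(self) -> float:
        return 10000.0  # would come from actual hardware

class SimulatedSensor:
    def altitude(self) -> float:
        return 10000.0  # computed by a flight model instead

def subsystem_status(sensor) -> str:
    # To the subsystem, whatever reports altitude > 0 IS "flying".
    return "flying" if sensor.altitude() > 0 else "on ground"

print(subsystem_status(RealSensor()))       # flying
print(subsystem_status(SimulatedSensor()))  # flying
```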

_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
_/ Tako Schotanus                TNO Building and Construction Research _/
_/ Phone : +31 15 842393 Fax : +31 15 122182  E-mail : sst@bouw.tno.nl  _/
_/ My employer is required,by Dutch law,to disagree with whatever I say _/
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
