From newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rpi!usc!wupost!micro-heart-of-gold.mit.edu!news.media.mit.edu!nlc Tue Jun  9 10:07:37 EDT 1992
Article 6132 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rpi!usc!wupost!micro-heart-of-gold.mit.edu!news.media.mit.edu!nlc
From: nlc@media.mit.edu (Nick Cassimatis)
Subject: Re: Hypothesis: I am a Transducer (Formerly "Virtual Grounding")
Message-ID: <1992Jun7.002032.614@news.media.mit.edu>
Sender: news@news.media.mit.edu (USENET News System)
Organization: MIT Media Laboratory
References: <1992Jun5.045522.19139@news.media.mit.edu> <1992Jun5.130022.26367@cs.ucf.edu> <1992Jun5.190920.26879@neptune.inf.ethz.ch>
Date: Sun, 7 Jun 1992 00:20:32 GMT
Lines: 38

>>The central problem of AI - Searle's anyway - is that a machine behaving
>>intelligently may not be conscious - have qualia etc. etc.  

This statement came as a real shocker.  To me, someone who plans to do
AI work some day and who thinks about it a lot, the central problems of
AI are getting a machine to use and understand language, building a
robot that can learn to use tools to build things, and so on....  I'm
willing to bet quite a bit that these problems won't be solved by
people who expend most of their "mental energies" on solving Searle's
puzzle.

>>                                                            Even Searle 
>>would agree that it is possible to build a zombie - use a humongous LUT 
>>if all else fails.

Assume we have built a "zombie" -- let's say it's something like Data
from Star Trek: TNG, only let's say that it is a bit weaker, that it
lies on occasion, that it goes to great lengths to win the affection
of someone of the opposite sex, that it commits many of the sins one
reads about in the Old Testament, and so on.  Now let's say someone
like Searle comes along and convinces us that while quasi-Data is in
all other ways indistinguishable from humans, he is not conscious, he
doesn't have qualia, and that his (positronic?) "brain" states are not
grounded; all because [insert whatever reasons you wish].  Now let's
assume that we are convinced by this quasi-Searle.  So what?  How is
Mr./Ms. quasi-Searle's statement that interesting?  Should the fact
that we don't use a certain set of words in relation to quasi-Data
change our behavior toward it?  Let's assume that a latter-day Dennett
comes along and proves irrefutably that we can use the same
"mentalistic" idioms towards quasi-Data as we would towards humans.
So now we make a minor adjustment to our linguistic conventions -- but
the fact remains that the machine we see before us is still understood
in the same way as it was before we changed our linguistic habits: our
conception of the mechanism that is behind this marvel hasn't changed,
the jokes we tell him don't change, the way we listen to his violin
playing doesn't change, and so on.  So if the arguments about qualia
and all that have no appreciable effect either way, why did we get so
worked up in the first place?
