From newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rutgers!jvnc.net!darwin.sura.net!cs.ucf.edu!news Tue Jun  9 10:07:49 EDT 1992
Article 6149 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rutgers!jvnc.net!darwin.sura.net!cs.ucf.edu!news
From: clarke@acme.ucf.edu (Thomas Clarke)
Newsgroups: comp.ai.philosophy
Subject: Re: Hypothesis: I am a Transducer (Formerly "Virtual Grounding")
Message-ID: <1992Jun8.134537.468@cs.ucf.edu>
Date: 8 Jun 92 13:45:37 GMT
References: <1992Jun7.002032.614@news.media.mit.edu>
Sender: news@cs.ucf.edu (News system)
Organization: University of Central Florida
Lines: 48

In article <1992Jun7.002032.614@news.media.mit.edu> nlc@media.mit.edu (Nick  
Cassimatis) writes:
> >>The central problem of AI - Searle's anyway - is that a machine behaving
> >>intelligently may not be conscious - have qualia etc. etc.  
> 
> This statement came as a real shocker.  To me, someone who plans to do
> AI work some day and who thinks about it a lot, the central problems of
> AI are getting a machine to use and understand language, ... etc
>
Sorry.  I thought the "philosophical" in front of AI was understood. 
For philosophical purposes, I think Searle would concede that machines
can someday be built which understand language, build things, etc.
> I'm
> willing to bet quite a bit that these problems won't be solved by
> people who expend most of their "mental energies" to solve Searle's
> puzzle.
It seems to me important to establish, if possible, what the fundamental
limits are.  We already know time should not be wasted on the halting 
problem.
> 
> >>                                                            Even Searle 
> >>would agree that it is possible to build a zombie - use a humongous LUT 
> >>if all else fails.
> 
> Assume we have built a "zombie" -- let's say it's something like Data
> from Star Trek: TNG, 
Dangerous to argue from science fiction, though it is a good source of
analogies and metaphors.  In any case, I don't think Data is a zombie.
He lacks emotion, but then so did Spock in the original Star Trek.

If the ship's computer were given an anthropomorphic interface, its 
simulacra would be intelligent, but not conscious.  They would be zombies.
Come to think of it, the actors on the Holodeck are generated by the 
computer - hence they are zombies.
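The "humongous LUT" zombie from the quoted passage can be sketched in a
few lines of Python: a table mapping utterances to canned replies, with
no model of meaning behind it.  (The entries below are invented for
illustration; a real Blockhead-style table would need an entry for every
possible conversation.)

```python
# A minimal sketch of a lookup-table "zombie": behavior without
# understanding.  Every input string maps to a fixed canned reply.
LUT = {
    "Computer, correlate positronic emission anomalies with "
    "unusual Romulan activity in sector 6.":
        "Correlation complete: three anomalies match.",
    "Why do you laugh?":
        "Humor detected. Laughter is a social signal.",
}

def zombie_reply(utterance: str) -> str:
    # Pure table lookup -- no semantics, no state, just string matching.
    return LUT.get(utterance, "Unable to comply.")

print(zombie_reply("Why do you laugh?"))
```

The point of the sketch is only that the input/output mapping can be
made arbitrarily competent while the mechanism remains transparently
mindless - which is exactly why the LUT zombie figures in the argument.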

Think of questions to the computer: "Computer, correlate positronic
emission anomalies with unusual Romulan activity in sector 6," which
require intelligence.  Then compare Data's questions, "Geordi, why do
you laugh?", which show consciousness.  It is interesting that the
writers have made Data the only one of his kind; the secret of the
conscious (?) robot died with his creator.


--
Thomas Clarke
Institute for Simulation and Training, University of Central FL
12424 Research Parkway, Suite 300, Orlando, FL 32826
(407)658-5030, FAX: (407)658-5059, clarke@acme.ucf.edu


