From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aisb!jeff Fri Jan 31 10:26:39 EST 1992
Article 3230 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aisb!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <1992Jan28.221347.8794@aisb.ed.ac.uk>
Date: 28 Jan 92 22:13:47 GMT
References: <11976@optima.cs.arizona.edu> <42349@dime.cs.umass.edu>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 25

In article <42349@dime.cs.umass.edu> orourke@sophia.smith.edu (Joseph O'Rourke) writes:
>In article <11976@optima.cs.arizona.edu> David Gudeman writes:
>>   In article  <42304@dime.cs.umass.edu> Joseph O'Rourke writes:
>>   
>>   ]1. Understanding (grasping meanings of) is impossible without
>>   ]   consciousness.
>>   ]
>>   ]2. It is possible that consciousness does not require biological tissue.
>>   ]
>>   ]3. As a result of a deep Turing Test -like conversation with a machine,
>>   ]   you have to admit that it seems the machine grasps meanings.
>>   
>>   I'm willing to accept all of those premises.

One problem with (3) is that something can often seem to grasp
meanings without our concluding that it really does.
For instance, there are a number of programs that sometimes seem to
understand natural language (e.g., Eliza, NL interfaces to databases),
and people often think their pets understand what is said to them.

On the other hand, if computers might be able to understand, I don't
see why they'd have to pass the Turing Test before we'd admit it.
Suppose they could use language, but not well enough to pass.

In my opinion, how they work will be as important as what they say.
