From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!zaphod.mps.ohio-state.edu!rpi!batcomputer!cornell!rochester!yamauchi Tue Nov 26 12:31:54 EST 1991
Article 1539 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca rec.arts.books:10560 sci.philosophy.tech:1082 comp.ai.philosophy:1539
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!zaphod.mps.ohio-state.edu!rpi!batcomputer!cornell!rochester!yamauchi
From: yamauchi@cs.rochester.edu (Brian Yamauchi)
Newsgroups: rec.arts.books,sci.philosophy.tech,comp.ai.philosophy
Subject: Searle (was Re: Daniel Dennett (was Re: Commenting on the posting))
Message-ID: <YAMAUCHI.91Nov24022756@magenta.cs.rochester.edu>
Date: 24 Nov 91 07:27:56 GMT
References: <1991Nov14.223348.4076@milton.u.washington.edu>
	<1991Nov15.160741.5495@husc3.harvard.edu> <11749@star.cs.vu.nl>
	<15015@castle.ed.ac.uk>
Sender: yamauchi@cs.rochester.edu (Brian Yamauchi)
Organization: University of Rochester
Lines: 35
In-Reply-To: cam@castle.ed.ac.uk's message of 19 Nov 91 18:27:26 GMT
Nntp-Posting-Host: magenta.cs.rochester.edu

In article <15015@castle.ed.ac.uk> cam@castle.ed.ac.uk (Chris Malcolm) writes:
>In article <11749@star.cs.vu.nl> peter@cs.vu.nl (Grunwald PD) writes:

>>Searle says: 'Machines can never be conscious (or even intelligent (!?)) because
>>		they inherently lack 'semantics', which is necessary for
>>		consciousness and inherently only available to human beings '

>Searle is commonly supposed to have said this. In fact, Searle is not so
>foolish. He has said quite clearly that machines _can_ think, since we
>are biological machines, _but_ that a machine could not think _solely_
>by virtue of _syntactic_ manipulations. That is his point in a nutshell.

Hmmm...  How does Searle define "syntactic"?  I tend to agree that we
will make little progress towards systems with human-like intelligence
by concentrating solely on systems that perform linguistic
manipulation, but...

In his answer to the "Robot Reply", Searle seems to believe that any
robotic system controlled by a Turing-equivalent computer would be
incapable of "thinking".  And in his reply to the Churchlands (in
Scientific American) he seems to extend these objections to neural
networks and robots controlled by neural nets.

He does admit that humans are machines, but he never says what it is
about humans that gives them the "semantics" that other machines
"lack".  From reading his essays, I received the distinct impression
that he believes this has something to do with the specific chemical
composition of the human brain...
--
_______________________________________________________________________________

Brian Yamauchi				NASA/Caltech Jet Propulsion Laboratory
yamauchi@cs.rochester.edu		Robotic Intelligence Group
_______________________________________________________________________________
