Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornell!travelers.mail.cornell.edu!news.kei.com!newshost.marcam.com!zip.eecs.umich.edu!umn.edu!gold.tc.umn.edu!woln0001
From: woln0001@gold.tc.umn.edu (Christopher A Wolney)
Subject: Searle vs. "Strong" AI
Message-ID: <D5D9Bu.6HC@news.cis.umn.edu>
Sender: news@news.cis.umn.edu (Usenet News Administration)
Nntp-Posting-Host: gold.tc.umn.edu
Organization: University of Minnesota, Twin Cities
X-Newsreader: TIN [version 1.2 PL2]
Date: Mon, 13 Mar 1995 06:42:38 GMT
Lines: 37


 John Searle, in his paper "Minds, Brains, and Programs", argues consistently
against "Strong" AI--the claim that a machine could "be able to have a mind" or
"become a mind".  He hammers this point home throughout the paper, and at the
end of the text he even takes up six different replies to his claim and tries
to demonstrate why each of them is "incorrect" as well.

 Among these is the "brain simulator reply", which states that an exact
model of the human brain should be able to "understand"--which is basically
the question at issue in the paper: can a machine "understand"?  (The concept
that a computer can "think" is validated and conceded by Searle in the paper.)

 I believe a computer is capable of understanding.  At this point in time, I
believe, as many would, that machines are capable of "thinking", but this
"understanding" may require a far more complex computer (I picture a massively
parallel, distributed system, like the human brain itself...) and is not
possible at this time.

 Does anyone have any more papers or texts that argue that machines can
"understand" (counter to Searle's "Chinese Room" argument)?  I would like to
gather more info on this, since I am going to use it as the subject of a paper
I am writing for an "AI"-type philosophy class.  It seems to me that Searle
takes a naive approach to the whole topic--some of his concepts seem to be
reaching, and the technology at the time wasn't as good as it is now...

 Also, any questions anyone has that can get me to think more on the subject
(as well as any points) would be appreciated.  I'd like to kick back at the 
coffee house for a few hours in thought before I attack the position.

 Maybe Searle even reads this newsgroup... In which case, what is your
position's Achilles' heel? :)
