From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!usenet.ucs.indiana.edu!silver.ucs.indiana.edu!lcarr Mon Oct 19 16:59:10 EDT 1992
Article 7274 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!usenet.ucs.indiana.edu!silver.ucs.indiana.edu!lcarr
From: lcarr@silver.ucs.indiana.edu (lincoln carr)
Subject: Re: Simulated Brain
Message-ID: <Bw4vK8.2H3@usenet.ucs.indiana.edu>
Sender: news@usenet.ucs.indiana.edu (USENET News System)
Nntp-Posting-Host: silver.ucs.indiana.edu
Organization: Indiana University
References: <1992Oct12.221609.15695@news.media.mit.edu> <1992Oct12.224008.16222@news.media.mit.edu> <1992Oct13.085347.13831@klaava.Helsinki.FI>
Date: Wed, 14 Oct 1992 22:52:55 GMT
Lines: 44

In article <1992Oct13.085347.13831@klaava.Helsinki.FI> amnell@klaava.Helsinki.FI (Marko Amnell) writes:

>but they [computers] still wouldn't be conscious in the way we are
[even if they exhibited the same behaviour as human beings].

I think that you want any future artificial system that we call
intelligent to have experience in the same way that humans have
experience.  For example, if we follow, say, Kant's model of how
humans take in information, a computer would perceive in space and
time, classify concepts according to modality, etc.  Why would even
this be impossible?  It would seem that any machine that takes in
information from some kind of sensory apparatus, processes it, and
produces behavior that is indistinguishable from a putatively
intelligent being must be intelligent.  Hence the Turing Test.  I
whole-heartedly agree with previous postings in this thread that
say, in one way or another, that if one tries to go beyond behavior in
evaluating intelligence, one faces the problem of other minds, i.e.,
how does one really know that others have minds?

It seems to me that Searle deliberately couches his terms in a way
that discounts the possibility of machine intelligence. In a followup
to his Chinese room argument, he argues that nothing intelligent
merely does symbol processing, that computers (as we define them now)
merely do symbol processing, and that therefore computers cannot be
intelligent.  Why
couldn't we call a computer intelligent if it takes in sense data, of
a sort, processes it, and produces a response that is
indistinguishable from that of a human being?  Although Searle never
explicitly defines "intelligence" or "understanding," he seems to do
so, in a sense, when he makes claims like "no intelligent being merely
does symbol processing."  I once asked Searle, at a talk he gave,
whether he had a minimum standard of intelligence; he did not.  The
beauty of the Turing Test is precisely in the fact that one doesn't
need a precise definition of "intelligence" or "understanding."
Turing sidestepped the issue and merely advised one to compare a
machine to something that everyone agrees is "intelligent."
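Turing's proposal is, at bottom, a protocol, and it can be sketched as
one.  Here is a minimal sketch (in Python), with stub respondents
standing in for real interlocutors; the function names and the
stand-in replies are illustrative assumptions, not anyone's actual
test:

```python
import random

def human_respondent(question):
    # Stub: stands in for a human's reply.
    return "reply to: " + question

def machine_respondent(question):
    # Stub: stands in for the machine's reply.  If its behaviour is
    # indistinguishable from the human's, the judge can do no better
    # than chance.
    return "reply to: " + question

def imitation_game(judge, questions, trials=1000):
    """Run the game repeatedly and return the judge's success rate."""
    correct = 0
    for _ in range(trials):
        # Hide the machine behind one of two anonymous labels, A or B.
        if random.random() < 0.5:
            a, b, machine_is = human_respondent, machine_respondent, "B"
        else:
            a, b, machine_is = machine_respondent, human_respondent, "A"
        transcript = [(q, a(q), b(q)) for q in questions]
        if judge(transcript) == machine_is:
            correct += 1
    return correct / trials

# A judge facing identical behaviour can only guess.
guessing_judge = lambda transcript: random.choice(["A", "B"])

rate = imitation_game(guessing_judge, ["What is a sonnet?"])
# rate hovers near 0.5: when the judge cannot tell the respondents
# apart, the machine "passes."
```

Note that nothing in the protocol mentions a definition of
intelligence; it only asks whether the judge's success rate stays at
chance.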

-- 
*******************************************************************************
Lincoln R. Carr, Computer Scientist-Philosopher    lcarr@silver.ucs.indiana.edu
"Treat all rational autonomous moral agents, whether in the form of yourself
or another, never as means solely, but always as ends in themselves."