From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!ccu.umanitoba.ca!access.usask.ca!alberta!aunro!ukma!wupost!zaphod.mps.ohio-state.edu!think.com!ames!haven.umd.edu!uflorida!mosquito.cis.ufl.edu!fre Tue Nov 26 12:31:00 EST 1991
Article 1448 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca rec.arts.books:10286 sci.philosophy.tech:1031 comp.ai.philosophy:1448
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!ccu.umanitoba.ca!access.usask.ca!alberta!aunro!ukma!wupost!zaphod.mps.ohio-state.edu!think.com!ames!haven.umd.edu!uflorida!mosquito.cis.ufl.edu!fred
From: fred@mosquito.cis.ufl.edu (Fred Buhl)
Newsgroups: rec.arts.books,sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Daniel Dennett (was Re: Commenting on the pos
Message-ID: <32905@uflorida.cis.ufl.EDU>
Date: 20 Nov 91 21:59:20 GMT
References: <1991Nov19.101612.5603@husc3.harvard.edu> <15018@castle.ed.ac.uk> <1991Nov19.183901.5640@husc3.harvard.edu>
Sender: news@uflorida.cis.ufl.EDU
Organization: UF CIS Dept.
Lines: 44

I've enjoyed the discussions on this group immensely -- until now,
that is.  It seems that with each new posting the quality of argument
deteriorates while the vitriol increases.  So much has been said now
that I doubt _any_ further rational discussion on this topic will
occur.  Perhaps that's not the goal; perhaps responding to ad-hominem
attacks with more ad-hominem attacks is.  If the latter, I'd ask that
discussion of this type be moved to alt.flame or some other group of
that nature.  It's difficult to respond to inflammatory statements
with calmness and courtesy (my blood has boiled at various points in
this discussion), but I think it's the only way to keep this
discussion at something approaching a rational level.  Some restraint,
Gentlemen, please!

Please forgive the above lecture.  Now, back to business:

I liked Chris Malcolm's Searle-in-a-nutshell -- that seems to me to be
_exactly_ what he's saying.  My problem with Searle is what he
_doesn't_ say, at least not clearly to me -- what are his requirements
for allowing a system to be said to have "understood" something?  When
Searle visited UF last year, I had a chance to ask him some questions
after his lecture.  Trying to summarize what I'd read and just heard,
I asked him, "Understanding requires Consciousness?" and he agreed.  I
also asked him, "No symbol-manipulation system will ever be
Conscious?" and he agreed.  I then asked him whether the brain wasn't
itself a symbol-processing system (the symbols being neural pulses),
but I couldn't quite follow the refutation he gave -- wish I'd taped it.

The problem is that Searle can't describe what specific properties are
required for his definition of Consciousness, only what sorts of
systems _aren't_ Conscious.  Until he gives such definitions (and
that's probably impossible to do at this stage), I don't see much
point in discussing his ideas, since (IMHO) he hasn't properly defined
his terms yet.  Most of the argument about Searle's ideas, I think,
has to do with problems in the definitions of terms.  (I also have
grave doubts about the system he describes _ever_ passing the Turing
test, given that it has no short-term memory or learning ability, but
that's _another_ problem.)

(I know, _another_ Searle/Chinese room post.  Just what we need. Sigh.)
---------------------------------------------------------------------------
Fred Buhl, Grad Student            A proud member of the Union of
UF Computer Science Dept. - AI     Unconcerned Scientists.       
fred@reef.cis.ufl.edu              "Ants are smart.  _Really_ smart." 
---------------------------------------------------------------------------