Article 4017 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!spool.mu.edu!olivea!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of understanding
Message-ID: <1992Feb25.175012.8924@oracorp.com>
Date: 25 Feb 92 17:50:12 GMT
Article-I.D.: oracorp.1992Feb25.175012.8924
Organization: ORA Corporation
Lines: 37

Christopher Green writes:

> Please tell me what you find "good" about Hofstadter & Dennett's
> reply [to Searle's rebuttal to the Systems Reply]. I have it here
> in front of me and it seems to boil down to "no human could ever
> memorize all those symbols and rules." From a philosophical perspective,
> this is no argument at all.

Although I agree with Hofstadter that Searle's arguments are wrong, I
also agree with you that Hofstadter's reply is worthless (if you have
characterized it accurately). It reminds me of a discussion I once had
as an undergraduate at Northwestern. Not quite understanding the
implications of the Theory of Relativity, I asked the TA a question
that started with "If I were on a rocket that accelerated from rest
to 90% of the speed of light in 1 second..." His answer was "That's
impossible; acceleration like that would kill you."
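
(For what it's worth, the TA's arithmetic was right; only its relevance
was off. Taking c = 3 x 10^8 m/s and g = 9.8 m/s^2, the required
acceleration would be

    a = \frac{0.9c}{1\,\mathrm{s}} \approx 2.7 \times 10^{8}\,\mathrm{m/s^2}
      \approx 2.8 \times 10^{7}\,g,

tens of millions of g. Which is exactly why one says "if I were on a
rocket" in the first place.)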

> Searle is *giving* his opponents that a human could accomplish this
> astounding feat (just as he gives them the possibility that a language
> could be reduced to a finite set of rules; a matter which leads to all
> sorts of confusion). The point is that *even under these improbable
> conditions* -- conditions which work to the advantage of the
> strong-AIist -- you can still show that the system has no
> understanding.

I disagree with you that Searle is giving anything to the Strong AI
position by making these concessions. As Searle describes it, Strong
AI is the philosophical position that any machine that "implements the
right program" must understand in the same sense a human does. That
is, Strong AI is logically in the form of an implication: If machine A
implements the right program, then machine A understands. It isn't
making a concession to Strong AI to assume the antecedent in order to
explore the consequences.
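
To make that form explicit (the notation here is mine, not Searle's):
write R(m) for "machine m implements the right program" and U(m) for
"machine m understands". Strong AI asserts

    \forall m \, \bigl( R(m) \rightarrow U(m) \bigr)

and the Chinese Room is offered as a counterexample: a particular m for
which R(m) is stipulated and U(m) is argued to fail. Granting R(m),
i.e. that a man really could memorize and execute all the rules, is not
a gift to the Strong AI side; it is just instantiating the antecedent,
which is the only way to put a conditional to the test.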

Daryl McCullough
ORA Corp.
Ithaca, NY
