From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Mon Mar  9 18:33:33 EST 1992
Article 4109 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Definition of understanding
Message-ID: <1992Feb28.022105.28548@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Feb25.175012.8924@oracorp.com>
Date: Fri, 28 Feb 92 02:21:05 GMT
Lines: 56

In article <1992Feb25.175012.8924@oracorp.com> daryl@oracorp.com writes:

>Although I agree with Hofstadter that Searle's arguments are wrong, I
>also agree with you that Hofstadter's reply is worthless (if you have
>characterized it accurately). It reminds me of a discussion I once had
>as an undergraduate at Northwestern. Not quite understanding the
>implications of the Theory of Relativity, I asked the TA a question
>that started with "If I were on a rocket that accelerated from rest
>to 90% of the speed of light in 1 second..." His answer was "That's
>impossible; acceleration like that would kill you."

This misses the point that Searle's answer to the "virtual person"
reply is entirely based on intuition (as are the answers made by
pro-Searlians in this thread), more or less consisting of "Two
minds in one head?  That's ridiculous!".  So anything that weakens
the intuition serves a purpose.

Hofstadter's point, I take it, is that the intuitions that Searle is
appealing to get their strongest support if we think about *real
people*, for whom it seems ridiculous that the memorization of a
bunch of rules could produce another mind.  But people with the
capacity to memorize a whole Chinese-room rule set would be vastly
different from anything in our experience: they would essentially be
using only half their brain, with the other half sitting empty,
available as storage capacity.  By memorizing the rule set, they are
effectively
*doubling* the complexity present in their brains.  It begins to
seem much less ridiculous that the sudden acquisition of a brain's
worth of complexity could produce a new phenomenology.

Along these lines, elsewhere Michael Gemar writes:

>You may be technically correct, but it seems to elude many people that
>Searle is merely *assuming* the antecedent for purposes of argument, and
>that the possibility that this assumption is true is by no means assured.

That doesn't work.  The whole point of the memorization argument is
to produce a counterexample to the strong AI thesis, i.e. to exhibit,
for any given program, an implementation of it in which no understanding
is present.  If no such "memorization" implementation could exist, that
part of the argument is made entirely worthless.  (I think one could
still try to run it through using counterfactual beings with a vast
unused memory capacity.  But the point is that it certainly has to
be much more than an "assumption for the purposes of argument.")

>Actually, Hofstadter's original reply comes after the original article in
>BBS, and consists primarily of jumping up and down and yelling a lot.

The reply in question is in _The Mind's I_, and is more substantial.
I don't agree with all of it, but it beats the usual Chinese Room fare
(the Chinese-room discussion in the literature, apart from that in BBS,
is almost uniformly terrible).

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."