From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!mips!cs.uoregon.edu!ogicse!das-news.harvard.edu!spdcc!dirtydog.ima.isc.com!ispd-newsserver!psinntp!scylla!daryl Tue Mar 24 09:56:18 EST 1992
Article 4506 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!mips!cs.uoregon.edu!ogicse!das-news.harvard.edu!spdcc!dirtydog.ima.isc.com!ispd-newsserver!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Newsgroups: comp.ai.philosophy
Subject: Chris Green's Ad Hoc Argument
Message-ID: <1992Mar17.124645.5285@oracorp.com>
Date: 17 Mar 92 12:46:45 GMT
Article-I.D.: oracorp.1992Mar17.124645.5285
Organization: ORA Corporation
Lines: 27

Chris,

I don't understand what you think is ad hoc about either the Systems
Reply or the assertion that in Searle's memorization response, a
person executing the rules would produce a second mind that
understands. As I said in several postings (replies to your earlier
ones), the computational theory of mind holds as its basic premise
that a mind *is* a particular kind of computation.
The Systems Reply and the multiple-minds rebuttal to Searle's response
both follow logically from this basic premise. What could be farther
from "ad hoc"?

On the other hand, in Searle's argument he seems to be constructing
principles as he goes along. Does the existence of one mind per brain
follow from anything else Searle has said? Does it follow from any
theory of mind?

In my opinion, the situation is exactly the opposite of the way you
are portraying it; Searle's argument seems completely ad hoc, and the
AI position has a definite coherence to it. I don't mind Searle being
ad hoc, particularly, since calling an argument "ad hoc" is more of an
aesthetic judgement than a logical one. But I do object to the wart
hog calling the cat ugly.

Daryl McCullough
ORA Corp.
Ithaca, NY
