From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Tue Mar 24 09:58:14 EST 1992
Article 4684 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Virtual Person?
Message-ID: <6479@skye.ed.ac.uk>
Date: 23 Mar 92 19:59:43 GMT
References: <1992Feb14.180030.48911@spss.com> <6208@skye.ed.ac.uk> <1992Feb27.231046.15534@bronze.ucs.indiana.edu>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 32

In article <1992Feb27.231046.15534@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <6208@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>
>>I don't know.  But some people seem to think brain simulation is
>>a key move in an argument against Searle.  Evidently, they're
>>relying on some property that brain simulations have that other
>>programs don't.  So why would their conclusions against Searle
>>apply more generally?
>>
>>All I'm asking is that they answer that question.  Perhaps all they
>>can conclude is that programs can understand only if they're brain
>>simulations. Don't you want to know how general the conclusion is?
>
>This is silly.  Searle's argument, if it is correct, establishes a
>universal claim: for no program P is implementing P sufficient for
>mentality.  If a counterexample is found, then not only is the
>conclusion wrong, but the entire argument is wrong.

It should be clear that arguments can often be repaired when the
range of counterexamples is sufficiently narrow.  I am suspicious
when the range is so narrow.  And as far as I am concerned, whether
_Searle_ is right or wrong is a relatively minor question.  AI
is no better off (except politically) if Searle is wrong but an
argument similar to his works.

Moreover, disproof by counterexample may not give much insight into
why an argument is incorrect.  Again, merely beating Searle isn't of
much interest to me.  If he's wrong, I'd like to know why; and if the
answer is just "brain simulations are a counterexample", I'd like to
know what's so special about brain simulations.

-- jeff
