From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Tue Mar 24 09:56:51 EST 1992
Article 4557 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: What comes after the Systems Reply?
Message-ID: <6428@skye.ed.ac.uk>
Date: 18 Mar 92 16:38:44 GMT
References: <1992Mar16.171520.15584@psych.toronto.edu> <1992Mar17.020503.9967@bronze.ucs.indiana.edu> <1992Mar18.035719.3394@psych.toronto.edu>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 65

In article <1992Mar18.035719.3394@psych.toronto.edu> christo@psych.toronto.edu (Christopher Green) writes:
>In article <1992Mar17.020503.9967@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>
>[stuff by me about Fodor, Dretske, Putnam, and Millikan deleted]

>>These are just the kinds of theories that Michael Gemar would scream
>>something like "but that's a *syntactic* notion of information"
>>at (as I saw in a recent post).  

>Well, I'll let Michael speak for himself, but I think he's on my side
>on this. He's not constitutionally against all current theories
>of semantics (except inasmuch as they all fail, but we all know that).
>He's against people who don't know any of this material passing judgment
>on Searle, as though knowing how to program were the only important
>thing in the world.

I'm going to say something in support of Green, Gemar, Gudeman,
Zeleny, et al., but not because I always agree with them (I don't)
or because I'm convinced AI is impossible (I'm not).

In my opinion, it's the "anti-AI", "pro-CR", side that has the more
interesting things to say.  I want to know whether and how machines
can be made intelligent, and these people are saying what the
obstacles are.

I enjoy reading some people on the other side as well, people such as
McDermott, Chalmers, McCullough, and (if he's on that side) O'Rourke;
but on the whole the pro-AI argument seems to be stuck at the Systems
Reply and the Turing Test.  I don't think there's much to be said
about the Systems Reply beyond "Searle is just the CPU, so how would
he know?", and the Turing Test is just a way to avoid facing the
question of what mechanisms are required (because -- they say --
anything that generates the right behavior will do).

Anyone who wants to raise any difficulty for AI has to face the
Turing Test shock troops, the Verificationist Reserve, and then
the Definition Gang.  Even if we leave out the endless cycle of
misunderstanding, I don't think it's surprising that the discussion
doesn't make much progress.

I think we should set some of this stuff aside.

The people who think everything is too ill-defined for meaningful
discussion can go on thinking that, but unless they want to produce
some definitions themselves (or find them in the literature), I
think they should confine their arguments on this point to occasional
reminders that we still haven't defined anything.

Those who think that anything with the right behavior automatically
understands, either because they think that's all understanding means
for anyone who's not a skeptic about other minds, or because they
think any other notion is unverifiable and hence meaningless, or for
some other reason, can go on thinking that; but they should also
accept that for other people it's still an open question.  If they
want to convince these other people that behavior can get us from
"maybe the system understands" to "the system understands" (for
example), they should go about it by trying to show that the behavior
can only occur when there is "real understanding" (as I think Daryl
McCullough sometimes tries to do); for otherwise the two sides are
just not interested in the same issues.

For my part, I will no longer attempt to answer arguments that fall
in the areas where I think further dispute is more or less pointless.

-- jeff
