Article 4579 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!garrot.DMI.USherb.CA!uxa.ecn.bgu.edu!mp.cs.niu.edu!rickert
From: rickert@mp.cs.niu.edu (Neil Rickert)
Newsgroups: comp.ai.philosophy
Subject: Re: What comes after the Systems Reply?
Message-ID: <1992Mar18.230538.9494@mp.cs.niu.edu>
Date: 18 Mar 92 23:05:38 GMT
References: <1992Mar17.020503.9967@bronze.ucs.indiana.edu> <1992Mar18.035719.3394@psych.toronto.edu> <6428@skye.ed.ac.uk>
Organization: Northern Illinois University
Lines: 93

In article <6428@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>In my opinion, it's the "anti-AI", "pro-CR", side that has the more
>interesting things to say.  I want to know whether and how machines
>can be made intelligent, and these people are saying what the
>obstacles are.

  Perhaps this exemplifies the miscommunication that is occurring.  But from
my perspective, it looks as if the pro-CR side is not saying what the
obstacles are - instead, they are saying that the whole project is doomed
to failure and that they have proof of it.  It is a little difficult to
discuss how to make machines intelligent when the other side of the
discussion seems to say that such discussion is pointless.

>Anyone who wants to raise any difficulty for AI has to face the
>Turing Test shock troops, the Verificationist Reserve, and then
>the Definition Gang.  Even if we leave out the endless cycle of
>misunderstanding, I don't think it's surprising that the discussion
>doesn't make much progress.

  A large part of the "endless cycle of misunderstanding" is caused by the
pro-CR crowd.  Let's face it: the CR argument is unproductive.

  When you bring up the CR argument, you are essentially saying to
the pro-AI types:  "What you are doing is garbage, and here is a final
proof that it is nonsense.  Oh, and by the way, I admit that the proof
has a few terms that are not precisely defined, but these have well-known
meanings and I refuse to define them".  This naturally provokes the
reply "Your proof is invalid, and any proof which does not define its
terms is anti-scientific".  You may not like the response, but it is a
natural response to your provocation.  To stop the response, you need only
stop provoking it.

>The people who think everything is too ill-defined for meaningful
>discussion can go on thinking that, but unless they want to produce
>some definitions themselves (or find them in the literature), I
>think they should confine their arguments on this point to occasional
>reminders that we still haven't defined anything.

  I have not seen very many claims from the pro-AI side that "everything
is too ill-defined for meaningful discussion".  On the other hand, I fully
support the assertion that everything is too ill-defined to consider
the CR argument anything approaching a proof.  If the pro-CR side would
stop claiming that the CR proves anything, and instead use it only as a
guide for suggesting specific problems with some AI approaches, perhaps we
could have a more productive discussion.

>Those who think that anything with the right behavior automatically
>understands, either because they think that's all understanding means
>for anyone who's not a skeptic about other minds, or because they
>think any other notion is unverifiable and hence meaningless, or for
>some other reason, can go on thinking that; but they should also
>accept that for other people it's still an open question.  If they

 You (more generally, the pro-CR group) oversimplify the views of the
anti-CR group, and then defame them on the basis of that oversimplification.
For example, I happen to believe that anything with the right behavior
understands.  But I believe that only because I cannot conceive of any way
of generating the correct behavior without first solving the problem of
understanding.  I certainly do not believe that I can work on generating the
behavior and the understanding will take care of itself -- I happen to
believe that any such approach is doomed to fail.  Yet when I post anything,
you jump to accuse me of just those things I do not believe.
You need to be a little more generous in your interpretation of what
might motivate those on the pro-AI side.

>want to convince these other people that behavior can get us from
>"maybe the system understands" to "the system understands" (for
>example), they should go about it by trying to show that the behavior
>can only occur when there is "real understanding" (as I think Daryl
>McCullough sometimes tries to do); for otherwise the two sides are
>just not interested in the same issues.

 It is hard to see how such a discussion would be productive.  I already
believe that the behavior cannot occur without real understanding, and the
anti-AI group already believes that you can't have true AI without real
understanding.  The difference between these two views is small enough
that it would seem a waste of effort to argue over it.

>For my part, I will no longer attempt to answer arguments that fall
>in the areas where I think further dispute is more or less pointless.

 Then I hope you consider the "you can't get semantics out of syntax"
claim to be among those pointless areas.  Such claims are unproductive
unless the meanings of the terms "syntax" and "semantics" can be narrowed.
Instead, look at the specific things that AI people call semantics, and
attempt to demonstrate that these are not really semantics.  That may help
lead to a more specific and meaningful discussion.

-- 
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
  Neil W. Rickert, Computer Science               <rickert@cs.niu.edu>
  Northern Illinois Univ.
  DeKalb, IL 60115                                   +1-815-753-6940
