Newsgroups: comp.ai.nat-lang,alt.cyberspace,alt.internet,alt.net-scandal,comp.ai,comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!news.mathworks.com!uunet!in1.uu.net!pipex!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Eliza (was Re: Are there non-humans lurking on Internet/Usenet?)
Message-ID: <D43wJq.HnA@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute-alter.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <3he94f$jgp@mp.cs.niu.edu> <D3srEx.C88@cogsci.ed.ac.uk> <jqbD3vIo5.JCK@netcom.com>
Date: Thu, 16 Feb 1995 18:55:01 GMT
Lines: 192
Xref: glinda.oz.cs.cmu.edu comp.ai.nat-lang:2918 comp.ai:27543 comp.ai.philosophy:25608

In article <jqbD3vIo5.JCK@netcom.com> jqb@netcom.com (Jim Balter) writes:
>In article <D3srEx.C88@cogsci.ed.ac.uk>,
>Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>>
>>This discussion shows well one of the reasons it can be difficult to
>>convince someone that Eliza has no understanding: anything someone
>>says on the subject can be treated as ill-defined, thus changing
>>the discussion from one about whether Eliza understands to a fruitless
>>dispute about what various words mean.  I have never seen any ai.phil
>>discussion resolved in that way.
>
>Why are you so intent on convincing people that Eliza has no understanding

I am not even slightly interested in convincing anyone on the net
that Eliza has no understanding.  As for those off the net, who knows?

>when you demonstrably do not yet understand what understanding is? 

And you do, eh?

> You seem
>as perturbed by the idea that anyone might harbor a thought that Eliza might
>understand, as Searle seems perturbed by the idea that anyone might harbor
>a thought that some computer, by virtue of executing an algorithm, might
>understand. 

I'm not especially perturbed by it.  Thinking Eliza might understand
is far less perturbing than, say, Michael Howard's views on the right
to silence.  (A UK issue.  MH is the home secretary.  The right to
silence has recently been modified, some would say abolished, so that
a failure to answer questions can count against you.  It's too soon
to say what this will amount to in practice.)

>Why are you guys so dang *sure* of yourselves when you cannot
>rigorously define the terms,

What's this "you guys"?  Who are you imagining I agree with?

And who, among philosophers or AI researchers who discuss these
issues, does rigorously define their terms?  Who might I look to as a
model?  I can't think of anyone off hand, but maybe you can do better.

Anyway, definitions are among the things that have to be developed,
not the starting point.

>or even demonstrate that you have thought deeply about them?

By quoting a dictionary, perhaps?  Or maybe by "winning an argument"?

>This idea that you really *should* be able to convince people
>of these things if only they would do this or avoid doing that seems 
>extremely arrogant to me.

It might at least be possible to understand why they think as they
do.  Anyway, I don't think I *should* be able to convince people
if only...   I don't expect them to be convinced by me, especially
in cases where I'm not even offering any arguments.

I do think people *should* have an open mind, though.

>>Initially, one might think that the question of what "understand" means
>>could just as well be used against claims that Eliza does have some
>>understanding as against claims that Eliza has none, but that turns
>>out not to be the case.
>
>It is hardly surprising given that "some" and "none" have such different
>semantics.  

But in both cases, one might demand that "understanding" be defined
and refuse to go further until that happened.

However, you do have a point.  Well spotted.

>It is easy to provide definitions of "understand" that let
>*some* understanding leak in. 

True, but why should we agree to those definitions?  It's also
easy to provide definitions of "belief" so that thermostats have
beliefs or so that they don't.  But that doesn't really settle the
question of whether thermostats have beliefs.  For instance, moves
like this one result:

  If someone insists in making consciousness constitutive of the
  notion of belief, I'll just start talking about "schmeliefs"
  instead.  [Dave Chalmers in sci.philosophy.tech, 22 Sep 92]

> It is quite a bit harder to provide definitions
>of "understand" that shut out *all* understanding.  If you want such a
>definition, you are going to have to do the hard work.  I'm waiting to
>see you do that sort of hard work.  

Why wait for me?  If you don't care whether or not Eliza understands,
why bother with the issue at all?  And if you do care, why wait for me
to do the work?

Note BTW that I don't especially want a definition at all, because
I don't think starting with definitions is the right way to proceed.

>   Simply implying that Eliza doesn't
>have any understanding without struggling to produce models that allow us
>to conclude that is cheating.

And so is coming up with definitions that make it trivial that Eliza
understands.

>>It's fairly easy to produce definitions
>>of "understanding" that refer only to externally observable behavior
>>and then to argue that Eliza has enough of the right sorts of
>>behavior to count as having some understanding.  And even without such
>>a definition it's possible to insist that all acceptable definitions
>>must provide publically observable criteria.
>>
>>This makes things difficult for anyone who thinks understanding is
>>not just a matter of publically observable criteria or who thinks
>>that we don't yet know what the right criteria are.
>
>You are right; it is difficult.  Too difficult for you?

Yes, and for anyone else as well.  If the definition must refer only
to externally observable behavior, then someone who thinks understanding
is not just a matter of publically observable criteria can't give a
definition of the required sort that they also think is correct;
and similarly for someone who thinks the right criteria aren't
yet known.
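
Part of what makes the purely behavioural definition tempting is how
little machinery Eliza's behaviour actually requires.  A minimal
sketch in Python of the keyword-matching-plus-pronoun-reflection
technique (the rules below are invented for illustration; they are
not Weizenbaum's original DOCTOR script):

```python
import re

# Simplified Eliza-style responder.  The rules and reflections here
# are illustrative stand-ins, not the original script.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*) mother(.*)", re.I), "Tell me more about your family."),
]

def reflect(fragment):
    """Swap first and second person so the echoed fragment reads
    from the program's side of the conversation."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    # Try each keyword rule in order; echo back a transformed fragment.
    for pattern, template in RULES:
        m = pattern.match(sentence)
        if m:
            return template.format(*[reflect(g) for g in m.groups()])
    return "Please go on."   # content-free default when nothing matches

print(respond("I need a holiday"))   # -> Why do you need a holiday?
print(respond("Quite a different topic"))   # -> Please go on.
```

The point of the sketch is only that everything the program does is
visible in a dozen lines of pattern manipulation; whether that
suffices for "some understanding" is exactly what's in dispute.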

>>So these guys can be held off forever.  
>>All useful discussion stops right there, and no further progress
>>can be made.
>
>So you are saying are having trouble winning an argument. 

What argument do you think I'm trying to win?

In any case, your question fits well with the general nature of
comp.ai.phil (and many other newsgroups).   They're adversarial,
and they're treated as a ground for debate.  They're for arguments
and who wins them.  That's not always the best way to get to the
truth.

>Y'know,
>life is tough.  I am still waiting for a cogent argument of *any* sort
>that shows that Eliza has no understanding.  

Why don't *you* try to produce one?  After all, what's so bad about
looking at both sides of the issue?

>You can use any sort of
>definition you want, or even eschew definitions altogether.  But you can't
>just point at "understanding" and point at Eliza and imply that we ought
>to know better that Eliza doesn't understand.  Well, you can, but that is
>not what I mean by "cogent".

But that wasn't an argument, and so of course not a cogent one.

>>The same thing can be done with any other word we might try in
>>place of "understanding".  "Consciousness", "intelligence",
>>and so forth all suffer the same fate.
>
>Yes.  Maybe that either indicates that there is a problem with your position,
>or that you have not yet found the appropriate exposition that supports your
>position.  You need an *argument*, some grounds for showing that you are
>right, beyond mere posturing involving agreeing with people's "reasonable"
>claims and talking about what people "ought to know better".

You seem to think you know what my position is, and perhaps you do.
But if so you haven't shown much sign of it here.

I've found a number of expositions that I mostly agree with or feel
are on the right track (which does not mean I know what the right
track is) or find insightful.  I suppose one might say they support
my position, though that isn't exactly the right way to put it.
There are also various things I could say myself (finding them in 
that sense).  Some of them I have said in the past on the net,
though not very recently.

But it's one thing to have expositions and another to present them
in comp.ai.phil.  I would prefer somewhere less adversarial, because
then I'd be more likely to get something useful in return.  But the
main problem is time.  People write books on these issues.  Even
a mere paper is much longer than almost all news articles.  And then
there are the inevitable misunderstandings, and perhaps various
hostile misinterpretations or distortions, to deal with.  (That's
a problem faced, from time to time, by people of many different
views, not just by me and those I agree with.)

Now, you say I need an argument, some grounds for showing I am right.
I'm not sure that I *am* right, and it's very difficult to *show* that
any position in philosophy of mind or philosophy of AI is right.
If your aim is to show that you're right, then you're almost bound
to fail.  It's the wrong aim to have, in my view.

-- jeff
