Newsgroups: sci.skeptic,alt.consciousness,comp.ai.philosophy,sci.philosophy.meta,rec.arts.books
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Penrose and Searle (was Re: Roger Penrose's fixed ideas)
Message-ID: <jqbD02xHw.27H@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <CzFqn2.92t@cogsci.ed.ac.uk> <3b5d05$d2o@news-rocq.inria.fr> <Czzrvs.A1u@gpu.utcc.utoronto.ca> <D01FA6.DuK@cogsci.ed.ac.uk>
Distribution: inet
Date: Wed, 30 Nov 1994 12:03:31 GMT
Lines: 58
Xref: glinda.oz.cs.cmu.edu sci.skeptic:96662 comp.ai.philosophy:22890 sci.philosophy.meta:15144

In article <D01FA6.DuK@cogsci.ed.ac.uk>,
Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>In article <Czzrvs.A1u@gpu.utcc.utoronto.ca> pindor@gpu.utcc.utoronto.ca (Andrzej Pindor) writes:
>>In article <3b5d05$d2o@news-rocq.inria.fr>,
>>Mikal Ziane (Univ. Paris 5 and INRIA)  <ziane@monica.inria.fr> wrote:
>>......
>>>
>>>My point was precisely that I do not think TT is a very good definition
>>>of intelligence and I think that this is what CR suggests albeit clumsily.
>>
>>It probably is not, but Turing thought that it was the best we could do and
>>not much has changed since then. Or perhaps you have a better definition?
>>I can't see how CR suggests anything of the sort. In fact, being
>>methodologically wrong, it does not suggest anything.
>
>Andrzej -- can I tell Ozan Yigit that you defend the TT?  From this,
>but more from other articles, it seems to me that you do.

Is your omission of the word "fiercely" accidental, Jeff?  Did you think no
one would notice?  It looks mighty dang *conscious* to me, Jeff.  Andrzej's
supposed "defense" here is about as tame as it gets ("[TT] probably is not
[a very good definition of intelligence]").  His main point is that the CR is
useless to suggest anything.  Way too many things follow from fallacies.

>I used to think the TT was right, BTW.  I even wrote a paper defending
>it when I was a student.  Although I think Searle's arguments are
>flawed, I nonetheless find that they help suggest that the TT is
>flawed as well.  If you want to show that "the system understands",
>you need more than "it passes the TT, therefore it understands".

Perhaps you can present the arguments of your paper and explain how they
are wrong.

All my life I have judged whether people understand things.  Aside from the
understanding of physical mechanisms or tasks, this has been solely based upon
texts or dialogs, and has worked very well.  Occasionally I have made
mistakes; people sometimes have an erroneous model or algorithm that happens
to give the right results.  But we don't need a fallacious CR to see
that we cannot *show*, in the sense of a proof, that something understands.
All we need is Hume and the nature of induction.  So sure, the TT is flawed in
that sense.  But we will never be able to even approach Searle's notion of
"understanding" as used in the CR argument, because it is some sort of
gut-felt "essence", never defined.  It is the worst sort of dualism.

>It may be that we will eventually establish that the TT is a
>reliable test.  But that's not the only possible outcome.

We use it all the time.  It's pretty danged reliable.  Of course, if someone
sets out to play AI Eleusis, inventing systems that, for instance, can respond
as well as Einstein or Hawking with an apparent internal life as rich as
Sartre or Plath, until the word "arrogant" occurs, and from then on uses a
malapropism every hundredth word, we may be troubled by the outcome.  But
I think it could lead us to be less arrogant and more humble about these
"human" notions of "consciousness". "understanding", and "intelligence",
and "self".

-- 
<J Q B>
