From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!spool.mu.edu!agate!stanford.edu!rutgers!psuvax1!hsdndev!husc-news.harvard.edu!zariski!zeleny Tue Nov 26 12:31:28 EST 1991
Article 1496 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:1496 sci.philosophy.tech:1057
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!spool.mu.edu!agate!stanford.edu!rutgers!psuvax1!hsdndev!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Daniel Dennett
Message-ID: <1991Nov22.115929.5757@husc3.harvard.edu>
Date: 22 Nov 91 16:59:24 GMT
References: <15019@castle.ed.ac.uk> <1991Nov19.210047.5646@husc3.harvard.edu> <15112@castle.ed.ac.uk>
Organization: Dept. of Math, Harvard Univ.
Lines: 139
Nntp-Posting-Host: zariski.harvard.edu

In article <15112@castle.ed.ac.uk> 
cam@castle.ed.ac.uk (Chris Malcolm) writes:

>In article <1991Nov19.210047.5646@husc3.harvard.edu> 
>zeleny@brauer.harvard.edu (Mikhail Zeleny) writes:

>>In article <15019@castle.ed.ac.uk> 
>>cam@castle.ed.ac.uk (Chris Malcolm) writes:

>>>In article <1991Nov18.083024.5560@husc3.harvard.edu> 
>>>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

>>>>In article <OZ.91Nov17172508@ursa.sis.yorku.ca> 
>>>>oz@ursa.sis.yorku.ca (Ozan Yigit) writes:

OY:
>>>>>Your charge that Dennett has been intellectually dishonest is a
>>>>>serious one.  You are no doubt prepared to substantiate this charge,

MZ:
>>>>I am sorry, but I was making a general statement about the AI field, as
>>>>exemplified e.g. in the Boden anthology, "The Philosophy of Artificial
>>>>Intelligence", which starts out from an unconvincing and fallacious ...
>>>> .... implicitly assuming that man is a finite 
>>>>being in every relevant aspect ...

Please note that I gave an argument against Dennett elsewhere.

CM:
>>>Since it is possible to generate an infinite number of sentences from
>>>the 26 letters of the alphabet perhaps you can make explicit this
>>>implicit assumption (that man is finite in every relevant aspect) which
>>>you impute to Boden?

MZ:
>>I meant
>>not the introduction, but the first paper in the book, written by McCulloch
>>and Pitts; the finiteness assumption is already implicit in the quaint
>>title, "A Logical Calculus of the Ideas Immanent in Nervous Activity" ...
>> ... pay attention to ... page 37 ... ignoramus or a charlatan.

CM:
>So, the intellectually dishonest Dennett and his ilk turn out not
>really to be well exemplified by Dennett so much as those in general
>collected in Boden's anthology, who turn out to be best exemplified by
>McCulloch and Pitts -- whose paper is arguably the oldest AI paper in
>existence. Well, I suppose we should be thankful for that -- you can't
>really go any further back than McCulloch and Pitts!

Dennett will do just fine as a whipping boy; feel free to address my
article expressly dedicated to his foibles, fallacies, and frauds.
However, at this time, let's occupy ourselves with more capable targets.

CM:
>On the way I have also managed to collect some idea of what you think
>is wrong with the ideas of these foolish AI supporters: it has
>something to do with an implicit assumption that Man is finite, based
>on some presumed relationship between Man and a Turing Machine.  Your
>answer is not entirely clear, ignores my illustration of the infinite
>capability of Turing Machines and seems to me to suggest, as before,
>that you assume that a Turing machine has finite capabilities.

Not quite.  Turing machines have infinite "memory", i.e. tape; finite state
automata, like neural nets, most certainly don't.  See McCulloch and Pitts
on the infamous page 37: "every net, if furnished with a tape [...] can
compute only such numbers as can a Turing machine, [...and] each of the
latter numbers can be computed by such a net".  Pray tell, where does the
tape come from?
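The gap the borrowed tape papers over can be made concrete (my illustration, not McCulloch and Pitts's; all function names and the state bound are mine): a recognizer with one unbounded counter, standing in for the tape, accepts exactly {a^n b^n}, while an exhaustive search confirms that no 3-state automaton even agrees with that language on all strings up to length 6.

```python
from itertools import product

def counter_recognizer(s):
    """One unbounded counter (a stand-in for the Turing machine's
    tape): accepts exactly the strings a^n b^n, n >= 0, over {a, b}."""
    count, seen_b = 0, False
    for ch in s:
        if ch == 'a':
            if seen_b:          # an 'a' after a 'b' can never recover
                return False
            count += 1
        else:                   # ch == 'b'
            seen_b = True
            count -= 1
            if count < 0:       # more b's than a's so far
                return False
    return count == 0

def run_dfa(delta, start, accept, s):
    """Simulate a deterministic finite automaton over alphabet {a, b}."""
    q = start
    for ch in s:
        q = delta[q][ch]
    return q in accept

def some_dfa_agrees(num_states, max_len):
    """Exhaustively search every DFA with `num_states` states for one
    that classifies all strings of length <= `max_len` exactly as the
    counter recognizer does."""
    tests = [''.join(p) for n in range(max_len + 1)
             for p in product('ab', repeat=n)]
    truth = [counter_recognizer(s) for s in tests]
    states = range(num_states)
    # Enumerate every transition table, start state, and accepting set.
    for flat in product(states, repeat=2 * num_states):
        delta = [{'a': flat[2 * i], 'b': flat[2 * i + 1]} for i in states]
        for start in states:
            for bits in product((False, True), repeat=num_states):
                accept = {i for i in states if bits[i]}
                if all(run_dfa(delta, start, accept, s) == t
                       for s, t in zip(tests, truth)):
                    return True
    return False
```

`some_dfa_agrees(3, 2)` succeeds: three states can mimic the language on short strings. `some_dfa_agrees(3, 6)` fails: the prefixes "a", "aa", "aaa" must land in pairwise distinct states (the suffixes "b", "bb" separate them), so any fixed number of states is eventually exhausted — which is precisely what adding the tape conceals.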

CM:
>If this is what you think, Mikhail, can you explain why? And can you
>please clarify what your case for intellectual dishonesty and
>charlatanism is? I'm quite happy to begin with the McCulloch and Pitts
>paper, if that's where you think it is most clearly manifest. I've
>consulted page 37, and found nothing redolent to me of charlatanry or
>ignorance.

As in Dennett's case, I can't decide between ignorance and charlatanism.
Read on: "This is of interest as affording a psychological justification of
the Turing definition of computability and its equivalents, Church's
$\lambda$-definability and Kleene's primitive recursiveness: if any number
can be computed by an organism, it is computable by those definitions, and
conversely."  Reflect on the difference between computability and effective
computability, and judge for yourself.
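One stock illustration of that difference (mine, not Zeleny's or McCulloch and Pitts's): classically, the function f(n) = 1 if the decimal expansion of pi contains a run of at least n consecutive 7s, else 0, is computable — it is either the constant-1 function or a step function dropping to 0 past some threshold k — yet nobody can effectively exhibit which program computes it. A sketch, with hypothetical names:

```python
# Classically, exactly one of the programs below computes f, but no
# effective procedure is known for picking out which one: that is the
# gap between "computable" and "effectively computable".

def f_if_runs_unbounded(n):
    """The right program if arbitrarily long runs of 7s occur in pi."""
    return 1

def make_f_if_longest_run_is(k):
    """The right program if the longest run of 7s has length exactly k."""
    return lambda n: 1 if n <= k else 0
```

Each candidate is trivially computable; the non-effectiveness lies entirely in the choice among them.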

>CM: ... ad feminam ...
>MZ:     ^^^^^^^^^^?

CM:
>It's a Latin PC joke: "ad hominem" arguments -- but Boden is a woman,
>so "ad feminam". I thought someone who reads Descartes would get that
>:-)

It's a stupid PC joke: recall that in Latin, `ad hominem' comprehends both
genders.

MZ:
>>The implications of Searle's argument are painfully obvious: semantical
>>knowledge must be represented in, and accessible by, the mind of any
>>intelligent being.  Pray tell, where are these issues adequately addressed?

CM:
>I don't think anybody is yet capable of addressing them. They are
>generally recognised as serious issues in the AI community (which is
>precisely _why_ the Chinese Room gets anthologised and debated so
>much), and some people are working on them, despite being handicapped
>by intellectual dishonesty :-) As it happens, there is still plenty we
>can do before the lack of resolution of this issue becomes an obstacle
>to further progress, so we (AI researchers) don't actually have to sit
>around twiddling our thumbs until someone manages to address them
>properly.

Sorry, Chris, but you are mistaken on two counts.  First of all, Dennett
does indeed claim that he has refuted Searle; see his latest book,
pp.435--40.  Secondly, there is no shortage of good semantical theories of
fragments of natural languages.  As I have argued earlier, no adequate
semantical theory is compatible with reductive materialism, on the natural
assumption that the brain is a finite state automaton.  Prove me wrong.

>-- 
>Chris Malcolm    cam@uk.ac.ed.aifh          +44 (0)31 650 3085
>Department of Artificial Intelligence,    Edinburgh University
>5 Forrest Hill, Edinburgh, EH1 2QL, UK                DoD #205


'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: Qu'est-ce qui est bien?  Qu'est-ce qui est laid?         Harvard   :
: Qu'est-ce qui est grand, fort, faible...                 doesn't   :
: Connais pas! Connais pas!                                 think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139                                     :
: (617) 661-8151                                                     :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'


