Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1019 comp.ai.philosophy:1432
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!think.com!hsdndev!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Daniel Dennett (was Re: Commenting on the pos
Message-ID: <1991Nov20.083647.5664@husc3.harvard.edu>
Date: 20 Nov 91 13:36:45 GMT
References: <9653@optima.cs.arizona.edu> <1991Nov18.224139.21896@monu6.cc.monash.edu.au>
Organization: Dept. of Math, Harvard Univ.
Lines: 97
Nntp-Posting-Host: zariski.harvard.edu

In article <1991Nov18.224139.21896@monu6.cc.monash.edu.au> 
john@publications.ccc.monash.edu.au (John Wilkins) writes:

>In article <9653@optima.cs.arizona.edu>, 
>gudeman@cs.arizona.edu (David Gudeman) writes:
>> 
>> ]This, of course, is exactly the same argument used by all those charlatans who
>> ]denied that any reductive argument of living processes could be given.

DG:
>> I'd like to note that there are two issues here.  First, whether there
>> is now, somewhere, some theory that successfully describes how
>> self-awareness might arise out of physical processes, and second
>> whether such a theory is even possible.  Mr. Zeleny seems to take the
>> strong view that such a thing is not possible, but any argument over
>> that is going to boil down eventually to an argument over ontology,
>> because the belief in a physical basis for self-awareness is in fact a
>> deduction from philosophical materialism.  It is a philosophical
>> belief, a faith if you will, not a scientific observation.

A small correction: in this discussion, my position is skeptical, rather
than dogmatic.

DG:
>> Some of the best philosophical arguments against materialism, are over
>> just this issue --that there are good reasons to suppose that
>> consciousness cannot be explained strictly through physical,
>> "material" processes.  If someone did come up with a theoretical
>> account of such processes, it would be a critical event in ontology.

Not that I would deny myself the pleasure of refuting reductive materialism
and functionalism (though note that the possibility of AI doesn't depend on
these assumptions; only its likelihood does).  You might look into the
writings of D. M. Armstrong -- a real philosopher, to Dennett's ignoramus
and/or charlatan -- to see the dire inadequacy of the content-dependent
picture of reflective consciousness, which Armstrong successfully explains
in materialist terms.

DG:
>> So I would like to see an honest response to Zeleny's challenge
>> --either show us this world-shaking theory that explains how
>> intelligence can arise from physical processes, or just admit that
>> such a theory does not exist and may not be possible.

Seconded.

JW:
>Such a theory does not exist, so why may it not be possible?

Represent semantical knowledge in a finite-state automaton.  Model
*ineffective* computability.  For more details, see Penrose and Searle.
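The finite-state point can be made concrete.  What follows is a minimal
sketch (in Python, purely illustrative, and no part of the original
exchange): a deterministic finite-state automaton carries only its current
state, so it can track bounded distinctions such as parity, whereas even
the simple nested dependency {a^n b^n} already demands an unbounded
counter -- a standard pumping-lemma consequence.

```python
def run_dfa(transitions, start, accepting, s):
    """Simulate a DFA given as a dict mapping (state, symbol) -> state."""
    state = start
    for ch in s:
        state = transitions[(state, ch)]
    return state in accepting

# A two-state DFA suffices for "even number of a's" (a regular language).
even_a = {('e', 'a'): 'o', ('e', 'b'): 'e',
          ('o', 'a'): 'e', ('o', 'b'): 'o'}

print(run_dfa(even_a, 'e', {'e'}, 'abab'))   # True: two a's
print(run_dfa(even_a, 'e', {'e'}, 'a'))      # False: one a

# By contrast, recognizing {a^n b^n} needs one distinct "memory" per
# nesting depth -- an unbounded counter -- so no fixed finite-state
# machine can do it.  A recognizer with unbounded arithmetic can:
def balanced(s):
    n = len(s) // 2
    return len(s) % 2 == 0 and s == 'a' * n + 'b' * n

print(balanced('aaabbb'))  # True
print(balanced('aabbb'))   # False
```

The contrast is the point: parity needs two states forever; nesting needs
a new state for every depth, which is exactly what "finite-state" forbids.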

JW:
>                                                              That we
>have not YET modelled or recreated consciousness (and we are a hell of
>a lot closer now than we were in Descartes' day) in NO WAY implies that we
>cannot, or that AI and neurophysiological research will never deliver it.

Independent reasons, however, indicate that this may very well be the case.

JW:
>Still, it may not be possible - YOU show ME why it isn't, rather than all
>this semantic crap about intentionality being irreducible.

If a non-speculative discipline like semantics is a load of crap, what is AI?

JW:
>                                                           So much of
>what used to be irreducible about mind has been reduced - language
>learning, visual recognition, emotion - or is promising to be, that
>a believer in Occam's Razor has every reason to be confident that the
>obscurantism of occult properties such as "Consciousness" has no real
>future.

I believe I'll call you on that.  Kindly outline a reductive account of
language learning, visual recognition, or emotion, and stand back to watch
it blow up in your face, or retract your above statement.  Moreover, my
naive understanding of Occam's razor indicates that its application depends
on prior recognition of a particular ontology, as well as on a choice between
the merits of theoretical simplicity and ontological parsimony.  In other
words, if "occultism" results in a simpler theory, as is the case in all of
mathematics, you can stuff the razor.

'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: Qu'est-ce qui est bien?  Qu'est-ce qui est laid?         Harvard   :
: Qu'est-ce qui est grand, fort, faible...                 doesn't   :
: Connais pas! Connais pas!                                 think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139                                     :
: (617) 661-8151                                                     :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'


