From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!spool.mu.edu!munnari.oz.au!bruce!monu0.cc.monash.edu.au!monu6!john@publications.ccc.monash.edu.au Tue Nov 26 12:31:16 EST 1991
Article 1476 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1042 comp.ai.philosophy:1476
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!spool.mu.edu!munnari.oz.au!bruce!monu0.cc.monash.edu.au!monu6!john@publications.ccc.monash.edu.au
From: john@publications.ccc.monash.edu.au (John Wilkins)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Daniel Dennett (was Re: Commenting on the pos
Message-ID: <1991Nov20.223212.19719@monu6.cc.monash.edu.au>
Date: 20 Nov 91 22:32:12 GMT
Article-I.D.: monu6.1991Nov20.223212.19719
References: <9653@optima.cs.arizona.edu> <1991Nov18.224139.21896@monu6.cc.monash.edu.au> <1991Nov20.083647.5664@husc3.harvard.edu>
Sender: news@monu6.cc.monash.edu.au (Usenet system)
Organization: Monash University, Melbourne Australia
Lines: 52

In article <1991Nov20.083647.5664@husc3.harvard.edu>, zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:
Other stuff deleted.

> JW:
> >                                                           So much of
> >what used to be irreducible about mind has been reduced - language
> >learning, visual recognition, emotion - or is promising to be, that
> >a believer in Occam's Razor has every reason to be confident that the
> >obscurantism of occult properties such as "Consciousness" have no real
> >future.
> 
> I believe I'll call you on that.  Kindly outline a reductive account of
> language learning, visual recognition, or emotion, and stand back to watch
> it blow up in your face, or retract your above statement.  Moreover, my
> naive understanding of Occam's razor indicates that its application depends
> on prior recognition of particular ontology, as well as on a choice between
> the merits of theoretical simplicity and ontological parsimony.  In other
> words, if "occultism" results in a simpler theory, as is the case in all of
> mathematics, you can stuff the razor.

Uh uh, it isn't up to me; it's up to the professionals in neurophysiological
psychology and other such disciplines. However, a dated but still very
interesting account of many of these (now uncontentious) reductions is given
in Steven Rose's _The Conscious Brain_. My point is that, given all the
successes of the reductive program - and they are undeniable, aren't they? -
it's up to those Platonists and dualists who wish to deny that this is
the path to take *in science* to show why. Dualism is ontically unparsimonious
on any interpretation: it posits two realms of being. IF one will do, whether
it is the mental or the physical, then that is simpler and to be preferred.
The rest is a matter of research success, and the mentalist program has been
stalled for over three centuries, while the physicalist account leaps ahead
from week to week.

A semantic argument against the possibility of success (you are not being
skeptical in tone; you are instead very dogmatic: "show me X and I'll show
you why it can't succeed" indeed!) is so much sophistry. Words do not
have priority over the phenomena, and the appearance is ALL in favour of
a reductive account to date.

As to the nature of AI, so far it has given us some remarkable tools for
making computers do things that look like what used to be thought the
exclusive domain of mind. Mentalism is dying the death of a thousand
qualifications, like the God of the gaps who could decree the falsity 
of this or that scientific theory - from Galileo to Darwin, including 
Newton on the way. If mind is a physical phenomenon, and I think I am
at least on a coherent path in thinking it to be, then it can be physically
modelled on a sophisticated enough system. Whether a Turing Machine-like
system can do such modelling is a matter for empirical research (I doubt it).
The attempt to model consciousness physically will (i) teach us a lot about
dynamic information processing systems (and therefore applied computing)
and (ii) teach us a lot about what sorts of processes are going on in our
brains. More power to AI researchers.
