From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Mon Dec  9 10:47:22 EST 1991
Article 1793 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Dennett on Edelman--what a total loss
Message-ID: <1991Dec2.073924.14411@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <57730@netnews.upenn.edu> <1991Nov29.050859.21552@bronze.ucs.indiana.edu> <57864@netnews.upenn.edu>
Date: Mon, 2 Dec 91 07:39:24 GMT
Lines: 88

In article <57864@netnews.upenn.edu> weemba@libra.wistar.upenn.edu (Matthew P Wiener) writes:

>This is my least favorite pro-AI argument.  You get all petulant,
>pointing to yet another tree when the other fellow is trying to
>study the forest.  As if one more programming technique is all
>that's needed.  From the air, one more technique has always been
>one more kludge.  The frame problem still can't find the frame.

The arguments I cited against connectionism were not forest-level
arguments, so they do not get a forest-level reply.  In any case, the
point here was not so much to engage in general Edelman-bashing
as to demonstrate his general ignorance of other fields that he
dismisses, thus illustrating Dennett's point.  That Edelman could
dismiss connectionism with an argument that is irrelevant to
what is by far its most important exemplar suggests a surprising
degree of ignorance and effrontery.

There is also a deeper point lurking here, which is that Edelman's
work is deeply compatible with the spirit of much work in connectionism,
whether he likes it or not.  Essentially he is working with neural
methods of unsupervised learning, something that many connectionists
also find very important (I wonder if Edelman has ever talked to
Grossberg; a humorous thought).  If his work is either (1) more
neurally plausible, or (2) more cognitively sophisticated than any given
connectionist work, then good for him, that means he's a bright guy (my
impression is that there's more of (1) and less of (2)).  But nothing he
is doing is vastly different in kind from a lot of work in
connectionism and computational neuroscience, so it's difficult to see
why he sets himself up as such an iconoclast.

>We do certain computations very fast.  How?

I don't know exactly what "very fast" means, but certainly the
speed of our computations is not so amazingly fast that it is
apparently beyond the capacity of common-or-garden physical
mechanisms.  So this provides no evidence for the importance of
quantum computation in the brain.

>We observe a classical world.  Why?

Personally, I favour a version of the "many-minds" theory.  In
any case, I don't see that this provides any evidence one way or
the other for the importance of quantum computation in cognition.

>My own bafflement at the lightness with which you view Dennett on Edelman.
>It seems fairly ugly to me.  Do you think I was overstating my annoyance,
>repeated below, regarding Dennett on Edelman re continuity?

Hey, it's only a footnote, and Dennett on Edelman is very much a sideline
to Dennett on consciousness.  Certainly the footnote would have been
better omitted, and maybe Dennett's brief dismissal of Edelman's point
of view on continuity is over-hasty.  But should I write to my senator
about it?  Edelman's blanket dismissal of entire fields, based on
flimsy arguments, strikes me as worse.

>Like I said, it's only a half-hearted attempt on his part.  Do you
>agree or disagree with my more complete answer here:
>
>>>Why does he need to?  Would Isaac Newton have been improved with an
>>>explanation of why orrery construction is not the same as an under-
>>>standing of gravitation?  Edelman is interested in the biological
>>>basis of our minds.  It is merely the vagaries of modern intellectual
>>>history that force him to explain why he is in the "let's understand
>>>gravity" camp as opposed to the "let's construct orreries" school.

No-one has said that Edelman is obliged to build a computational model
of his theory.  You can certainly do good science without building such
systems.  But that doesn't provide any ground for his blanket dismissal
of those who choose to do so.  If you want to know whether I think
that the building of computational models of cognition is analogous
to the building of orreries, the answer, obviously, is "no".

[In another article]
>In particular, I don't understand the
>appeal of the weaker claim.  You want brains to do more than capture
>the relevant properties--you want brains to exploit the properties in
>an essential way.

The appeal of the weaker claim lies in the fact that if we can capture
those properties of the brain relevant to the causation of behaviour in
a computer model, then we can build a computational model that produces
qualitatively similar behaviour.  In other words, the weaker claim is
precisely what is needed for the success of AI.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
