From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Sun Dec  1 13:06:33 EST 1991
Article 1743 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Dennett on Edelman--what a total loss
Message-ID: <1991Nov29.050859.21552@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <57569@netnews.upenn.edu> <1991Nov27.031545.11235@bronze.ucs.indiana.edu> <57730@netnews.upenn.edu>
Date: Fri, 29 Nov 91 05:08:59 GMT
Lines: 131

In article <57730@netnews.upenn.edu> weemba@libra.wistar.upenn.edu (Matthew P Wiener) writes:
>In article <1991Nov27.031545.11235@bronze.ucs.indiana.edu>, chalmers@bronze (David Chalmers) writes:
>>In article <57569@netnews.upenn.edu> weemba@libra.wistar.upenn.edu (Matthew P Wiener) writes:

>>>Dennett dismisses Edelman completely, with the claim that Edelman's
>>>work shows the folly of someone working in the cognitive sciences
>>>without knowing everything.
>
>>That's incorrect.  Dennett likes and respects Edelman's work, even
>>though he thinks that it's ultimately unsuccessful.  The footnote on
>>page 268 is a little crass, with remarks that border on the ad hominem,
>
>Amazing.  You call me incorrect, and then start citing exactly what it
>was that led me to write >> above.

That's right.  To spell things out: Dennett calls Edelman's work
"an instructive failure", and makes some remarks about how Edelman
has ignored other relevant work, but this is far from "completely
dismissing" Edelman.

>If Dennett thinks that Edelman is salvageable,
>then why does he think it's ultimately unsuccessful?  I'm baffled.

Because even wrong theories can have a lot that's right in them.
Dennett presumably thinks that Edelman makes a number of insightful
points out of which a good theory could have been built.

>Huh???  Edelman does not treat connectionism.  There is one reference
>to it in his trilogy, and he says the models lack the precise neuro-
>anatomical detail that he wants in a brain/mind model.  No more, no
>less.

See "Real Brains and Artificial Intelligence", in the 1988 special
issue of Daedalus on AI.  This paper identifies among the core
tenets of connectionism: "the conception of memory retrieval
as the relaxation of a network to a stable state"; "the idea of
energy minimization through simulated annealing"; "the notion
of bidirectional and symmetric single connections"; and "the idea
that learning can proceed by clamping the output of the system to
a desired value while synaptic weights are adjusted according to
some rule".

In other words, he has identified connectionism entirely with
the use of Hopfield nets and Boltzmann machines, which in fact
form a small and non-central subset of the field.  He appears not
to have heard of backpropagation, for instance, which has been
much more central to connectionist practice, and which fits none
of the descriptions above.
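To make the contrast concrete, here is a minimal sketch (mine, not Edelman's
or anyone else's code; the network size and pattern are arbitrary toy
choices) of the Hopfield-net picture he equates with connectionism:
symmetric connections, and memory retrieval as relaxation of the network
to a stable state.  A backpropagation network, by contrast, has directed,
asymmetric connections and involves no such energy relaxation.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian one-shot learning: the weight matrix is symmetric
    with zero diagonal -- the 'bidirectional and symmetric single
    connections' of the description above."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, s):
    """Hopfield energy function; asynchronous updates never increase it,
    so the net relaxes toward a local minimum (a stable state)."""
    return -0.5 * s @ W @ s

def relax(W, s, max_sweeps=100):
    """Asynchronous unit updates until the state stops changing."""
    s = s.copy()
    rng = np.random.default_rng(0)
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(s)):
            new = 1.0 if W[i] @ s >= 0 else -1.0
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:          # reached a fixed point
            break
    return s

# Store one pattern, corrupt one bit, and relax back to the stored memory.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1                   # flip a single bit
recovered = relax(W, noisy)
```

Retrieval here is literally relaxation: each update can only lower the
energy, so the corrupted state slides into the nearest stored attractor.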

>Digital computation is not as universal as you are blanketly asserting,
>and Penrose knows this.  I mentioned this in a previous article: quantum
>gravity might not be computable.  A computational approach to a quantum
>field theoretic sum over four-manifolds runs into the unsolvability of
>recognizing when a four-manifold is trivial.  Actually, I don't know if
>anyone has shown that basic QFT is computable--people mostly just recog-
>nize when the relevant sums have converged for all practical purposes.

Of course it is not a necessary truth that digital computation can
simulate any physical process.  But the demonstrated power of
digital computation so far makes it a sufficiently plausible claim
that the burden of proof falls squarely on those who wish to argue
the opposite.

>The work of Deutsch, Landauer, Feynman, Margolus, etc has--well `shown'
>is a bit strong, but `suggested' is a bit weak--anyway, they have
>indicated that quantum mechanical computation is, in principle, a far
>superior beastie, when it comes to speed, than classical Turing compu-
>tation.

This is irrelevant to the universality claim.  Of course Deutsch
et al wanted to demonstrate the possibility of non-Turing-computable
mechanisms, but as is well-known, they came up empty-handed.  The
computational complexity results are interesting, but a difference
in speed falls far short of the radical difference in power that they
hoped for.

>In short, there is good physical speculation behind the notion that
>Church's thesis is on its way out.

I don't know what "good speculation" comes to, but there's certainly
no good evidence.  Nevertheless, I wouldn't completely dismiss the
possibility that the quantum level is noncomputable.  Even if it were,
however, it would remain to be seen whether these quantum
noncomputabilities amplified into relevantly noncomputable phenomena at
macroscopic levels.

>This is pretty funny.  When Dennett dismisses Edelman, it's "correct",
>but when Edelman does the same, it's "waffly" and "irrelevant".  And
>this has nothing to do with whom you're rooting for?  Remember, none
>of us know who is really right in the end.

What makes you think you know who I'm "rooting for"?  For what it's
worth, I think that Edelman's trilogy is perhaps slightly better than
Dennett's book.  Both of them seem to me to offer entirely inadequate
accounts of consciousness.

>>All AI needs to hold
>>is that brain processes can be simulated by digital computation -- a
>>claim that Edelman doesn't come close to refuting.
>
>Why does he need to?

I don't think he needs to, but he certainly tries.  E.g. on page 29
of _The Remembered Present_, he seems concerned to refute the claim
that "what the brain does may be described by algorithms".  However,
he is sufficiently confused that his arguments here consist of an
appeal to (1) Putnam's argument that mental states (such as belief)
can't be type-identified with computational states; (2) Searle's
argument that a computational simulation of the brain would lack
intentionality; and (3) Putnam's argument that meaning doesn't
supervene on brain state.  (1) and (3) are completely irrelevant, and
even (2) is irrelevant to the problem-solving abilities of brains
vis-a-vis Turing machines, which is what he had been discussing
immediately above.

>And grounded in the structure of actual brains, and loaded with evolu-
>tionary plausibility.  That's not "just another guy".  These aren't just
>"some ways" of being sophisticated.

Edelman has a reasonably interesting theory that probably has a number
of things right.  You'll note however that I haven't said anything
substantive about his theory, partly because I don't consider myself
qualified to comment on the neuroscience.  I've only been discussing
his arguments against other approaches, all of which seem to me to be
pretty feeble.  The point is that Edelman has been far more concerned
to distance himself from others than is necessary.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."