Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!jvnc.net!netnews.upenn.edu!libra.wistar.upenn.edu
From: weemba@libra.wistar.upenn.edu (Matthew P Wiener)
Newsgroups: comp.ai.philosophy
Subject: Re: From neurons to computation: how?
Message-ID: <60551@netnews.upenn.edu>
Date: 17 Dec 91 19:21:41 GMT
References: <59809@netnews.upenn.edu> <310@tdatirv.UUCP> <60059@netnews.upenn.edu> <318@tdatirv.UUCP> <60329@netnews.upenn.edu> <320@tdatirv.UUCP>
Sender: news@netnews.upenn.edu
Reply-To: weemba@libra.wistar.upenn.edu (Matthew P Wiener)
Organization: The Wistar Institute of Anatomy and Biology
Lines: 318
Nntp-Posting-Host: libra.wistar.upenn.edu
In-reply-to: sarima@tdatirv.UUCP (Stanley Friesen)

In article <320@tdatirv.UUCP>, sarima@tdatirv (Stanley Friesen) writes:
>I do not believe I am restricting my model to such a simplistic level.
>But I believe that most minor bioelectrical details are irrelevant to
>higher level functionality.  They seem to me to be biological methods of
>calculating complex functions.

I don't disagree with your calculation comment, except when you pin it
down to "digital".  My background is in mathematics and logic, so I am
much fussier over that word than you will ever want to be.

>OK, my tentative assumptions are:

>1. the models developed for sensory processing elements in the brain are
>   approximately scalable to cover general brain operation.  (probably with
>   minor adjustments for different operating modes).

This sounds reasonable, but I think it has serious booby traps.  See below
when I comment on olfaction.

>2. the types of processing capacities shown in digital neural networks, and
>   which have been verified in biological neural systems, are the functional
>   basis for higher level brain operations.

This I think is murky, and is based partly on an ambiguity that the
terminology has encouraged.  The digital neural net models (DNN) that
have been proposed are not done in imitation of actual biological
neural net wirings (BNN).  This is Edelman's complaint. 

Consider: simulated annealing (SA) is a fascinating optimization technique.
By applying SA to the travelling salesman problem--and bringing in other
thermodynamic techniques--one is clearly not doing real-live thermodynamics.
The technique works, is all.  But if you apply SA to protein folding, the
murkiness of your assumption shows up.  Does SA work here because it's such
a clever algorithm?  Or because the protein actually solves in real-time the
same thermodynamic problem the SA model deals with?  I don't know.
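
For concreteness, here is the flavor of SA on the TSP, in a few lines of
Python.  Every choice below--the 2-opt move, the geometric cooling, the
constants--is just one illustrative pick among many:

import math, random

# Simulated annealing on a random 20-city travelling salesman instance.
# Every constant and the move set are illustrative picks, nothing more.
random.seed(0)
cities = [(random.random(), random.random()) for _ in range(20)]

def tour_length(tour):
    total = 0.0
    for i in range(len(tour)):
        ax, ay = cities[tour[i]]
        bx, by = cities[tour[(i + 1) % len(tour)]]
        total += math.hypot(ax - bx, ay - by)
    return total

tour = list(range(len(cities)))
T = 1.0
while T > 1e-3:
    i, j = sorted(random.sample(range(len(tour)), 2))
    candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt reversal
    delta = tour_length(candidate) - tour_length(tour)
    # Downhill moves are always taken; uphill moves with Boltzmann probability.
    if delta < 0 or random.random() < math.exp(-delta / T):
        tour = candidate
    T *= 0.999   # geometric cooling

print(tour_length(tour))

Nothing in there is thermodynamics: the Boltzmann factor is borrowed
notation, which is exactly the point.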

Genetic algorithms can run into this same murkiness, whenever they are
applied to DNA to do some clever optimization.  And the DNN vs BNN question
sits in the same murkiness.  Does DNN work because it's clever?  Or
because it's a good relevant model?  We know that DNNs are clever from
numerous other uses which are clearly not networks.  Without a DNN -> BNN
mapping, we can't resolve this murkiness.
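
Same deal for the GA case; a toy version, with the fitness function,
rates, and sizes all pulled out of a hat:

import random

# Toy genetic algorithm: maximize the number of 1-bits in a string.
# Population size, rates, and the fitness function are all arbitrary.
random.seed(0)
N, L = 30, 40   # population size, genome length

def fitness(genome):
    return sum(genome)

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:N // 2]                  # truncation selection
    children = []
    while len(children) < N - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, L)
        child = a[:cut] + b[cut:]           # one-point crossover
        for k in range(L):                  # point mutation
            if random.random() < 0.01:
                child[k] ^= 1
        children.append(child)
    pop = parents + children

print(fitness(max(pop, key=fitness)))

Whether a run of that tells you anything about DNA is, again, the murky
question.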

>3. those properties of neurons that affect their 'neural-network' behavior are
>   the ones that are relevant to brain function.  Other properties are just
>   'accidental', due to the evolutionary origin of neural systems.

This I think is a big leap.  There may be neural vermiform appendices in
our brains, yes, but not an endless supply.  And neurons have a lot of
properties.  How many are needed for mere cellular this and that?  Only
research will tell us.

>I am willing to change these assumptions.  But *only* if shown observable
>evidence.  At the present they seem to me to be the 'Occam' assumptions.

I'm not even asking you to change your assumptions.  Just be aware that
some people are working in a different constellation of primary assumptions
to Occamize against, and that your 1-2-3 leads to greater complications for
these people.  There's no way to decide a priori which constellation is the
right one.

Penrose, for example, is one of many who consider the role of consciousness
in quantum mechanics to be an important question.  Your 1-2-3 makes his
approach more complicated, by leaving the quantum mystery as remote as
ever.  Penrose does not want to explain just mind, as maybe 1-2-3 will,
but mind_and_world, which, assuming 1-2-3, means that you have to come
up with 4-5-6-7 to satisfy Penrose.  (I don't mean you are obligated to
explain everything.  I simply mean that in the Penrosian worldview of
things to be explained, 1-2-3 falls short.)

But watch what happens when you replace 3 with 3*: I'll concede you your
neural nets, but I'll add on a pumped phonon condensation assumption for
consciousness itself.  So obviously 1-2-3 is "simpler" than 1-2-3*.  And
if both 1-2-3 and 1-2-3* work out to give a coherent theory of mind, I
won't object in the least to working computational psychologists sticking
with 1-2-3.  But neither set of assumptions gives such a theory, so I claim it is
premature to appeal to Occam.  3* already gives a link into the quantum
world, and so when we explain mind_and_world, 1-2-3* plus 4-5 ends up being
"simpler" than 1-2-3 plus 4-5-6-7.

I can play the same game with Edelman's model.  Simplifying his approach,
let's say he's trying to model mind_and_sleep.  He replaces 3 with 3#: a
lot of neural nets, plus neuronal group selection and reentrant signalling.
Then 1-2-3 has to make ad hoc assumptions about sleep--you do so below--but
1-2-3# has sleep explained already.  (More accurately, Edelman is modelling
mind_and_brain.)

>Note, in particular, that they are based to a considerable degree on existing
>experimental results.

Based--and extrapolated.  Many of us go for different extrapolations,
from different perspectives as to what should be explained.

>|That's it?  EEG's just a result?  All that oscillation is just a buzz?

>No, it is the cooperation of a family of neurons (or many families of
>neurons) in some sort of organized activity.

>It almost certainly has some functional significance, but I see it as
>an emergent property of the *interaction* of many neurons organized
>in a particular way.

So we agree that the EEG has significance.  And that is an emergent
property of something going on in many neurons.  But of what?  The
synaptic signalling?  Or something else?

I still reject your previous conclusion:

>|>|>I conclude it because all that I know about the operation of
>|>|>neurons (and that is considerable, since I am by background a
>|>|>biologist) is fully consistent with the theory that it is only
>|>|>the signalling properties of a neuron that are relevant to
>|>|>thought.

But what about those olfaction studies?  Doesn't that clinch it?  Let's see:

>|>... but recent studies have shown that, at
>|>least in the olfactory cortex, the EEG patterns generated are
>|>traceable to neural-net type interactions among the individual
>|>neurons.  [there is now a NN model that correctly mimics the
>|>operation of the olfactory cortex, and generates equivalent
>|>activation patterns].

>|Meanwhile, olfaction and learning are cognitively much less complicated
>|than thought and consciousness itself.

>True, but it is a major breakthrough for digital NN's.  I was
>flabbergasted when I saw that he had demonstrated the existence of an
>olfactory bulb global 'learning' signal that activated learning
>mode.  This was a feature of digital NN's that I had previously
>believed to be unrealistic, and there it was in an actual biological
>neural system.

I consider it a breakthrough for DNNs also.  And more importantly, phase
transitions as models for cognition in general.  But remember: one
critical detail about the relation between DNN/BNNs and the olfactory
cortex is that there *already* is a linear relationship between various
EEG contributory amplitudes and neuron signalling probability.  It was
partly this simplicity which attracted researchers to olfaction in the
first place.

>Thus, I am currently assuming that such results are approximately
>scalable, at least in theory. [The main current limitations being
>computation capacity and measurement technology rather than
>inadequacy of the models].

See that "already" up above?  It makes your assumption here nonsense
in theory.  Generalizing from olfaction to all EEGs and their putative
DNNs is a big leap.  Linear -> non-linear is never a mere conceptual
scale-up.  It's a conceptual phase transition!
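
If a worked example helps, compare a linear update rule with the logistic
map.  The logistic map is offered purely as an analogy for what one
nonlinearity buys you--*not* as a neuron or EEG model:

# Linear vs non-linear one-dimensional update rules.  The logistic map
# is an analogy only, not a neuron model.
def iterate(f, x, n=200):
    for _ in range(n):
        x = f(x)
    return x

for r in (0.5, 2.8, 3.2, 3.9):
    linear = iterate(lambda x: 0.25 * r * x, 0.2)       # decays or blows up: one story
    logistic = iterate(lambda x: r * x * (1 - x), 0.2)  # fixed point, cycle, or chaos
    print(r, linear, logistic)

The linear family tells one qualitative story at every r; the logistic
family switches character as r moves.  That, and not bigger computers,
is what the olfaction -> general EEG jump asks for.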

>|>If this is what I think it is, so do I.  But it still seems to me to be
>|>*computable*, so it does not invalidate the strong AI hypothesis.

>|Edelman does not believe his TNGS based model of mind is computable.
>|And yes, he's running digital simulations of parts of it.

>Hmm, but do the digital 'simulations' capture all of the functionally
>relevant features?  If they do, then they are not just simulations.
>They are digital implementations of the same functionality.

To Edelman, the real world is a functionally relevant feature.  Like
Heidegger, Dreyfus, Putnam, Winograd and others, he does not believe
that there is a ready-made world out there, with pre-existing categories.

>It is only if the digital version of the process fails to capture some feature
>of the real process that impacts the *external* behavior of the system that
>it fails to capture the essence of the original.

I can't parse the above sentence.

>|I certainly agree that chunks of our mind can be computed.  It's the
>|automatic extrapolation to the whole show that I reject.  

>Not automatic.  Just a simplifying assumption, to be used until proven wrong.

You are right, it is simplifying.  You can use it all you want.

>|My point here is simply that we already know that there are machine-like
>|aspects to human intelligence, and I expect AI success in just such areas.
>|Extrapolation beyond that obvious core is weak.

>But existing digital neural networks *already* go beyond mere 'machine-like'
>behavior, showing various sorts of complex adaptive responses. This is quite
>the opposite of your 'idiot-savant' examples (which show invariant, non-
>adaptive behavior).

Not always.  Some musical savants have learned to improvise music in
"intelligent" ways.  They are not just human tape recorders.  The question
of autistic limits is rather fuzzy.

>|>|For example, human beings need sleep.  And
>|>|when awake, they need stimulus or else they go nuts.  These are
>|>|baffling and/or bewildering from a purely computational point of view.

>|>O.K., now how does the Bose-Einstein effect explain this?

>|As phase transitions in the pumped phonon condensation.  The energy pump
>|can only last so long at a time, and then it goes below a critical level,
>|the condensation evaporates, and the brain falls asleep.  During sleep,
>|neuronal activity continues at the same or somewhat reduced level, yet
>|consciousness is gone.  The dream state is the metastable zone when the
>|pump is near the critical level.

>Hmm, interesting.

Yes.  Occam's razor is not the only way to select models for further
study.  Controlled featuritis is another way.

>As presented this does not seem to really address the *function* of sleep.
>Though perhaps that is simply because the presentation is so sketchy.

No, because the model says sleep really doesn't have a function.  It's
merely a necessary byproduct of consciousness.

>My own idea on sleep is that it is not really related to brain function,
>except indirectly.  I see it as essentially equivalent to PM (preventive
>maintenance) on computer systems, where the OS must be brought down to do
>the maintenance functions.

You can assume this, but call it number 4 and say you assume 1-2-3-4.
No experiment supports the above scenario.

>In this concept it is only REM sleep that has direct cognitive relevance.
>And even this may be due to processing speed limits.  That is, during
>waking hours new sensations arrive faster than they can be assimilated fully,
>so some period of low external activity is used to 'catch up' on background
>tasks.

>Notice that in this model sleep is a very high level set of operations,
>that do not need direct 'physical' explanation.

Ah, you're having fun now.  I consider it a big loss by AI modellers
that they generally do not address the non-AI aspects of their models.
Margaret Boden's COMPUTER MODELS OF MIND mentions this lack--and yet her own
book makes no mention of sleep, depression, schizophrenia, etc.

Part of the reason they don't is obvious: a computational model of
vision presumably has nothing to say about sleep.  And there's nothing
to "compute" regarding sleep, so it gets left out.  But if you or anyone
else is going to claim AI will explain our minds, and that there is ample
biological evidence for this claim, then you are going to have to address
a lot of non-"computational" aspects of our mind/brain.

>|As an aside, Edelman's model makes very good sense here for both of these
>|points.  He postulates different categorization nets in the brain--a C(I)
>|for our internal limbic states and a C(W) for the external world.
>|Consciousness is an epiphenomenon resulting from--grossly oversimplifying--
>|memory recall running parallel with external world perception.

>This sounds like a fairly likely idea.  'The Remembered Present', eh?

The book is remarkable.  Whatever your feelings about AI, as a biologist
you owe it to yourself to read it.  Right or wrong as a model, Edelman has
a lot of incredible ideas.  His laboratory isn't following him to the
opposite coast because he's "just some guy with a computable model".

>But then, existing digital neural networks do just that themselves.

So can the right 10-line BASIC program.  Edelman's model is much more
complicated than my utterly brief description.

>|Meanwhile, I'm baffled and/or bewildered at why a program would need to
>|go to sleep, or require stimulus while awake.  Remember: no physiological
>|reason is known for the need for sleep.  A model of consciousness that
>|requires sleep would neatly fit in with this negative evidence.

>Well, if it had two different functionalities that operated at different
>speeds that needed to be occasionally synchronized by allowing the slow
>one to catch up ...
>[Since, in order to catch up, the fast operation must be put on hold ...]

Careful!  You are in danger of having your membership in the Occam club
revoked if you keep building explanations on top of explanations!

Not that I object, but I think you are catching on to my point: there is
a *lot* to explain.
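
For what it's worth, the quoted fast/slow mechanism is trivial to
caricature.  All the names, rates, and thresholds below are hypothetical,
of course:

import collections

# Caricature of the quoted idea: a fast "perception" loop must go quiet
# ("sleep") so a slow "assimilation" loop can drain the backlog.
# Every name, rate, and threshold here is hypothetical.
backlog = collections.deque()
PERCEIVE_RATE, ASSIMILATE_RATE, LIMIT = 3, 1, 10

for tick in range(50):
    awake = len(backlog) < LIMIT            # past the limit, intake shuts off
    if awake:
        for _ in range(PERCEIVE_RATE):      # fast: new sensations arrive
            backlog.append(tick)
    for _ in range(min(ASSIMILATE_RATE, len(backlog))):
        backlog.popleft()                   # slow: background assimilation
    print(tick, "awake" if awake else "asleep", len(backlog))

That the mechanism fits in a dozen lines shows it is coherent; it shows
nothing about whether brains actually work that way.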

How about a reason why a program needs stimulus when awake?

>|This is a joke, right?  After three decades, computer chess is now up to
>|grandmaster play, using brute force.  This is not progress.

>No, I am *not* talking about 'expert system' junk.

I did say "forty years" for a reason.  Although I think large chunks
of AI are cognitive dead ends, I consider this a working non-assumption
on my part, not a proven conclusion.

>I am mostly talking
>about the more advanced NN research.  This is the *first* time that
>cybernetic research has actually contributed to neurological research.

Edelman might (someday) say it's the *second* time.

>[Previously all the input has been from neurology to cybernetics].
>This, in itself, indicates that a major breakthrough has been made.
>The current *cybernetic* models are now useful neurological models as well.
>[Not complete by any stretch, but still actually applicable, rather than
>merely irrelevant].

Reread my comments about linear -> non-linear, and about the inherent
murkiness in any comparison of DNNs with BNNs.  It's a breakthrough,
but perhaps only enough to let a little light through.

>|If these models explain phenomena that are left uncovered by the
>|computational point of view, then so much the worse for the latter.

>Actually, the main problem here seems to be that you are taking a
>very narrow view of 'computational'.

You were the one who said "digital".  If you left that out, I'd have
never responded.  Digital, to me, is a very narrow view of computation.

Note, by the way, that while I push phase transitions, this is not
necessarily a push against digital.  Cellular automata can exhibit
phase transitions.
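
A toy demonstration, with an arbitrary rule and arbitrary constants--
emphatically not a brain model:

import random

# A probabilistic cellular automaton with an absorbing-state phase
# transition: a cell turns on with probability p when any neighbor was
# on.  Below a critical p all activity dies out; above it, it persists.
def density_after(p, width=200, steps=400, seed=1):
    random.seed(seed)
    cells = [1] * width
    for _ in range(steps):
        cells = [1 if (cells[i - 1] or cells[i] or cells[(i + 1) % width])
                      and random.random() < p else 0
                 for i in range(width)]
    return sum(cells) / width

for p in (0.3, 0.5, 0.7, 0.9):
    print(p, density_after(p))   # density jumps from ~0 past the critical p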
-- 
-Matthew P Wiener (weemba@libra.wistar.upenn.edu)