From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!think.com!yale.edu!jvnc.net!netnews.upenn.edu!libra.wistar.upenn.edu Mon Dec 16 11:02:08 EST 1991
Article 2145 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!think.com!yale.edu!jvnc.net!netnews.upenn.edu!libra.wistar.upenn.edu
From: weemba@libra.wistar.upenn.edu (Matthew P Wiener)
Newsgroups: comp.ai.philosophy
Subject: Re: From neurons to computation: how?
Message-ID: <60329@netnews.upenn.edu>
Date: 16 Dec 91 00:13:01 GMT
References: <59809@netnews.upenn.edu> <310@tdatirv.UUCP> <60059@netnews.upenn.edu> <318@tdatirv.UUCP>
Sender: news@netnews.upenn.edu
Reply-To: weemba@libra.wistar.upenn.edu (Matthew P Wiener)
Organization: The Wistar Institute of Anatomy and Biology
Lines: 209
Nntp-Posting-Host: libra.wistar.upenn.edu
In-reply-to: sarima@tdatirv.UUCP (Stanley Friesen)

In article <318@tdatirv.UUCP>, sarima@tdatirv (Stanley Friesen) writes:
>In article <60059@netnews.upenn.edu> weemba@libra.wistar.upenn.edu (Matthew P Wiener) writes:
>|"Mind" is one of those things that neurobiologists get real edgy about.
>|A few have offered models (Eccles, Sperry, Edelman), but most settle for
>|the understanding of low level processing, like vision.

>And right now, I am assuming that the same basic principles operate
>in the more cognitive parts of the brain as operate in the simple,
>sensory processors.

That's a very reasonable assumption.  But your other assumption, that it
will all tie together in a digitally understandable manner just because
certain low-level processes do, is the part that I find so dubious.  And
it is supported by nothing beyond say-so.

>|>Fine, and when someone does this experiment, *and* it shows a
>|>*psychologically* relevant effect as far as the human mind is
>|>concerned, I will keep to the simpler theory.

>|And how does computation show any special *psychologically* relevant
>|effect?

>I am talking about models of brain function and cognition.  A model of some
>component of the brain is only relevant to the formation of cognitive
>activity if it *contributes* to cognitive activity.  In practice, with humans
>at least, this is 'psychology' in its most general sense.

And the question of how well these computational models capture our
own psychology remains a rather divisive one.  None of them has really
established itself.  If you want to see some really outre models,
look through THE JOURNAL OF MATHEMATICAL PSYCHOLOGY.

>|Indeed, the whole idea of saddling neurobiology/psychology with the AI
>|theme song is totally anti-Occam.  

>Actually, I thought I was going the other way.

That's my point.  You aren't.  Digital computability is a restrictive
assumption, with at best very weak evidence.

>I was viewing neurons from a biological perspective.

That makes it even weaker.  If you view neurons merely as fire/not-fire
abstractions, the digital analogy seems reasonable.  If you add in a
host of bioelectrochemical realities, you have a lot more explaining
to do.
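To put a toy sketch behind that point (my own illustration, nothing from
the literature; the function names and every parameter are invented): take
a bare fire/not-fire neuron, then add a single slow variable standing in
for just one of those bioelectrochemical realities.  Identical input,
different spike trains:

```python
def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Bare fire/not-fire caricature: leaky integrate-and-fire."""
    v, train = 0.0, []
    for x in inputs:
        v = leak * v + x        # membrane leaks, then sums input
        if v >= threshold:
            train.append(1)
            v = 0.0             # reset after firing
        else:
            train.append(0)
    return train

def lif_adapting(inputs, threshold=1.0, leak=0.9,
                 gain=0.5, decay=0.8):
    """Same neuron plus ONE slow 'chemical' variable that raises
    the effective threshold after each spike (adaptation)."""
    v, a, train = 0.0, 0.0, []
    for x in inputs:
        v = leak * v + x
        a = decay * a           # slow variable relaxes over time
        if v >= threshold + a:
            train.append(1)
            v = 0.0
            a += gain           # each spike builds up the slow variable
        else:
            train.append(0)
    return train

drive = [0.6] * 12              # identical input to both neurons
```

One extra state variable and the fire/not-fire description of the cell's
i/o behavior already diverges; real neurons carry dozens of such variables.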

>|>So, enlighten me.  What relevance do these have to human psychology?

>|As much, more, or less than computers.

>In what way does this sort of thing affect cognitive processes?
>That neurons are input-output data transformers is known, and it is
>clear how *this* could affect cognition.

HOW???  To quote an example from Dennett (yeah, Dennett), you're like
the expert in the audience watching a magician saw a lady in half, who
explains to his mystified friend that the magician is not actually
sawing her in half.

Neurons are more than i/o devices, and you can do no more than handwave
an explanation of how they take part in cognition.  I can do that with
the Marshall-Froehlich pumped phonon condensation model--the "these"
you asked about in >|> above.

>					   There is no evidence I know
>of for the relevance of this Bose-Einstein condensation effect.

I never said there was.  At least I try to be careful about distinguishing
evidence from my assumptions.  My point is that you haven't been careful,
and have indeed been mistaken here.  I'll repeat myself: I have no argument
with assumptions. Just conclusions.

>|>And I mean computable in the sense that physical computers as we build them
>|>today could compute the same data transform as any given neuron (including
>|>the temporal variability we call learning).

>|>I conclude it because all that I know about the operation of neurons (and
>|>that is considerable, since I am by background a biologist) is fully
>|>consistent with the theory that it is only the signalling properties
>|>of a neuron that are relevant to thought.

>|Really?  Then what is all that brain EEG going on for?  It is not noise.
>|EEG activity can be correlated with thought.  Correlating it with neurons
>|is not easy.

>As I understand it, an EEG is essentially just a regional summation of
>neuronal activity.  Thus, whatever an EEG is doing, it is because some
>population of neurons is doing something specific.  That is, the EEG is
>the large scale result of the information processing activities of a
>family of interconnected neurons.

That's it?  It's just a result?  All that oscillation is just a buzz?

Yes, it could be.  Or it could be that the associated phonons are exactly
what the Froehlich-Marshall model calls for.

>It is true that at this time the exact details of the correlation
>have not been worked out, but recent studies have shown that, at
>least in the olfactory cortex, the EEG patterns generated are
>traceable to neural-net-type interactions among the individual
>neurons.  [there is now a NN model that correctly mimics the
>operation of the olfactory cortex, and generates equivalent
>activation patterns].

Does it also imitate the effects of lesions?  Some NN models in neuro-
linguistics look good at first, but fail this important test.

Meanwhile, olfaction and learning are cognitively much less complicated
than thought and consciousness itself.

>|Right.  This is the start, for example, of Edelman's Neuronal Group
>|Selection Hypothesis, which I haven't had the time or knowledge to
>|rant about yet, but I hope I've implied that I look favorably upon.
>
>If this is what I think it is, so do I.  But it still seems to me to be
>*computable*, so it does not invalidate the strong AI hypothesis.

Edelman does not believe his TNGS-based model of mind is computable.
And yes, he's running digital simulations of parts of it.

>[Especially since the relevant feature of the synaptic cluster seems to be
>that it is one way of representing connection weights, so a computer version
>could conceivably replace the synaptic group with a parameterized connection].

I certainly agree that chunks of our mind can be computed.  It's the
automatic extrapolation to the whole show that I reject.  

Consider autism.  The human seems to be very machine-like.  Even the
idiot savants among the autistic usually give an impression of machine
intelligence in their gifts.  There are famous cases of "brilliant"
pianists who play all requests out to the bitter end.  They cannot
be turned off.

My point here is simply that we already know that there are machine-like
aspects to human intelligence, and I expect AI success in just such areas.
Extrapolation beyond that obviously machine-like core is weak.

>|>In order to challenge this conclusion you must show behaviorally or
>|>informationally relevant effects that are not derived from this type
>|>of operation.  As far as I know, no such effect has ever been found.

>|There are numerous such effects known, although they are usually not
>|described in this sense.  For example, human beings need sleep.  And
>|when awake, they need stimulus or else they go nuts.  These are baf-
>|fling and/or bewildering from a purely computational point of view.

>O.K, now how does the Bose-Einstein effect explain this?

As phase transitions in the pumped phonon condensation.  The energy pump
can only be sustained so long at a time; when it drops below a critical
level, the condensation evaporates and the brain falls asleep.  During sleep,
neuronal activity continues at the same or somewhat reduced level, yet
consciousness is gone.  The dream state is the metastable zone when the
pump is near the critical level.
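For what it's worth, that qualitative picture fits in a few lines (a toy
of my own devising; the numbers, the drain rate, and the three-state
mapping are pure invention, not anything from Marshall or Froehlich):

```python
def condensate_state(pump, critical=1.0, band=0.15):
    """Map pump energy onto a qualitative state of the condensate."""
    if pump >= critical + band:
        return "awake"          # condensate fully pumped
    if pump >= critical - band:
        return "dream"          # metastable zone near criticality
    return "asleep"             # condensate has evaporated

def run_day(hours=24, charge=2.0, drain=0.08):
    """Pump charge drains steadily; the state degrades through
    awake -> dream -> asleep as it falls through the critical band."""
    states = []
    for _ in range(hours):
        states.append(condensate_state(charge))
        charge -= drain         # pump steadily loses energy
    return states

day = run_day()
```

The point of the toy is only that one continuously draining quantity,
plus a critical level, yields the waking/dreaming/sleeping sequence
without any change in the underlying neuronal activity.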

The need for stimulus would again involve a phase transition.  The phonon
condensate is built from the oscillations of certain neural activity, and
when that activity drops--with the energy pump still charged--the condensate
would shift into the hypnotic trance state.

As an aside, Edelman's model makes very good sense here for both of these
points.  He postulates different categorization nets in the brain--a C(I)
for our internal limbic states and a C(W) for the external world.  Con-
sciousness is an epiphenomenon resulting from--grossly oversimplifying--
memory recall running parallel with external world perception.  Sleep
arises from the need to biochemically synchronize the C(I) and C(W) nets.
The internal system runs slowly, with an occasional change like adrenalin
or pain or hunger coming into the net.  The external system is fast, our
realtime comprehension of the world around us.  Over the course of a day,
the two nets would generate a biochemical mismatch that triggers drowsi-
ness--and with C(W) turned off, C(I) can catch up.  And obviously sensory
deprivation will mess up the Edelmanian consciousness, since C(W) is a
necessary part of this model.  See THE REMEMBERED PRESENT, Chapter 9.
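Again a caricature, not Edelman's actual formalism (the rates and the
mismatch threshold are invented for illustration): two counters
accumulate at different speeds while awake, and sleep is just C(I)
catching up once C(W) is switched off:

```python
def simulate_day(hours=36, mismatch_limit=1.0, fast=0.10, slow=0.03):
    """Toy C(W)/C(I) mismatch cycle: drowsiness when the gap between
    the fast external net and the slow internal net gets too large."""
    cw = ci = 0.0               # categorizations logged by each net
    awake = True
    log = []
    for _ in range(hours):
        if awake:
            cw += fast          # C(W): fast, realtime world perception
            ci += slow          # C(I): slow internal limbic states
            if cw - ci >= mismatch_limit:
                awake = False   # biochemical mismatch triggers drowsiness
        else:
            ci += fast          # C(W) off; C(I) catches up at full rate
            if cw - ci <= 0.0:
                awake = True    # nets resynchronized; wake up
        log.append("awake" if awake else "asleep")
    return log

log = simulate_day()
```

Note that sensory deprivation breaks this toy the same way it breaks the
real model: with no C(W) input there is nothing for C(I) to synchronize
against.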

(Note that both models can accommodate caffeine: in Marshall-Froehlich,
caffeine restimulates the energy pump, while in Edelman, caffeine does
biochemical magic that covers up the C(I)/C(W) mismatch!)

Meanwhile, I'm baffled and/or bewildered at why a program would need to
go to sleep, or require stimulus while awake.  Remember: no physiological
reason is known for the need for sleep.  A model of consciousness that
requires sleep would neatly fit in with this negative evidence.

>If it doesn't, then it has no better claim then the simple data transform
>model.

Even if these alternative models did not explain sleep and the need for
stimulus, your statement above, "as far as I know, no such effect has
ever been found", remains shallow.  Unless you care to explain why these
two are unmysterious under computational models.

>|Your "in order to challenge" is incomplete, by the way.  I can also
>|challenge the computational mind paradigm from the other side: forty
>|years of coming up short is a dismal record for any research claim.

>It now seems to be making progress though.

This is a joke, right?  After three decades, computer chess is now up to
grandmaster play, using brute force.  This is not progress.

>					     And if the effect you
>are talking about adds another layer of relevant entities to the
>model of the mind, it requires extra evidence.

If these models explain phenomena that are left uncovered by the
computational point of view, then so much the worse for the latter.
-- 
-Matthew P Wiener (weemba@libra.wistar.upenn.edu)