From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!tdatirv!sarima Wed Dec 18 16:02:06 EST 1991
Article 2182 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!tdatirv!sarima
>From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: From neurons to computation: how?
Message-ID: <320@tdatirv.UUCP>
Date: 16 Dec 91 19:42:38 GMT
References: <59809@netnews.upenn.edu> <310@tdatirv.UUCP> <60059@netnews.upenn.edu> <318@tdatirv.UUCP> <60329@netnews.upenn.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 173

In article <60329@netnews.upenn.edu> weemba@libra.wistar.upenn.edu (Matthew P Wiener) writes:
|In article <318@tdatirv.UUCP>, sarima@tdatirv (Stanley Friesen) writes:
|>I was viewing neurons from a biological perspective.
|
|That makes it even weaker.  If you view neurons merely as fire/not-fire
|abstractions, the digital analogy seems reasonable.  If you add in a
|host of bioelectrochemical realities, you have a lot more explaining
|to do.

I do not believe I am restricting my model to such a simplistic level.
But I believe that most minor bioelectrical details are irrelevant to
higher-level functionality.  They seem to me to be biological methods of
calculating complex functions.
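
To sketch what I mean (a toy model only -- the parameter values and the
leaky-integrate-and-fire rule here are illustrative assumptions, one small
step up from a bare fire/not-fire abstraction, not measured biology):

```python
# Toy leaky integrate-and-fire neuron: richer than a pure fire/not-fire
# unit, but still abstracting away most bioelectrochemical detail.
# threshold and leak values are made up for illustration.
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Return a 0/1 spike train for a sequence of input currents."""
    v = 0.0                       # membrane potential
    spikes = []
    for i in inputs:
        v = v * leak + i          # leaky integration of the input
        if v >= threshold:        # fire when the threshold is crossed
            spikes.append(1)
            v = 0.0               # reset after a spike
        else:
            spikes.append(0)
    return spikes
```

The point is that the leak and reset are just the biology's way of
computing a particular input-history function; the higher-level behavior
is the spike train, not the membrane chemistry.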

|I never said there was.  At least I try to be careful about distinguishing
|evidence from my assumptions.  My point is that you haven't been careful,
|and have indeed been mistaken here.  I'll repeat myself: I have no argument
|with assumptions. Just conclusions.

OK, my tentative assumptions are:

1. the models developed for sensory processing elements in the brain are
   approximately scalable to cover general brain operation (probably with
   minor adjustments for different operating modes).

2. the types of processing capacities shown in digital neural networks, and
   which have been verified in biological neural systems, are the functional
   basis for higher-level brain operations.

3. those properties of neurons that affect their 'neural-network' behavior are
   the ones that are relevant to brain function.  Other properties are just
   'accidental', due to the evolutionary origin of neural systems.

I am willing to change these assumptions.  But *only* if shown observable
evidence.  At the present they seem to me to be the 'Occam' assumptions.

Note, in particular, that they are based to a considerable degree on existing
experimental results.

|>As I understand it, an EEG is essentially just a regional summation of
|>neuronal activity.  Thus, whatever an EEG is doing, it is because some
|>population of neurons is doing something specific.  That is, the EEG is
|>the large scale result of the information processing activities of a
|>family of interconnected neurons.
|
|That's it?  It's just a result?  All that oscillation is just a buzz?

No, it is the cooperation of a family of neurons (or many families of neurons)
in some sort of organized activity.

It almost certainly has some functional significance, but I see it as an
emergent property of the *interaction* of many neurons organized in a
particular way.
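
The summation claim itself is trivial to sketch (toy code, with made-up
spike trains): any rhythm in the summed trace must come from coordinated
firing across the population, since the sum itself adds nothing.

```python
# Toy illustration of 'EEG as regional summation': the recorded signal
# is just the per-timestep sum of many units' activity, so structure in
# it reflects organization among the neurons, not the measurement.
def regional_signal(spike_trains):
    """Sum activity across a population of 0/1 spike trains."""
    return [sum(step) for step in zip(*spike_trains)]

# Ten neurons firing in lockstep give a large oscillation in the sum;
# the same firing spread out in time gives a flat trace.
synchronized = [[1, 0, 1, 0]] * 10
staggered = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```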

|>... but recent studies have shown that, at
|>least in the olfactory cortex, the EEG patterns generated are
|>tracable to neural-net type interactions amoung the individual
|>neurons.  [there is now a NN model that correctly mimics the
|>operation of the olfactory cortex, and generates equivalent
|>activation patterns].
|
|Meanwhile, olfaction and learning are cognitively much less complicated
|than thought and consciousness itself.

True, but it is a major breakthrough for digital NN's.  I was flabbergasted
when I saw that he had demonstrated the existence of an olfactory bulb global
'learning' signal that activated learning mode.  This was a feature of digital
NN's that I had previously believed to be unrealistic, and there it was in an
actual biological neural system.
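
The digital-NN feature in question looks something like this (a toy
sketch only -- the Hebbian rule and the learning rate are my illustrative
assumptions, not the olfactory bulb's actual plasticity rule):

```python
# Toy globally-gated learning rule: an ordinary Hebbian weight update
# that is applied only while a global 'learn' signal is on.  Rule and
# rate are illustrative, not biological measurements.
def hebbian_step(weights, pre, post, learn_signal, rate=0.1):
    """Update weights[i][j] by rate*pre[i]*post[j], gated globally."""
    if not learn_signal:          # global signal off: no plasticity at all
        return weights
    return [[w + rate * p * q for q, w in zip(post, row)]
            for p, row in zip(pre, weights)]
```

The global gate is exactly the part that had seemed like an engineering
convenience rather than something a real neural system would do.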

Thus, I am currently assuming that such results are approximately scalable,
at least in theory.  [The main current limitations being computation capacity
and measurement technology, rather than inadequacy of the models].

|>If this is what I think it is, so do I.  But it still seems to me to be
|>*computable*, so it does not invalidate the strong AI hypothesis.
|
|Edelman does not believe his TNGS based model of mind is computable.
|And yes, he's running digital simulations of parts of it.

Hmm, but do the digital 'simulations' capture all of the functionally relevant
features?  If they do, then they are not just simulations.  They are digital
implementations of the same functionality.

It is only if the digital version of the process fails to capture some feature
of the real process that impacts the *external* behavior of the system that
it fails to capture the essence of the original.

|I certainly agree that chunks of our mind can be computed.  It's the
|automatic extrapolation to the whole show that I reject.  

Not automatic.  Just a simplifying assumption, to be used until proven wrong.

|My point here is simply that we already know that there are machine-like
|aspects to human intelligence, and I expect AI success in just such areas.
|Extrapolation beyond that of-course core is weak.

But existing digital neural networks *already* go beyond mere 'machine-like'
behavior, showing various sorts of complex adaptive responses. This is quite
the opposite of your 'idiot-savant' examples (which show invariant, non-
adaptive behavior).
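
The contrast is easy to make concrete (toy code; the perceptron rule is
the classic one, the training data is a made-up example): an adaptive net
changes its responses after errors, where the 'idiot-savant' lookup table
answers identically forever.

```python
# Toy online perceptron: its behavior changes with experience, unlike
# an invariant lookup table.  Example task (learning AND) is made up.
def perceptron_train(samples, epochs=10, rate=1.0):
    """Learn weights and bias from samples = [(input_vector, target)]."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = t - y                       # adapt only on mistakes
            w = [wi + rate * err * xi for wi, xi in zip(w, x)]
            b += rate * err
    return w, b
```

After a handful of passes over the data the unit's responses have
reorganized themselves -- nothing in a chess-playing brute-force search
does anything comparable.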

|>  For example, human beings need sleep.  And
|>|when awake, they need stimulus or else they go nuts.  These are baf-
|>|fling and/or bewildering from a purely computational point of view.
|
|>O.K, now how does the Bose-Einstein effect explain this?
|
|As phase transitions in the pumped phonon condensation.  The energy pump
|can only last so long at a time, and then it goes below a critical level,
|the condensation evaporates, and the brain falls asleep.  During sleep,
|neuronal activity continues at the same or somewhat reduced level, yet
|consciousness is gone.  The dream state is the metastable zone when the
|pump is near the critical level.

Hmm, interesting.

As presented this does not seem to really address the *function* of sleep.
Though perhaps that is simply because the presentation is so sketchy.

My own idea on sleep is that it is not really related to brain function,
except indirectly.  I see it as essentially equivalent to preventive
maintenance (PM) on computer systems, where the OS must be brought down to
do the maintenance functions.

In this concept it is only REM sleep that has direct cognitive relevance.
And even this may be due to processing speed limits.  That is, during
waking hours new sensations arrive faster than they can be assimilated fully,
so some period of low external activity is used to 'catch up' on background
tasks.

Notice that in this model sleep is a very high-level set of operations
that does not need a direct 'physical' explanation.

|As an aside, Edelman's model makes very good sense here for both of these
|points.  He postulates different categorization nets in the brain--a C(I)
|for our internal limbic states and a C(W) for the external world.  Con-
|sciousness is an epiphenomenon resulting from--grossly oversimplifying--
|memory recall running parallel with external world perception.

This sounds like a fairly likely idea.  'The Remembered Present', eh?

But then, existing digital neural networks do just that themselves.

|Meanwhile, I'm baffled and/or bewildered at why a program would need to
|go to sleep, or require stimulus while awake.  Remember: no physiological
|reason is known for the need for sleep.  A model of consciousness that
|requires sleep would neatly fit in with this negative evidence.

Well, if it had two different functionalities that operated at different
speeds and needed to be occasionally synchronized by allowing the slow
one to catch up ...
[Since, in order to catch up, the fast operation must be put on hold ...]
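
A toy version of that two-speed scheme (all the rates here are made-up
numbers, purely to show the mechanism): a fast 'waking' process queues
items faster than a slow one can assimilate them, so a periodic 'sleep'
phase halts intake while the backlog drains.

```python
from collections import deque

# Toy two-speed synchronization: intake outruns assimilation while
# 'awake', so the system must periodically stop intake ('sleep') to
# let the slow background process catch up.  Rates are illustrative.
def run_day(n_waking_steps, intake_per_step=3, assimilate_per_step=1):
    backlog = deque()
    processed = 0
    # waking: new items arrive faster than they are assimilated
    for step in range(n_waking_steps):
        for _ in range(intake_per_step):
            backlog.append(step)
        for _ in range(assimilate_per_step):
            if backlog:
                backlog.popleft()
                processed += 1
    # 'sleep': intake on hold while the slow process drains the backlog
    sleep_steps = 0
    while backlog:
        for _ in range(assimilate_per_step):
            if backlog:
                backlog.popleft()
                processed += 1
        sleep_steps += 1
    return processed, sleep_steps
```

With intake three times faster than assimilation, two-thirds of the
'day' must be spent asleep -- the fast operation really does have to be
put on hold for the slow one to catch up.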

|This is a joke, right?  After three decades, computer chess is now up to
|grandmaster play, using brute force.  This is not progress.

No, I am *not* talking about 'expert system' junk.  I am mostly talking
about the more advanced NN research.  This is the *first* time that
cybernetic research has actually contributed to neurological research.
[Previously all the input has been from neurology to cybernetics].
This, in itself, indicates that a major breakthrough has been made.
The current *cybernetic* models are now useful neurological models as well.
[Not complete by any stretch, but still actually applicable, rather than
merely irrelevant].

|If these models explain phenomena that are left uncovered by the
|computational point of view, then so much the worse for the latter.

Actually, the main problem here seems to be that you are taking a very narrow
view of 'computational'.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)