From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!tdatirv!sarima Thu Dec 26 23:57:03 EST 1991
Article 2266 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: From neurons to computation: how?
Message-ID: <334@tdatirv.UUCP>
Date: 19 Dec 91 00:48:49 GMT
References: <59809@netnews.upenn.edu> <310@tdatirv.UUCP> <60059@netnews.upenn.edu> <318@tdatirv.UUCP> <60329@netnews.upenn.edu> <320@tdatirv.UUCP> <60551@netnews.upenn.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 279

In article <60551@netnews.upenn.edu> weemba@libra.wistar.upenn.edu (Matthew P Wiener) writes:
|>OK. my tentative assumptions are:
|
|>1. the models developed for sensory processing elements in the brain are
|>   approximately scalable to cover general brain operation (probably with
|>   minor adjustments for different operating modes).
|
|This sounds reasonable, but I think it has serious booby traps.  See below
|when I comment on olfaction.

That is what research is for.  When research reveals a problem I will
re-evaluate, but not before.

|>2. the types of processing capacities shown in digital neural networks, and
|>   which have been verified in biological neural systems, are the functional
|>   basis for higher level brain operations.
|
|This I think is murky, and is based partly on an ambiguity that the
|terminology has encouraged.  The digital neural net models (DNN) that
|have been proposed are not done in imitation of actual biological
|neural net wirings (BNN).  This is Edelman's complaint. 

I think Edelman is behind the times on NN research.

There is now a great deal of NN research that is aimed *directly* at modeling
real brain operations.  This is why I put the "verified in BNS's" clause in
the original.  I am requiring a certain amount of back-checking.

|>3. those properties of neurons that affect their 'neural-network' behavior are
|>   the ones that are relevant to brain function.  Other properties are just
|>   'accidental', due to the evolutionary origin of neural systems.
|
|This I think is a big leap.  There may be neural vermiform appendices in
|our brains, yes, but not an endless supply.  And neurons have a lot of
|properties.  How many are needed for mere cellular this and that?  Only
|research will tell us.

In the long run, quite true.  But I suspect there may be more basic cellular
factors at work than you may think.  For instance, I suspect that neurons use
the rather slow expedient of axonal conduction because living cells cannot
easily manufacture wires with periodic voltage boosters.  Thus, I doubt that
replacing the axons with boosted wires would be operationally relevant.

|>I am willing to change these assumptions.  But *only* if shown observable
|>evidence.  At the present they seem to me to be the 'Occam' assumptions.
|
|I'm not even asking you to change your assumptions.  Just be aware that
|some people are working in a different constellation of primary assumptions
|to Occamize against, and that your 1-2-3 leads to greater complications for
|these people.  There's no way to decide a priori which constellation is the
|right one.

I have based mine on a long experience in biology, and a desire to avoid
postulating mechanisms that do not *yet* have observable evidence.

Bose-Einstein condensation may well prove to be relevant to thought, but at
present there is no evidence for it.  And since this is an additional
mechanism, one that is not observable in small groups of biological neurons,
I will not postulate it until there is more evidence for it.

|Penrose, for example, is one of many who consider the role of consciousness
|in quantum mechanics to be an important question.  Your 1-2-3 makes his
|approach more complicated, by leaving the quantum mystery as remote as
|ever.  Penrose does not want to explain just mind, as maybe 1-2-3 will,
|but mind_and_world, which, assuming 1-2-3, means that you have to come
|up with 4-5-6-7 to satisfy Penrose. 

O.K. 4) Everything that is not relevant to mental processes acts as an 
        independent variable, to be explained by a separate theory.

In short, I am far from convinced that the other stuff Penrose wants to explain
is even relevant to the operations of minds.

He can explain it all he wants.  But he needs to *demonstrate* relevance
if he wants me to treat his hypotheses as anything except philosophical
vaporware.

|  But neither assumptions give such a theory, so I claim it is
|premature to appeal to Occam.  3* already gives a link into the quantum
|world, and so when we explain mind_and_world 1-2-3* plus 4-5 ends up being
|"simpler" than 1-2-3 plus 4-5-6-7.

O.K. But *why* do we *need* to link into the quantum world?
I do not see it as at all necessary or relevant, as far as cognition
or consciousness is concerned.

*That* is the additional assumption that I do not accept.

You must show some observable cognitive processes that are *inconsistent*
with a non-quantum explanation before such a link becomes necessary.

|>It almost certainly has some functional significance, but I see it as
|>an emergent property of the *interaction* of many neurons organized
|>in a particular way.
|
|So we agree that the EEG has significance.  And that is an emergent
|property of something going on in many neurons.  But of what?  The
|synaptic signalling?  Or something else?

For now, since the main function of neurons seems to be signalling, I assume
that that is the relevant mode of cooperation.

I find it interesting that the NN model of the olfactory cortex that has been
developed shows the same EEG pattern as the living olfactory cortex. [Including
the chaotic waveforms characteristic of most cortical EEGs.]
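As a side note, one does not need anything exotic to get aperiodic activity out
of a deterministic system.  Here is a toy sketch (my own illustration, *not*
the olfactory model referred to above): the logistic map at r = 3.9, where two
trajectories with nearly identical starting points diverge onto visibly
different "waveforms" -- the sensitive dependence characteristic of chaos.

```python
# Toy illustration of chaotic dynamics (not the actual olfactory-cortex model):
# a simple deterministic recurrence producing aperiodic output.

def logistic_series(x0, r=3.9, n=50):
    """Iterate the logistic map x -> r*x*(1-x), returning the trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_series(0.400000)
b = logistic_series(0.400001)   # nearly identical initial condition

# After a few dozen iterations the two trajectories have decorrelated,
# even though the rule is completely deterministic.
late_divergence = max(abs(x - y) for x, y in zip(a[-20:], b[-20:]))
```

The point is only that aperiodic, "noisy-looking" signals are compatible with a
purely mechanistic underlying rule.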

|>Hmm, but do the digital 'simulations' capture all of the functionally
|>relevant features.  If they do, then they are not just simulations.
|>They are digital implementations of the same functionality.
|
|To Edelman, the real world is a functionally relevant feature.  Like
|Heidegger, Dreyfus, Putnam, Winograd and others, he does not believe
|that there is a ready-made world out there, with pre-existing categories.

Neither do I.

But I do believe that the concept of 'functional relevance' is meaningful.
[At least in the sense of allowing for the idea of 'equivalent' systems;
you use it in assuming that other humans are conscious like yourself, since
no two humans have the same neural wiring].

|>It is only if the digital version of the process fails to capture some feature
|>of the real process that impacts the *external* behavior of the system that
|>it fails to capture the essence of the original.
|
|I can't parse the above sentence.

If Edelman's digital models of his group selection processes *behave*
differently than an exactly congruent biological system would, then, and
only then, is the difference between the digital and the biological
version of the system 'functionally relevant'.

|>But existing digital neural networks *already* go beyond mere 'machine-like'
|>behavior, showing various sorts of complex adaptive responses. This is quite
|>the opposite of your 'idiot-savant' examples (which show invariant, non-
|>adaptive behavior).
|
|Not always.  Some musical savants have learned to improvise music in
|"intelligent" ways.  They are not just human tape recorders.  The question
|of autistic limits is rather fuzzy.

Yes, but so is the boundary between digital NN behavior and animal behavior.

I see them as overlapping so much that a certain level of congruence is
indicated.
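To make concrete what I mean by "adaptive" rather than invariant behavior,
here is a minimal sketch (my own toy example, not any particular research
system): a single perceptron whose responses are *acquired* from experience
rather than hard-wired, unlike a fixed lookup table or tape recorder.

```python
# Minimal sketch of adaptive digital-NN behavior: a perceptron that
# learns the logical OR function from examples, rather than having
# the responses built in.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (x1, x2, target) with binary inputs/targets."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x1, x2, target in samples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            # Adjust weights only when the current behavior is wrong.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Behavior is acquired from the training data, not pre-programmed.
w, b = train_perceptron([(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)])
```

Trivial as it is, the same unit trained on different data settles into
different behavior, which is the qualitative property at issue.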

|>As presented this does not seem to really address the *function* of sleep.
|>Though perhaps that is simply because the presentation is so sketchy.
|
|No, because the model says sleep really doesn't have a function.  It's
|merely a necessary byproduct of consciousness.

Then all mammals and birds are conscious; they almost all sleep.
[This is of course a distinct possibility, though we have no good way of
testing it yet.]

I guess I have a hard time seeing such an *expensive*, *dangerous* (in the
wild) activity being a mere byproduct of some other, rather generalized
process.  Especially if it is a process that is present in most mammals,
even ones we consider 'stupid'.  Such a process would not give enough
survival advantage to outweigh the danger of being eaten while sleeping.

Now, human level consciousness *is* advantageous enough to outweigh the
danger of sleeping, but sleeping is *not* limited to humans.

Thus, as an evolutionary biologist, I tend to think that sleep must have
a specific function, in and of itself.  Preventative Maintenance certainly
sounds like a reasonable specific function.  This is, for now, only a
hypothesis.  I just throw it out to show that the phenomenon of sleep is
not really a problem for 'classical' neurological models.

|You can assume this, but call it number 4 and say you assume 1-2-3-4.
|No experiment supports the above scenario.

See above.  Field observation of wild animals suggests a problem with a
non-functional explanation.  [I do not limit science to *experiment*;
what I require is repeatable observations.]

And it is not really an assumption or even a conclusion so much as a
hypothesis.  But I consider it a reasonable one, and it removes the *need*
for a separate, purely neurological, explanation for sleep.

|>In this concept it is only REM sleep that has direct cognitive relevance.
|>And even this may be due to processing speed limits.  That is during
|>waking hours new sensations arrive faster than they can be assimilated fully,
|>so some period of low external activity is used to 'catch up' on background
|>tasks.
|
|>Notice that in this model sleep is a very high level set of operations,
|>that do not need direct 'physical' explanation.
|
|Ah, you're having fun now.  I consider it a big loss by AI modellers
|that they generally do not address the non-AI aspects of their models.

Actually, I suspect that some of these issues may well prove to be relevant
to AI systems as they become more sophisticated and biological.

In particular, the occurrence of a non-processed residue that needs to be
backgrounded until sufficient processing power is available is likely
to recur in any large system.  [It happens in most mainframes, where large,
low-priority jobs are delayed until the night shift.]
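The night-shift pattern can be sketched in a few lines (my own illustration of
the scheduling idea, not any particular mainframe's system): during the busy
period only urgent work runs, and the low-priority residue accumulates until a
quiet period arrives.

```python
import heapq

def run_shift(jobs, busy):
    """jobs: list of (priority, name); lower number = more urgent.
    While busy, only priority-0 jobs run; everything else is deferred."""
    queue = []
    for job in jobs:
        heapq.heappush(queue, job)
    done, deferred = [], []
    while queue:
        prio, name = heapq.heappop(queue)
        if busy and prio > 0:
            deferred.append((prio, name))   # hold until the quiet period
        else:
            done.append(name)
    return done, deferred

# Day shift: only the urgent job completes; the rest become backlog.
day_done, backlog = run_shift([(0, "urgent"), (2, "archive"), (1, "index")],
                              busy=True)
# Night shift (the analogue of sleep): the backlog finally runs.
night_done, _ = run_shift(backlog, busy=False)
```

The analogy is loose, of course; the point is only that deferred background
processing arises naturally whenever input outpaces capacity.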

|  But if you or anyone
|else is going to claim AI will explain our minds, and there is numerous
|biological evidence for this claim, then you are going to have to address
|a lot of non-"computational" aspects of our mind/brain.

Eventually, quite true.  But I think things need to be built up in a
hierarchical fashion, since that is how the brain is organized, and how it
evolved.

If I am right, sleep is a high level, systemic feature, that can be left until
later to incorporate into the model.

|>This sounds like a fairly likely idea.  'The Remembered Present' eh.
|
|The book is remarkable.  Whatever your feelings about AI, as a biologist
|you owe it to yourself to read it.  Right or wrong as a model, Edelman has
|a lot of incredible ideas.  His laboratory isn't following him to the
|opposite coast because he's "just some guy with a computable model".

Quite right.  And someday, when my reading list gets down that far, I will.

I have considered buying it on one or two occasions, but my budget never
reached that far.  [I read books voraciously, so I will eventually get to it].

|>Well, if it had two different functionalities that operated at different
|>speeds that needed to be occasionally synchronized by allowing the slow
|>one to catch up ...
|>[Since, in order to catch up, the fast operation must be put on hold ...]
|
|Careful!  You are in danger of having your membership in the Occam club
|revoked if you keep building explanations on top of explanations!
|
|Not that I object, but I think you are catching on to my point: there is
|a *lot* to explain.

Of course there is.  The human brain is *very* complicated.

But I do not see the delayed processing concept as being too outlandish;
I based it on my experience with mainframe computers in the '70s.
[The one I used the most had a job category for "jobs so large that they
are put on indefinite hold, until the operator thinks there is little enough
going on to run them".]

|How about a reason why a program needs stimulus when awake?

I see this as one of the functions programmed into the system.
Exploratory and cognitive processes have a *direct* survival advantage
in the wild.  What you don't know *can* hurt you; it can even kill you.
Thus I see selection for those variants of the mental program that had
a strong exploration urge [read 'need for stimulus'].

One thing I think is *very* important in considering living brains is the
issue of the *evolutionary* origin of the various subsystems and behaviors.

Specific needs, feelings, and innate responses are *usually* preprogrammed
survival behaviors (at least in the context of wild existence), where the
"programmer" is natural selection.

It is only features that lack survival value in the wild that need a
separate, purely operational, explanation.  [Note that a cognitive process
that is maladaptive in a civilized human is *not* necessarily maladaptive
in a normal wild animal.]

|>						     I am mostly talking
|>about the more advanced NN research.  This is the *first* time that
|>cybernetic research has actually contributed to neurological research.
|
|Edelman might (someday) say it's the *second* time.

Hmm, when has cybernetics actually helped neurologists in the past?

[My knowledge of the history of science is a little sketchy, so I could
just have missed it.]
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)