Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!cs.utexas.edu!utnut!utgpu!pindor
From: pindor@gpu.utcc.utoronto.ca (Andrzej Pindor)
Subject: Re: Grounding Representations
Message-ID: <D8DBv8.K2A@gpu.utcc.utoronto.ca>
Organization: UTCC Public Access
References: <D89s5r.Hzs@indirect.com>
Date: Wed, 10 May 1995 15:18:44 GMT
Lines: 166

Clay Thurmond  <claytex@panix.com> wrote:
>Andrzej Pindor, pindor@gpu.utcc.utoronto.ca writes:
>>Clay Thurmond  <claytex@panix.com> wrote:

>>>This suggests the sort of selection theory type stuff that
>>>Gazzaniga discusses in "Nature's Mind".  In his view (or at
>>>least my understanding of it) the structures are all there,
>>>at least potentially, but will not emerge if the proper
>>>environmental stimuli are not there to trigger the
>>>selection mechanism. In the case of the cats, the
>>>developmental window of opportunity has been lost. This way
>>>of thinking at least has the interesting effect of
>>>problematizing the notion of causality.
>>>
>>I am not sure what is exactly meant by "...the structures are all
>>there, at least potentially...". Presumably there is only a
>>finite range of structures which can develop, no matter what the
>>stimuli. Saying that therefore they are all (at least
>>potentially) in there is like saying that all possible games of
>>chess are already in the rules of chess. They are, but is it
>>helpful to look at the problem this way?
>
>Right, my phrasing was misleading.  Not all possible structures.
>But it's plausible to think that natural selection has programmed
>certain potential structures.  In this particular case, i.e.
>cats which fail to develop the ability to perceive horizontal
>lines, it could well be that they fail here because they are not
>exposed to environmental cues at the appropriate stage of
>development which would have otherwise served to trigger, or
>activate, these potential structures.  It doesn't seem very
>controversial to suppose that kittens are predisposed to the
>ability to perceive horizontal lines, but when deprived of such
>stimuli, this potential fails to be realized.  A certain amount
>of plasticity is also programmed in:  environmental cues must be
>extant in order to trigger this pre-programmed structure.

The problem with your way of thinking, as I see it, is that it leads you to
treat statements like the one above as uncontroversial. Why would kittens be
predisposed to perceive horizontal lines and not lines at 45 degrees? Maybe in
a suitable environment they would grow to see only curvy lines and not straight
ones? There are surely limits on what they can learn to discriminate (what
evolution found it useful for them to have), but assumptions about particular
dispositions seem to me unnecessary. There is only a finite number of books of
a given length which can be written; does that mean the Latin alphabet
"predisposes" humans to write those books?
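The finite-books point is easy to make concrete: the number of distinct texts of length n over a k-letter alphabet is k**n, finite but astronomically large. A minimal sketch (the figures below are simple arithmetic, not from the discussion above):

```python
import math

# Number of distinct character strings of length n over a k-letter
# alphabet: k**n.  Finite, yet astronomically large for any book-sized n.
def possible_texts(length, alphabet_size=26):
    return alphabet_size ** length

# Even a single page of ~2000 characters admits far more distinct texts
# than there are atoms in the observable universe (roughly 10**80).
digits = math.floor(math.log10(possible_texts(2000))) + 1  # about 2830 digits
```

A finite space this large constrains nothing in practice, which is why "the alphabet predisposes us to those books" does no explanatory work.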

>It is helpful to look at the problem this way because, as I said,
>it leads toward a more nuanced view of causality.  In a sense,
>these cues "cause" kittens to have the ability to perceive
>certain sorts of stimuli.  But in another sense, there is a
>limited set of stimuli that they are predisposed to have the
>ability to perceive.  Thus, nervous systems are programmed with

I'd rather say "a limited _range_ ..."

>both a certain amount of plasticity along with a finite set of
>predispositions.  Since nature normally does not present kittens

Our disagreement seems to be about specificity of 'dispositions' vs. range of
plasticity.

>with a situation in which no horizontal lines are encountered,
>it is quite parsimonious to consider that natural selection
>allows environmental factors to some extent to dictate which of
>the myriad possibilities of development will be realized, while
>at the same time limiting possible realizations to a finite
>number.  After all, there is little reason to believe that
>kittens need to be able to discriminate between say, a line with
>an angle of 40 degrees and a line with an angle of 35 degrees.
>
>It seems to me that a notion which emphasizes "causality" at the
>expense of "structure" (not that you are espousing this
>necessarily) would have to allow for the possibility that cats
>could, through exposure to only 35 degree lines, develop the
>ability to perceive such lines at the expense of, say, 40 degree
>lines.  Why would natural selection need to provide such a
>degree of plasticity to nervous systems if it is only concerned
>with discriminating phenomena which are salient for survival
>while being restricted to finite mechanisms of an organism's
>nervous system?

Such specificity may be (and probably is) impossible due to physical
limitations of the discriminating apparatus.

>What I am suggesting here is a middle ground between causality
>and structure. If natural selection is parsimonious, then it is
>certainly interested in such a middle ground.  Causality, in
>this view, becomes somewhat restricted to a selection of finite
>alternatives, rather than a simple one way action in which the
>environment dictates the organism's development.
>
I am all for middle ground. However, for me, natural selection being
parsimonious means programming into the brain as little as possible and letting
the environment dictate what is necessary or interesting. The more plasticity,
the more ability to adapt to the particulars of the environment. Note that the
brains of all creatures we know have the same underlying structure (neurons,
axons, synapses, etc.) and very similar chemistry, and compare the demands on
the brain capabilities of land animals, birds and dolphins (for instance).
........
>>We are obviously still far away from fully appreciating "human
>>scope" :-). Nevertheless machine may perhaps be truly able to
>>think in terms unavailable to humans. On the other hand, the
>>same features which may give machines these capabilities may
>>make it difficult for them to follow many human ways of
>>thinking.
>
>Yes I agree.  It might be interesting to think about what the
>essential difference might be between the sorts of insights a
>machine might have, and those that humans seem to display in
>phenomena such as, say, Kuhn's paradigm shifts.  To the extent
>that humans think in ways that follow mental models derived
>from sensory-motor activity, we would seem to be barred to some
>extent from radically new insights.  Yet within the limitations
>of these mental structures are multitudes of possibilities of
>transcendence, of forging new connections, the most important of
>which might well come to us without conscious intent, especially
>if you consider the mind as computational.  
>
>What exactly might it be about a computational machine that can
>allow it to do things that a computational mind can't?  (not a
>rhetorical question) 
>
Good question! In general I'd think it would be a different topology of
associations formed among the terms it uses, different from the topology of
associations present in a human mind, due both to brain architecture and to
the way humans learn and manipulate these terms. Also, computers might just
use brute force to explore all available possibilities, whereas humans follow
hunches to decide which trail to pursue.
A few years ago there was an article in Scientific American about a chess
program competition. During one of the matches, one of the top programs
(Russian, I think) suddenly sacrificed its queen and finally lost after 13
moves. The authors of the program (one of them a chess master of international
standing) were sure there was a bug in the program. After spending a whole
night analyzing it, they found that sacrificing the queen was the only way to
avoid mate in 5 moves. The problem was that the mate was a very unusual one,
enough so that it escaped the attention of even a chess master, due to the way
humans analyze chess moves, basically using pattern matching and previous
chess experience. The program was using brute force, so it was not discarding
the branches which human players did.
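The brute-force style in that anecdote can be sketched as exhaustive game-tree search on a toy game. The sketch below uses Nim (take 1-3 stones; taking the last stone wins) rather than chess, purely for brevity, and is illustrative, not the method of any real chess engine. Because it examines every legal continuation, it cannot overlook an "unusual" winning line the way a human pattern-matcher might:

```python
# Exhaustive (brute-force) game-tree search for Nim: take 1-3 stones
# per turn; whoever takes the last stone wins.
def best_move(stones):
    """Return (move, wins): wins is True if the side to move can force
    a win from this position, found by searching every continuation."""
    if stones == 0:
        return None, False          # previous player took the last stone
    for take in (1, 2, 3):
        if take <= stones:
            _, opponent_wins = best_move(stones - take)
            if not opponent_wins:   # this move leaves the opponent lost
                return take, True
    return 1, False                 # every branch loses; play on anyway

move, winning = best_move(7)        # from 7 stones: take 3, forcing a win
```

No branch is pruned on the grounds of looking implausible, which is exactly why such a search can find a line (like the queen sacrifice) that a master's pattern matching would discard.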
Genetic algorithms are another example of a procedure which may help find
solutions that humans would have very little chance of stumbling upon.

>The other question is: if a machine were to come up with a result
>that a human could not, could we then in any way understand this
>result, and incorporate it into our sensory-motor way of being? 
>If not, what would it mean, if anything?
>
This depends on what you mean by "understand". Does anyone _really_ understand
quantum mechanics? Feynman maintained that one should not try to understand it,
just follow it as a prescription to get results which agree with experiment.
Some people refuse to accept some of the consequences of quantum mechanics
(like the EPR paradox) because they "do not understand" it. In my view,
"understanding" means reducing to (or mapping onto) our sensory experiences.
However, our sensory experiences relate to what nature is like at our size and
time scales. There is no reason that nature at very different time and size
scales should look the same. And it does not; see QM or relativity.

>
>Clay Thurmond

Andrzej 
-- 
Andrzej Pindor                        The foolish reject what they see and 
University of Toronto                 not what they think; the wise reject
Instructional and Research Computing  what they think and not what they see.
pindor@gpu.utcc.utoronto.ca                           Huang Po
