From newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!trwacs!erwin Mon Aug 24 15:41:47 EDT 1992
Article 6690 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!trwacs!erwin
From: erwin@trwacs.fp.trw.com (Harry Erwin)
Newsgroups: comp.ai.philosophy
Subject: Re: Freewill, chaos and digital systems
Message-ID: <708@trwacs.fp.trw.com>
Date: 23 Aug 92 22:53:58 GMT
References: <25048@castle.ed.ac.uk> <1992Aug20.192242.2728@mp.cs.niu.edu> <706@trwacs.fp.trw.com> <1992Aug23.045218.4408@wl.com>
Organization: TRW Systems Division, Fairfax VA
Lines: 107

schuette@wl.com (Wade Schuette) writes:

>In article <706@trwacs.fp.trw.com> erwin@trwacs.fp.trw.com (Harry Erwin)
>writes:
>>Let's see.
>>1. There's determined and predictable behavior.
>>2. There's determined but unpredictable behavior. (chaos in a general
>>sense).
>>3. There's random, ergodic behavior. (time average = average over state
>>space.)
>>4. There's random, non-ergodic behavior. (random behavior, but statistics
>>change with time.)
>>5. Then there's free-will. 
>>
>>How as an observer do I distinguish 5 from 1-4?
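
The taxonomy quoted above can be made concrete. A toy sketch (my own
illustration, not part of the original exchange) contrasting category 3
with category 4: for an ergodic process the time average of one long run
converges to the state-space average, while for a non-ergodic process
like a random walk the statistics drift with time:

```python
import random

random.seed(1)
N = 100_000

# Category 3: ergodic -- iid uniform noise. The time average of one long
# trajectory converges to the state-space mean (which is 0 here).
ergodic = [random.uniform(-1.0, 1.0) for _ in range(N)]
time_avg = sum(ergodic) / N

# Category 4: non-ergodic -- a simple random walk. Its variance grows with
# time, so the time average of a single trajectory does not settle to any
# ensemble statistic; each run gives a different answer.
walk, x = [], 0.0
for _ in range(N):
    x += random.choice((-1.0, 1.0))
    walk.append(x)
walk_avg = sum(walk) / N

print(f"ergodic time average:     {time_avg:+.4f}")
print(f"random-walk time average: {walk_avg:+.4f}")
```

The ergodic average lands near zero on essentially every run; the walk's
average can land anywhere and changes from seed to seed.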

>It seems to me that the question of free-will has a lot to do with 
>whether "I" determine my behavior, or whether "it/they" determine it
>for me.  If so, the scope of "I" is an issue, in both space and time.

>At the summer conference of the Institute on Religion in an Age of Science
>in 1986 (or so) we spent a week thrashing around determinism and free will,
>without much resolution.

>There is a subtle sort of recursion to the problem.  My behavior may be
>"determined" by my companions of the past year, but, then again, "I" had
>some say in selection of those companions, but then again... 

...

>The sense I get from chaos is that a whole lot MORE of the
>world is a whole lot LESS determined externally than we thought.
>Another question might be then, what exactly does it mean to say
>"It's up to you..."  Kinda means it's determined by a complex function
>of recall of the past, perception of the present, and mental model of
>the future by the decision-making agent, along with what they had for
>breakfast and whether they made that red light or not on the way to work.

Most non-linear processes appear to be actually chaotic, and most
processes appear to be non-linear; most "random" processes appear to be
actually chaotic as well. So yes. One thing most people interested in
chaos don't appreciate is that any model of a chaotic process diverges
from the process itself: an arbitrarily small error in the initial state
grows exponentially, so the model's predictions fail after a finite
horizon even though nothing random is going on.
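
That divergence is easy to demonstrate. A minimal sketch (my
illustration, not from the thread), using the logistic map x -> 4x(1-x)
as the "real" process and an almost-perfect model that mis-measures the
initial state by one part in 10^10:

```python
# The "real" chaotic process and its "model" are the same logistic map;
# the model's only flaw is a 1e-10 error in the initial state.
def logistic(x):
    return 4.0 * x * (1.0 - x)

true_state, model_state = 0.3, 0.3 + 1e-10
for step in range(1, 61):
    true_state = logistic(true_state)
    model_state = logistic(model_state)
    if step % 10 == 0:
        print(f"step {step:2d}: |model - true| = {abs(model_state - true_state):.3e}")
```

The gap is amplified at every iteration (a positive Lyapunov exponent),
so within a few dozen steps the model tells you essentially nothing
about the real trajectory, even though both are fully determined.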


>To add to that, a good deal of the perception or model may have to do
>with what they THINK that other decision-makers are going to think or
>do or say or react to the contemplated actions.  So now I'm determined
>by whether he's determined by what SHE is going to do... 

This was what I was studying--in essence--in my work on non-zero sum games
with information collection. I showed that the strategy for this game
could be expected to evolve chaotically if people were playing it and
non-chaotically if Darwinian evolution dominated it. It's a funny test for
culture, but the simulation results were pretty clear.
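
Not the actual simulation, but the flavor of adaptive play in a
non-zero-sum game can be sketched in a few lines. In this toy (all
payoffs hypothetical), each player collects information--the empirical
frequencies of the opponent's past moves--and best-responds to it
(fictitious play), in a 2x2 game whose best-response structure has no
pure equilibrium, so the joint action keeps cycling instead of settling:

```python
# Hypothetical payoff tables, both indexed [own move][opponent's move].
# Chosen so the pure best responses chase each other in a cycle.
ROW_PAY = [[1, 4], [2, 3]]   # row player's payoffs
COL_PAY = [[3, 0], [1, 2]]   # column player's payoffs

def best_response(payoffs, opp_counts):
    """Best reply against the opponent's empirical action frequencies."""
    total = sum(opp_counts)
    evs = [sum(payoffs[a][b] * opp_counts[b] / total for b in range(2))
           for a in range(2)]
    return 0 if evs[0] >= evs[1] else 1

row_view, col_view = [1, 1], [1, 1]   # uniform prior over opponent's moves
row_hist, col_hist = [], []
for _ in range(200):
    r = best_response(ROW_PAY, row_view)   # row reacts to col's history
    c = best_response(COL_PAY, col_view)   # col reacts to row's history
    row_hist.append(r)
    col_hist.append(c)
    row_view[c] += 1                       # "information collection"
    col_view[r] += 1

print("last 12 joint moves:", list(zip(row_hist, col_hist))[-12:])
```

Each player ends up playing both moves in ever-shifting blocks; neither
pure strategy ever becomes stable, because each reaction changes the
history the other player is reacting to.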

BTW, the evolutionary path to culture thus involves a significant
bifurcation. We had to learn to play chaotic games--a highly non-trivial
advance, because it depends on being able to "read the other person's
mind." That's why I'm interested in distributed cognition in social groups
--we're preadapted for it. Karl Pribram calls it "vibes." It's not ESP,
but it looks like it.

>...
>Which raises one more question:  can a COLLECTIVE of agents have 
>free-will, even if no single one alone has, even remotely, free-will?
>This is more relevant to the 10^10 neurons you call a brain, or the
>ant-hill or bee colony or State of Massachusetts.

>Or, a massively parallel whatever.

Answer: a collective of agents can certainly show cognition. It doesn't
take a very large collective, either. The basic model is somewhat akin to
my crude (and false) model of the cerebellum--biological HDP, with
pseudo-holographic memory resident in the population at large. It's
beginning to clarify some strange aspects to urban culture that I've been
looking at with Sander van der Leeuw and John Dockery. John has evidence
that a social group stops functioning once its lines of communication are
sufficiently disrupted, even though the damage amounts to a relatively
small percentage of the whole. Sander has evidence that Zipf's Law for
urban center sizes is connected in some fashion with the culture being
organized as a heterogeneous hierarchy of subunits, with _some_ subunit
being the right size for handling any given problem. The point is that
distributed cognition can occur, with no one individual being
_responsible_ for the results--because no one individual did very much to
bring on the consequences. 
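
The rank-size form of Zipf's Law mentioned above is easy to state
concretely: the k-th largest center has roughly 1/k the population of
the largest, so rank times size is roughly constant. A quick check on
hypothetical city sizes (illustrative numbers only, not Sander's data):

```python
# Hypothetical populations, roughly Zipf-distributed (largest ~8.4M).
sizes = sorted([8_400_000, 4_050_000, 2_750_000,
                2_100_000, 1_690_000, 1_400_000], reverse=True)
products = [rank * size for rank, size in enumerate(sizes, start=1)]
for rank, (size, prod) in enumerate(zip(sizes, products), start=1):
    print(f"rank {rank}: size {size:>9,}  rank*size = {prod:>9,}")
print(f"spread of rank*size: max/min = {max(products) / min(products):.2f}")
```

For data obeying the law, the rank*size products stay within a few
percent of each other even as the sizes themselves span a large range.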

Think about it--how smart is a city? You have to consider it as a
collective with emergent cognition. As smart as a paramecium or a
flatworm, perhaps? Smart enough to survive, at least. Perhaps smart
enough to be dangerous. How smart is a nation? Perhaps smart enough to
be really dangerous.

Cheers,









-- 
Harry Erwin
Internet: erwin@trwacs.fp.trw.com



