Newsgroups: comp.ai,comp.ai.alife,comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel-eecis!news.mathworks.com!newsfeed.internetmci.com!torn!tortoise.oise.on.ca!tortoise!dyeo
From: David Yeo <dyeo@tortoise>
Subject: Re: qualification for attribute acquisition 
In-Reply-To: <4qror5$rj8@life.ai.mit.edu> 
Content-Type: TEXT/PLAIN; charset=US-ASCII
Message-ID: <Pine.SOL.3.91.960627223817.14058B-100000@tortoise>
Sender: news@oise.on.ca
Nntp-Posting-Host: tortoise
Organization: Ontario Institute for Studies in Education, Toronto
References: <4pk7b5$42n@nntp.seflin.lib.fl.us> <834570973snz@chatham.demon.co.uk> <YOSHIDA.96Jun15211940@odin.ai.rcast.u-tokyo.ac.jp> <4qror5$rj8@life.ai.mit.edu> 
Mime-Version: 1.0
Date: Fri, 28 Jun 1996 03:37:07 GMT
Lines: 98
Xref: glinda.oz.cs.cmu.edu comp.ai:39605 comp.ai.alife:5803 comp.ai.philosophy:43364

On 26 Jun 1996, Marvin Minsky wrote:

> In article <YOSHIDA.96Jun15211940@odin.ai.rcast.u-tokyo.ac.jp> 
> yoshida@ai.rcast.u-tokyo.ac.jp (YOSHIDA TETSUYA) writes:
> 
> >I am interested in the above gestalt phenomena (often claimed as
> >*emergence*?). However, it may be true that several simple modules
> >connected together would do much more complicated things *compared*
> >to any one of them, but is it a fair comparison to compare the
> >complexity of the *whole* with that of a *single* one? I mean, is
> >there any proof or mechanism which explicitly shows that the
> >complexity of the work done by the collection of simple systems is
> >more than the sum of that of each one, something comparable with the
> >phenomenon of ``super speedup'' in parallel computation?
> 
> Well, this depends on how you define "complexity".  In commonsense
> terms it can be enormously harder to understand what the compound
> system does, as compared to the smaller components.  But that's a
> problem for the observer, and involves what we ought to regard as some
> sort of 'relative' complexity.
> 
> As for your last question, if we agree to use standard algorithmic
> complexity for the meaning of 'complexity', then, I think, the answer
> is almost precisely the opposite.  At most the algorithmic complexity
> is no more than the sum of those of the parts *plus* the description
> of how they're connected. Most often, though, the "complexity" is
> likely to be a great deal less than the sum of the parts -- simply
> because we're usually talking about crowds of people, or networks of
> neurons, or other such aggregates, in which the descriptions of the
> individuals share enough that the ensemble's description can be
> compressed.  And the definition of algorithmic complexity is,
> basically, the length of the maximally compressed description.
> 
> In other words, in the view of algorithmic complexity, there can't
> exist any "emergents".  They exist only in the eye of the
> uncomprehending observer -- that is, to someone unable to discover the
> optimal compressed description of what is being observed.
> 
> What about open systems, in which one might suppose that we need
> much more complex descriptions to specify the initial conditions
> required for some later state to occur?  Well, even then, the
> algorithmic approach would say that those descriptions are no larger
> than the sum of the descriptions of the machine and the specified
> final states.
> 
> Moral: Emergents don't exist if you can compute algorithmic
> complexities; however you can't, in fact, compute them.  What a pity,
> most scientists would say.  What a blessing, many others would say,
> because if we were able to know the truth, we probably wouldn't like
> it.
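
Minsky's compression point is easy to demonstrate concretely.  Here is a
small Python sketch (mine, not from the article above; the one-line
"description" of a part is made up for illustration): an ensemble of 1000
near-identical parts compresses to far less than 1000 copies of one
part's description.

```python
import zlib

# Hypothetical one-line "description" of a single neuron-like part.
part = b"unit: out = step(sum(w[i]*x[i]) + b); "

# An ensemble of 1000 near-identical parts, described by concatenation.
ensemble = part * 1000

c_part = len(zlib.compress(part))
c_ensemble = len(zlib.compress(ensemble))

# The ensemble's compressed description is vastly shorter than 1000
# copies of the part's compressed description -- the redundancy is
# squeezed out, just as in Minsky's argument.
print(c_part, c_ensemble)
```

Of course, zlib only approximates the "maximally compressed description";
true algorithmic complexity is uncomputable, which is exactly the sting
in Minsky's moral.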

IMHO you are confusing connectivity with interaction. I would certainly
agree, in fact I have argued in the past, that complexity is the sum of the
parts plus their interactions. Here I am suggesting, in particular, that
the nonlinearity generated by the interaction of the parts distinguishes
interactivity from simple connectivity.  As the general system theorist 
Ludwig von Bertalanffy (1968) once put it: 

  The meaning of the somewhat mystical expression, "the whole is more 
  than the sum of its parts" is simply that constitutive characteristics
  are not explainable from the characteristics of isolated parts.  The
  characteristics of the complex, therefore, compared to those of the
  elements, appear as "new" or "emergent". 

  (General System Theory, George Braziller Inc., New York, p. 55)

The distinction between connectivity and interaction strikes at the most
sacred tenet of science - its analytic procedure.  For it has typically
been wrongly assumed that if one divides the whole into its constituent
parts, studies the parts in isolation, and then reassembles them, one
will understand the whole.  This is simply not the case.  Chris Langton
(1989) made the point much more eloquently:

  The distinction between linear and nonlinear systems is fundamental, 
  and provides an excellent insight into why the mechanisms of life should
  be so hard to find.  ... The key feature of nonlinear systems is that
  their primary behaviours of interest are properties of the interactions
  between parts, rather than being properties of the parts themselves, and
  these interaction based properties necessarily disappear when the parts
  are studied independently. 

  (Artificial Life, Addison-Wesley, p. 41)

The distinction is easily illustrated.  Given three inputs: x1, x2, x3, the
traditional connectivity model can be represented by the equation: 

                   y1 = x1 + x2 + x3

whereas the interactivity model is given by (for instance):

                   y2 = x1 + x2 + x3 + x1x2 + x1x3 + x1x2x3

Clearly y1 != y2.  Moreover, the second equation allows a much more powerful 
set of input-output mappings (even XOR :-).
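
To see the XOR point explicitly, here is a small Python sketch (mine;
the function names are just labels for the two models above, reduced to
two inputs): the interaction term x1*x2 is exactly what a thresholded
additive model lacks.

```python
import itertools

def connectivity(x1, x2, w1, w2, b):
    # additive "connectivity" model: a weighted sum of the parts only
    return w1 * x1 + w2 * x2 + b

def interactivity(x1, x2):
    # add one interaction term: XOR(x1, x2) = x1 + x2 - 2*x1*x2
    return x1 + x2 - 2 * x1 * x2

# The interactive model reproduces XOR exactly on {0,1}^2.
for x1 in (0, 1):
    for x2 in (0, 1):
        assert interactivity(x1, x2) == (x1 ^ x2)

# Brute force: no thresholded additive model matches XOR
# (here over small integer weights; XOR is linearly inseparable,
# so no real-valued weights would work either).
found = False
for w1, w2, b in itertools.product(range(-3, 4), repeat=3):
    outs = [int(connectivity(x1, x2, w1, w2, b) > 0)
            for x1 in (0, 1) for x2 in (0, 1)]
    if outs == [0, 1, 1, 0]:
        found = True
assert not found
```

This is, of course, just Minsky and Papert's old perceptron result seen
from the other side: the "emergent" mapping lives in the interaction,
not in the parts.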

Cheers,

- David Yeo (Applied Cognitive Science, University of Toronto)
