Newsgroups: sci.nonlinear,sci.cognitive,comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!swrinde!cs.utexas.edu!news.sprintlink.net!noc.netcom.net!netcom.com!doug
From: doug@netcom.com (Doug Merritt)
Subject: Re: Chaos and Computation
Message-ID: <dougD8FM8y.GI6@netcom.com>
Organization: Netcom Online Communications Services (408-241-9760 login: guest)
References: <3opic3$kbb@uuneo.neosoft.com> <pecora-100595085205@lou-pecora.nrl.navy.mil> <kovskyD8Dwu7.AAG@netcom.com>
Date: Thu, 11 May 1995 20:58:10 GMT
Lines: 64
Sender: doug@netcom19.netcom.com
Xref: glinda.oz.cs.cmu.edu sci.nonlinear:3119 sci.cognitive:7571 comp.ai.philosophy:27954

In article <kovskyD8Dwu7.AAG@netcom.com> kovsky@netcom.com (Bob Kovsky) writes:
>	They said:  [...]
>notion of memory as a replica or a transformation of "information" given
>in the world (human memories are highly context- and affect-sensitive and
>to some extent nonveridical);

Yes, but "everyone" knows that; sophisticated models (as opposed to
straw man models) have many layers of transformation and preprocessing,
so e.g. context sensitivity occurs inherently as a matter of the
particular features that are input to any given layer.

>the conception of memory retrieval as the
>relaxation of a network to a stable state (a brain is continually exposed
>to changing input patterns and has no opportunity to freeze them while
>waiting for an approach to equilibrium)...[more of the same]

This is even more of a straw man; for instance, pipelined processing
does not require inputs to be frozen; the relaxation can be achieved via
a sequence of processing stages -- an unrolled feedback layer.

Or suppose that in an artificial system it *is* implemented as a single
layer, in which case the question is how fast it can achieve that
relaxation. If it's fast enough, then inputs will have changed only
negligibly by the time it finishes.
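To make the "unrolled feedback" point concrete, here's a minimal sketch (my own illustration, not from any particular model -- the names `step`, `pipeline`, and `N_STAGES`, and the 0.5 update rate, are all invented for the example). Each pipeline stage applies one relaxation update and can see a fresh input snapshot, so nothing need be frozen while waiting for equilibrium:

```python
# Hypothetical sketch of relaxation "unrolled" into a fixed pipeline
# of update stages, rather than a loop run to equilibrium on a frozen
# input. Each stage gets the current (possibly drifted) input.

def step(state, inp):
    # One relaxation update: move the state a fraction of the way
    # toward the current input pattern.
    return [s + 0.5 * (x - s) for s, x in zip(state, inp)]

N_STAGES = 8

def pipeline(state, inputs):
    # inputs: one snapshot per stage -- the input is free to change
    # between stages, and the result still lands near it.
    for inp in inputs[:N_STAGES]:
        state = step(state, inp)
    return state

# A slowly drifting input: the first component creeps upward a bit
# at every stage, yet the output still tracks it closely.
inputs = [[1.0 + 0.01 * k, -1.0] for k in range(N_STAGES)]
out = pipeline([0.0, 0.0], inputs)
```

With eight stages at rate 0.5 the residual gap shrinks geometrically, so the output sits within a few percent of the (moving) input -- the "fast enough" case above.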

Only an idiot would hold the view they are refuting -- that the brain
expects the world to freeze for a long time while it gets busy processing
its relaxation.

The insightful way of looking at relaxation is that it is simply
an implementation of the mathematical notion of approximating
the fixed point of a topological system, which is a powerful
idea indeed...these guys should read Kohonen. The particular
method of implementing convergence to a fixed point is moderately
irrelevant at this level of discussion.
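The mathematical notion in question can be shown in a few lines. This is my own toy illustration (the function choice and tolerances are arbitrary, not anything from the thread): for a contraction mapping f, simply iterating x <- f(x) converges to the point x* with f(x*) = x*, and *how* the iteration is implemented -- loop, pipeline, analog settling -- doesn't change that:

```python
# Sketch: relaxation as fixed-point approximation. cos() is a
# contraction near its fixed point, so repeated application
# converges to the x* satisfying cos(x*) = x*.

import math

def relax(f, x, tol=1e-10, max_iter=1000):
    # Iterate x <- f(x) until successive values stop changing.
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    return x

x_star = relax(math.cos, 1.0)   # converges to ~0.739085
```

The stable state a network relaxes into plays exactly the role of x* here; the network dynamics are just one implementation of the iteration.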

>	"These unrealistic features should be a warning that something is
>seriously amiss in the basic assumptions behind the AI paradigm, even as
>modified by the introduction of parallel processing in
>neural-network-inspired systems..." 

Maybe. :-) Certainly artificial neural nets are no panacea.

As for the issue that started this subthread, about whether artificial
neural nets are sufficient to model the brain -- the discussion immediately
went off-track, in arguing whether ANN's are biologically realistic.
It is very widely known that they are not, except in a few near-trivial
ways, no question about it.

However, it is not necessary for ANN's to be biologically realistic,
any more than a VLSI CPU needs to be. The question is, what is their
computational power -- a question that was already answered here.

What the *brain* is capable of doing (e.g. is Penrose right about
quantum computation in the brain?) is quite another issue, and one
that is very far from answered, even on far less controversial
issues than Penrose's, such as the role of the hundreds of unstudied
neurotransmitters.
	Doug
-- 
Doug Merritt				doug@netcom.com
Professional Wild-eyed Visionary	Member, Crusaders for a Better Tomorrow

Unicode Novis Cypherpunks Gutenberg Wavelets Conlang Logli Alife HC_III
Computational linguistics Fundamental physics Cogsci Egyptology GA TLAs
