From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!olivea!spool.mu.edu!caen!destroyer!cs.ubc.ca!mprgate.mpr.ca!mprgate.mpr.ca!siemens Thu Oct  8 10:10:55 EDT 1992
Article 7086 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!olivea!spool.mu.edu!caen!destroyer!cs.ubc.ca!mprgate.mpr.ca!mprgate.mpr.ca!siemens
From: siemens@mprgate.mpr.ca (Curtis Siemens)
Newsgroups: comp.ai.philosophy
Subject: Re: self-evolution
Message-ID: <1992Oct1.234648.21710@mprgate.mpr.ca>
Date: 1 Oct 92 23:46:48 GMT
References: <92274.201057GE0QC@CUNYVM.BITNET> <BILL.92Oct1165428@ca3.nsma.arizona.edu>
Sender: news@mprgate.mpr.ca
Reply-To: siemens@mprgate.mpr.ca (Curtis Siemens)
Organization: MPR Teltech Ltd.
Lines: 17

> Bill Skaggs writes:
> Hofstadter, for one, argues that it does (in GEB).  He says that
> flexibility must always be built upon a fixed substrate -- that every
> functional system has *some* level insulated from control by higher
> levels.  The argument seems plausible to me:  it's hard to see how a
> system could intelligently change itself at the most fundamental
> level.  A system surely could not predict the effect of such a change
> in its own structure ...

But don't you think that a system like a computer or brain could first
run a simulation of its new self on top of the existing hardware/software,
to predict the effect of changing a structure?  If it liked the effect, it
could go about modifying its S/W and, if it had sufficient mechanisms, it
could perform "surgery" and change its H/W.

This wouldn't work, of course, if the existing hardware/software didn't
have sufficient "power" to simulate the new structure.


