From newshub.ccs.yorku.ca!torn!utgpu!pindor Thu Oct  8 10:11:01 EDT 1992
Article 7096 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: self-evolution
Message-ID: <BvIAv1.CLr@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <92274.201057GE0QC@CUNYVM.BITNET> <BILL.92Oct1165428@ca3.nsma.arizona.edu> <1992Oct1.234648.21710@mprgate.mpr.ca>
Date: Fri, 2 Oct 1992 18:18:35 GMT

In article <1992Oct1.234648.21710@mprgate.mpr.ca> siemens@mprgate.mpr.ca (Curtis Siemens) writes:
>> Bill Skaggs writes:
>> Hofstadter, for one, argues that it does (in GEB).  He says that
>> flexibility must always be built upon a fixed substrate -- that every
>> functional system has *some* level insulated from control by higher
>> levels.  The argument seems plausible to me:  it's hard to see how a
>> system could intelligently change itself at the most fundamental
>> level.  A system surely could not predict the effect of such a change
>> in its own structure ...
>
>But don't you think that a system like a computer or brain could first
>run a simulation of its new self on top of the existing hardware/software,
>to predict the effect of changing a structure?  If it liked the effect, then
>it could go about modifying its S/W and, if it had sufficient mechanisms, it could
>perform "surgery" and change its H/W.
>
>This wouldn't work of course if the existing hardware/software didn't have
>sufficient "power" to simulate the new structure.

I do not think it is only a matter of "sufficient power". It is also a matter
of the system being able to simulate its own simulation capabilities, which
leads to an infinite regress.
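The regress can be made concrete with a small sketch (my illustration, not
from the thread; all names are made up): if a faithful simulation of the
proposed new self must include the system's own simulator, then each level
of simulation spawns another level beneath it.

```python
def simulate(system, depth=0, max_depth=5):
    """Simulate `system`, a dict describing the proposed new self.

    A *faithful* simulation must also exercise the system's own
    simulation capability, so each level spawns another level below.
    """
    if depth >= max_depth:
        # In reality there is no such cutoff; we stop only to report
        # how deep the regress has already gone.
        return depth
    # The self-model contains the simulator itself, so simulating the
    # model requires another, nested simulation of the same structure.
    nested_model = dict(system)  # copy of the proposed new structure
    return simulate(nested_model, depth + 1, max_depth)

proposed_new_self = {"hardware": "v2", "simulator": simulate}
levels = simulate(proposed_new_self)
print(levels)  # hits the artificial cutoff: 5
```

Without the artificial `max_depth` cutoff the recursion never bottoms out,
which is the point: the system never finishes predicting the effect of the
change at its own most fundamental level.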

Andrzej Pindor
-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca
