Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!uunet!in1.uu.net!fdn.fr!jussieu.fr!pasteur.fr!univ-lyon1.fr!swidir.switch.ch!news.unige.ch!usenet
From: sylvere@divsun.unige.ch (Silvere Martin-Michiellot)
Subject: Re: Thought Question
Message-ID: <1995Mar1.133620.29709@news.unige.ch>
Sender: usenet@news.unige.ch
Reply-To: sylvere@divsun.unige.ch
Organization: University of Geneva, Switzerland
References: <3ie147$5cc@remus.rutgers.edu>
Date: Wed, 1 Mar 1995 13:36:20 GMT
Lines: 43

In article <3ie147$5cc@remus.rutgers.edu>, wclark@remus.rutgers.edu (Bill Clark) writes:
>joel@oapd.oki.com (Joel Johnson) writes:
>
>>If I understand you correctly, the above C program is the brain and the laws 
>>of physics are the environment in which the program executes 
>>(processor, memory, compiler etc.). It seems that with this model, the brain 
>>or the C program can be as trivial as we like, as long as the compiler or the laws
>>of physics are sufficiently complex. This seems to beg the question of infinite regress. 
>
>It doesn't beg the question at all.  The original argument was that self-
>modelling is impossible in finite systems, which it is not.  These last few
>posts haven't *proven* that the brain models itself, they have simply moved
>the argument away from that of infinite regress to one based upon the
>complexity/structure of the laws of physics.
>
>bill clark
>rutgers university
>

Wrong...
Self-modelling is impossible in a finite system.
What you have proven is that, given an interpreter for a language, a program written
in that language may be able to express its own code.
That is, YOU NEED AN INTERPRETER (a meta-language).
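This is exactly what a classic quine demonstrates (sketched here in Python rather than C, as an illustration): the program reproduces its own source text, but only because an external interpreter does the work of running it.

```python
# A minimal Python quine: the two lines below print themselves
# exactly. The program "expresses its own code", yet it never
# contains the interpreter that executes it -- that meta level
# stays outside the program.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Run under the Python interpreter, the output is character-for-character the two code lines themselves; strip away the interpreter and the text models nothing.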

I gave an example at the beginning of the thread: it is impossible for any
computer to print out the EXACT content of what it is, and especially the
contents of its RAM, since the program itself is in RAM and needs variables that
will change between the beginning and the end of the program.
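A toy illustration of that point (a sketch, not a real memory dump): the moment a program records its own state, the act of recording adds to that state, so the recorded copy is already stale.

```python
def snapshot_self():
    # Try to record every local variable of this function.
    snap = dict(locals())        # at this instant there are no locals yet
    # Binding 'snap' just created a new local, so the picture
    # we took no longer matches the state we are now in.
    return snap, dict(locals())

before, after = snapshot_self()
# 'before' saw an empty state; taking the snapshot changed the state.
```

The variables needed to do the printing are themselves part of the RAM being printed, which is the regress in miniature.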

Moreover, the class of universal Turing machines is incapable of self-modelling,
thanks to Gödel.

Sorry boys, that would be very nice, but it's not possible.

So, the brain can AT MOST PARTIALLY model itself, since we are at least as
powerful as a Turing machine (we can simulate one with our mind).


-----------------
Silvere MARTIN-MICHIELLOT


