Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!usenet.eel.ufl.edu!news-feed-1.peachnet.edu!gatech!howland.reston.ans.net!pipex!oleane!jussieu.fr!univ-lyon1.fr!swidir.switch.ch!news.unige.ch!usenet
From: sylvere@divsun.unige.ch (Silvere Martin-Michiellot)
Subject: Re: Thought Question
Message-ID: <1995Mar13.110044.24081@news.unige.ch>
Sender: usenet@news.unige.ch
Reply-To: sylvere@divsun.unige.ch
Organization: University of Geneva, Switzerland
References: <3jpnvj$r51@oznet03.ozemail.com.au>
Date: Mon, 13 Mar 1995 11:00:44 GMT
Lines: 88

In article r51@oznet03.ozemail.com.au, Alan Tonisson <tonisson@ozemail.com.au> writes:
>sylvere@divsun.unige.ch (Silvere Martin-Michiellot) wrote:
><<deleted quote of previous message by Bill Clark>>
>> Self modelling is impossible in finite system.
>> what you have proven is that given an interpreter of a language, a program written
>> in that languge may be able to express it's own code.
>> That is YOU NEED AN INTERPRETER (meta language).
>> 
>> I gave an example at the begining of the thread : it is impossible for any
>> computer to print down the EXACT content of what it is and especially the
>> contents of its RAM, since the program is in RAM and you'll need variables that will
>> change betwwen the beginning and the ending of the program.
>> 
>> Moreover, the class of Turing Universal Machines is unable of self modelling,
>> thanks to Godel.
>> 
>> Sorry boys that would be very great but it's not possible.
>> 
>> So, the brain can PARTIALLY model itself (AT MOST) since we are at least as
>> powerful as a Turing Machine (we can simulate it with our mind).
>> 
>> 
>> -----------------
>> Silvere MARTIN-MICHIELLOT
>> 
>> 
>Unless your brain is much more powerful than mine, it has no chance of modelling
>anything more than a very simple Turing Machine.  Talking about Universal Turing
>machine proves nothing at all.

When I say that we can model any Turing machine, I mean that we are able to 
REPRODUCE every computation it does. That (and only that) can be considered a 
model. I say nothing about how we achieve this, but in any case, we can do it.

A bit more about what I call a model:
a model is a (finite or infinite) set of rules that, when used to make valid
combinations, can reproduce/predict the observable behaviour of a phenomenon.
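To make "reproduce every computation" concrete, here is a minimal sketch of a Turing machine simulator in Python (the rule encoding and all names are my own invention, not anything from the thread): a machine is just a table of rules, and following the table step by step reproduces its computation.

```python
# A minimal Turing machine simulator (a sketch; the rule encoding is
# my own).  `rules` maps (state, symbol) -> (new_state, write, move).
def run_tm(rules, tape, state="start", pos=0, max_steps=1000):
    cells = dict(enumerate(tape))              # sparse tape, blank = "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        state, write, move = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, "_") for i in range(lo, hi + 1))

# Example machine: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
print(run_tm(flip, "0110"))                    # -> 1001_
```

The simulator "models" the flip machine in exactly the sense above: its rules reproduce the machine's observable behaviour, without saying anything about how a brain would do the same.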

>It might be possible to prove that a finite system cannot model itself, but I haven't
>seen anything in this thread which resembles a proof.  Just because a Universal
>Turing Machine cannot model itself, it does not mean that no finite system can model
>itself.  The human mind is nothing like a Universal Turing Machine.

If a big system can't model itself, then a small system in the class of the big
system will not be able to model itself either: small systems are not powerful enough.
My demonstration for simple systems was just an attempt to show that a bit of
reflection and an example were enough.
In fact, for big systems, a famous mathematician whose name I won't give has shown
that complex systems (with computational power) are never able to demonstrate every
property of the things they are able to talk about. He demonstrated this by putting
the formal system inside the formal system itself.
So Turing Machines are unable to model themselves. Neither can we.
But does that mean we are unable to think about ourselves?
No. It is even what we do most of the day.
But we'll never reach the deep core of ourselves.
(I can tell more about this if you want me to)
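The flavour of that "formal system inside the formal system" trick can be sketched with Turing's halting argument, a close cousin of the theorem I alluded to. This is only an illustration, and every name in it is my own:

```python
# Diagonal argument sketch: given any claimed halting decider
# `halts(f, x)`, build a program that does the opposite of what
# the decider predicts about it.
def make_trouble(halts):
    def trouble(f):
        if halts(f, f):          # decider says "f halts on itself"...
            while True:          # ...so loop forever instead
                pass
        return "halted"          # decider said "loops", so halt
    return trouble

# A decider that always answers "does not halt":
def never_halts(f, x):
    return False

t = make_trouble(never_halts)
print(t(t))                      # -> halted  (refuting never_halts)
```

The symmetric decider that always answers "halts" fails too, by making `t(t)` loop forever. No decider escapes both traps, and that self-referential gap is the kind of limit on self-modelling I mean.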

>Though I will
>agree that we can assume that the mind is no more powerful than a Turing machine with
>a finite tape.  I don't believe that there is anything magical or mystical about the
>human mind.
>

I agree with you, but this is just our philosophical point of view.

>I don't even know what it means to say that a finite system models itself.

Someone was talking about a program that could write down its own code, saying
that this program was modelling itself. The word modelling was misused.
I think he meant that the self-writing program had a meta view of itself.
He was wrong.
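For what it's worth, such self-writing programs do exist; they are called quines, and they print their exact source without ever inspecting their own memory. A minimal sketch in Python (the whole program is the two lines below):

```python
# A quine: its output is exactly its own source code.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The self-reproduction comes from quoting (`%r`) a template that contains itself, not from any meta view of the running program, which is exactly why printing one's code is not the same as modelling oneself.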

>I think that before you try to invoke Godel's Theorem you should clearly state what it
>is that you are trying to prove.

Sorry, I shouldn't have written things about Godel in my previous mail.

>Alan Tonisson (tonisson@ozemail.com.au)



-----------------

"Is anyone alive down there ?"

Silvere MARTIN-MICHIELLOT


