From newshub.ccs.yorku.ca!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!cs.utexas.edu!tamsun.tamu.edu!mtecv2!academ01!iordonez Mon Aug 24 15:41:38 EDT 1992
Article 6679 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!cs.utexas.edu!tamsun.tamu.edu!mtecv2!academ01!iordonez
>From: iordonez@academ01.mty.itesm.mx (Ivan Ordonez-Reinoso)
Newsgroups: comp.ai.philosophy
Subject: Re: Marvin Minsky's Concious Machines
Message-ID: <iordonez.714442640@academ01>
Date: 22 Aug 92 00:17:20 GMT
References: <1992Aug13.025506.2404@news.media.mit.edu>
Sender: usenet@mtecv2.mty.itesm.mx
Lines: 121
Nntp-Posting-Host: academ01.mty.itesm.mx

minsky@media.mit.edu (Marvin Minsky) writes:

>  Conscious Machines, by Marvin Minsky

[...]

>Consistency

[...]

>If you are not a logician, then you might wonder what's all the fuss
>about.  "What could possibly be wrong with logical consistency.  Who
>wants those contradictions, anyway?"  The trouble with this is that
>the problem is worse than it looks: paradoxes start to turn up as soon
>as you permit your machine to use ordinary common-sense reasoning.
>For example, troubles appear as soon as you try to speak about your
>own sentences, as in "this sentence is false" or "this statement has
>no proof" or in "this barber shaves all persons who don't shave
>themselves."  The trouble is that when you permit "self reference" you
>can quickly produce absurdities.  Now you might say, "Well then, why
>don't we redesign the system so that it cannot refer to itself?"  The
>answer is that the logicians have never found a way to do this without
>either getting into worse problems, or else producing a system too
>constrained to be useful.

I read in Rudy Rucker's "Infinity and the Mind" that these
contradictions do not arise because we allow self reference, but because
they contain an element of non-formality. All paradoxes are reducible to
the statement

P: This sentence is not true.

If sentence P were formalizable, it could be expressed in a finite formal
logical system, which would require defining each of its elements.
The "This sentence" part is the self-referential part, and is not hard to
express (Godel's theorem is based on formalizing a self-referential
sentence). The "is not true" part can be restated as simply NOT (is
true). The hard part comes when we try to formalize the "is true" part,
because truth is not formalizable. No matter how complex a formal system
is, there will be sentences expressible in the language of the system
that are true but not provable from within the system. Therefore there
is no way to write sentence P in any finite formal language, because
doing so would mean formalizing the concept of mathematical truth, which
cannot be done.
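To see concretely why evaluating sentence P can never settle on a truth
value, here is a small sketch of my own (not from Rucker or Minsky): model
a "sentence" as a function returning its truth value, and let P assert its
own falsehood. Evaluation then never bottoms out, and the interpreter's
recursion limit plays the role of the system failing to assign P a value.

```python
# "This sentence is not true": P's truth value is defined as the
# negation of P's own truth value, so evaluating P calls P again,
# forever -- no finite evaluation ever assigns it True or False.

def P():
    return not P()

try:
    P()
    verdict = "P evaluated"     # unreachable: evaluation cannot finish
except RecursionError:
    verdict = "no truth value"  # the evaluation never terminates

print(verdict)  # -> no truth value
```

The point of the sketch is only that the self-negating definition is
well-formed syntax yet has no terminating semantics, which mirrors the
claim above that P cannot be captured in a finite formal system.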

But we can express sentence P! I think Penrose would argue that this is
because our insight allows us to understand truth beyond any formality.
I guess Minsky would argue this is because we are inconsistent.

[...]

>Consciousness.


>In any case, we have much the same problem with ourselves; try asking
>a friend to describe what having consciousness is like.  Good luck!
>Most likely you hear only the usual patter about knowing oneself and
>being aware, of sensing one's place in the universe, and so on.  Why
>is explaining consciousness so dreadfully hard?  I'll argue that this
>is something of an illusion, because consciousness is actually easier

I do not understand. Consciousness is an illusion? Then who experiences
the illusion, given that we are our consciousnesses?

>to describe than most other aspects of mind; indeed, our problem is a
>far more general one, because our culture has not developed suitable
>tools for discussing and describing thinking in general.  This leads
>to what I see as a kind of irony; it is widely agreed that there are
>"deep philosophical questions" about subjectivity, consciousness,
>meaning, etc. But people have even less to say about questions they'd
>consider more simple:

>	How do you know how to move your arm?
>	How do you choose which words to say?
>	How do you recognize what you see?
>	How do you locate your memories?
>	Why does Seeing feel different from Hearing?
>	Why does Red look so different from Green?
>	Why are emotions so hard to describe?
>	What does "meaning" mean?
>	How does reasoning work?
>	How do we make generalizations?
>	How do we get (make) new ideas?
>	How does Commonsense reasoning work?
>	Why do we like pleasure more than pain?
>	What are pain and pleasure, anyway?

Almost all of these questions are about doing things: moving, recognizing,
reasoning, making, etc. In my opinion, consciousness is not something that
we do, but something that we are. I cannot explain which parts of my
brain I use to plan the movements of whichever muscles of my arm are
involved, but I think that asking this has very little to do with asking
what consciousness is.

[...]

>Then what might be the functions and the organs of what we call
>consciousness?  To discuss this, we'll have to agree on what we're
>talking about -- so I'll use the word consciousness to mean the
>organization of different ways we have for knowing what is happening
>inside your mind, your body, and in the world outside.  Here is my
>thesis; some people may find it too radical:

>	We humans do not possess much consciousness.
>	That is, we have very little natural ability to sense
>	what happens within and outside ourselves.

>In short, much of what is commonly attributed to consciousness is
>mythical -- and this may in part be what has led people to think that
>the problem of consciousness is so very hard.  My view is quite the
>opposite: that some machines are already potentially more conscious
>than are people, and that further enhancements would be relatively
>easy to make.

If being conscious means knowing one's own functioning, this may be true,
but, as I said before, I think consciousness relates more to being than
to doing things like 'knowing' or 'explaining'.


Ivan Ordonez-Reinoso
iordonez@mitras.mty.itesm.mx


