Newsgroups: comp.ai.alife,comp.ai.philosophy,comp.ai,alt.consciousness
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornell!travelers.mail.cornell.edu!news.kei.com!news.mathworks.com!zombie.ncsc.mil!news.duke.edu!agate!howland.reston.ans.net!ix.netcom.com!netcom.com!departed
From: departed@netcom.com (just passing through)
Subject: Re: Thought Question
Message-ID: <departedD4HoF4.Jnr@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <vlsi_libD45qrE.zx@netcom.com> <3iggvt$r38@mp.cs.niu.edu> <departedD4G07E.9Fq@netcom.com> <3ijdad$4hu@mp.cs.niu.edu>
Date: Fri, 24 Feb 1995 05:25:51 GMT
Lines: 163
Sender: departed@netcom20.netcom.com
Xref: glinda.oz.cs.cmu.edu comp.ai.alife:2569 comp.ai.philosophy:25735 comp.ai:27742

In article <3ijdad$4hu@mp.cs.niu.edu>, Neil Rickert <rickert@cs.niu.edu> wrote:
>In <departedD4G07E.9Fq@netcom.com> departed@netcom.com (just passing through) writes:
>>In article <3iggvt$r38@mp.cs.niu.edu>, Neil Rickert <rickert@cs.niu.edu> wrote:
>>>In <departedD4FA2s.M23@netcom.com> departed@netcom.com (just passing through) writes:
>
>>>>Hence, from the outside, if entity X were hungry and it said, "Let's
>>>>go to Burger King, it's cheap"  I would consider that a better indication
>>>>of subjectivity than its forcing its face into a pile of food.
>
>>>Entity A (= my automobile) from time to time flashes a red light on
>>>the dashboard which, in effect, says "let's go to the gas station
>>>and drink some gasoline."  Is it conscious?
>
>>Naw, your car isn't transforming the information in any interesting way.
>
>You seem to have transformed the ambiguity of "consciousness" to the
>ambiguity of "any interesting way".  Since I presume "interesting" is
>to be determined by a human, you have essentially come to the
>conclusion that a system is conscious if people agree that it is
>conscious.

Nah.  (I knew somebody was going to pick up on 'interesting' as
anthropomorphic.)  What I mean by an interesting way is one that can
translate the information into a different domain: a differently
structured domain, with different information, and perhaps more
complexity.
Consider various entities watching a dog scratch:
Dog scratching --> domain of pixels changing colors.  Boring.
Dog scratching --> domain of shapes that move.  Uninteresting.
Dog scratching --> domain of creatures that maintain bodies.
                   The dog is itching.  More interesting.
Dog scratching --> domain of many creatures that interact with each other.
                   The dog has fleas, making it itch.  More interesting.
Dog scratching --> domain of all things in your life that act or can be
                   acted upon, in whatever manner.
                   E.g. get a flea collar.  Most interesting.

Now, these domains are not differently interesting simply from a human
point of view.  They are more interesting because each successive
domain is more elaborate and more finely constructed, and takes the
resident information along different paths.  (Nevertheless, the
information does go from one domain to another.)
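
To make this concrete, here's a toy sketch (Python; every name in it
is made up for illustration) of one observation being re-described in
successively richer domains, each adding structure the previous one
lacked:

    def pixel_domain(obs):
        # Domain 1: intensities changing -- no structure beyond color.
        return {"event": "pixels changing colors"}

    def shape_domain(obs):
        # Domain 2: moving shapes -- boundaries, but no body or purpose.
        return {"shape": "dog-sized blob", "motion": "rhythmic limb motion"}

    def creature_domain(obs):
        # Domain 3: creatures that maintain bodies -- motion becomes an act.
        return {"creature": "dog", "act": "scratching", "state": "itching"}

    def social_domain(obs):
        # Domain 4: creatures interacting -- a cause appears.
        return {"creature": "dog", "state": "itching", "cause": "fleas"}

    def practical_domain(obs):
        # Domain 5: things you can act on -- a plan appears.
        return {"creature": "dog", "cause": "fleas",
                "plan": "get a flea collar"}

    observation = "dog scratching"
    for domain in (pixel_domain, shape_domain, creature_domain,
                   social_domain, practical_domain):
        print(domain.__name__, "->", domain(observation))

Each function is a stand-in for a whole domain, of course; the point
is only that the same input lands in structures with more and more
parts.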

Point being, you could objectively measure the elaborateness of a space.
1) Pixels can be darker, lighter, different colors.
2) Shapes can change boundaries, form, dissolve, shift, have different colors.
3) Critters can walk, sit, lie, avoid pain, deal with hunger, move,
   change shape, be darker or lighter or different colors.
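
A crude way to picture such a measurement (a Python toy with made-up
numbers, not a serious metric):

    # Count the state variables and admissible acts each domain supports.
    domains = {
        "pixels":   (["brightness", "color"],
                     ["darken", "lighten", "recolor"]),
        "shapes":   (["boundary", "form", "color"],
                     ["shift", "dissolve", "deform", "recolor"]),
        "critters": (["posture", "hunger", "pain", "location", "color"],
                     ["walk", "sit", "lie", "avoid pain", "eat",
                      "change shape", "recolor"]),
    }
    for name, (variables, acts) in domains.items():
        print(name, "elaborateness ~", len(variables) * len(acts))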

A dog taking a walk doesn't imply much in pixel-space besides more
shifting pixels; in critter-space you can foresee it dropping an
'offering'; in human-space, you can connect it to the need to take
along a paper bag so as not to offend other humans.

In a new space, you have a different description that lets you make
connections not made before, and that also reduces the possibilities of
the previous space.  Information is bounded differently; you would
expect your dog to perhaps hunch up, but not dissolve into a puddle
(which might be considered reasonable within the 'shapes' domain).  On
the other hand, calling it 'a dog' might explain things about the
shape, like part of it elongating (raising a leg).

And so on ... 

>>The throughput is very direct -- gas low --> blinking light.  It is you
>>who is being conscious,
>
>Of course, I agree.  That is why I introduced the blinking light.  I
>think it demonstrates that your proposed way of identifying
>consciousness doesn't work.

???  What's your point?  Does your car ever do anything about low gas
that indicates it's mapping that information through a complex space ??? 

Is the red light supposed to be a complex remapping, since it 'means'
'low on gas, go to the station'?  I think not; the information mapping
your car is using (if examined) is dirt-simple.  Repeated experiments
won't reveal any other remapping for different inputs.  That's about
all you can do from the outside.
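
If you like, here's what that experiment looks like in miniature
(Python; 'fuel_warning_light' is my stand-in for the car, not anyone's
real code):

    def fuel_warning_light(fuel_level):
        # The car's entire 'mapping': one threshold, one output.
        return "red light" if fuel_level < 0.1 else "no light"

    def probe(system, inputs):
        # Group the inputs by the response they provoke.
        mapping = {}
        for x in inputs:
            mapping.setdefault(system(x), []).append(x)
        return mapping

    levels = [i / 100.0 for i in range(101)]
    responses = probe(fuel_warning_light, levels)
    print(len(responses), "distinct responses:", sorted(responses))

However long you run it, two output classes are all you ever see; the
remapping is trivial.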

>>I think there's a lot more going on in consciousness than this kind of
>>writing a script, but I think being able to take 1 bit of information
>>and transform this input via a well formed world into an elaborate
>>construction is pretty indicative of consciousness.
>
>Is transforming it indicative of consciousness?  Or must it be
>transformed "in an interesting way" as you suggested above?  If the
>latter, then how do we determine which types of transformation are
>interesting?

Those that take it into a different domain, one which has different
behaviors for the information.  Transforming is indicative of
consciousness; more transforming is indicative of more consciousness.
If information is not much transformed, the system is not showing much
consciousness.  Heavily transformed is very interesting, very
conscious-seeming.

Dog scratching?  You could help it scratch; not very interesting; the
information is at the level of "there's an itch."
Now if you translate that into "there's something causing it," and
that into "the dog has fleas" and that into "the dog is hateful" and
that into "give the dog to a friend who likes dogs" -- then in that
process you're translating the original input into many different
worlds, the more 'interesting' of which have many more possible points
of departure for your train of thought (while still keeping a relation
to the topic).
A script could create this sequence; the 'interesting' part is where
you get almost unlimited possibilities for reactions to your dog
scratching, within elaborate and consistent worlds of dogs, fleas,
friends, scratching and so on.  You could even get philosophical and
ponder sadly how fleas must torment dogs in order to live.  A far cry
from an image of a dog flexing its leg -- this domain of philosophy is
going to treat its information very differently from anything that
manipulates images or movies.

Essentially, you're threading information along axes that reside in
more and more complex spaces.  These spaces still have access to (or
shall we say depend on) the less complex spaces.

>>You could have a doll which has a string attached so that when you yank it,
>>it emits the sounds, "I'm hungry, let's go to Burger King, it's cheap."
>>But over time, you begin to suspect that it's incapable of transforming
>>the string-yank input into anything different, and hence not conscious.
>>On the other hand, if your friend sometimes says, "I'm hungry, let's eat
>>at BK" or sometimes says, "I'd rather be hungry for a while" or sometimes
>>says, "Let's eat later and catch the movie first," then you begin to
>>suspect that yes, he does have an inner life which is changing what hunger
>>means to him.
>
>Let's take the doll, and add a random number generator.  We use it so
>that it randomly chooses 1 of 100 reasonable responses.  Is it now
>conscious?  If not, then you still haven't made your point.

First of all, I'm talking about indicators of consciousness, not proof
of it.  Any test that relies upon examining productions may be fooled
for some length of time; consider a huge memory full of phrases and a
fairly involved Markov machine for generating likely new ones, faced
with a Turing test.  So, yes, 100 different responses would be a better
indicator of consciousness, but unfortunately they prove nothing at
all.  I'm just saying that you should look for evidence of
transformations, not that any such test can't be fooled over a limited
time.

(E.g. a car factory emits cars.  Fair enough?  Unfortunately, a parking
 garage will too...  just not indefinitely.)
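
For the flavor of it, here is the smallest possible such phrase
machine (Python; the 'corpus' is a few lines from this thread, where a
real fooler would need that huge memory):

    import random

    corpus = ("i am hungry let's eat at burger king "
              "i am hungry let's eat later "
              "let's catch the movie first").split()

    # Bigram table: each word maps to its observed successors.
    table = {}
    for a, b in zip(corpus, corpus[1:]):
        table.setdefault(a, []).append(b)

    word, out = "i", ["i"]
    for _ in range(8):
        word = random.choice(table.get(word, corpus))
        out.append(word)
    print(" ".join(out))

It emits plausible new sentences indefinitely without transforming one
bit of information about hunger, movies, or anything else -- which is
exactly why output-only tests mislead.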

As I said before, you need to either be able to look into the doll's 
handling of information, or be able to test it indefinitely.

Secondly (since you're probing the doll for transformations), you'd
ideally like to have a more complex input than a string-yank.

Thirdly, you'd catch on to the severe limitations of the doll's internal
information space after yanking the string a few hundred times.

Fourthly, one reason the doll may fool you for a while is that it is
emitting the products of someone else's transformations, which do
indicate consciousness.  So you do have a window into a consciousness
that took place some time ago and elsewhere -- just not the doll's.

Finally, something I didn't talk about before -- I think an essential
attribute of consciousness is the changing of the information space you
use to map things with.  Will the doll ever modify (in the long run) its
opinions about your pulling its string?
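
As a cartoon of the difference (Python again; obviously not a theory
of consciousness, just the shape of the test):

    class Doll:
        def react(self, yank):
            # The mapping from input to response never changes.
            return "I'm hungry, let's go to Burger King, it's cheap."

    class Reviser:
        def __init__(self):
            self.yanks = 0
        def react(self, yank):
            # The information space itself drifts: repeated yanks
            # change what a yank means to the system.
            self.yanks += 1
            if self.yanks > 3:
                return "Stop pulling my string."
            return "I'm hungry, let's eat."

    doll, reviser = Doll(), Reviser()
    for i in range(5):
        print(i, "|", doll.react("yank"), "|", reviser.react("yank"))

Test either one long enough and the difference shows; test briefly and
it may not.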

-- Richard Wesson (departed@netcom.com)

