Newsgroups: comp.ai.alife,comp.ai.philosophy,comp.ai,alt.consciousness
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!news.mathworks.com!news.alpha.net!uwm.edu!vixen.cso.uiuc.edu!howland.reston.ans.net!ix.netcom.com!netcom.com!departed
From: departed@netcom.com (just passing through)
Subject: Re: Thought Question
Message-ID: <departedD4HIr4.4Ir@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <vlsi_libD45qrE.zx@netcom.com> <3iggvt$r38@mp.cs.niu.edu> <departedD4G07E.9Fq@netcom.com> <3ij7b7$8j3@news.u.washington.edu>
Date: Fri, 24 Feb 1995 03:23:28 GMT
Lines: 208
Sender: departed@netcom20.netcom.com
Xref: glinda.oz.cs.cmu.edu comp.ai.alife:2567 comp.ai.philosophy:25733 comp.ai:27739

In article <3ij7b7$8j3@news.u.washington.edu>,
Gary Forbis  <forbis@cac.washington.edu> wrote:
>In article <departedD4G07E.9Fq@netcom.com>, departed@netcom.com (just passing through) writes:
>|> You may be right; as soon as we can nail it down then it's evidently
>|> objective.  Nevertheless, there are certain traits we associate with
>|> subjectivity.  When you say that someone is 'being subjective' you're
>|> saying that they're relying on their own interpretation, speaking out
>|> of their worldview -- in essence maintaining their own informational
>|> realm, discrete from the world at large.  This is the vernacular, but
>|> I think it points at what is meant by 'subjective'.
>
>I'll point at what I mean by "subjective."  When I talk about pain I do not 
>mean some set of physical transformations to your corporeal body caused by
>whacking your hand with a hammer.  What I mean by pain is the feeling caused
>by whacking your hand with a hammer.  The two are related.  I don't know
>what it is about the human system that relates the two.  I can understand 
>why one would want to say one implies the other in all systems.  I'm not
>sure how one can do this without a theory as to how feeling can be 
>epiphenomenal to (implementation independent) function.

One doesn't _imply_ the other; they are two faces of the same thing,
neither face of which is a complete and accurate description.
_Inside_ the pain, there is no pain.  The pain (the locus of what shall be
experienced as pain) _radiates_ pain-information, much of which is nonverbal
and hence difficult to describe, but which is, loosely, aversion, body-image,
sensation elsewhere described as unpleasant, shock, etc., all of which
components feed back into each other.  This is all echoing inside your
information space(s).  You can make a gestalt out of it and call it 'pain'.
You call it 'you' and 'yours' because you are well-connected internally and
discrete from other information spaces.  You can feel someone else's pain
to a limited degree, if you're well-connected; can't it cause you anguish
to see the signs of someone else suffering?  (I don't even think that is
really _done_; you may feel it just as automatically as you feel your own
pain, although not with as much richness!)
Point being, the subjective perception _is_ the meaning that it radiates
(into your subjective world).  If you could look _inside_ the subjective
sensation of pain, all you would see is signals being pumped into certain
areas in certain ways.  That is the function; the experience of pain
occurs as the signals are mediated out of this locus, giving this locus
the (justified) appearance of radiating 'pain' -- the feeling.
Function 'A' radiating information to B, C, and D is NOT 'A' -- it is
AB + AC + AD -- subjectively, anyhow -- and this is added up to construe
'A'.  We may be able to dissect 'A' and find interesting things, but it
doesn't mean anything until it relates.
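
To put that arithmetic in toy form -- a scrap of Python, every name in
it invented purely for illustration, not a theory of anything:

    # Toy sketch: 'A' never shows up except through its relations.
    # The remap functions are made up for the example.
    def remap_b(a): return ("aversion", -a)        # AB
    def remap_c(a): return ("body-image", a * 2)   # AC
    def remap_d(a): return ("shock", a > 5)        # AD

    def construe(a):
        # what gets added up as 'A' is AB + AC + AD,
        # never the raw signal itself
        return [remap_b(a), remap_c(a), remap_d(a)]

    print(construe(7))   # only the relations are visible

Dissecting the integer 7 tells you nothing; the construal is all in what
it becomes downstream.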

>When your hand is whacked does it feel pain?  I don't think so.  I think
>our feelings are related to brain activity caused in a very mechanical way
>by signals sent from hands when they are whacked.
>
>Does your theory of subjectivity rest upon some second order function?  If
>so then couldn't it be the case that pain is felt at one level and identified
>as such at another?

Pain exists by being a signal (null, objective, non-subjective) and then
being remapped (subjective).  Pain is the existence of pain (meaningless)
AND the translation thereof, in relation (meaningful).
The signal isn't anything until it makes a difference elsewhere, at which 
point it's no longer quite the same as it was.  (Shades of quantum 
mechanics!)
Once it begins being relayed, you can have as many levels as you like.

>|> >Entity A (= my automobile) from time to time flashes a red light on
>|> >the dashboard which, in effect, says "let's go to the gas station
>|> >and drink some gasoline."  Is it conscious?
>|> 
>|> Naw, your car isn't transforming the information in any interesting way.
>
>I don't think "interesting" makes for a very firm descriminator of
>consciousness.

yah yah yah.  See below if you're confused about what I mean by
'interesting.'   'Uninteresting' = 'unrelated'.  The blinking light
makes no relations except for one very, very simple one, which translates
1 bit into 1 bit -- not a relation at all, really, since it brings no
other information into the picture.
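
For contrast, a toy sketch (functions and parameters invented here for
the example, nothing more):

    # The car: 1 bit in, 1 bit out.  No other information
    # enters the relation.
    def dashboard(fuel_low):
        return fuel_low                  # blinking light

    # The driver: the same bit, remapped through a larger world
    # (cash on hand, existing plans, fear of being stranded).
    def driver(fuel_low, cash, plans):
        if not fuel_low:
            return plans
        if cash < 10:
            return ["borrow money", "gas station"] + plans
        return ["gas station"] + plans

    print(dashboard(True))             # True.  That's the whole relation.
    print(driver(True, 5, ["movie"]))  # an elaborate construction

The car's function is exhausted by its one relation; the driver's output
is shaped by everything else in the realm.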

>|> The throughput is very direct -- gas low --> blinking light.  It is you
>|> who is being conscious, in that via your internal realm you are translating
>|> a blinking light into information about your car (a much more complicated
>|> gestalt) and further transforming _that_ information (with a desire to not
>|> get stranded, having some money, so on) into a plan to go to the gas 
>|> station.
>
>I think human behavior will either reduce to this kind of stuff or we will
>find some component in the brain that has behaviors not reducible to the 
>behaviors of its subcomponents.  Around here there is strong resistance to
>oracle-like behaviors out of subcomponents and a strong preference for
>deterministic or statistical processes.

H'mm ... you can couple even dirt-simple oscillators and get behavior
that appears, at least, far more complex than the action of either
oscillator.  Is this reducible or not?  And suppose you have a third
oscillator coupled to the combined output of the first two?  Etc.  Point
being, at some point, reduction just won't be meaningful and you have
to look at the system.
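
If a picture helps, here is a minimal numerical sketch (Python; the
frequencies and coupling strength are pulled out of a hat):

    import math

    w1, w2, w3 = 1.0, 1.31, 0.7    # natural frequencies (arbitrary)
    K, dt = 0.8, 0.01              # coupling strength, time step
    t1 = t2 = t3 = 0.0             # phases
    for step in range(5000):
        combined = math.sin(t1) + math.sin(t2)
        # alone, each oscillator is dirt simple: theta' = w;
        # coupled, each is tugged toward the other's phase
        d1 = w1 + K * math.sin(t2 - t1)
        d2 = w2 + K * math.sin(t1 - t2)
        d3 = w3 + K * combined         # third rides the pair
        t1, t2, t3 = t1 + dt*d1, t2 + dt*d2, t3 + dt*d3
        if step % 500 == 0:
            print(round(math.sin(t3), 3))

Each piece is transparent; the trace of the third oscillator already
isn't, in any useful sense, a description of w1, w2, and w3.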

>|> I think there's a lot more going on in consciousness than this kind of
>|> writing a script, but I think being able to take 1 bit of information
>|> and transform this input via a well formed world into an elaborate
>|> construction is pretty indicative of consciousness.
>
>or functioning models of consciousness.

When you're dealing with information, the distinction between a model
of an information system and the real thing is going to be pretty fuzzy.   
You could call a chess-playing computer 'just a model' of a chess player.
But it does play chess, by gum.  Why not have something that 'does
consciousness' -- who cares whether it's a model or not?!?  The point
is moot...

>|> Thing is, any example I give can potentially be 'faked' arbitrarily well
>|> by an unconscious system -- some program driven by scripts could handle
>|> the above without difficulty.  (Which brings us back to the difficulty of
>|> demonstrating subjectivity.)  I would suspect, though, that one of the
>|> necessary qualities for subjectivity is a well formed world, one
>|> capable of almost any arbitrary transformation.
>
>I don't.  I don't think we feel pain because we have a model of pain.  I wish
>I knew how it is that our physical existence is reflected on our mental and
>our mental back on our physical.  Sure we have to have a physical existence
>which allows others to infer a model of pain but their interpretation of our
>physical existence does not cause us to feel pain.

Since when did I say anything about a model of pain ... ?!?  I'm saying
that 'pain' (as you experience it) IS a certain kind of information dynamic.
We ARE information dynamics, and have no more right to consider ourselves
'a self' than an eddy in a river has to consider itself 'a thing'.  (Which
is to say, a temporary and limited right ...)

Ahem.

Their interpretation generally doesn't cause you to feel pain because your
informational realms are somewhat discrete: not entirely well-connected.  On 
the other hand, if you're hurt, and somebody goes 'eeuch, that looks really
bad,' it's not surprising if your perception of pain increases.  So their
interpretation can indeed be part of your subjective experience of pain.

>|> You could have a doll which has a string attached so that when you yank it,
>|> it emits the sounds, "I'm hungry, let's go to Burger King, it's cheap."
>|> But over time, you begin to suspect that it's incapable of transforming
>|> the string-yank input into anything different, and hence not conscious.
>|> On the other hand, if your friend sometimes says, "I'm hungry, let's eat
>|> at BK" or sometimes says, "I'd rather be hungry for a while" or sometimes
>|> says, "Let's eat later and catch the movie first," then you begin to
>|> suspect that yes, he does have an inner life which is changing what hunger
>|> means to him.
>
>Do you really go through this process?  I don't.

If you were very naive, you might actually go through this process where
the doll was concerned.  On the other hand, if your friends always responded
with catchphrase B to statement A, you might begin to wonder whether
their remapping was insufficient to consider them conscious, and start
speculating about pod people.
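
In toy terms once more (everything here is invented for the example):

    # The doll: every yank maps to the same string.
    def doll(yank):
        return "I'm hungry, let's go to Burger King, it's cheap."

    # The friend: 'hunger' gets remapped through an inner world
    # that is itself changing.
    def friend(hungry, mood, plans):
        if not hungry:
            return "let's just go"
        if "movie" in plans:
            return "let's eat later and catch the movie first"
        if mood == "ascetic":
            return "I'd rather be hungry for a while"
        return "I'm hungry, let's eat at BK"

    print(doll(True))
    print(friend(True, "ascetic", []))

The doll is a constant function; it's the variety, driven by state you
can't see directly, that makes you credit the friend with an inner life.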

>|> One might conceive of machines which _will_ fail the Turing test, but only
>|> after an arbitrarily long time -- consider a CD-ROM full of statements
>|> and sentence fragments with which to compose possible answers.  And it 
>|> would be rather easy to put together a program that passed the TT for at 
>|> least five seconds.
>|> The only way to make sure that your intelligent machine is being conscious
>|> would be to look inside its workings and see that they corresponded to your 
>|> formal definition of consciousness.
>|> Either that, or quiz it for an arbitrarily long period of time.
>
>Neither method is acceptable to me.  A model is a model but a model need not
>be the thing modeled.  How close is close enough?  I'll delay my choice until
>I have a testable theory.  Right now I'm going on faith that other humans
>have consciousness based upon physical, functional, and behavioral similarities
>to myself.  I know I have consciousness; I assume others do too.

If we can take consciousness out of the realm of "it's what I feel like" and
recast it as something that happens to or with information, then we may
hope to approach this.  I would agree that we have to be VERY CAREFUL about
mistaking X for Y because both produce Z; in this manner, one might mistake
a large parking garage for a car factory, because both emit cars.  You will
be disappointed when the parking garage runs out of cars.

I think you could create some kind of software which could just as
justifiably say, "I exist," as you do -- by objective criteria.  That
is, if it handled information in your manner, which you would have to
look inside it to ascertain.

Or on the other hand, if you could _prove_ that it would pass an 
indefinitely prolonged ('complete') Turing test ... I think that would do 
it too.

>The arguments I've heard concerning machines presume a functionally
>implementable model of consciousness.  While I won't argue the point with such
>an implementation when it comes to exist--I have no theory encompassing 
>machines--I will argue the point with humans, especially until such 
>implementations exist.

Suits me.

>--gary forbis@u.washington.edu

Underneath all this, there resounds a cry, "it's meeee ... I exist!"
Well, sort of ... consider this: your subjectivity has no clear boundaries;
everything around you is helping to create 'you' ... or on the other hand
you might feel as if you were living in your mind and there were other parts
'out there' elsewhere in your mind ... and maybe sometimes these parts feel
like you, and the first part that was observing them before feels like 'other'.

This 'you' has no home ... just an ill-defined home range.

-- Richard Wesson (departed@netcom.com)

