From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!wupost!darwin.sura.net!gatech!utkcs2!memstvx1!langston Wed Feb  5 11:55:58 EST 1992
Article 3373 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!wupost!darwin.sura.net!gatech!utkcs2!memstvx1!langston
From: langston@memstvx1.memst.edu
Newsgroups: comp.ai.philosophy
Subject: Re: Strong AI and panpsychism (was Re: Virtual person?)
Message-ID: <1992Feb1.165346.1204@memstvx1.memst.edu>
Date: 1 Feb 92 16:53:46 -0600
References: <1992Jan30.171309.1168@memstvx1.memst.edu> <1992Jan31.211540.13945@psych.toronto.edu> <1992Jan31.173006.1194@memstvx1.memst.edu> <1992Feb1.194246.10778@psych.toronto.edu>
Organization: Memphis State University
Lines: 92

In article <1992Feb1.194246.10778@psych.toronto.edu>, michael@psych.toronto.edu (Michael Gemar) writes:
> In article <1992Jan31.173006.1194@memstvx1.memst.edu> langston@memstvx1.memst.edu writes:
>>In article <1992Jan31.211540.13945@psych.toronto.edu>, michael@psych.toronto.edu (Michael Gemar) writes:
>>> In article <1992Jan30.171309.1168@memstvx1.memst.edu> langston@memstvx1.memst.edu writes:
>>>>
>>>>  In the discussion of consciousness in this thread, are we adopting the
>>>>'objective' view of consciousness (as I try to argue), or a more first-
>>>>person, subjective view?  If the subjective experience is what is
>>>>being considered here as consciousness, I would argue that this could be
>>>>explained as an agent's (or an agent's processes) goal-directed behaviour
>>>>regarding incoming information.
>>> 
>>> It is not at all clear to me how "goal-directed behaviour" explains
>>> *experience*.   The notions of "agent" and "goal" *assume* the notion
>>> of consciousness, that which is to be explained.  I think that the
>>> above definition is a standard one when taking the "objective view"
>>> of consciousness, but it does nothing to explain experience.
>>> 
>>> - michael
>>
>>   Okay.  Let's start by clarifying what I meant by 'agent'.  I consider an
>>agent to be any process or assemblage of processes that have some sort of
>>interaction with its environment (however sparse it may be).  This is a la
>>Minsky's agents, to draw a comparison.  By goal I was simply meaning to
>>convey the concept of intentional progress towards a desired state.
>                        ^^^^^^^^^^^                    ^^^^^^^
> 
> Both of the above terms are *mental* terms, which describe *subjective, mental*
> states.  Using these terms won't do if you want an *objective*, third-person
> account of consciousness.


  Granted.  (I loathe semantics at times...).  Let me rephrase and support that.
What I meant to say was progress towards a terminal state.  I tend to
anthropomorphize sometimes.  A finite-state machine can be said to be making
intentional (as opposed to random) progress towards a desired (terminal)
state.  I meant to present the concept of goal in terms of any process.
e.g., a calculator, given the input '2+3', has the goal of adding the 2 and
the 3 and outputting the answer.  Again, these are very SIMPLE examples,
and I simply meant them to be used to convey my usage of terms.  The agents
and their goals I am arguing for take place in a VERY complex system with
multiple active agents and possible conflicting goals.
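To make the calculator example concrete, here is a minimal sketch (my own construction, not from the original post) of a toy finite-state machine whose "goal" is nothing more than its terminal state. The state names and function name are illustrative inventions.

```python
def calculator_agent(expression):
    """Drive a tiny FSM from START to DONE; the terminal state is the 'goal'.

    The agent makes 'intentional' (i.e., non-random) progress: each
    transition is determined by the current state and input.
    """
    state = "START"
    result = None
    while state != "DONE":
        if state == "START":
            # Parse a single-digit expression like '2+3'.
            a, op, b = expression[0], expression[1], expression[2]
            state = "COMPUTE"
        elif state == "COMPUTE":
            result = int(a) + int(b) if op == "+" else None
            state = "DONE"  # the desired (terminal) state is reached
    return result

print(calculator_agent("2+3"))  # the agent's 'goal' satisfied: 5
```

The point of the sketch is only that "goal" here needs no mental vocabulary: it is a terminal state the process is structured to reach.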

 
>>What I am trying to convey here is that two identical agents (theoretically, now...
>>I know that isn't possible at a complex scale...) subjected to identical
>>external stimuli, will have different subjective experiences depending upon
>>the active goals of the two agents.  And, if we are defining 'consciousness'
>>as the subjective experience (as I think I have seen a few do), then this
>>is my position on what that 'consciousness' is.
>> 
> 
> I would agree that this account explains *differences* in subjective experience.
> But I would argue that this still does not give an *objective* account of
> how consciousness arises.
> 
> - michael

   Well, it looks like I will have to fall back on D. Chalmers' idea of
qualia to clear this up.  In his (now VERY familiar) thermostat example,
consider the notion that not only can it be said that the thermostat has a
VERY limited intelligence, but also a V E R Y limited subjective experience
(= conscious state) regarding the information encoded from the temperature
it is monitoring and maintaining.  Now, jump up the scale several million
orders of magnitude to a more complex system, say, a dog.  It also has
a subjective conception (= consciousness) of the world around it.  And so on
to homo sap.
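The thermostat case can be sketched as follows (a hedged illustration of Chalmers' example as used above, not anything from his text): the device's entire "subjective state" is a single bit of internal state encoding the temperature it monitors. The class and attribute names are my own.

```python
class Thermostat:
    """A maximally simple 'agent': one bit of internal state about the world."""

    def __init__(self, setpoint):
        self.setpoint = setpoint
        self.too_cold = False  # the whole of its internal 'experience'

    def sense(self, temperature):
        # Encode incoming information into internal state.
        self.too_cold = temperature < self.setpoint

    def act(self):
        # Goal-directed behaviour: maintain the setpoint.
        return "heat on" if self.too_cold else "heat off"

t = Thermostat(setpoint=20)
t.sense(18)
print(t.act())  # heat on
t.sense(22)
print(t.act())  # heat off
```

Scaling the argument up is then a matter of how much internal state encodes the world, from one bit here to the vastly richer state of a dog or a human.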

  I may have boxed myself into allowing that ALL systems have some form of
consciousness, but beyond a certain threshold, it is purely argumentative
and trivial.  Perhaps I am arguing that consciousness is a product of
subjective experience.  For now I will stand by that argument.  It may not
be possible to 'objectively' define consciousness, since consciousness itself
IS a subjective experience.  As far as the process that gives rise to
consciousness, I still assert it is a product of a highly complex system
coordinating and overseeing its many complex goals and the means by which it
achieves these goals.  (no, I am not postulating a homunculus here)
The control processes that coordinate and supervise these goals would play
a part in the emergence of consciousness, but without the goals and the
subjective experience of satisfying them, the control structure itself would
not generate a conscious experience, and is therefore irrelevant for the
moment.
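The coordination idea above can be sketched in miniature (again my own illustrative construction: the agent names, urgency values, and priority scheme are assumptions, not the poster's proposal): several agents hold conflicting goals, and a control process supervises them by activating the most pressing one.

```python
# Several agents with possibly conflicting goals.
agents = [
    {"name": "hunger",    "goal": "find food", "urgency": 0.8},
    {"name": "fatigue",   "goal": "rest",      "urgency": 0.6},
    {"name": "curiosity", "goal": "explore",   "urgency": 0.3},
]

def control_process(agents):
    """Coordinate conflicting goals: activate the most urgent agent's goal.

    This is a supervisor over agents, not a homunculus: it contains no
    representation of the world itself, only a rule for arbitration.
    """
    active = max(agents, key=lambda a: a["urgency"])
    return active["goal"]

print(control_process(agents))  # find food
```

In a real system the arbitration rule would of course be far more complex; the sketch only shows a control structure that coordinates goals without itself being an experiencing agent.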

-- 

Mark C. Langston                                  "What concerns me is not the
Psychology Department                              way things are, but rather
Memphis State University                           the way people think things
LANGSTON@MEMSTVX1.MEMST.EDU                        are."     -Epictetus

     "...a brighter tomorrow?!?  How about a better TODAY?"  -me
