From newshub.ccs.yorku.ca!torn!cs.utexas.edu!wupost!uunet!sun-barr!ames!eos!aio!way@gothamcity.jsc.nasa.gov Thu Jul  9 16:20:40 EDT 1992
Article 6434 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!wupost!uunet!sun-barr!ames!eos!aio!way@gothamcity.jsc.nasa.gov
From: way@gothamcity.jsc.nasa.gov (Bob Way)
Subject: Emotional Frogs? Summary, Really
Message-ID: <1992Jul9.160634.15888@aio.jsc.nasa.gov>
Sender: news@aio.jsc.nasa.gov (USENET News System)
Organization: Computer Sciences Corporation
Date: Thu, 9 Jul 1992 16:06:34 GMT
Lines: 313


As several people asked me to forward any information I received
on artificial emotions, here is a short summary.

Thank you for the well-thought-out responses. I'm sorry that I have 
not replied to several of these, but they deserve more than a half-assed
answer. I will respond shortly.

Bob Way
Computer Sciences Corporation
NASA / Johnson Space Center

P.S. Sorry about the previous blank post.


Original Text:

A while back there was an interesting discussion of levels of intelligence. 
Specifically, the one related to rocks, frogs, mice, apes & humans.

An interesting (to me) similarity among the non-mineral categories is that 
they all seem to exhibit behaviors which suggest that they may possess 
certain levels of *Feelings* and *Emotions*.

For this discussion let me briefly draw a distinction between *Feelings* 
and *Emotions*. When I say *Feelings* I mean things specific to the 
physical operation of the body (e.g. hunger, fatigue, pain, etc.). When I 
refer to *Emotions* I mean psychologically(?) related states (e.g. fear, 
anticipation, happiness, curiosity, confusion, etc.).

I speculate that even the low-level frog (from the previous thread) at least 
possesses the feeling of pain and the emotion of fear.

From my own experience I believe that changes in my own *Feelings* 
and *Emotions* have a great deal to do with my associative memory. 
When I'm happy I usually begin to remember previous times I was happy. 
The same is often true of fear or confusion. These strange associations, 
based only on emotions, often help me to find my way in novel circumstances.

*End Monologue*

Has there been any AI research which involves emotions in any way?

What are the possibilities for (effects of) using emotions like these in a 
memory based reasoning system?

What is the possibility for low level intelligence emerging from a robotic 
system based on emotions and associated memory?


Obviously too much time on my hands...

Bob Way
Computer Sciences Corporation
NASA / Johnson Space Center


-----------------------------------------
From: surana@comm.mot.com (Pinku Surana)

In article <1992Jun25.185352.16671@aio.jsc.nasa.gov> you write:
>Has there been any AI research which involves emotions in any way?
>
>What are the possibilities for (effects of) using emotions like 
>these in a memory based reasoning system?
>
>What is the possibility for low level intelligence emerging from a 
>robotic system based on emotions and associated memory?
>

This is a very interesting question, and one I have given some thought
to (usually in a drunken stupor). It seems that conventional AI 
techniques leave out the most important aspects of biological life, 
namely, instinct, emotion, drive, etc.  Without these, there is no way to
categorize the information flowing in from the senses.  In other words,
blue, rock, water, bacon and heat mean something to us because they evoke
or remind us of an emotional response (in varying degrees). To a computer,
they are nonsensical symbols that have no meaning.

This may be a wild idea, but I do not believe intelligence can be created
(at least in the way we know it) without due consideration of emotions. 

I think even HAL (2001) had some emotions. 

Sorry, I don't have any leads as to publications. If you find any, please
pass them along. At least then I'll have something in writing to support
my madness.

Pinku Surana

-----------------------------------------
From: xarax@eniac.inesc.pt (Luis Antunes)
Organization: INESC - Inst. Eng. Sistemas e Computadores, LISBOA. PORTUGAL.

Try Aaron Sloman's (Birmingham, UK, I think) work on affect and related
work. One interesting book is 'Intentions in Communication', MIT Press,
with papers from Bratman, Cohen & Levesque, etc.



-----------------------------------------
From: spratt@hawk.cs.ukans.edu (Lindsey Spratt)
Organization: University of Kansas Computer Science Dept

It was asked if there has been AI research which involved emotions in any way.
Some work that I know of is that done by Michael Dyer, who built a model
of emotions to aid in interpreting narrative text, as presented in his book
"In-Depth Understanding", MIT Press, 1983.


-----------------------------------------
From: rmurali@loki.une.edu.au (Murali Ramakrishnan)
Organization: University of New England - Northern Rivers, (Lismore)
NSW Australia

> It was asked if there has been AI research which involved emotions in any way.
> Some work that I know of is that done by Michael Dyer, who built a model
> of emotions to aid in interpreting narrative text, as presented in his book
> "In-Depth Understanding", MIT Press, 1983.

Yes, it might be possible to build such systems and even program them
to "react" to emotions / feelings / instincts etc. Such "concepts" can
be "translated" into "machine-understandable codes" and a "system" made
to react to such codes.

But the point is, once such "mental states" are "translated", can they
still be categorised as "mental states"? As far as the "system" or the
computer is concerned, they are only a bunch of binary codes, only
electrical pulses. This is, again, nothing but the "Chinese room
problem", of course.

There have been articles on this topic, e.g. the article "Can A
Computer Feel Pain?" (or some such title) in Dennett's book
"Brainstorms". But it does not answer many of the fundamental
questions. It seems to be "built" on Logic rather than Reason.

-----------------------------------------
From: El Gordo <G.Joly@cs.ucl.ac.uk>
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

> way@gothamcity.jsc.nasa.gov (Bob Way) says
> 
> Has there been any AI research which involves emotions in any way?

Nope: AI is reductionist. Holism is out!

-----------------------------------------
From: edmunds@syma.sussex.ac.uk (Edmund Shing)
Organization: University of Sussex
E-Mail: exs@cs.bham.ac.uk	|  University of Birmingham


On 25 Jun 1992 18:53:52 GMT, way@gothamcity.jsc.nasa.gov (Bob Way) said:

BW> A while back there was an interesting discussion of levels 
BW> of intelligence. Specifically, the one related to rocks, 
BW> frogs, mice, apes & humans.

BW> An interesting (to me) similarity among the non-mineral 
BW> categories is that they all seem to exhibit behaviors which 
BW> suggest that they may possess certain levels of *Feelings* 
BW> and *Emotions*.

BW> For this discussion let me briefly draw a distinction between 
BW> *Feelings* and *Emotions*. When I say *Feelings* I mean things 
BW> specific to the physical operation of the body. (i.e. hunger, 
BW> fatigue, pain, etc.) When I refer to *Emotions* I mean 
BW> psychologically(?) related states. (i.e. fear, anticipation, 
BW> happiness, curiosity, confusion, etc.)

There are many problems in trying to define what emotions and feelings
are, probably due to the vagueness and inadequacy of everyday
definitions of these terms. Any definition that encompasses the notion
of consciousness is inevitably going to be open to charges of
vagueness (handwaving), since the term consciousness is itself
ill-defined. I personally would say that your definitions of feelings
and emotions are almost certainly insufficient to base any argument
on, although it could well be argued that this can be said of
virtually all definitions of emotions and emotional states.

BW> I speculate that even the low level frog (from the previous 
BW> thread) at least possesses the feeling pain and the emotion fear.

BW> From my own experience I believe that changes in my own *Feelings* 
BW> and *Emotions* have a great deal to do with my associative memory. 
BW> When I'm happy I usually begin to remember previous times I was 
BW> happy. The same is often true of fear or confusion. These strange 
BW> associations, based only on emotions, often help me to find my way 
BW> in novel circumstances.

BW> *End Monologue*


BW> Has there been any AI research which involves emotions in any way?

There is AI/cognitive science research on the computational modelling
of emotions. Several interesting models have been put forward, such as
those of Oatley & Johnson-Laird (1987), Sloman (1987), Ortony, Clore &
Collins (1988) and Frijda (1986). Computational models of the latter
two theories have been implemented by Elliott (1991) and Swagerman
(1987) respectively, and models based on expansions of Sloman's
design-based approach to emotion and motivation are being constructed
here at Birmingham (I am one of his PhD students). In addition,
Pfeifer (1988) has written an overview of AI models of emotion,
including models such as Colby's PARRY (a model of neurosis).

The basic premise of most of these theories is that emotions are
functional and play a crucial role in the everyday functioning of
human beings. For instance, Oatley & Johnson-Laird's theory suggests a
dual communicative role for emotions, the first role being one of
global signalling within the entire cognitive system of "important"
stimuli (such as an approaching tiger in the jungle). In this case the
global signalling system would bring this potentially life-threatening
stimulus to one's attention and trigger the flight/fight mechanism. 
The second function is that of social communication between
intelligent agents such as people. In many cases the types of plans
that people wish to carry out (based on mutual goals) depend on
cooperation between people which in turn requires communication between
people. Emotions (which are communicated between people by facial
expression, bodily posture, tone of voice etc.) can then give valuable
information about what people are thinking/planning to do even when
they don't wish to give this information away (e.g. when someone who
is upset cannot control their tone of voice, and their wavering of
tone indicates to other people that this person is upset).
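That first, global-signalling role can be caricatured in a few lines of code. This is only an illustrative sketch, not an implementation of Oatley & Johnson-Laird's theory; the threshold value, the stimuli, and the plan names are all invented for the example.

```python
def run_agent(stimuli, fear_threshold=0.7):
    """Each stimulus carries a threat estimate in [0, 1]. Normally the
    agent keeps executing its current plan; a threat above threshold acts
    as a global interrupt that preempts the plan (the "approaching tiger"
    case above)."""
    plan = "forage"
    log = []
    for name, threat in stimuli:
        if threat >= fear_threshold:   # emotion as a system-wide signal
            plan = "flee"              # preempt whatever was running
        log.append((name, plan))
    return log

print(run_agent([("berry bush", 0.1), ("tiger", 0.95), ("stream", 0.2)]))
# the tiger stimulus switches the plan from "forage" to "flee"
```

The point of the sketch is only that the fear signal is not routed to any one module: it overrides whatever the rest of the system was doing.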

BW> What are the possibilities for (effects of) using emotions like 
BW> these in a memory based reasoning system?

Based on cognitive studies of state-dependent memory recall and
recognition (e.g. Teasdale et al. on faster recall of
negatively-valenced memories by clinical depressives when depressed,
but not when not depressed), there have been models of this kind
suggested by people like Bower & Cohen (1982). I suppose that
state-dependent memory can be useful in selecting memories relevant to
the situation at hand in the same way that context-dependent memory is
(the context cues the recall of the associated memories); this may be
one way to order memories efficiently in the brain so as to facilitate
easier recall. 
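A minimal sketch of state-dependent recall along these lines (the scoring scheme, the weights, and the example memories are invented for illustration; this is not Bower & Cohen's actual model):

```python
from dataclasses import dataclass

@dataclass
class Memory:
    content: str
    valence: float  # emotional state at encoding: -1.0 (negative) .. +1.0 (positive)

def recall(memories, cue_words, current_valence, mood_weight=0.5):
    """Score each memory by content overlap with the cue, biased toward
    memories whose encoding valence matches the current mood
    (state-dependent recall)."""
    def score(m):
        overlap = len(set(m.content.split()) & set(cue_words))
        mood_match = 1.0 - abs(m.valence - current_valence) / 2.0
        return overlap + mood_weight * mood_match
    return max(memories, key=score)

store = [
    Memory("failed the driving test", -0.8),
    Memory("passed the driving test", +0.8),
]

# A depressed mood biases recall toward the negatively-valenced memory,
# even though the content cue matches both memories equally.
print(recall(store, {"driving", "test"}, current_valence=-0.9).content)
# -> "failed the driving test"
```

The mood term acts exactly like an extra context cue: it breaks ties between content-equivalent memories, which matches the Teasdale-style finding quoted above.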

BW> What is the possibility for low level intelligence emerging from a 
BW> robotic system based on emotions and associated memory?

What type of behaviour are you thinking of when you refer to low-level
intelligence? It may be that a design for a robot that can accomplish
simple functions in a limited domain may be aided in its plan
selection by choosing plans partly based on the context or "state" it
is in. 

-----------------------------------------
From: mkant@GLINDA.OZ.CS.CMU.EDU
Organization: School of Computer Science, Carnegie Mellon

In article <1992Jul6.083903.14066@syma.sussex.ac.uk> you write:
>BW> Has there been any AI research which involves emotions in any way?
>
>There is AI/cognitive science research on the computational modelling
>of emotions. Several interesting models have been put forward, such as
>those of Oatley & Johnson-Laird (1987), Sloman (1987), Ortony, Clore &
>Collins (1988) and Frijda (1986). Computational models of the latter
>two theories have been implemented by Elliott (1991) and Swagerman
>(1987) respectively, and models based on expansions of Sloman's
>design-based approach to emotion and motivation are being constructed
>here at Birmingham (I am one of his PhD students). In addition,
>Pfeifer (1988) has written an overview of AI models of emotion,
>including models such as Colby's PARRY (a model of neurosis).

You omit Dyer's work. Also, see a paper by Scott Reilly to appear in
the proceedings of the Cognitive Science Society. It describes the
implementation of the emotional model of a cat named Lyotard within
the Tok reactive agent architecture. When you interact with this cat,
it feels just like a real cat.

-----------------------------------------
From: schlue@jargon.gmd.de (Bernd Schlueter)
I3.KI.AS [Adaptive Systems Research Group]
GMD [German National Research Center for Computer Science]

The following article may be very interesting for you:

Pattie Maes (1990): A Bottom-Up Mechanism for Behavior Selection in an
Artificial Creature; in Proceedings of the First International Conference
on Simulation of Adaptive Behavior "From Animals to Animats", Paris,
France, 1990; pp. 238-246.

Maes distinguishes between MOTIVATIONS (safety, curiosity, aggression,
fear, hunger, laziness, thirst) and BEHAVIORS (explore, fight, eat,
drink, sleep, avoid-obstacle, flee-from-creature).  What you call
FEELINGS may be something like Maes' PERCEPTUAL CONDITIONS.
Maes simulates behavior selection and presents some very interesting
results, like emergent (not preprogrammed) displacement behavior,
opportunistic behavior, ...

You may get some interesting answers from people working on
behavior-based robotics if you repeat your question in comp.robotics.

Yes, I tried to integrate an associative memory (based on Kanerva's
Sparse Distributed Memory) into a behavior-based robot controller.
But it is very simple, very low-level.  The memory associates with every
sensor situation (8 range finders) a motor step in a simple simulated
2D world with walls and obstacles.  Training, exploring, and
"coping" behaviors are emitted by a behavior-based adaptive control
system.

Bernd Schlueter (1992): A Cybernetic Control Model from Ethology for
Adaptive Coordination of Robot Behaviors; in Proceedings of the SPIE
Conference on Adaptive and Learning Systems, Orlando, FL, 20-24 April 1992.
The proceedings are to be published within five months of April.
If you want, I will email a copy.
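The sensor-to-motor association described above can be sketched roughly as follows. This toy stand-in is only nearest-neighbour recall, not a real Sparse Distributed Memory (which distributes each write across many hard locations); the range readings and motor-step names are invented.

```python
class AssociativeMemory:
    """Toy sensor->motor associative memory: stores (range-readings,
    motor-step) pairs and recalls the motor step of the closest stored
    sensor situation."""
    def __init__(self):
        self.pairs = []  # list of (sensor_vector, motor_step)

    def train(self, sensors, motor_step):
        self.pairs.append((list(sensors), motor_step))

    def recall(self, sensors):
        # Return the motor step paired with the nearest stored reading.
        def dist(pair):
            return sum((a - b) ** 2 for a, b in zip(pair[0], sensors))
        return min(self.pairs, key=dist)[1]

mem = AssociativeMemory()                           # 8 simulated range finders
mem.train([9, 9, 9, 9, 1, 9, 9, 9], "turn_left")   # obstacle close on the right
mem.train([1, 9, 9, 9, 9, 9, 9, 9], "turn_right")  # obstacle close on the left
mem.train([9] * 8, "forward")                       # all range finders clear

# A never-seen reading still recalls the motor step of the most similar
# trained situation.
print(mem.recall([8, 9, 9, 9, 2, 9, 9, 9]))  # -> "turn_left"
```

The interesting property, as in the controller described above, is that recall generalizes: the query reading never occurred during training, yet the nearest stored situation supplies a sensible motor step.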

Hope that helps.



