The Role of Emotions in Modular Intelligent Control

Joanna J. Bryson, Emmanuel A. R. Tanguy and Philip J. Willis
Artificial models of natural Intelligence (AmonI) Group
Department of Computer Science, University of Bath
http://www.cs.bath.ac.uk/ai/AmonI.html

17 May 2004

Often artificial intelligence researchers are tempted to look for a single, simple, homogeneous solution to creating intelligent systems. But animal intelligence isn't like that at all. Animal brains have large numbers of different representations for different sorts of information. Over the last 10 years, we have been working on integrating the roles of hierarchical action-selection mechanisms with behavior modules containing more distributed, dynamic learning and control processes. Now we have begun working on understanding how different representations and time courses for emotions come together to influence action selection, particularly in social contexts.

We are working on a representational framework for emotions that will integrate with action selection, providing the sort of persistent internal contextual state that animals have found so useful for regulating their behavior. We are using two mechanisms to test the plausibility of our representations: a facial animation tool, which will allow us to tap human expertise on the believability of emotions, and multi-agent simulations of primate social behavior, to test the importance of introducing realistic emotional onsets and decays in building stable group social dynamics. This article will concentrate on the former.

Figure 1: Draft architecture for a complete emotional agent. Arrows indicate the flow of information; boxes indicate types of processes, which will be further modularised. The dashed box is the location of the Dynamic Emotion Representation (DER).
\includegraphics[width=\linewidth]{EAD2}

An initial draft of an overall emotional and personality architecture can be seen in Figure 1. The lowest level of the emotional architecture consists of primary emotions (Picard, 1997; Sloman, 2001), which are generated reactively from the experiences of the agent. Secondary emotions map more closely onto the common concepts of emotion, e.g. joy or anger, and often have a cognitive referent. Moods affect perception as well as expressed behavior, and typically last longer than secondary emotions. The dynamic emotion representation (DER) represents secondary emotions and mood state. It consists of a number of modules, each containing a dynamic representation of that emotion's intensity. Each module also has a stimulus function and a decay function: the intensity increases sharply in response to primary-emotion stimuli and then decays slowly. The number of secondary emotions can be altered depending on the emotional theories a researcher chooses to represent, but our defaults follow Ekman (1999). For more details about the DER and its role in facial expression, see Tanguy et al. (2003).
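As a purely illustrative sketch of this arrangement, one DER module might be realised along the following lines in Python. The class name, the exponential decay and the particular parameters (onset gain, half-life) are assumptions made here for exposition, not the exact stimulus and decay functions used in our system.

    import math

    class EmotionModule:
        """Sketch of one DER module: a single secondary emotion (or mood)
        whose intensity rises sharply on stimulus and decays slowly."""

        def __init__(self, name, onset_gain=0.6, decay_half_life=30.0):
            self.name = name
            self.intensity = 0.0            # stays within [0, 1)
            self.onset_gain = onset_gain    # how sharply stimuli raise intensity
            # decay_half_life: seconds for intensity to halve with no stimulus
            self.decay_rate = math.log(2) / decay_half_life

        def stimulate(self, strength):
            """Apply a primary-emotion stimulus (strength in [0, 1])."""
            self.intensity += self.onset_gain * strength * (1.0 - self.intensity)

        def update(self, dt):
            """Exponential decay over a time step of dt seconds."""
            self.intensity *= math.exp(-self.decay_rate * dt)

    # A DER could then hold one module per secondary emotion (Ekman's set by
    # default) plus slower-decaying modules for mood.
    anger = EmotionModule("anger", onset_gain=0.8, decay_half_life=20.0)
    anger.stimulate(0.5)      # sharp rise from a primary-emotion event
    anger.update(dt=10.0)     # slow decay over the following 10 seconds

Giving each module its own onset and decay parameters is what lets different emotions, and the much slower-moving moods, share one representation while keeping distinct time courses.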

We are building a facial animation tool (see Figure 2) that is based on the two-channel concept of facial expressions. This concept assumes two things. First, facial displays are used deliberately by humans to support speech with both redundant and novel visual information; this information is referred to as communicative acts (Pelachaud and Poggi, 1998) or visible acts of meaning (Bavelas and Chovil, 2000). Second, facial displays also reveal internal emotional state, and could therefore be called emotional facial expressions. This suggests the existence of two different channels creating facial displays: the communicative channel, which is composed of speech and its tightly synchronised communicative acts, and the emotional channel.

The emotional channel derives its data from the DER. Each secondary emotion has a facial expression associated with it, while the mood affects other signals such as tension and communication. The communicative channel's information comes in the form of tagged text, for which techniques already exist for producing speech and facial animation (Pelachaud and Bilvi, 2003). Since there is no unique correspondence between meanings and facial displays, the selection of a facial display should be based on mental and emotional context (Pelachaud and Poggi, 1998). What we propose is to use the mood state, delivered by the DER, to discriminate between the facial displays corresponding to a meaning. A second issue will be how to merge, mix or select between the facial displays produced by the communicative and emotional channels.
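To make the proposal concrete, the sketch below shows one possible way of using the DER's mood state to choose among candidate displays for a meaning and to arbitrate between the two channels. The display names, the single valence axis and the threshold rule are illustrative assumptions, not the mapping implemented in our system.

    # Several candidate facial displays can realise the same communicative meaning.
    # (Display names and mood values below are invented for illustration.)
    DISPLAYS_FOR_MEANING = {
        "emphasis": [
            {"display": "raised_brows", "mood_valence": +0.5},
            {"display": "frown",        "mood_valence": -0.5},
        ],
    }

    def select_communicative_display(meaning, mood_valence):
        """Pick the candidate display whose typical mood is closest to the current mood."""
        candidates = DISPLAYS_FOR_MEANING[meaning]
        return min(candidates,
                   key=lambda c: abs(c["mood_valence"] - mood_valence))["display"]

    def blend_channels(communicative, emotional, emotion_intensity):
        """Crude arbitration: a strong emotional expression overrides the communicative act."""
        return emotional if emotion_intensity > 0.7 else communicative

    mood = -0.3                                            # slightly negative mood from the DER
    act = select_communicative_display("emphasis", mood)   # -> "frown"
    face = blend_channels(act, "anger_expression", emotion_intensity=0.4)

A simple override rule like the one above is only one option; weighted blending of the two channels' displays is another, and choosing between them is part of the open question noted above.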

Figure 2: Dynamic Emotion Representation Tool, which allows editing the response curves for secondary emotions, and playing the real-time combination of expressions on the model.
\includegraphics[width=\linewidth]{DERTool-bw}

The DER is unique in its support of the different temporal characteristics of onset and decay of different emotions, and in its combination of a variety of time courses (primary emotions, secondary emotions and moods). These attributes are often overlooked, but presumably serve an adaptive purpose. Some species even seem to display sexually determined differences in onset curves for some emotions. Our current work involves evaluating both the DER Tool and the DER itself. The DER Tool will be used to develop communicative agents; its usability will be judged by the ability of artists to manipulate the perception of the agents, for example how persuasive they are. The DER will also be evaluated in terms of its utility when incorporated into models of primate social behaviour. This work is still in its infancy (Bryson, 2003), but we will have a new PhD student working on this from September 2004.

Bibliography

Bavelas, J. B. and Chovil, N. (2000).
Visible acts of meaning: An integrated message model of language in face-to-face dialogue.
Journal of Language and Social Psychology, 19(2):163-194.

Bryson, J. J. (2003).
Where should complexity go? Cooperation in complex agents with minimal communication.
In Truszkowski, W., Rouff, C., and Hinchey, M., editors, Innovative Concepts for Agent-Based Systems, pages 298-313. Springer.

Ekman, P. (1999).
Facial expressions.
In Dalgleish, T. and Power, M., editors, Handbook of Cognition and Emotion, chapter 16, pages 301-320. John Wiley & Sons Ltd.

Pelachaud, C. and Bilvi, M. (2003).
Computational model of believable conversational agents.
In Huget, M.-P., editor, Communication in MAS: Background, Current Trends and Future. Springer-Verlag.

Pelachaud, C. and Poggi, I. (1998).
Facial performative in a conversational system.
In Proceedings of the Workshop on Embodied Conversational Characters (WECC'98).

Picard, R. W. (1997).
Affective Computing.
The MIT Press, Cambridge, Massachusetts, London, England.

Sloman, A. (2001).
Beyond shallow models of emotions.
Cognitive Processing, 2(1):177-198. 

Tanguy, E., Willis, P., and Bryson, J. J. (2003).
A layered dynamic emotion representation for the creation of complex facial expressions.
In Rist, T., Aylett, R., Ballin, D., and Rickel, J., editors, Intelligent Virtual Agents (IVA 2003), pages 101-105. Springer.