August 28, 2005

Emotions for believable agents

by Michael Mateas, 5:01 pm

The call for papers is up for ACE 2006 – Agent Construction and Emotions:
Modeling the Cognitive Antecedents and Consequences of Emotion. This is the latest in a series of workshops on modeling emotions in autonomous agents.

While this topic is obviously relevant to anyone building autonomous characters, there’s an interesting tension between functionalist models of emotion that are concerned with how emotion serves as an internal resource for guiding decision making (emotions make us more rational), and computational models of emotion for believable characters that must respond to and convey emotion. For autonomous characters, AI research tends to take the stance that if we build computational models that capture the way human beings do something, then building a believable, autonomous character is merely a matter of “applying” the model to the character. I’m deeply skeptical of this position. The whole believable agents turn, as first articulated by Joe Bates and the Oz Project, was about recognizing that the field of believable agents has its own first-class research agenda, that characters are not realistic human beings, but an artistic abstraction; the technical research agenda then becomes focused on AI architectures that support making these artistic abstractions (characters) autonomous. The tension can be seen clearly in this call; the main text focuses entirely on cognitive science questions in the modeling of emotion, while expression is buried in a topic bullet: “The use of computational models or methods to evoke emotion in human subjects”. This is not to say that cognitive scientists and Expressive AI researchers shouldn’t talk to each other, just that the research goals of Expressive AI shouldn’t be reduced to an application of the research results of cognitive science.

While we’re on the topic of emotion modeling, it’s interesting that all of the emotion models I’m familiar with ultimately represent emotion as a vector of floating point numbers, where each element of the vector corresponds to a unique emotion, and its value corresponds to how much the agent is feeling that emotion (e.g. the agent is angry with degree 4.2 and sad with degree 3.1). While there are obviously many details regarding how you update these values, how they decay over time, and how actions taken by the agent are conditional on these values, emotion is still treated as a collection of simple, local internal state that can be used to conditionalize decision making. I’m interested in architectural conceptions of emotion that would take a much more organic, holistic view of emotional state, not reducing it to a state vector isolated within a special “emotion” box. Aaron Sloman has written extensively on architectural models of emotion, and has argued for just such an organic notion of affect, though how you actually implement such an architecture remains unclear.
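The vector-of-floats approach described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not a reconstruction of any particular system: the emotion names, decay rate, and action thresholds are all invented for the example.

```python
from dataclasses import dataclass, field

DECAY = 0.9  # per-tick multiplicative decay toward a neutral baseline (assumed)

@dataclass
class EmotionState:
    # The "emotion box": one float per discrete emotion label.
    values: dict = field(default_factory=lambda: {
        "anger": 0.0, "sadness": 0.0, "joy": 0.0,
    })

    def appraise(self, emotion: str, intensity: float) -> None:
        """An appraised event bumps one component of the vector."""
        self.values[emotion] += intensity

    def tick(self) -> None:
        """Intensities decay over time."""
        for e in self.values:
            self.values[e] *= DECAY

    def choose_action(self) -> str:
        """Decision making is conditionalized on the local state vector."""
        if self.values["anger"] > 3.0:
            return "shout"
        if self.values["sadness"] > 3.0:
            return "sulk"
        return "idle"

agent = EmotionState()
agent.appraise("anger", 4.2)   # "angry with degree 4.2"
agent.appraise("sadness", 3.1)  # "sad with degree 3.1"
print(agent.choose_action())    # anger dominates, so the agent shouts
```

Everything interesting about the agent's affect lives in that small dictionary, which is exactly the reduction the paragraph above objects to: the rest of the architecture only ever sees the vector, never the organic interplay of processes that a more holistic model would expose.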

5 Responses to “Emotions for believable agents”

  1. Nora P. Says:

    It seems to me that the ultimate aims of Expressive AI are far more clear and practicable than “Modeling the Cognitive Antecedents and Consequences of Emotion.” Applying these models towards the creation of believable agents is eventually going to require a framework geared toward the experience of the user — where the “believability” actually enters the picture. Developing cognitive models may advance this effort, but there is no way to directly apply these models so that – poof! – believability and expressivity are achieved. The Expressive AI approach seems to be a necessary bridge between a conceptual model intended to achieve believability and the realization of an artist-conceived agent actually capable of “capturing” believability.

  2. Aubrey Says:

    …and the question of whether it’d be ludically useful also still remains. If the emotional machinations operate without perceivable consequence, you’re going to get both the uncanny valley (can’t quite tell why these emotional reactions feel wrong) and I-can’t-tell-what-the-hell-is-going-on syndrome. If emotional models have no perceivable consequence, they’ll cease to be useful manipulable systems to the player, and start to become black boxes of chaos.

    I suppose that if we do not consider natural but clear ways of conveying emotional states and behaviours, it hardly matters how complex or simplistic an emotional model is. Our model will always be an abstraction, and for it to be useful, we must be able to clearly convey how our somewhat arbitrary model works, hopefully in elegant ways which are remotely believable (even if they’re total caricatures of emotion).

    I know I’m stating the obvious here, but I guess I think that there might be a third track to our studies of emotional models: not just Cog Sci and Expressive AI, but the desires which Ludic systems (sorry, GAMES) have of Expressive AI, too… unless that’s already contained under the banner of Expressive AI?

  3. Jonathan Gratch Says:

    As someone who is interested in both the expressive and cognitive function of emotion, I too am skeptical that we can “simply” map a cognitive model to physical behavior and expect “valid” results. Beyond the tension between cognitive models and external behavior, I would like to emphasize the tension between “realism” (meaning the study of naturalistic human cognitive and physical behavior) and the concept of “believability,” which may be only indirectly related, at best.

    In terms of this latter distinction, it is clear that many of the tools animators, actors, and believable agent practitioners use are outside the realm of naturalistic human behavior. Manipulations that impact “believability” include notions such as “hyper-reality” and the effects of lighting and music. These are clearly relevant topics in cognitive and social psychology, and their use may be an important channel into understanding aspects of human cognitive and social behavior, but they should not be equated with human cognitive and social behavior (although this tension is rarely explicitly acknowledged in agent work, and often conceptually muddled).

    In terms of the former distinction, we should also be skeptical of simply mapping cognitive models to physical behavior as current AI models are poor proxies for human cognition. If the goal is to build (generative) models of human behavior, then simply binding models to physical behavior will certainly fail to produce realistic behavior, but these failures can be interesting and inform the construction of better models.

    My sense is that most venues about “computational emotion” have emphasized expression and believability, and so this is my attempt to emphasize the cognitive role of emotion and build connections to cognitive emotion psychology. Thus, the particular focus of the ACE workshop is on functionalism and realism, in particular with respect to cognition, and not believability. Expression of emotion is quite naturally buried in the call; in this setting, the concern is to use computational methods to induce emotion in human subjects as a methodological tool towards furthering this goal.

    Finally, turning to the topic of emotion modeling, I agree that most models emphasize a vector approach, which seems a superficial and sterile approach for any application that emphasizes cognitive realism in an interactive setting. Rather, what is interesting in emotion research is the machinery that underlies emotional reactions and the way these mechanisms change in response to processing that observers refer to as emotional. Discrete emotion labels such as “Joy” and “Fear” are likely linguistic terms that poorly relate to underlying architectural mechanisms, and overemphasizing them can lead to overly simplistic architectural commitments. As Michael mentions, Sloman and his group have emphasized this view for years. A few more recent approaches have this flavor, including the work of Stacy Marsella and myself. I would argue that such “deep” models will be essential for richly interactive believable agents as well, but that is still an empirical question.

  4. » Blog Archive » Soul Charts Says:

    [...] r Development for Gaming and AI? Don’t ask any writers. Just make a chart of emotions. Grand Text Auto » Emotions for believable agents [...]

  5. michael Says:

    Just a reminder that the submission deadline for this workshop, Nov. 30, is approaching.
