Lecture Notes in Artificial Intelligence 2792
Edited by J. G. Carbonell and J. Siekmann
Subseries of Lecture Notes in Computer Science

Berlin Heidelberg New York Hong Kong London Milan Paris Tokyo
Thomas Rist Ruth Aylett Daniel Ballin Jeff Rickel (Eds.)
Intelligent Virtual Agents
4th International Workshop, IVA 2003
Kloster Irsee, Germany, September 15-17, 2003
Proceedings
Series Editors
Jaime G. Carbonell, Carnegie Mellon University, Pittsburgh, PA, USA
Jörg Siekmann, University of Saarland, Saarbrücken, Germany

Volume Editors
Thomas Rist
DFKI GmbH
Stuhlsatzenhausweg 3, 66111 Saarbrücken, Germany
E-mail: [email protected]

Ruth Aylett
University of Salford, Centre for Virtual Environments
Business House, Salford, M5 4WT, UK
E-mail: [email protected]

Daniel Ballin
Radical Multimedia Lab, BT Exact
Ross PP4 Adastral Park, Ipswich, IP5 3RE, UK
E-mail: [email protected]

Jeff Rickel
USC Information Sciences Institute
4676 Admiralty Way, Suite 1001, Marina del Rey, USA
E-mail: [email protected]

Cataloging-in-Publication Data applied for
A catalog record for this book is available from the Library of Congress.
Bibliographic information published by Die Deutsche Bibliothek: Die Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data is available on the Internet.

CR Subject Classification (1998): I.2.11, I.2, H.5, H.4, K.3
ISSN 0302-9743
ISBN 3-540-20003-7 Springer-Verlag Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.

Springer-Verlag Berlin Heidelberg New York, a member of BertelsmannSpringer Science+Business Media GmbH
http://www.springer.de
© Springer-Verlag Berlin Heidelberg 2003
Printed in Germany

Typesetting: Camera-ready by author, data conversion by Boller Mediendesign
Printed on acid-free paper
SPIN: 10931851 06/3142 543210
Preface
This volume, containing the proceedings of IVA 2003, held at Kloster Irsee, Germany, September 15–17, 2003, is testimony to the growing importance of Intelligent Virtual Agents (IVAs) as a research field. We received 67 submissions, nearly twice as many as for IVA 2001, not only from European countries, but also from China, Japan, and Korea, and from both North and South America.

As IVA research develops, a growing number of application areas and platforms are also being researched. Interface agents are used as part of larger applications, often on the Web. Education applications draw on virtual actors and virtual drama, while the advent of 3D mobile computing and the convergence of telephones and PDAs produce geographically-aware guides and mobile entertainment applications.

A theme that will be apparent in a number of the papers in this volume is the impact of embodiment on IVA research – a characteristic differentiating it to some extent from the larger field of software agents. Believability is a major research concern, and expressiveness – facial, gestural, postural, vocal – a growing research area. Both the modeling of IVA emotional systems and the development of IVA narrative frameworks are represented in this volume.

A characteristic of IVA research is its interdisciplinarity, involving Artificial Intelligence (AI) and Artificial Life (ALife), Human-Computer Interaction (HCI), Graphics, Psychology, and Software Engineering, among other disciplines. All of these areas are represented in the papers collected here. The purpose of the IVA workshop series is to bring researchers from all the relevant disciplines together and to help in the building of a common vocabulary and a shared research domain. While trying to attract the best work in IVA research, the aim is inclusiveness and the stimulation of dialogue and debate. The larger this event grows, the larger the number of people whose efforts contribute to its success.
First, of course, we must thank the authors themselves, whose willingness to share their work and ideas makes it all possible. Next, we thank the Program Committee, comprising the editors and 49 distinguished researchers, who worked so hard to tight deadlines to select the best work for presentation, and the additional reviewers who helped to deal with the large number of submissions. The local arrangements committee played a vital role in the smooth running of the event. Finally, all those who attended made the event more than the sum of its presentations, with all the discussion and interaction that makes a workshop come to life.
July 2003
Ruth Aylett Daniel Ballin Jeff Rickel Thomas Rist
To Our Friend and Colleague Jeff
Committee Listings
Conference Chairs
Ruth Aylett (Centre for Virtual Environments, University of Salford, UK)
Daniel Ballin (Radical Multimedia Laboratory, BTexact, UK)
Jeff Rickel (USC Information Sciences Institute, USA)
Local Conference Chair
Thomas Rist (DFKI, D)
Organizing Committee
Elisabeth André (University of Augsburg, D)
Patrick Gebhard (DFKI, D)
Marco Gillies (UCL@Adastral Park, University College London/BTexact, UK)
Jesús Ibáñez (Universitat Pompeu Fabra, E)
Martin Klesen (DFKI, D)
Matthias Rehm (University of Augsburg, D)
Jon Sutton (Radical Multimedia Laboratory, BTexact, UK)
Invited Speakers
Stacy Marsella (USC Information Sciences Institute)
Antonio Krüger (University of Saarland)
Alexander Reinecke (Charamel GmbH)
Marc Cavazza (Teesside University)
Program Committee
Jan Albeck
Elisabeth André
Yasmine Arafa
Ruth Aylett
Norman Badler
Daniel Ballin
Josep Blat
Bruce Blumberg
Joanna Bryson
Lola Cañamero
Justine Cassell
Marc Cavazza
Elizabeth Churchill
Barry Crabtree
Kerstin Dautenhahn
Angélica de Antonio
Nadja de Carolis
Fiorella de Rosis
Patrick Doyle
Marco Gillies
Patrick Gebhard
Jonathan Gratch
Barbara Hayes-Roth
Randy Hill
Adrian Hilton
Kristina Höök
Katherine Isbister
Mitsuru Ishizuka
Ido Iurgel
Lewis Johnson
Martin Klesen
Jarmo Laaksolahti
John Laird
James Lester
Craig Lindley
Brian Loyall
Yang Lu
Nadia Magnenat-Thalmann
Andrew Marriott
Stacy Marsella
Michael Mateas
Alexander Nareyek
Anton Nijholt
Gregory O'Hare
Sharon Oviatt
Ana Paiva
Catherine Pelachaud
Paolo Petta
Tony Polichroniadis
Helmut Prendinger
Thomas Rist
Matthias Rehm
Jeff Rickel
Daniela Romano
Anthony Steed
Emmanuel Tanguy
Daniel Thalmann
Kris Thorisson
Demetri Terzopoulos
Hannes Vilhjálmsson
John Vince
Michael Young
Sponsoring Institutions
EU 5th Framework VICTEC Project
SIGMEDIA, ACL's Special Interest Group on Multimedia Language Processing
DFKI, German Research Center for Artificial Intelligence GmbH
BTexact Technologies
University of Augsburg, Dept. of Multimedia Concepts and Applications
Table of Contents
Keynote Speech

Interactive Pedagogical Drama: Carmen's Bright IDEAS Assessed . . . 1
S.C. Marsella
Interface Agents and Conversational Agents

Happy Chatbot, Happy User . . . 5
G. Tatai, A. Csordás, Á. Kiss, A. Szaló, L. Laufer

Interactive Agents Learning Their Environment . . . 13
M. Hildebrand, A. Eliëns, Z. Huang, C. Visser

Socialite in der Spittelberg: Incorporating Animated Conversation into a Web-Based Community-Building Tool . . . 18
B. Krenn, B. Neumayr

FlurMax: An Interactive Virtual Agent for Entertaining Visitors in a Hallway . . . 23
B. Jung, S. Kopp

When H.C. Andersen Is Not Talking Back . . . 27
N.O. Bernsen
Emotion and Believability

Emotion in Intelligent Virtual Agents: The Flow Model of Emotion . . . 31
L. Morgado, G. Gaspar

The Social Credit Assignment Problem . . . 39
W. Mao, J. Gratch

Adding the Emotional Dimension to Scripting Character Dialogues . . . 48
P. Gebhard, M. Kipp, M. Klesen, T. Rist

Synthetic Emotension . . . 57
C. Martinho, M. Gomes, A. Paiva

FantasyA – The Duel of Emotions . . . 62
R. Prada, M. Vala, A. Paiva, K. Höök, A. Bullock

Double Bind Situations in Man-Machine Interaction under Contexts of Mental Therapy . . . 67
T. Nomura
Expressive Animation

Happy Characters Don't Feel Well in Sad Bodies! . . . 72
M. Vala, A. Paiva, M.R. Gomes

Reusable Gestures for Interactive Web Agents . . . 80
Z. Ruttkay, Z. Huang, A. Eliëns

A Model of Interpersonal Attitude and Posture Generation . . . 88
M. Gillies, D. Ballin

Modelling Gaze Behaviour for Conversational Agents . . . 93
C. Pelachaud, M. Bilvi

A Layered Dynamic Emotion Representation for the Creation of Complex Facial Expressions . . . 101
E. Tanguy, P. Willis, J. Bryson

Eye-Contact Based Communication Protocol in Human-Agent Interaction . . . 106
H. Nonaka, M. Kurihara
Embodiment and Situatedness

Embodied in a Look: Bridging the Gap between Humans and Avatars . . . 111
N. Courty, G. Breton, D. Pelé

Modelling Accessibility of Embodied Agents for Multi-modal Dialogue in Complex Virtual Worlds . . . 119
D. Sampath, J. Rickel

Bridging the Gap between Language and Action . . . 127
T. Takenobu, K. Tomofumi, S. Suguru, O. Manabu

VideoDIMs as a Framework for Digital Immortality Applications . . . 136
D. DeGroot
Motion Planning

Motion Path Synthesis for Intelligent Avatar . . . 141
F. Liu, R. Liang

"Is It Within My Reach?" – An Agent's Perspective . . . 150
Z. Huang, A. Eliëns, C. Visser

Simulating Virtual Humans Across Diverse Situations . . . 159
B. Mac Namee, S. Dobbyn, P. Cunningham, C. O'Sullivan

A Model for Generating and Animating Groups of Virtual Agents . . . 164
M. Becker Villamil, S. Raupp Musse, L.P. Luna de Oliveira
Scripting Choreographies . . . 170
S.M. Grünvogel, S. Schwichtenberg

Behavioural Animation of Autonomous Virtual Agents Helped by Reinforcement Learning . . . 175
T. Conde, W. Tambellini, D. Thalmann
Models, Architectures, and Tools

Designing Commercial Applications with Life-like Characters . . . 181
A. Reinecke

Comparing Different Control Architectures for Autobiographic Agents in Static Virtual Environments . . . 182
W.C. Ho, K. Dautenhahn, C.L. Nehaniv

KGBot: A BDI Agent Deploying within a Complex 3D Virtual Environment . . . 192
I.-C. Kim

Using the BDI Architecture to Produce Autonomous Characters in Virtual Worlds . . . 197
J.A. Torres, L.P. Nedel, R.H. Bordini

Programmable Agent Perception in Intelligent Virtual Environments . . . 202
S. Vosinakis, T. Panayiotopoulos

Mediating Action and Music with Augmented Grammars . . . 207
P. Casella, A. Paiva

Charisma Cam: A Prototype of an Intelligent Digital Sensory Organ for Virtual Humans . . . 212
M. Bechinie, K. Grammer
Mobile and Portable IVAs

Life-like Characters for the Personal Exploration of Active Cultural Heritage . . . 217
A. Krüger

Agent Chameleons: Virtual Agents Real Intelligence . . . 218
G.M.P. O'Hare, B.R. Duffy, B. Schön, A.N. Martin, J.F. Bradley

A Scripting Language for Multimodal Presentation on Mobile Phones . . . 226
S. Saeyor, S. Mukherjee, K. Uchiyama, M. Ishizuka
Narration and Storytelling

Interacting with Virtual Agents in Mixed Reality Interactive Storytelling . . . 231
M. Cavazza, O. Martin, F. Charles, S.J. Mead, X. Marichal

An Autonomous Real-Time Camera Agent for Interactive Narratives and Games . . . 236
A. Hornung, G. Lakemeyer, G. Trogemann

Solving the Narrative Paradox in VEs – Lessons from RPGs . . . 244
S. Louchart, R. Aylett

That's My Point! Telling Stories from a Virtual Guide Perspective . . . 249
J. Ibáñez, R. Aylett, R. Ruiz-Rodarte

Virtual Actors in Interactivated Storytelling . . . 254
I.A. Iurgel

Symbolic Acting in a Virtual Narrative Environment . . . 259
L. Schäfer, B. Bokan, A. Oldroyd

Enhancing Believability Using Affective Cinematography . . . 264
J. Laaksolahti, N. Bergmark, E. Hedlund

Agents with No Aims: Motivation-Driven Continuous Planning . . . 269
N. Avradinis, R. Aylett
Evaluation and Design Methodologies

Analysis of Virtual Agent Communities by Means of AI Techniques and Visualization . . . 274
D. Kadleček, D. Řehoř, P. Nahodil, P. Slavík

Persona Effect Revisited . . . 283
H. Prendinger, S. Mayer, J. Mori, M. Ishizuka

Effects of Embodied Interface Agents and Their Gestural Activity . . . 292
N.C. Krämer, B. Tietz, G. Bente

Embodiment and Interaction Guidelines for Designing Credible, Trustworthy Embodied Conversational Agents . . . 301
A.J. Cowell, K.M. Stanney

Animated Characters in Bullying Intervention . . . 310
S. Woods, L. Hall, D. Sobral, K. Dautenhahn, D. Wolke

Embodied Conversational Agents: Effects on Memory Performance and Anthropomorphisation . . . 315
R.-J. Beun, E. de Vos, C. Witteman
Agents across Cultures . . . 320
S. Payr, R. Trappl
Education and Training

Steve Meets Jack: The Integration of an Intelligent Tutor and a Virtual Environment with Planning Capabilities . . . 325
G. Méndez, J. Rickel, A. de Antonio

Machiavellian Characters and the Edutainment Paradox . . . 333
D. Sobral, I. Machado, A. Paiva

Socially Intelligent Tutor Agents . . . 341
D. Heylen, A. Nijholt, R. op den Akker, M. Vissers

Multimodal Training Between Agents . . . 348
M. Rehm
Posters

Intelligent Camera Direction in Virtual Storytelling . . . 354
B. Bokan, L. Schäfer

Exploring an Agent-Driven 3D Learning Environment for Computer Graphics Education . . . 355
W. Hu, J. Zhu, Z.G. Pan

An Efficient Synthetic Vision System for 3D Multi-character Systems . . . 356
M. Lozano, R. Lucia, F. Barber, F. Grimaldo, A. Lucas, A. Fornes

Avatar Arena: Virtual Group-Dynamics in Multi-character Negotiation Scenarios . . . 358
M. Schmitt, T. Rist

Emotional Behaviour Animation of Virtual Humans in Intelligent Virtual Environments . . . 359
Z. Liu, Z.G. Pan

Empathic Virtual Agents . . . 360
C. Zoll, S. Enz, H. Schaub

Improving Reinforcement Learning Algorithm Using Emotions in a Multi-agent System . . . 361
R. Daneshvar, C. Lucas
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
Interactive Pedagogical Drama: Carmen's Bright IDEAS Assessed

Stacy C. Marsella

Center for Advanced Research in Technology for Education
Information Sciences Institute, University of Southern California
4676 Admiralty Way, Suite 1001, Marina del Rey, California 90292, USA
[email protected]
1 Extended Abstract
The use of drama as a pedagogical tool has a long tradition. Aristotle argued that drama is an imitation of life, and that not only do we learn through that imitation, but our enjoyment of drama derives in part from our delight in learning. More recently, research in psychology has argued that narrative is central to how we understand the world and communicate that understanding[1]. And of course, the engaging, motivational nature of story is undeniable; the world consumes stories with a “ravenous hunger”[3]. However, stories traditionally place the learner in the role of passive spectator instead of active learner.

The goal of Interactive Pedagogical Drama (IPD) is to exploit the edifying power of story while promoting active learning. An IPD immerses the learner in an engaging, evocative story where she interacts openly with realistic characters. The learner makes decisions or takes actions on behalf of a character in the story, and sees the consequences of her decisions. The learner identifies with and assumes responsibility for the characters in the story, while the control afforded to the learner enhances intrinsic motivation[2]. Since the IPD framework allows for stories with multiple interacting characters, learning can be embedded in a social context[6].

We take a very wide view of the potential applications of interactive story, and IPD in particular. We envision interactive story as a means to teach social skills, to teach math and science, to further individual development, to provide health interventions, etc.

We have developed an agent-based approach to interactive pedagogical drama. Our first IPD was Carmen's Bright IDEAS (CBI), an interactive, animated health intervention designed to improve the social problem-solving skills of mothers of pediatric cancer patients. Parents of children with chronic diseases are often poorly equipped to handle the multiple demands required by their ill child as well as the needs of their healthy children, spouse, and work.
Critical decisions must be made that affect family and work. To help train parents in the problem-solving skills required to address such challenges, CBI teaches a method for social problem-solving called Bright IDEAS[5]. Each letter of IDEAS refers to a separate step in the problem solving method: Identify a solvable problem,
Develop possible solutions, Evaluate options, Act on plan, and See if it worked. Prior to CBI, the Bright IDEAS method was taught in a series of one-on-one sessions with trained counselors, using worksheets that helped a mother detail her problems in terms of IDEAS steps. The purpose of Carmen's Bright IDEAS is to teach mothers how to apply the Bright IDEAS method in concrete situations. Mothers learn more on their own and at times of their own choosing, and rely less on face-to-face counseling sessions.

The interactive story of Carmen's Bright IDEAS is organized into three acts. The first act reveals the back story: the various problems Carmen is facing, including her son's cancer, her daughter Diana's temper tantrums, work problems, etc. The second, main act takes place in an office, where Carmen discusses her problems with a clinical counselor, Gina, who suggests she pick a solvable problem and use Bright IDEAS to help her find solutions (see Figure 1). With Gina's help, Carmen goes through the initial steps of Bright IDEAS, applying the steps to one of her problems, and then completes the remaining steps on her own. The final act reveals the outcomes of Carmen's application of Bright IDEAS. The learner interacts with the drama by making choices for Carmen, such as what problem to work on and how she should cope with the stresses she is facing. The learner can choose alternative internal thoughts for Carmen; these are presented as thought balloons (see Figure 2). Both Gina's dialog moves and the learner's choices influence the cognitive and emotional state of the agent playing Carmen, which in turn impacts her behavior and dialog.
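As a rough illustration of this influence path, the sketch below shows one way a learner's thought-balloon choice could update a character's cognitive-emotional state and, through it, her outward behavior. All names, state variables, numeric values, and update rules here are invented for illustration; they are not the actual CBI model.

```python
# Hypothetical sketch: a thought-balloon choice updates Carmen's state,
# and the updated state is mapped to an outward behaviour for the agent.
CARMEN = {"anxiety": 0.8, "confidence": 0.2}

# Assumed effects of each thought-balloon option on Carmen's state.
THOUGHT_EFFECTS = {
    "I can't cope with all of this": {"anxiety": +0.1, "confidence": -0.1},
    "Maybe I can work on Diana's tantrums": {"anxiety": -0.1, "confidence": +0.2},
}

def choose_thought(state, thought):
    """Apply the chosen thought's effects, clamping each value to [0, 1]."""
    for key, delta in THOUGHT_EFFECTS[thought].items():
        state[key] = min(1.0, max(0.0, state[key] + delta))
    return state

def behaviour(state):
    """Map the cognitive-emotional state to the character's outward behaviour."""
    if state["anxiety"] > state["confidence"]:
        return "slumped posture, hesitant speech"
    return "upright posture, engaged speech"

choose_thought(CARMEN, "Maybe I can work on Diana's tantrums")
```

A single constructive choice nudges the state but need not flip the behaviour outright, which is one way repeated learner decisions could accumulate into visible change across the acts.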
Fig. 1. Gina (left) and Carmen (right) in Gina’s office.
Fig. 2. Interaction with Carmen through Thought Balloons.
In general, creating an IPD requires the designer to balance the demands of creating a good story, achieving pedagogical goals, and allowing user control, while maintaining high artistic standards. To ensure a good story, dramatic tension, pacing, and the integrity of story and character must be maintained. Pedagogical goals require the design of a pedagogically appropriate "gaming" space with appropriate consequences for learner choices, scaffolding to help the learner when necessary, and a style of play appropriate to the learner's skill and age. To provide for learner control, an interaction framework must be developed to allow the learner's interactions to impact the story and the pedagogical goals. These various demands can be in conflict; for example, pedagogically appropriate consequences can conflict with dramatic tension, and learner control can impact pacing and story integrity. As the difficult subject matter and pedagogical goals of Carmen's Bright IDEAS make clear, all these design choices must be sensitive to the learner and their needs.

An early version of the CBI system was first described in [4]. It was subsequently further developed and then tested as an exploratory arm of a clinical trial of the Bright IDEAS method at seven cancer centers across the U.S. Results of the exploratory trial have recently become available. The results overall were very positive and promising for the use of IPD in health interventions. For example, the mothers found the experience very believable and helpful in understanding how to apply Bright IDEAS to their own problems. This talk will reveal the rationale behind the design choices made in creating CBI, describe the technology that was used in the version that went into clinical
trials, as well as discuss in detail results from its evaluation. Our more recent research applying IPD to language learning will also be discussed. Although the expectation that a system like CBI could substitute for time spent with trained clinical counselors teaching Bright IDEAS is bold, the reality is that the alternative of repeated one-on-one sessions with counselors is not feasible for reaching a larger audience. Interactive Pedagogical Drama could fill a void by making effective health intervention training available to the larger public at their convenience. The training task for Carmen's Bright IDEAS was a difficult one, fraught with many potential pitfalls. The fact that it was so well received by the mothers is remarkable, and bodes well for applying IPD to other training and learning tasks.

Acknowledgement. I would like to thank my colleagues on the Carmen project, W. Lewis Johnson and Catherine M. LaBore, as well as our clinical collaborators, particularly O.J. Sahler, MD, Ernest Katz, Ph.D., James Varni, Ph.D., and Karin Hart, Psy.D. Supported in part by the National Cancer Institute under grant R25CA65520.
References

1. Bruner, J. (1990). Acts of Meaning. Harvard University Press, Cambridge, MA.
2. Lepper, M.R. and Henderlong, J. (2000). Turning play into work and work into play: 25 years of research in intrinsic versus extrinsic motivation. In Sansone and Harackiewicz (Eds.), Intrinsic and Extrinsic Motivation: The Search for Optimal Motivation and Performance, 257–307. San Diego: Academic Press.
3. McKee, R. (1997). Story. Harper Collins, New York, NY.
4. Marsella, S., Johnson, W.L. and LaBore, C. (2000). Interactive Pedagogical Drama. In Proceedings of the Fourth International Conference on Autonomous Agents, 301–308.
5. Varni, J.W., Sahler, O.J., Katz, E.R., Mulhern, R.K., Copeland, D.R., Noll, R.B., Phipps, S., Dolgin, M.J., and Roghmann, K. (1999). Maternal problem-solving therapy in pediatric cancer. Journal of Psychosocial Oncology, 16, 41–71.
6. Vygotsky, L. (1978). Mind in Society: The Development of Higher Psychological Processes. (M. Cole, V. John-Steiner, S. Scribner and E. Souberman, Eds. and Trans.). Cambridge, England: Cambridge University Press.
Synthetic Emotension
Building Believability

Carlos Martinho, Mário Gomes, and Ana Paiva

Instituto Superior Técnico, Taguspark Campus
Avenida Prof. Cavaco Silva, TagusPark, 2780-990 Porto Salvo, Portugal
{carlos.martinho,mario.gomes,ana.paiva}@dei.ist.utl.pt
Emotension: concatenation of the words emotion, attention, and tension, expressing the attentional and emotional predisposition towards an action, as well as the cognitive “tension” sustained during this action.
Abstract. We present our first steps towards a framework aimed at increasing the believability of synthetic characters through attentional and emotional control. The framework is based on the hypothesis that the agent's mind works as a multi-layered natural evolution system, regulated by bio-digital mechanisms, such as synthetic emotions and synthetic attention, that qualitatively regulate the mind's endless evolution. Building on this assumption, we are developing a semi-autonomous module extending the sensor-effector agent architecture, handling the primary emotensional aspects of the agent's behavior and thus providing the necessary elements to enrich its believability.
1 Introduction
Believability is a subjective yet critical concept to account for when creating and developing synthetic beings. Synthetic characters are a proven medium to enhance and enrich the interaction between the user and the machine, be it from the usability point of view or from the entertainment point of view. When focusing on the machine-to-user side of the interaction, the believability of the intervening artificial life forms plays an important role in determining the quality of the interaction. By believable character, we mean a digital being that "acts in character, and allows the suspension of disbelief of the viewer"[1].

Disney's concept of awareness[2] provides useful guidelines for building believable characters, namely by specifying the function of attention and emotions in building believability. Now more than 80 years old, Disney's approach to creating the illusion of life nonetheless remains relevant. Although generally interpreted as "display the internal state of the character to the viewer", the concept of awareness suggests more: that expression should be consistent with the surrounding environment, especially in terms of the attentional and emotional reactions of the intervening characters.

Consider the following example. Nita stands inside a room when Emy (the main character) enters. Nita should respond by looking at
Emy - implicitly, Nita's attention focus is on Emy. Furthermore, Nita should express an emotional reaction, perceived as caused by Emy, even if indirectly. The same behavioral principle should be applied to all intervening characters, including Emy. The richness of the reactive response varies according to the character's importance, but it should always be present, as this behavior loop increases the believability of the main character.

Taking the concept of awareness one step further, this work investigates which mechanisms are suited to controlling both the focus of attention and the emotional reactions of a synthetic character, in order to increase its believability. Furthermore, it will assess whether such control can be performed on a semi-autonomous basis, that is, with a certain independence from the main processing of the agent. This would allow us to extend the base agent architecture1 with a module designed to provide support for believability in synthetic character creation.

The document is organized as follows. The next section, "Architecture", presents the architecture extension being researched and exemplifies the role of the emotensional module. "Emotension" then discusses the roles of evolution, attention, and emotion in the architecture. Finally, "Results" and "Conclusions" discuss our preliminary experiments and results.
2 Architecture
The emotensional module architecture is based on the hypothesis that the agent’s mind is a natural evolution system of perceptions, regulated by emotension.
Fig. 1. Agent Architecture

The module is composed of two columns in which perceptions evolve (Fig. 1): the sensor column, which intercepts the data flow coming from the sensors, and the effector column, which regulates the data flow entering the effector module. (The choice of the term "column" will become clear in the next section.) The intercepted data is used to update the current emotensional state of the agent. At all times, the current emotensional state is available to the processing module. The data flow between the base agent modules may be altered or induced by the emotensional module, transparently simulating sensor or effector information. The emotensional module thus transparently provides the agent with the building blocks of its believability.

Let us go back to our example and analyse it at the architecture level. When Emy enters the room where Nita is standing, Nita's sensor column appraises the stimulus as highly relevant, since it is unexpected (involuntary attention), and raises a set of emotional memories associated with Emy in Nita's mind (somatic markers), pulled from long-term memory (by similarity with the current stimulus). Due to the nature of these memories, Nita becomes afraid. Regulated by Nita's emotensional state, the effector column starts evolving fear-oriented action tendencies which are fed to Nita's effectors, disclosing her inner state to the viewer. Nita becomes restless. Meanwhile, all emotensional stimuli continue their natural evolution process in the sensor column. Suddenly, and as a result of this evolution process, the memory of an unhappy episode with Emy and a glove pops up in Nita's mind. Nita's attention is now on the beige glove left on the table near her (involuntary attention provoked by the recalled event), although the one she remembers from the story was brown. She cannot take her eyes off the glove (a strong emotional reaction which temporarily floods the evolution process in the sensor column). The same type of processing happens in Emy's emotensional module. She entered the room searching for her missing glove (voluntary attention) and suddenly noticed that Nita was near (involuntary attention).

1 Russell and Norvig's definition of an agent: "anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors".
Remembering the same past experience (somatic marker), she leaves the room laughing out loud (gloating behavior evolved in the effector column) while looking at Nita (focus of attention), paler than ever. The main point is that everything happened transparently. From the processing module point of view, Emy entered the room, picked her glove and left. Nothing more.
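The transparent interception described above can be sketched as a thin layer wrapping the base agent’s sensor-to-processing and processing-to-effector flows. The class and method names below, and the dictionary representation of percepts and actions, are illustrative assumptions, not the authors’ implementation:

```python
class EmotensionalModule:
    """Minimal sketch of the emotensional module as a transparent layer
    between the base agent's sensors, processing module, and effectors.
    Class and method names are illustrative assumptions, not the
    authors' implementation."""

    def __init__(self):
        # Current emotensional state: stimulus label -> relevance/intensity.
        self.state = {}

    def appraise(self, percept):
        # Placeholder appraisal: unexpected or actively searched-for
        # percepts are relevant (involuntary vs. voluntary attention).
        return percept.get("unexpected", 0.0) + percept.get("searched_for", 0.0)

    def sensor_column(self, percepts):
        # Intercept incoming percepts, update the emotensional state,
        # and pass the percepts on to the processing module unchanged.
        for p in percepts:
            relevance = self.appraise(p)
            if relevance > 0.0:
                self.state[p["label"]] = max(self.state.get(p["label"], 0.0),
                                             relevance)
        return percepts

    def effector_column(self, actions):
        # Regulate outgoing actions by appending action tendencies
        # derived from the current emotensional state.
        tendencies = [{"label": label, "intensity": intensity}
                      for label, intensity in self.state.items()]
        return actions + tendencies


# The processing module is untouched: the module wraps sensing and acting.
module = EmotensionalModule()
percepts = module.sensor_column([{"label": "emy", "unexpected": 0.9}])
actions = module.effector_column([{"label": "pick_glove", "intensity": 1.0}])
```

The key design point the sketch tries to capture is that the processing module neither sees nor controls the emotensional state; believability emerges from the interception alone.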
3 Emotension
In brief, the emotension module works as follows: the sensor column receives the perceptions and updates the emotensional state, while the effector column generates action tendencies based on the current emotensional state. The sensor column receives the agent’s internal and external perceptions and appraises their relevance using a process inspired by the psychology of human attention and emotion. As current research shows [3], the two concepts are interrelated. In our work, the notion of expectation brings them together. A signal is considered relevant when it is unexpected according to the extrapolations made from past observations, or when it is actively searched for (this information is provided by the processing module). So inner drives which suddenly change, objects which suddenly pop up, or objects that are being searched for will all have high relevance. This approach is consistent with the voluntary–involuntary control dichotomy based on Posner’s and Muller’s theories of attention [4] as well
as with the notion of emotion as an interruption or warning mechanism [5]. It is also consistent with the developmental theories of emotion from psychology [5] which, following the seminal works of Watson (1929) and Bridges (1932), consider emotions to be a relevant selection mechanism, as well as an adaptive mechanism controlling both the individual’s behavior and the control process itself. Relevance is presently calculated mathematically, using polynomial extrapolation; however, we aim to develop an evolution system to evaluate relevance.

Unexpected changes in inner drives generate primary emotions. For instance, while experiencing a “sugar need” and sensing the need increasing (as energy is spent on movement), we suddenly notice a fall (as energy is replenished after eating a candy): a relief emotion is launched. By monitoring a drive over time, we predict its next state. If the prediction is far from the value read from the drive, an emotion is raised. The greater the error, the higher the intensity. The type and valence of the emotion depend on the previous state, the expected state, and the new state. Note that by mapping the concept of emotion onto the concept of drive, a unidimensional value with a resting (desired) state, we achieve a certain semantic independence from the agent’s body.

The external signal then enters the sensor memory (Fig. 2) associated with the calculated relevance, and joins the previously evolved perceptions. The evolution happening inside the column is implemented as a multi-layered cellular automaton. Each plateau is a two-dimensional wrapped hexagonal cellular automaton where perceptions are copied, crossed, mutated, and decay. Perceptions also move between plateaux according to specific rules, allowing meaningful perceptions to be recorded in memory or recalled when emotensionally significant². All signals decay, and each plateau has a different decay rate.
When a cell becomes empty, it is replaced by a combination of its neighboring cells, with added noise. At all times, the most relevant signals in sensor memory define the primary emotensional state of the agent. This state is used to generate the agent’s action tendencies. Although the effector column is currently limited to a random action based on the current emotensional state of the agent³, we aim to develop an action memory such as the effector column depicted in Fig. 2.
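The drive-monitoring mechanism can be sketched as follows. A degree-1 (linear) extrapolation stands in for the paper’s polynomial extrapolation, and the threshold and emotion labels are illustrative assumptions:

```python
def predict_next(history):
    """Extrapolate the next drive value from past observations.  A
    degree-1 (linear) extrapolation over the last two samples stands in
    for the paper's polynomial extrapolation (degree is an assumption)."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])


def appraise_drive(history, observed, threshold=0.1):
    """Raise a primary emotion when the observed drive value deviates
    from the prediction; intensity grows with the prediction error.
    The threshold and emotion labels are illustrative assumptions."""
    error = observed - predict_next(history)
    if abs(error) <= threshold:
        return None
    # Valence follows the direction of the surprise: a drive falling back
    # toward its resting state (e.g. after eating a candy) reads as relief.
    emotion = "relief" if error < 0 else "distress"
    return {"emotion": emotion, "intensity": abs(error)}


# A "sugar need" rising steadily, then suddenly falling after a candy:
result = appraise_drive([0.4, 0.5, 0.6], observed=0.2)
# -> a "relief" emotion with intensity ~0.5 (predicted ~0.7, observed 0.2)
```

Because the drive is a unidimensional value with a resting state, the same appraisal applies to any drive regardless of the agent’s body, which is the semantic independence noted above.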
4 Results
Our first results have been encouraging. Fig. 2 shows some snapshots of the simulations we are performing on the sensor column. Preliminary results point to:
– given time, the column can make recall associations, similar to the association of the beige and brown gloves in the previous example; this can potentially enrich idle character behavior in a believable way, at no additional cost;
– a certain immunity to sensor noise, and a certain randomness/diversity of directed behavior, due to the chaotic nature of the dynamic system;
– unfortunately, one parameter still has to be tuned: the stimulus decay rate.
² Currently, the signals are compared in terms of intensity with the short-term (working) memory and in terms of similarity with the long-term memory.
³ An action is selected in response to an ⟨emotion, object⟩ pair.
Fig. 2. Emotensional module (left) and sensor column prototype (right)
5 Conclusion
We have presented an architecture and briefly described the implementation of a module aimed at increasing the believability of synthetic characters. The main motivation is that, by providing automated control of a synthetic character’s emotensional reactions and following the guidelines of Disney’s Illusion of Life, we will increase its believability. The framework, based on the hypothesis that the agent mind works as a natural evolution system regulated by synthetic emotions and synthetic attention, provides the basis of a potentially robust, diverse, and generic way of handling the agent’s perceptions. The architecture, which extends the base sensor–effector agent architecture, allows believability to emerge without explicit control from the processing module. The multi-layer cellular automaton prototype implementation allowed us to assert the preliminary feasibility of these concepts.
References
[1] Bates, J.: The Role of Emotions in Believable Agents. Technical report, Carnegie Mellon University (1994)
[2] Thomas, F., Johnston, O.: The Illusion of Life. Hyperion Press (1994)
[3] Wells, A., Matthews, G.: Attention and Emotion – A Clinical Perspective. Psychology Press (1994)
[4] Styles, E.: The Psychology of Attention. Psychology Press (1995)
[5] Strongman, K.: The Psychology of Emotions. John Wiley and Sons, Ltd (1996)
A Model of Interpersonal Attitude and Posture Generation
Marco Gillies¹ and Daniel Ballin²
¹ UCL@Adastral Park, University College London, Adastral Park, Ipswich IP5 3RE, UK, [email protected], http://www.cs.ucl.ac.uk/staff/m.gillies
² Radical Multimedia Lab, BTexact, Adastral Park, Ipswich IP5 3RE, UK, [email protected]
Abstract. We present a model of interpersonal attitude used for generating expressive postures for computer-animated characters. Our model consists of two principal dimensions, affiliation and status. It takes into account the relationships between the attitudes of two characters and allows for a large degree of variation between characters, both in how they react to other characters’ behaviour and in the ways in which they express attitude.
Human bodies are highly expressive: a casual observation of a group of people will reveal a large variety of postures. Some people stand straight, while others are slumped or hunched over; some people have very asymmetric postures; heads can be held at many different angles; and arms can adopt a huge variety of postures, each with a different meaning: hands on hips or in pockets, arms crossed, scratching the head or neck, or fiddling with clothing. Computer-animated characters often lack this variety of expression and can seem stiff and robotic; however, posture has been relatively little studied in the field of expressive virtual characters. It is a useful cue, as it is clearly visible and can be displayed well even on fairly simple graphical characters. Posture is particularly associated with expressing relationships between people, or their attitudes to each other: for example, a close posture displays liking, while drawing up to full height displays a dominant attitude. Attitude is also an area of expressive behaviour that has been less studied than, say, emotion. We have therefore chosen to base our model of posture generation primarily on attitude rather than emotion or other factors.
1 Related Work
Various researchers have worked on relationships between animated characters. Prendinger and Ishizuka [7] and Rist and Schmitt [8] have studied the evolution of
This work has been supported by BTexact. We would like to thank Mel Slater and the UCL Virtual Environments and Computer Graphics group for their help and support and Amanda Oldroyd for the use of her character models.
T. Rist et al. (Eds.): IVA 2003, LNAI 2792, pp. 88–92, 2003. c Springer-Verlag Berlin Heidelberg 2003
relationships between characters, but have not studied the non-verbal expression aspects. Cassell and Bickmore [4] have investigated models of relationships between characters and users. Closer to our work, Hayes-Roth and van Gent [5] have used status, one of our dimensions of attitude, to guide improvisational scenes between characters.

Research on posture generation has been limited relative to research on generating other modalities of non-verbal communication, such as facial expression or gesture. Cassell, Nakano, Bickmore, Sidner and Rich [3] have investigated shifts of posture and their relationship to speech, but not the meaning of the postures themselves; as such, their work is complementary to ours. Bécheiraz and Thalmann [2] use a one-dimensional model of attitude, analogous to our affiliation, to animate the postures of characters. Their model differs from ours in that it involves choosing one of a set of discrete postures rather than continuously blending postures. This means that it is less able to display varying degrees of attitude or combinations of different attitudes.
2 The Psychology of Interpersonal Attitude
We have based our model of interpersonal attitude on the work of Argyle [1] and Mehrabian [6]. Though there is an enormous variety in the ways people can relate to each other, Argyle identifies two fundamental dimensions that can account for the majority of non-verbal behaviour: affiliation and status.

Affiliation can be broadly characterised as liking, or wanting a close relationship. It is associated with close postures, either physically close, such as leaning forward, or otherwise close to the interaction, such as a direct orientation. Low affiliation or dislike is shown by more distant postures, including postures that present some sort of barrier to interaction, such as crossed arms.

Status is the social superiority (dominance) or inferiority (submission) of one person relative to another. It also covers aggressive postures and postures designed to appease an aggressive individual. Status is expressed in two main ways: space and relaxation. High status can be expressed by making the body larger (rising to full height, a wide stance of the legs), while low status is expressed with postures that occupy less space (lowering the head, being hunched over). People of high status are also often more relaxed, being in control of the situation (leaning, sitting, and asymmetric postures), while lower-status people can be more nervous or alert (fidgeting, e.g. head scratching). The meanings of the two types of expression are not fully understood, but Argyle [1] suggests that space filling is more associated with establishing status, or with aggressive situations, while relaxation is more associated with an established hierarchy.

Attitude and its expression can depend both on the general disposition of a person and on their relationship to the other person: for example, status depends both on whether they are generally confident and on whether they feel superior to the person they are with. The expression of attitude can also vary between people, both in style and in degree.
The relationship between the attitude behaviour of two people can take two forms: compensation and reciprocation. Argyle presents a model in which people have a comfortable level of affiliation with another person and will attempt to maintain it by compensating for the behaviour of the other: for example, if the other person adopts a closer posture, they will adopt a more distant one. Similar behaviour can be observed with status, with people reacting to dominant postures with submission. Conversely, there are times when more affiliation generates liking and is therefore reciprocated, or when dominance is viewed as a challenge and so is met with another dominant posture. Argyle suggests that reciprocation of affiliation occurs in the early stages of a relationship, while status compensation tends to occur in an established hierarchy, and challenges occur outside of a hierarchy.
3 Implementation
This section presents a model of interpersonal behaviour that is used to generate expressive postures for pairs of interactive animated characters. The model integrates information about a character’s personality and mood, as well as information about the behaviour and posture of the other character. Firstly, a value for each of the two attitude dimensions is generated, and then this is used to generate a posture for the character. An overview of the process is shown in figure 1. As described below, this process is controlled by a number of weights that are able to vary the character’s behaviour, thus producing different behaviour for different characters. Values for these weights are saved in a character profile that is loaded to produce behaviour appropriate to a particular character.

The first stage in the process is to generate a value for each of the dimensions of attitude. As described above, these depend both on the character itself and on the behaviour of the other character. The character’s own reactions can be controlled directly by the user. A number of sliders are presented to the user, with parameters that map onto the two dimensions. They take two forms: parameters representing the personality of the character (for example, “friendliness” maps onto affiliation) and parameters representing the character’s evaluation of the other character (for example, “liking of other”). These parameters are combined with variables corresponding to the posture types of the other character (see below) to produce a final value for the attitude. For example, affiliation depends on how close or distant the other person is being, and possibly on other factors such as how relaxed the other character is. Thus the equation for affiliation is:

affiliation = Σ_i wself_i · sliderValue_i + Σ_j wother_j · postureType_j

where wself_i is a weighting over the parameters representing the character’s own reactions and wother_j is a weighting over the other character’s posture types. These weights not only control the relative importance of the various posture types, but their sign also controls whether the character displays reciprocation or compensation. There is an equivalent equation for status. The attitude values are used to generate a new posture. Firstly, they are mapped onto a posture type, which represents a description of a posture in
Fig. 1. The posture generation process.
terms of its behavioural meaning, as discussed in section 2. The posture types are: close (high affiliation), distant (low affiliation), space filling (high status), shrinking (low status), relaxation (high status), and nervousness (low status). As attitudes can be expressed in different ways, or to a greater or lesser degree, the mapping from attitude to posture type is controlled by a weighting for each posture type that is part of a character’s profile. As well as being used to generate concrete postures, the posture type values are also passed to the other character, to be used as described above. The posture type values are clamped between 0 and 1 to prevent extreme postures.

Each posture type can be represented in a number of different ways: for example, space filling can involve rising to full height or putting hands on hips, while closeness can be expressed by leaning forward or by a more direct orientation (or some combination). Actual postures are calculated as weighted sums over a set of basic postures, each of which depends on a posture type. The basic postures were designed based on the descriptions in Argyle [1] and Mehrabian [6], combined with informal observations of people in social situations. The weight of each basic posture is the product of the value of its posture type and its own weight relative to the posture type. The weights of the basic postures are varied every so often so that the character changes its posture without changing its meaning, thus producing a realistic variation of posture over time. Each basic posture is represented as an orientation for each joint of the character, and the final posture is calculated as a weighted sum of these orientations. Figure 2 shows example output postures.
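The pipeline above — attitude values from weighted sums, a clamped mapping onto posture types, and a weighted blend of basic postures into joint orientations — can be sketched as follows. The function names, the single-slider example, and the scalar joint-angle representation are illustrative assumptions, not the authors’ implementation:

```python
def attitude(slider_values, other_posture_types, w_self, w_other):
    """Weighted sum following the paper's affiliation equation; a positive
    entry in w_other yields reciprocation, a negative one compensation."""
    return (sum(w * v for w, v in zip(w_self, slider_values)) +
            sum(w * v for w, v in zip(w_other, other_posture_types)))


def posture_types(affiliation, status, profile):
    """Map the two attitude dimensions onto the six posture types,
    clamped to [0, 1]; per-type weights come from the character profile."""
    raw = {
        "close": affiliation, "distant": -affiliation,
        "space_filling": status, "shrinking": -status,
        "relaxation": status, "nervousness": -status,
    }
    return {t: min(1.0, max(0.0, profile[t] * v)) for t, v in raw.items()}


def blend_posture(types, basic_postures):
    """Final joint orientations as a weighted sum of basic postures
    (each basic posture maps a posture type to {joint: angle})."""
    pose = {}
    for ptype, weight in types.items():
        for joint, angle in basic_postures.get(ptype, {}).items():
            pose[joint] = pose.get(joint, 0.0) + weight * angle
    return pose


# A friendly character reciprocating the other's close posture:
aff = attitude([0.8], [0.5], w_self=[0.5], w_other=[0.6])
profile = {t: 1.0 for t in ("close", "distant", "space_filling",
                            "shrinking", "relaxation", "nervousness")}
types = posture_types(aff, status=0.0, profile=profile)
pose = blend_posture(types, {"close": {"spine_lean": 10.0}})
```

The sign of the `w_other` weights is where compensation versus reciprocation (section 2) enters: negating an entry makes the character respond to a close posture with a distant one.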
4 Conclusion
We have explored the use of interpersonal attitude for the generation of body language, and in particular posture. Our initial results are encouraging; in particular, attitude seems to account for a wide range of human postures. Figure 2 shows some examples of postures generated for interacting characters.
Fig. 2. Examples of postures generated displaying various attitudes. (a) Affiliation reciprocated by both parties, displaying close postures with a direct orientation and a forward lean. (b) The male character has high affiliation and the female low affiliation, turning away with a distant, crossed-arm posture. (c) Both characters are dominant: the female has a space-filling, straight posture with a raised head, while the male also has a space-filling posture, with a hand on his hip. (d) The male character responds submissively to the dominant female character: his head is lowered and his body is hunched over. (e) The female character responds with positive affiliation to the male character’s confident, relaxed, leaning posture. (f) A combined posture: the female character shows both low affiliation and high status, and the male character low affiliation and low status.
References
1. Argyle, M.: Bodily Communication. Routledge (1975)
2. Bécheiraz, P., Thalmann, D.: A Model of Nonverbal Communication and Interpersonal Relationship Between Virtual Actors. In: Proceedings of Computer Animation. IEEE Computer Society Press (1996) 58–67
3. Cassell, J., Nakano, Y., Bickmore, T., Sidner, C., Rich, C.: Non-Verbal Cues for Discourse Structure. In: Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, Toulouse, France (2001) 106–115
4. Cassell, J., Bickmore, T.: Negotiated Collusion: Modeling Social Language and its Relationship Effects in Intelligent Agents. User Modeling and User-Adapted Interaction 13(1–2) (2003) 89–132
5. Hayes-Roth, B., van Gent, R.: Story-Making with Improvisational Puppets. In: Proceedings of the 1st International Conference on Autonomous Agents (1997) 1–7
6. Mehrabian, A.: Nonverbal Communication. Aldine-Atherton (1972)
7. Prendinger, H., Ishizuka, M.: Evolving Social Relationships with Animate Characters. In: Proceedings of the AISB Symposium on Animating Expressive Characters for Social Interactions (2002) 73–79
8. Rist, T., Schmitt, M.: Applying Socio-Psychological Concepts of Cognitive Consistency to Negotiation Dialog Scenarios with Embodied Conversational Characters. In: Cañamero, L., Aylett, R. (eds.): Animating Expressive Characters for Social Interaction. John Benjamins (in press)
Author Index
Akker, Rieks op den 341
Antonio, Angélica de 325
Avradinis, Nikos 269
Aylett, Ruth 244, 249, 269
Ballin, Daniel 88
Barber, Fernando 356
Bechinie, Michael 212
Becker Villamil, Marta 164
Bente, Gary 292
Bergmark, Niklas 264
Bernsen, Niels Ole 27
Beun, Robbert-Jan 315
Bilvi, Massimo 93
Bokan, Božana 259, 354
Bordini, Rafael H. 197
Bradley, John F. 218
Breton, Gaspard 111
Bryson, Joanna 101
Bullock, Adrian 62
Casella, Pietro 207
Cavazza, Marc 231
Charles, Fred 231
Conde, Toni 175
Courty, Nicolas 111
Cowell, Andrew J. 301
Csordás, Annamária 5
Cunningham, Pádraig 159
Daneshvar, Roozbeh 361
Dautenhahn, Kerstin 182, 310
DeGroot, Doug 136
Dobbyn, Simon 159
Duffy, Brian R. 218
Eliëns, Anton 13, 80, 150
Enz, Sibylle 360
Fornes, Alicia 356
Gaspar, Graça 31
Gebhard, Patrick 48
Gillies, Marco 88
Gomes, Mário R. 57, 72
Grammer, Karl 212
Gratch, Jonathan 39
Grimaldo, Fran 356
Grünvogel, Stefan M. 170
Hall, Lynne 310
Hedlund, Erik 264
Heylen, Dirk 341
Hildebrand, Michiel 13
Ho, Wan Ching 182
Hook, Kristina 62
Hornung, Alexander 236
Hu, Weihua 355
Huang, Zhisheng 13, 80, 150
Ibanez, Jesus 249
Ishizuka, Mitsuru 226, 283
Iurgel, Ido A. 254
Jung, Bernhard 23
Kadleček, David 274
Kim, In-Cheol 192
Kipp, Michael 48
Kiss, Árpád 5
Klesen, Martin 48
Kopp, Stefan 23
Krämer, Nicole C. 292
Krenn, Brigitte 18
Krüger, Antonio 217
Kurihara, Masahito 106
Laaksolahti, Jarmo 264
Lakemeyer, Gerhard 236
Laufer, László 5
Liang, Ronghua 141
Liu, Feng 141
Liu, Zhen 359
Louchart, Sandy 244
Lozano, Miguel 356
Lucas, Antonio 356
Lucas, Caro 361
Lucia, Rafael 356
Luna de Oliveira, Luiz Paulo 164
Mac Namee, Brian 159
Machado, Isabel 333
Manabu, Okumura 127
Mao, Wenji 39
Marsella, Stacy C. 1
Marichal, Xavier 231
Martin, Alan N. 218
Martin, Olivier 231
Martinho, Carlos 57
Mayer, Sonja 283
Mead, Steven J. 231
Méndez, Gonzalo 325
Morgado, Luís 31
Mori, Junichiro 283
Mukherjee, Suman 226
Nahodil, Pavel 274
Nedel, Luciana P. 197
Nehaniv, Chrystopher L. 182
Neumayr, Barbara 18
Nijholt, Anton 341
Nomura, Tatsuya 67
Nonaka, Hidetoshi 106
O’Hare, Gregory M.P. 218
Oldroyd, Amanda 259
O’Sullivan, Carol 159
Paiva, Ana 57, 62, 72, 207, 333
Pan, Zhi Geng 355, 359
Panayiotopoulos, Themis 202
Payr, Sabine 320
Pelachaud, Catherine 93
Pelé, Danielle 111
Prada, Rui 62
Prendinger, Helmut 283
Raupp Musse, Soraia 164
Rehm, Matthias 348
Řehoř, David 274
Reinecke, Alexander 181
Rickel, Jeff 119, 325
Rist, Thomas 48, 358
Ruiz-Rodarte, Rocio 249
Ruttkay, Zsófia 80
Saeyor, Santi 226
Sampath, Dasarathi 119
Schäfer, Leonie 259, 354
Schaub, Harald 360
Schmitt, Markus 358
Schön, Bianca 218
Schwichtenberg, Stephan 170
Slavík, Pavel 274
Sobral, Daniel 310, 333
Stanney, Kay M. 301
Suguru, Saito 127
Szaló, Attila 5
Takenobu, Tokunaga 127
Tambellini, William 175
Tanguy, Emmanuel 101
Tatai, Gábor 5
Thalmann, Daniel 175
Tietz, Bernd 292
Tomofumi, Koyama 127
Torres, Jorge A. 197
Trappl, Robert 320
Trogemann, Georg 236
Uchiyama, Koki 226
Vala, Marco 62, 72
Visser, Cees 13, 150
Vissers, Maarten 341
Vos, Eveliene de 315
Vosinakis, Spyros 202
Willis, Philip 101
Witteman, Cilia 315
Wolke, Dieter 310
Woods, Sarah 310
Zhu, Jiejie 355
Zoll, Carsten 360