Catherine Pelachaud
Person information
- affiliation: Sorbonne University, Paris, France
2020 – today
- 2024
- [j61]Pierre Raimbaud, Béatrice Biancardi, Iana Podkosova, Catherine Pelachaud:
Editorial: Virtual agents in virtual reality: design and implications for VR users. Frontiers Virtual Real. 5 (2024) - [j60]Jieyeon Woo, Kazuhiro Shidara, Catherine Achard, Hiroki Tanaka, Satoshi Nakamura, Catherine Pelachaud:
Adaptive virtual agent: Design and evaluation for real-time human-agent interaction. Int. J. Hum. Comput. Stud. 190: 103321 (2024) - [j59]Lucrezia Tosato, Victor Fortier, Isabelle Bloch, Catherine Pelachaud:
Exploiting temporal information to detect conversational groups in videos and predict the next speaker. Pattern Recognit. Lett. 177: 164-168 (2024) - [j58]Paulo Ricardo Knob, Natália Dal Pizzol, Soraia Raupp Musse, Catherine Pelachaud:
Arthur and Bella: multi-purpose empathetic AI assistants for daily conversations. Vis. Comput. 40(4): 2933-2948 (2024) - [c226]Nezih Younsi, Catherine Pelachaud, Laurence Chaby:
Diffusion models for virtual agent facial expression generation in Motivational interviewing. AVI 2024: 37:1-37:5 - [c225]Nezih Younsi, Catherine Pelachaud, Laurence Chaby:
Beyond Words: Decoding Facial Expression Dynamics in Motivational Interviewing. LREC/COLING 2024: 2365-2374 - [c224]Lucie Galland, Catherine Pelachaud, Florian Pecune:
Seeing and Hearing What Has Not Been Said: A multimodal client behavior classifier in Motivational Interviewing with interpretable fusion. FG 2024: 1-9 - [c223]Yann Munro, Isabelle Bloch, Mohamed Chetouani, Catherine Pelachaud, Marie-Jeanne Lesot:
Sémantique agrégative graduelle pour les systèmes d'argumentation bipolaires pondérés. JIAF-JFPDA 2024: 104-112 - [c222]Lucie Galland, Catherine Pelachaud, Florian Pecune:
Generating Unexpected yet Relevant User Dialog Acts. SIGDIAL 2024: 192-203 - [c221]Anh Ngo, Dirk Heylen, Nicolas Rollet, Catherine Pelachaud, Chloé Clavel:
Exploration of Human Repair Initiation in Task-oriented Dialogue: A Linguistic Feature-based Approach. SIGDIAL 2024: 603-609 - [i18]Teo Guichoux, Laure Soulier, Nicolas Obin, Catherine Pelachaud:
Investigating the impact of 2D gesture representation on co-speech gesture generation. CoRR abs/2406.15111 (2024) - [i17]Lucie Galland, Catherine Pelachaud, Florian Pecune:
EMMI - Empathic Multimodal Motivational Interviews Dataset: Analyses and Annotations. CoRR abs/2406.16478 (2024) - [i16]Lucrezia Tosato, Victor Fortier, Isabelle Bloch, Catherine Pelachaud:
Exploiting temporal information to detect conversational groups in videos and predict the next speaker. CoRR abs/2408.16380 (2024) - [i15]Teo Guichoux, Laure Soulier, Nicolas Obin, Catherine Pelachaud:
2D or not 2D: How Does the Dimensionality of Gesture Representation Affect 3D Co-Speech Gesture Generation? CoRR abs/2409.10357 (2024) - [i14]Yann Munro, Camilo Sarmiento, Isabelle Bloch, Gauvain Bourgne, Catherine Pelachaud, Marie-Jeanne Lesot:
An action language-based formalisation of an abstract argumentation framework. CoRR abs/2409.19625 (2024)
- 2023
- [j57]Mireille Fares, Catherine Pelachaud, Nicolas Obin:
Zero-shot style transfer for gesture animation driven by text and speech using adversarial disentanglement of multimodal style encoding. Frontiers Artif. Intell. 6 (2023) - [j56]Celso M. de Melo, Jonathan Gratch, Stacy Marsella, Catherine Pelachaud:
Social Functions of Machine Emotional Expressions. Proc. IEEE 111(10): 1382-1397 (2023) - [c220]Mireille Fares, Catherine Pelachaud, Nicolas Obin:
Zero-Shot Style Transfer for Multimodal Data-Driven Gesture Synthesis. FG 2023: 1-4 - [c219]Jieyeon Woo, Liu Yang, Catherine Achard, Catherine Pelachaud:
Are we in sync during turn switch? FG 2023: 1-4 - [c218]Fabien Boucaud, Catherine Pelachaud, Indira Thouvenin:
"It patted my arm": Investigating Social Touch from a Virtual Agent. HAI 2023: 72-80 - [c217]Vladislav Maraev, Chiara Mazzocconi, Christine Howes, Catherine Pelachaud:
Towards investigating gaze and laughter coordination in socially interactive agents. HAI 2023: 473-475 - [c216]Jieyeon Woo, Liu Yang, Catherine Pelachaud, Catherine Achard:
Is Turn-Shift Distinguishable with Synchrony? HCI (41) 2023: 419-432 - [c215]Mireille Fares, Catherine Pelachaud, Nicolas Obin:
I-Brow: Hierarchical and Multimodal Transformer Model for Eyebrows Animation Synthesis. HCI (41) 2023: 435-452 - [c214]Jieyeon Woo, Catherine Pelachaud, Catherine Achard:
Reciprocal Adaptation Measures for Human-Agent Interaction Evaluation. ICAART (1) 2023: 114-125 - [c213]Takeshi Saga, Jieyeon Woo, Alexis Gerard, Hiroki Tanaka, Catherine Achard, Satoshi Nakamura, Catherine Pelachaud:
An Adaptive Virtual Agent Platform for Automated Social Skills Training. ICMI Companion 2023: 109-111 - [c212]Hiroki Tanaka, Satoshi Nakamura, Jean-Claude Martin, Catherine Pelachaud:
4th Workshop on Social Affective Multimodal Interaction for Health (SAMIH). ICMI 2023: 816-817 - [c211]Jieyeon Woo, Catherine Pelachaud, Catherine Achard:
ASAP: Endowing Adaptation Capability to Agent in Human-Agent Interaction. IUI 2023: 464-475 - [c210]Jieyeon Woo, Michele Grimaldi, Catherine Pelachaud, Catherine Achard:
IAVA: Interactive and Adaptive Virtual Agent. IVA 2023: 17:1-17:8 - [c209]Remi Poivet, Catherine Pelachaud, Malika Auvray:
The influence of conversational agents' role and behaviors on narrative experiences. IVA 2023: 40:1-40:4 - [c208]Liu Yang, Catherine Achard, Catherine Pelachaud:
Now or When?: Interruption timing prediction in dyadic interaction. IVA 2023: 44:1-44:4 - [c207]Jieyeon Woo, Michele Grimaldi, Catherine Pelachaud, Catherine Achard:
Conducting Cognitive Behavioral Therapy with an Adaptive Virtual Agent. IVA 2023: 62:1-62:3 - [e17]Birgit Lugrin, Marc Erich Latoschik, Sebastian von Mammen, Stefan Kopp, Florian Pécune, Catherine Pelachaud:
Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents, IVA 2023, Würzburg, Germany, September 19-22, 2023. ACM 2023, ISBN 978-1-4503-9994-4 [contents] - [i13]Jieyeon Woo, Mireille Fares, Catherine Pelachaud, Catherine Achard:
AMII: Adaptive Multimodal Inter-personal and Intra-personal Model for Adapted Behavior Synthesis. CoRR abs/2305.11310 (2023) - [i12]Mireille Fares, Catherine Pelachaud, Nicolas Obin:
ZS-MSTM: Zero-Shot Style Transfer for Gesture Animation driven by Text and Speech using Adversarial Disentanglement of Multimodal Style Encoding. CoRR abs/2305.12887 (2023) - [i11]Mireille Fares, Catherine Pelachaud, Nicolas Obin:
TranSTYLer: Multimodal Behavioral Style Transfer for Facial and Body Gestures Generation. CoRR abs/2308.10843 (2023) - [i10]Lucie Galland, Catherine Pelachaud, Florian Pecune:
Seeing and hearing what has not been said; A multimodal client behavior classifier in Motivational Interviewing with interpretable fusion. CoRR abs/2309.14398 (2023) - [i9]Liu Yang, Jieyeon Woo, Catherine Achard, Catherine Pelachaud:
Exchanging... Watch out! CoRR abs/2311.04747 (2023) - [i8]Mireille Fares, Catherine Pelachaud, Nicolas Obin:
META4: Semantically-Aligned Generation of Metaphoric Gestures Using Self-Supervised Text and Speech Representation. CoRR abs/2311.05481 (2023)
- 2022
- [j55]Lucie Galland, Catherine Pelachaud, Florian Pecune:
Adapting conversational strategies in information-giving human-agent interaction. Frontiers Artif. Intell. 5 (2022) - [j54]Angelo Cafaro, Brian Ravenet, Catherine Pelachaud:
Exploiting Evolutionary Algorithms to Model Nonverbal Reactions to Conversational Interruptions in User-Agent Interactions. IEEE Trans. Affect. Comput. 13(1): 485-495 (2022) - [j53]Soumia Dermouche, Catherine Pelachaud:
Leveraging the Dynamics of Non-Verbal Behaviors For Social Attitude Modeling. IEEE Trans. Affect. Comput. 13(2): 1072-1085 (2022) - [j52]Marc Teyssier, Gilles Bailly, Catherine Pelachaud, Eric Lecolinet:
Conveying Emotions Through Device-Initiated Touch. IEEE Trans. Affect. Comput. 13(3): 1477-1488 (2022) - [c206]Yann Munro, Isabelle Bloch, Mohamed Chetouani, Marie-Jeanne Lesot, Catherine Pelachaud:
Argumentation and Causal Models in Human-Machine Interaction: A Round Trip. AIC 2022: 93-106 - [c205]Mireille Fares, Catherine Pelachaud, Nicolas Obin:
Transformer Network for Semantically-Aware and Speech-Driven Upper-Face Generation. EUSIPCO 2022: 593-597 - [c204]Liu Yang, Catherine Achard, Catherine Pelachaud:
Multimodal Analysis of Interruptions. HCI (18) 2022: 306-325 - [c203]Catherine Pelachaud:
Interacting with Socially Interactive Agents. ICAART (1) 2022: 9 - [c202]Sooraj Krishna, Catherine Pelachaud:
Impact of Error-making Peer Agent Behaviours in a Multi-agent Shared Learning Interaction for Self-Regulated Learning. ICAART (1) 2022: 337-344 - [c201]Liu Yang, Catherine Achard, Catherine Pelachaud:
Multimodal classification of interruptions in humans' interaction. ICMI 2022: 597-604 - [c200]Hiroki Tanaka, Satoshi Nakamura, Kazuhiro Shidara, Jean-Claude Martin, Catherine Pelachaud:
3rd Workshop on Social Affective Multimodal Interaction for Health (SAMIH). ICMI 2022: 805-806 - [c199]Victor Fortier, Isabelle Bloch, Catherine Pelachaud:
Robust Detection of Conversational Groups Using a Voting Scheme and a Memory Process. ICPRAI (2) 2022: 162-173 - [c198]Lucie Galland, Catherine Pelachaud, Florian Pecune:
Adapting conversational strategies to co-optimize agent's task performance and user's engagement. IVA 2022: 23:1-23:3 - [c197]Liu Yang, Catherine Achard, Catherine Pelachaud:
Annotating Interruption in Dyadic Human Interaction. LREC 2022: 2292-2297 - [p7]Birgit Lugrin, Catherine Pelachaud, Elisabeth André, Ruth Aylett, Timothy W. Bickmore, Cynthia Breazeal, Joost Broekens, Kerstin Dautenhahn, Jonathan Gratch, Stefan Kopp, Jacqueline Nadel, Ana Paiva, Agnieszka Wykowska:
Challenge Discussion on Socially Interactive Agents: Considerations on Social Interaction, Computational Architectures, Evaluation, and Ethics. The Handbook on Socially Interactive Agents (2) 2022: 561-626 - [e16]Birgit Lugrin, Catherine Pelachaud, David Traum:
The Handbook on Socially Interactive Agents: 20 years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics Volume 2: Interactivity, Platforms, Application. ACM Books 48, ACM / Morgan & Claypool 2022, ISBN 978-1-4503-9896-1 [contents] - [e15]Piotr Faliszewski, Viviana Mascardi, Catherine Pelachaud, Matthew E. Taylor:
21st International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2022, Auckland, New Zealand, May 9-13, 2022. International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS) 2022, ISBN 978-1-4503-9213-6 [contents] - [i7]Fajrian Yunus, Chloé Clavel, Catherine Pelachaud:
Representation Learning of Image Schema. CoRR abs/2207.08256 (2022) - [i6]Mireille Fares, Michele Grimaldi, Catherine Pelachaud, Nicolas Obin:
Zero-Shot Style Transfer for Gesture Animation driven by Text and Speech using Adversarial Disentanglement of Multimodal Style Encoding. CoRR abs/2208.01917 (2022)
- 2021
- [j51]Béatrice Biancardi, Soumia Dermouche, Catherine Pelachaud:
Adaptation Mechanisms in Human-Agent Interaction: Effects on User's Impressions and Engagement. Frontiers Comput. Sci. 3: 696682 (2021) - [j50]Mark Snaith, Nicholas Conway, Tessa Beinema, Dominic De Franco, Alison Pease, Reshmashree B. Kantharaju, Mathilde Janier, Gerwin Huizing, Catherine Pelachaud, Harm op den Akker:
A multimodal corpus of simulated consultations between a patient and multiple healthcare professionals. Lang. Resour. Evaluation 55(4): 1077-1092 (2021) - [c196]Fabien Boucaud, Catherine Pelachaud, Indira Thouvenin:
Decision Model for a Virtual Agent that can Touch and be Touched. AAMAS 2021: 232-241 - [c195]Reshmashree B. Kantharaju, Catherine Pelachaud:
Evaluation of Multi-party Virtual Agents. CHItaly Workshops 2021 - [c194]Fajrian Yunus, Chloé Clavel, Catherine Pelachaud:
Sequence-to-Sequence Predictive Model: From Prosody to Communicative Gestures. HCI (16) 2021: 355-374 - [c193]Jennifer Hamet Bagnou, Elise Prigent, Jean-Claude Martin, Jieyeon Woo, Liu Yang, Catherine Achard, Catherine Pelachaud, Céline Clavel:
A Framework for the Assessment and Training of Collaborative Problem-Solving Social Skills. ICMI Companion 2021: 381-384 - [c192]Hiroki Tanaka, Satoshi Nakamura, Jean-Claude Martin, Catherine Pelachaud:
2nd Workshop on Social Affective Multimodal Interaction for Health (SAMIH). ICMI 2021: 853-854 - [c191]Reshmashree B. Kantharaju, Catherine Pelachaud:
Social Signals of Cohesion in Multi-party Interactions. IVA 2021: 9-16 - [c190]Tessa Beinema, Daniel P. Davison, Dennis Reidsma, Oresti Banos, Merijn Bruijnes, Brice Donval, Álvaro Fides-Valero, Dirk Heylen, Dennis Hofs, Gerwin Huizing, Reshmashree B. Kantharaju, Randy Klaassen, Jan Kolkmeier, Kostas Konsolakis, Alison Pease, Catherine Pelachaud, Donatella Simonetti, Mark Snaith, Vicente Traver, Jorien van Loon, Jacky Visser, Marcel Weusthof, Fajrian Yunus, Hermie Hermens, Harm op den Akker:
Agents United: An Open Platform for Multi-Agent Conversational Systems. IVA 2021: 17-24 - [c189]Michele Grimaldi, Catherine Pelachaud:
Generation of Multimodal Behaviors in the Greta platform. IVA 2021: 98-100 - [c188]Liu Yang, Catherine Achard, Catherine Pelachaud:
Interruptions in Human-Agent Interaction. IVA 2021: 206-208 - [c187]Maxime Grandidier, Fabien Boucaud, Indira Thouvenin, Catherine Pelachaud:
Softly: Simulated Empathic Touch between an Agent and a Human. ACM Multimedia 2021: 2795-2797 - [p6]Catherine Pelachaud, Carlos Busso, Dirk Heylen:
Multimodal Behavior Modeling for Socially Interactive Agents. The Handbook on Socially Interactive Agents (1) 2021: 259-310 - [e14]Birgit Lugrin, Catherine Pelachaud, David R. Traum:
The Handbook on Socially Interactive Agents: 20 Years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics Volume 1: Methods, Behavior, Cognition. ACM Books 37, ACM / Morgan & Claypool 2021, ISBN 978-1-4503-8720-0 [contents] - [e13]Zakia Hammal, Carlos Busso, Catherine Pelachaud, Sharon L. Oviatt, Albert Ali Salah, Guoying Zhao:
ICMI '21: International Conference on Multimodal Interaction, Montréal, QC, Canada, October 18-22, 2021. ACM 2021, ISBN 978-1-4503-8481-0 [contents] - [e12]Zakia Hammal, Carlos Busso, Catherine Pelachaud, Sharon L. Oviatt, Albert Ali Salah, Guoying Zhao:
ICMI '21 Companion: Companion Publication of the 2021 International Conference on Multimodal Interaction, Montreal, QC, Canada, October 18-22, 2021. ACM 2021, ISBN 978-1-4503-8471-1 [contents]
- 2020
- [j49]Catharine Oertel, Ginevra Castellano, Mohamed Chetouani, Jauwairia Nasir, Mohammad Obaid, Catherine Pelachaud, Christopher E. Peters:
Engagement in Human-Agent Interaction: An Overview. Frontiers Robotics AI 7: 92 (2020) - [j48]J. Ross Beveridge, Mohamed Daoudi, Catherine Pelachaud, Richa Singh:
Selected Best Works From Automated Face and Gesture Recognition 2019. IEEE Trans. Biom. Behav. Identity Sci. 2(2): 83-84 (2020) - [c186]Chen Wang, Béatrice Biancardi, Maurizio Mancini, Angelo Cafaro, Catherine Pelachaud, Thierry Pun, Guillaume Chanel:
Impression Detection and Management Using an Embodied Conversational Agent. HCI (2) 2020: 260-278 - [c185]Sooraj Krishna, Catherine Pelachaud:
CardBot: Towards an affordable humanoid robot platform for Wizard of Oz Studies in HRI. HRI (Companion) 2020: 73 - [c184]Sooraj Krishna, Catherine Pelachaud, Arvid Kappas:
FRACTOS: Learning to be a Better Learner by Building Fractions. HRI (Companion) 2020: 314-316 - [c183]Tanvi Dinkar, Ioana Vasilescu, Catherine Pelachaud, Chloé Clavel:
How confident are you? Exploring the role of fillers in the automatic prediction of a speaker's confidence. ICASSP 2020: 8104-8108 - [c182]Hiroki Tanaka, Satoshi Nakamura, Jean-Claude Martin, Catherine Pelachaud:
Social Affective Multimodal Interaction for Health. ICMI 2020: 893-894 - [c181]Sahba Zojaji, Christopher E. Peters, Catherine Pelachaud:
Influence of virtual agent politeness behaviors on how users join small conversational groups. IVA 2020: 59:1-59:8 - [c180]Reshmashree Bangalore Kantharaju, Caroline Langlet, Mukesh Barange, Chloé Clavel, Catherine Pelachaud:
Multimodal Analysis of Cohesion in Multi-party Interactions. LREC 2020: 498-507 - [c179]Harry Bunt, Volha Petukhova, Emer Gilmartin, Catherine Pelachaud, Alex Chengyu Fang, Simon Keizer, Laurent Prévot:
The ISO Standard for Dialogue Act Annotation, Second Edition. LREC 2020: 549-558 - [i5]Fajrian Yunus, Chloé Clavel, Catherine Pelachaud:
Sequence-to-Sequence Predictive Model: From Prosody To Communicative Gestures. CoRR abs/2008.07643 (2020)
2010 – 2019
- 2019
- [j47]Béatrice Biancardi, Maurizio Mancini, Paul Lerner, Catherine Pelachaud:
Managing an Agent's Self-Presentational Strategies During an Interaction. Frontiers Robotics AI 6: 93 (2019) - [j46]Mathieu Chollet, Magalie Ochs, Catherine Pelachaud:
A Methodology for the Automatic Extraction and Generation of Non-Verbal Signals Sequences Conveying Interpersonal Attitudes. IEEE Trans. Affect. Comput. 10(4): 585-598 (2019) - [c178]Béatrice Biancardi, Chen Wang, Maurizio Mancini, Angelo Cafaro, Guillaume Chanel, Catherine Pelachaud:
A Computational Model for Managing Impressions of an Embodied Conversational Agent in Real-Time. ACII 2019: 1-7 - [c177]Nesrine Fourati, Catherine Pelachaud, Patrice Darmon:
Contribution of temporal and multi-level body cues to emotion classification. ACII 2019: 116-122 - [c176]Fabien Boucaud, Quentin Tafiani, Catherine Pelachaud, Indira Thouvenin:
Social Touch in Human-agent Interactions in an Immersive Virtual Environment. VISIGRAPP (2: HUCAPP) 2019: 129-136 - [c175]Soumia Dermouche, Catherine Pelachaud:
Generative Model of Agent's Behaviors in Human-Agent Interaction. ICMI 2019: 375-384 - [c174]Soumia Dermouche, Catherine Pelachaud:
Engagement Modeling in Dyadic Interaction. ICMI 2019: 440-445 - [c173]Reshmashree B. Kantharaju, Alison Pease, Dennis Reidsma, Catherine Pelachaud, Mark Snaith, Merijn Bruijnes, Randy Klaassen, Tessa Beinema, Gerwin Huizing, Donatella Simonetti, Dirk Heylen, Harm op den Akker:
Integrating Argumentation with Social Conversation between Multiple Virtual Coaches. IVA 2019: 203-205 - [c172]Maurizio Mancini, Béatrice Biancardi, Soumia Dermouche, Paul Lerner, Catherine Pelachaud:
Managing Agent's Impression Based on User's Engagement Detection. IVA 2019: 209-211 - [c171]Fajrian Yunus, Chloé Clavel, Catherine Pelachaud:
Gesture Class Prediction by Recurrent Neural Network and Attention Mechanism. IVA 2019: 233-235 - [c170]Sooraj Krishna, Catherine Pelachaud, Arvid Kappas:
Towards an Adaptive Regulation Scaffolding through Role-based Strategies. IVA 2019: 264-267 - [c169]Marc Teyssier, Gilles Bailly, Catherine Pelachaud, Eric Lecolinet, Andrew Conn, Anne Roudaut:
Skin-On Interfaces: A Bio-Driven Approach for Artificial Skin Design to Cover Interactive Devices. UIST 2019: 307-322 - [p5]Angelo Cafaro, Catherine Pelachaud, Stacy C. Marsella:
Nonverbal behavior in multimodal performances. The Handbook of Multimodal-Multisensor Interfaces, Volume 3 (3) 2019 - [e11]Catherine Pelachaud, Jean-Claude Martin, Hendrik Buschmeier, Gale M. Lucas, Stefan Kopp:
Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, IVA 2019, Paris, France, July 2-5, 2019. ACM 2019, ISBN 978-1-4503-6672-4 [contents] - [i4]Reshmashree B. Kantharaju, Dominic De Franco, Alison Pease, Catherine Pelachaud:
Is Two Better than One? Effects of Multiple Agents on User Persuasion. CoRR abs/1904.05248 (2019)
- 2018
- [j45]Nadine Glas, Catherine Pelachaud:
Topic management for an engaging conversational agent. Int. J. Hum. Comput. Stud. 120: 107-124 (2018) - [j44]Nesrine Fourati, Catherine Pelachaud:
Perception of Emotions and Body Movement in the Emilya Database. IEEE Trans. Affect. Comput. 9(1): 90-101 (2018) - [c168]Reshmashree Bangalore Kantharaju, Catherine Pelachaud:
Towards Developing a Model to Handle Multiparty Conversations for Healthcare Agents. ICAHGCA@AAMAS 2018: 30-34 - [c167]Brian Ravenet, Chloé Clavel, Catherine Pelachaud:
Automatic Nonverbal Behavior Generation from Image Schemas. AAMAS 2018: 1667-1674 - [c166]Catherine Pelachaud:
Modeling Human-agent Interaction. VISIGRAPP 2018: 11 - [c165]Harm op den Akker, Rieks op den Akker, Tessa Beinema, Oresti Baños, Dirk Heylen, Björn Bedsted, Alison Pease, Catherine Pelachaud, Vicente Traver Salcedo, Sofoklis A. Kyriazakos, Hermie Hermens:
Council of Coaches - A Novel Holistic Behavior Change Coaching Approach. ICT4AWE 2018: 219-226 - [c164]Reshmashree B. Kantharaju, Dominic De Franco, Alison Pease, Catherine Pelachaud:
Is Two Better than One?: Effects of Multiple Agents on User Persuasion. IVA 2018: 255-262 - [c163]Soumia Dermouche, Catherine Pelachaud:
From analysis to modeling of engagement as sequences of multimodal behaviors. LREC 2018 - [c162]Soumia Dermouche, Catherine Pelachaud:
Attitude Modeling for Virtual Character Based on Temporal Sequence Mining: Extraction and Evaluation. MOCO 2018: 23:1-23:8 - [c161]Marc Teyssier, Gilles Bailly, Catherine Pelachaud, Eric Lecolinet:
MobiLimb: Augmenting Mobile Devices with a Robotic Limb. UIST 2018: 53-63 - [e10]Anton Bogdanovych, Deborah Richards, Simeon Simoff, Catherine Pelachaud, Dirk Heylen, Tomas Trescak:
Proceedings of the 18th International Conference on Intelligent Virtual Agents, IVA 2018, Sydney, NSW, Australia, November 05-08, 2018. ACM 2018, ISBN 978-1-4503-6013-5 [contents]
- 2017
- [j43]Michael Neff, Catherine Pelachaud:
Animation of Natural Virtual Characters. IEEE Computer Graphics and Applications 37(4): 14-16 (2017) - [j42]Jing Huang, Qi Wang, Marco Fratarcangeli, Ke Yan, Catherine Pelachaud:
Multi-Variate Gaussian-Based Inverse Kinematics. Comput. Graph. Forum 36(8): 418-428 (2017) - [j41]Yu Ding, Jing Huang, Catherine Pelachaud:
Audio-Driven Laughter Behavior Controller. IEEE Trans. Affect. Comput. 8(4): 546-558 (2017) - [j40]Magalie Ochs, Catherine Pelachaud, Gary McKeown:
A User Perception-Based Approach to Create Smiling Embodied Conversational Agents. ACM Trans. Interact. Intell. Syst. 7(1): 4:1-4:33 (2017) - [j39]Maurizio Mancini, Béatrice Biancardi, Florian Pecune, Giovanna Varni, Yu Ding, Catherine Pelachaud, Gualtiero Volpe, Antonio Camurri:
Implementing and Evaluating a Laughing Virtual Character. ACM Trans. Internet Techn. 17(1): 3:1-3:22 (2017) - [j38]Jing Huang, Marco Fratarcangeli, Yu Ding, Catherine Pelachaud:
Inverse kinematics using dynamic joint parameters: inverse kinematics animation synthesis learnt from sub-divided motion micro-segments. Vis. Comput. 33(12): 1541-1553 (2017) - [c160]Catherine Pelachaud:
Greta: a conversing socio-emotional agent. ISIAA@ICMI 2017: 9-10 - [c159]Béatrice Biancardi, Angelo Cafaro, Catherine Pelachaud:
Could a virtual agent be warm and competent? investigating user's impressions of agent's non-verbal behaviours. ISIAA@ICMI 2017: 22-24 - [c158]Béatrice Biancardi, Angelo Cafaro, Catherine Pelachaud:
Analyzing first impressions of warmth and competence from observable nonverbal cues in expert-novice interactions. ICMI 2017: 341-349 - [c157]Angelo Cafaro, Johannes Wagner, Tobias Baur, Soumia Dermouche, Mercedes Torres, Catherine Pelachaud, Elisabeth André, Michel F. Valstar:
The NoXi database: multimodal recordings of mediated novice-expert interactions. ICMI 2017: 350-359 - [c156]Marc Teyssier, Gilles Bailly, Éric Lecolinet, Catherine Pelachaud:
Survey and perspectives of social touch in HCI. IHM 2017: 93-104 - [c155]Catherine Pelachaud:
Conversing with Social Agents That Smile and Laugh. INTERSPEECH 2017: 2052 - [c154]Amyr B. Fortes Neto, Catherine Pelachaud, Soraia Raupp Musse:
Giving Emotional Contagion Ability to Virtual Agents in Crowds. IVA 2017: 63-72 - [c153]Angelo Cafaro, Merijn Bruijnes, Jelte van Waterschoot, Catherine Pelachaud, Mariët Theune, Dirk Heylen:
Selecting and Expressing Communicative Functions in a SAIBA-Compliant Agent Framework. IVA 2017: 73-82 - [c152]Béatrice Biancardi, Angelo Cafaro, Catherine Pelachaud:
Gérer les premières impressions de compétence et de chaleur à travers des indices non verbaux. RJCIA 2017
- 2016
- [j37]Gabor Aranyi, Florian Pecune, Fred Charles, Catherine Pelachaud, Marc Cavazza:
Affective Interaction with a Virtual Character Through an fNIRS Brain-Computer Interface. Frontiers Comput. Neurosci. 10: 70 (2016) - [j36]Angelo Cafaro, Brian Ravenet, Magalie Ochs, Hannes Högni Vilhjálmsson, Catherine Pelachaud:
The Effects of Interpersonal Attitude of a Group of Agents on User's Presence and Proxemics Behavior. ACM Trans. Interact. Intell. Syst. 6(2): 12:1-12:33 (2016) - [c151]Angelo Cafaro, Nadine Glas, Catherine Pelachaud:
The Effects of Interrupting Behavior on Interpersonal Attitude and Engagement in Dyadic Interactions. AAMAS 2016: 911-920 - [c150]Florian Pecune, Magalie Ochs, Stacy Marsella, Catherine Pelachaud:
SOCRATES: from SOCial Relation to ATtitude ExpressionS. AAMAS 2016: 921-930 - [c149]David Jan Mercado, Gilles Bailly, Catherine Pelachaud:
"Hold My Hand, Baby": Understanding Engagement through the Illusion of Touch between Human and Agent. CHI Extended Abstracts 2016: 1438-1444 - [c148]Soumia Dermouche, Catherine Pelachaud:
Sequence-based multimodal behavior modeling for social agents. ICMI 2016: 29-36 - [c147]Michel F. Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew P. Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn W. Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, Jelte van Waterschoot:
Ask Alice: an artificial retrieval of information agent. ICMI 2016: 419-420 - [c146]Florian Pecune, Angelo Cafaro, Magalie Ochs, Catherine Pelachaud:
Evaluating Social Attitudes of a Virtual Tutor. IVA 2016: 245-255 - [c145]Brian Ravenet, Elisabetta Bevacqua, Angelo Cafaro, Magalie Ochs, Catherine Pelachaud:
Perceiving attitudes expressed through nonverbal behaviors in immersive virtual environments. MIG 2016: 175-180 - [p4]Chloé Clavel, Angelo Cafaro, Sabrina Campano, Catherine Pelachaud:
Fostering User Engagement in Face-to-Face Human-Agent Interactions: A Survey. Toward Robotic Socially Believable Behaving Systems (II) 2016: 93-120 - [e9]Yukiko I. Nakano, Elisabeth André, Toyoaki Nishida, Louis-Philippe Morency, Carlos Busso, Catherine Pelachaud:
Proceedings of the 18th ACM International Conference on Multimodal Interaction, ICMI 2016, Tokyo, Japan, November 12-16, 2016. ACM 2016, ISBN 978-1-4503-4556-9 [contents]
- 2015
- [j35]Radoslaw Niewiadomski, Catherine Pelachaud:
The Effect of Wrinkles, Presentation Mode, and Intensity on the Perception of Facial Actions and Full-Face Expressions of Laughter. ACM Trans. Appl. Percept. 12(1): 2:1-2:21 (2015) - [c144]Florian Pecune, Béatrice Biancardi, Yu Ding, Catherine Pelachaud, Maurizio Mancini, Giovanna Varni, Antonio Camurri, Gualtiero Volpe:
LOL - Laugh Out Loud. AAAI 2015: 4309-4310 - [c143]Nesrine Fourati, Catherine Pelachaud:
Relevant body cues for the classification of emotional body expression in daily actions. ACII 2015: 267-273 - [c142]Marc Schröder, Elisabetta Bevacqua, Roddy Cowie, Florian Eyben, Hatice Gunes, Dirk Heylen, Mark ter Maat, Gary McKeown, Sathish Pammi, Maja Pantic, Catherine Pelachaud, Björn W. Schuller, Etienne de Sevin, Michel F. Valstar, Martin Wöllmer:
Building autonomous sensitive artificial listeners (Extended abstract). ACII 2015: 456-462 - [c141]Radoslaw Niewiadomski, Yu Ding, Maurizio Mancini, Catherine Pelachaud, Gualtiero Volpe, Antonio Camurri:
Perception of intensity incongruence in synthesized multimodal expressions of laughter. ACII 2015: 684-690 - [c140]Nadine Glas, Catherine Pelachaud:
Definitions of engagement in human-agent interaction. ACII 2015: 944-949 - [c139]Sabrina Campano, Caroline Langlet, Nadine Glas, Chloé Clavel, Catherine Pelachaud:
An ECA expressing appreciations. ACII 2015: 962-967 - [c138]Catherine Pelachaud:
Greta: an Interactive Expressive Embodied Conversational Agent. AAMAS 2015: 5 - [c137]Sabrina Campano, Chloé Clavel, Catherine Pelachaud:
"I like this painting too": When an ECA Shares Appreciations to Engage Users. AAMAS 2015: 1649-1650 - [c136]Florian Pecune, Maurizio Mancini, Béatrice Biancardi, Giovanna Varni, Yu Ding, Catherine Pelachaud, Gualtiero Volpe, Antonio Camurri:
Laughing with a Virtual Agent. AAMAS 2015: 1817-1818 - [c135]Yu Ding, Catherine Pelachaud:
Lip animation synthesis: a unified framework for speaking and laughing virtual agent. AVSP 2015: 78-83 - [c134]Nadine Glas, Catherine Pelachaud:
Topic Transition Strategies for an Information-Giving Agent. ENLG 2015: 146-155 - [c133]Nesrine Fourati, Catherine Pelachaud:
Multi-level classification of emotional body expression. FG 2015: 1-8 - [c132]Fred Charles, Florian Pecune, Gabor Aranyi, Catherine Pelachaud, Marc Cavazza:
ECA Control using a Single Affective User Dimension. ICMI 2015: 183-190 - [c131]Atef Ben Youssef, Mathieu Chollet, Hazaël Jones, Nicolas Sabouret, Catherine Pelachaud, Magalie Ochs:
Towards a Socially Adaptive Virtual Agent. IVA 2015: 3-16 - [c130]Herwin van Welbergen, Yu Ding, Kai Sattler, Catherine Pelachaud, Stefan Kopp:
Real-Time Visual Prosody for Interactive Virtual Agents. IVA 2015: 139-151 - [c129]Brian Ravenet, Angelo Cafaro, Béatrice Biancardi, Magalie Ochs, Catherine Pelachaud:
Conversational Behavior Reflecting Interpersonal Attitudes in Small Group Interactions. IVA 2015: 375-388 - [c128]Catherine Pelachaud:
Invited Talk: Modeling Socio-Emotional Humanoid Agent. NODALIDA 2015 - [i3]Kevin Sanlaville, Gérard Assayag, Frédéric Bevilacqua, Catherine Pelachaud:
Emergence of synchrony in an Adaptive Interaction Model. CoRR abs/1506.05573 (2015)
- 2014
- [c127]Zoraida Callejas, Brian Ravenet, Magalie Ochs, Catherine Pelachaud:
A computational model of social attitudes for a virtual recruiter. AAMAS 2014: 93-100 - [c126]Yu Ding, Ken Prepin, Jing Huang, Catherine Pelachaud, Thierry Artières:
Laughter animation synthesis. AAMAS 2014: 773-780 - [c125]Hazaël Jones, Mathieu Chollet, Magalie Ochs, Nicolas Sabouret, Catherine Pelachaud:
Expressing social attitudes in virtual agents for social coaching. AAMAS 2014: 1409-1410 - [c124]Brian Ravenet, Magalie Ochs, Catherine Pelachaud:
Architecture of a socio-conversational agent in virtual worlds. ICIP 2014: 3983-3987 - [c123]Radoslaw Niewiadomski, Maurizio Mancini, Yu Ding, Catherine Pelachaud, Gualtiero Volpe:
Rhythmic Body Movements of Laughter. ICMI 2014: 299-306 - [c122]Catherine Pelachaud:
Interacting with Socio-emotional Agents. IHCI 2014: 4-7 - [c121]Angelo Cafaro, Hannes Högni Vilhjálmsson, Timothy W. Bickmore, Dirk Heylen, Catherine Pelachaud:
Representing Communicative Functions in SAIBA with a Unified Function Markup Language. IVA 2014: 81-94 - [c120]Mathieu Chollet, Magalie Ochs, Catherine Pelachaud:
From Non-verbal Signals Sequence Mining to Bayesian Networks for Interpersonal Attitudes Expression. IVA 2014: 120-133 - [c119]Yu Ding, Jing Huang, Nesrine Fourati, Thierry Artières, Catherine Pelachaud:
Upper Body Animation Synthesis for a Laughing Character. IVA 2014: 164-173 - [c118]Florian Pecune, Magalie Ochs, Catherine Pelachaud:
A Cognitive Model of Social Relations for Artificial Companions. IVA 2014: 325-328 - [c117]Brian Ravenet, Angelo Cafaro, Magalie Ochs, Catherine Pelachaud:
Interpersonal Attitude of a Speaking Agent in Simulated Group Conversations. IVA 2014: 345-349 - [c116]Yuyu Xu, Catherine Pelachaud, Stacy Marsella:
Compound Gesture Generation: A Model Based on Ideational Units. IVA 2014: 477-491 - [c115]Mathieu Chollet, Magalie Ochs, Catherine Pelachaud:
Mining a multimodal corpus for non-verbal behavior sequences conveying attitudes. LREC 2014: 3417-3424 - [c114]Nesrine Fourati, Catherine Pelachaud:
Emilya: Emotional body expression in daily actions database. LREC 2014: 3486-3493 - [c113]Zoraida Callejas, Brian Ravenet, Magalie Ochs, Catherine Pelachaud:
A model to generate adaptive multimodal job interviews with a virtual recruiter. LREC 2014: 3615-3619 - [c112]Nesrine Fourati, Catherine Pelachaud:
Collection and characterization of emotional body behaviors. MOCO 2014: 49 - [e8]Fillia Makedon, Mark Clements, Catherine Pelachaud, Vana Kalogeraki, Ilias Maglogiannis:
Proceedings of the 7th International Conference on PErvasive Technologies Related to Assistive Environments, PETRA 2014, Island of Rhodes, Greece, May 27 - 30, 2014. ACM 2014, ISBN 978-1-4503-2746-6 [contents] - [i2]Nicolas Sabouret, Hazaël Jones, Magalie Ochs, Mathieu Chollet, Catherine Pelachaud:
Expressing social attitudes in virtual agents for social training games. CoRR abs/1402.5045 (2014) - 2013
- [j34]Magalie Ochs, Catherine Pelachaud:
Socially Aware Virtual Characters: The Social Signal of Smiles [Social Sciences]. IEEE Signal Process. Mag. 30(2): 128-132 (2013) - [c111]Keith Anderson, Elisabeth André, Tobias Baur, Sara Bernardini, Mathieu Chollet, Evi Chryssafidou, Ionut Damian, Cathy Ennis, Arjan Egges, Patrick Gebhard, Hazaël Jones, Magalie Ochs, Catherine Pelachaud, Kaska Porayska-Pomsta, Paola Rizzo, Nicolas Sabouret:
The TARDIS Framework: Intelligent Virtual Agents for Social Coaching in Job Interviews. Advances in Computer Entertainment 2013: 476-491 - [c110]Mathieu Chollet, Magalie Ochs, Chloé Clavel, Catherine Pelachaud:
A Multimodal Corpus Approach to the Design of Virtual Recruiters. ACII 2013: 19-24 - [c109]Magalie Ochs, Ken Prepin, Catherine Pelachaud:
From Emotions to Interpersonal Stances: Multi-level Analysis of Smiling Virtual Characters. ACII 2013: 258-263 - [c108]Ken Prepin, Magalie Ochs, Catherine Pelachaud:
Beyond backchannels: co-construction of dyadic stance by reciprocal reinforcement of smiles between virtual agents. CogSci 2013 - [c107]Nesrine Fourati, Catherine Pelachaud:
Head, Shoulders and Hips Behaviors during Turning. HBU 2013: 223-234 - [c106]Yu Ding, Mathieu Radenen, Thierry Artières, Catherine Pelachaud:
Speech-driven eyebrow motion synthesis with contextual Markovian models. ICASSP 2013: 3756-3760 - [c105]Maurizio Mancini, Laurent Ach, Emeline Bantegnie, Tobias Baur, Nadia Berthouze, Debajyoti Datta, Yu Ding, Stéphane Dupont, Harry J. Griffin, Florian Lingenfelser, Radoslaw Niewiadomski, Catherine Pelachaud, Olivier Pietquin, Bilal Piot, Jérôme Urbain, Gualtiero Volpe, Johannes Wagner:
Laugh When You're Winning. eNTERFACE 2013: 50-79 - [c104]Magalie Ochs, Yu Ding, Nesrine Fourati, Mathieu Chollet, Brian Ravenet, Florian Pecune, Nadine Glas, Ken Prepin, Chloé Clavel, Catherine Pelachaud:
Vers des Agents Conversationnels Animés Socio-Affectifs. IHM 2013: 69-78 - [c103]Yu Ding, Catherine Pelachaud, Thierry Artières:
Modeling Multimodal Behaviors from Speech Prosody. IVA 2013: 217-228 - [c102]Brian Ravenet, Magalie Ochs, Catherine Pelachaud:
From a User-created Corpus of Virtual Agent's Non-verbal Behavior to a Computational Model of Interpersonal Attitudes. IVA 2013: 263-274 - [c101]Magalie Ochs, Catherine Pelachaud, Ken Prepin:
Social stances by virtual smiles. WIAMIS 2013: 1-4 - [p3]Sylwia Julia Hyniewska, Radoslaw Niewiadomski, Catherine Pelachaud:
Modeling Facial Expressions of Emotions. Emotion-Oriented Systems 2013: 169-190 - [e7]Catherine Pelachaud:
Emotion-Oriented Systems. Wiley 2013, ISBN 978-1-84821-258-9 [contents] - [e6]Ruth Aylett, Brigitte Krenn, Catherine Pelachaud, Hiroshi Shimodaira:
Intelligent Virtual Agents - 13th International Conference, IVA 2013, Edinburgh, UK, August 29-31, 2013. Proceedings. Lecture Notes in Computer Science 8108, Springer 2013, ISBN 978-3-642-40414-6 [contents] - 2012
- [j33]Magalie Ochs, David Sadek, Catherine Pelachaud:
A formal model of emotions for an empathic rational dialog agent. Auton. Agents Multi Agent Syst. 24(3): 410-440 (2012) - [j32]Magalie Ochs, Radoslaw Niewiadomski, Paul M. Brunet, Catherine Pelachaud:
Smiling virtual agent in social context. Cogn. Process. 13(Supplement-2): 519-532 (2012) - [j31]Elisabeth André, Marc Cavazza, Catherine Pelachaud:
Preface. J. Multimodal User Interfaces 6(1-2): 1 (2012) - [j30]Elisabetta Bevacqua, Etienne de Sevin, Sylwia Julia Hyniewska, Catherine Pelachaud:
A listener model: introducing personality traits. J. Multimodal User Interfaces 6(1-2): 27-38 (2012) - [j29]Alessandro Vinciarelli, Maja Pantic, Dirk Heylen, Catherine Pelachaud, Isabella Poggi, Francesca D'Errico, Marc Schröder:
Bridging the Gap between Social Animal and Unsocial Machine: A Survey of Social Signal Processing. IEEE Trans. Affect. Comput. 3(1): 69-87 (2012) - [j28]Marc Schröder, Elisabetta Bevacqua, Roddy Cowie, Florian Eyben, Hatice Gunes, Dirk Heylen, Mark ter Maat, Gary McKeown, Sathish Pammi, Maja Pantic, Catherine Pelachaud, Björn W. Schuller, Etienne de Sevin, Michel François Valstar, Martin Wöllmer:
Building Autonomous Sensitive Artificial Listeners. IEEE Trans. Affect. Comput. 3(2): 165-183 (2012) - [j27]Margaret McRorie, Ian Sneddon, Gary McKeown, Elisabetta Bevacqua, Etienne de Sevin, Catherine Pelachaud:
Evaluation of Four Designed Virtual Agent Personalities. IEEE Trans. Affect. Comput. 3(3): 311-322 (2012) - [j26]Etienne de Sevin, Elisabetta Bevacqua, Sylwia Julia Hyniewska, Catherine Pelachaud:
Un modèle d'interlocuteur virtuel avec des comportements d'écoute crédibles. Tech. Sci. Informatiques 31(4): 503-531 (2012) - [c100]Magalie Ochs, Catherine Pelachaud:
Model of the perception of smiling virtual character. AAMAS 2012: 87-94 - [c99]Ken Prepin, Catherine Pelachaud:
Live generation of interactive non-verbal behaviours. AAMAS 2012: 1179-1180 - [c98]Catherine Soladié, Hanan Salam, Catherine Pelachaud, Nicolas Stoiber, Renaud Séguier:
A multimodal fuzzy inference system using a continuous facial expression representation for emotion detection. ICMI 2012: 493-500 - [c97]Radoslaw Niewiadomski, Catherine Pelachaud:
Towards Multimodal Expression of Laughter. IVA 2012: 231-244 - [c96]Jing Huang, Catherine Pelachaud:
Expressive Body Animation Pipeline for Virtual Agent. IVA 2012: 355-362 - [c95]Jing Huang, Catherine Pelachaud:
An Efficient Energy Transfer Inverse Kinematics Solution. MIG 2012: 278-289 - [c94]Ken Prepin, Magalie Ochs, Catherine Pelachaud:
Mutual Stance Building in Dyad of Virtual Agents: Smile Alignment and Synchronisation. SocialCom/PASSAT 2012: 938-943 - 2011
- [j25]Virginie Demeure, Radoslaw Niewiadomski, Catherine Pelachaud:
How Is Believability of a Virtual Agent Related to Warmth, Competence, Personification, and Embodiment? Presence Teleoperators Virtual Environ. 20(5): 431-448 (2011) - [j24]Radoslaw Niewiadomski, Sylwia Julia Hyniewska, Catherine Pelachaud:
Constraint-Based Model for Synthesis of Multimodal Sequential Expressions of Emotions. IEEE Trans. Affect. Comput. 2(3): 134-146 (2011) - [c93]Le Quoc Anh, Catherine Pelachaud:
Expressive Gesture Model for Humanoid Robot. ACII (2) 2011: 224-231 - [c92]Marc Schröder, Paolo Baggia, Felix Burkhardt, Catherine Pelachaud, Christian Peter, Enrico Zovato:
EmotionML - An Upcoming Standard for Representing Emotions and Related States. ACII (1) 2011: 316-325 - [c91]Ken Prepin, Catherine Pelachaud:
Effect of time delays on agents' interaction dynamics. AAMAS 2011: 1055-1062 - [c90]Marc Schröder, Sathish Pammi, Hatice Gunes, Maja Pantic, Michel François Valstar, Roddy Cowie, Gary McKeown, Dirk Heylen, Mark ter Maat, Florian Eyben, Björn W. Schuller, Martin Wöllmer, Elisabetta Bevacqua, Catherine Pelachaud, Etienne de Sevin:
Come and have an emotional workout with sensitive artificial listeners! FG 2011: 646 - [c89]Quoc Anh Le, Catherine Pelachaud:
Generating Co-speech Gestures for the Humanoid Robot NAO through BML. Gesture Workshop 2011: 228-237 - [c88]Le Quoc Anh, Souheil Hanoune, Catherine Pelachaud:
Design and implementation of an expressive gesture model for a humanoid robot. Humanoids 2011: 134-140 - [c87]Ken Prepin, Catherine Pelachaud:
Shared Understanding and Synchrony Emergence - Synchrony as an Indice of the Exchange of Meaning between Dialog Partners. ICAART (2) 2011: 25-34 - [c86]Ken Prepin, Catherine Pelachaud:
Basics of Intersubjectivity Dynamics: Model of Synchrony Emergence When Dialogue Partners Understand Each Other. ICAART (Revised Selected Papers) 2011: 302-318 - [c85]Manoj Kumar Rajagopal, Patrick Horain, Catherine Pelachaud:
Virtually Cloning Real Human with Motion Style. IHCI 2011: 125-136 - [c84]Laurent Ach, Laurent Durieu, Benoît Morel, Karine Chevreau, Hugues de Mazancourt, Bernard Normier, Catherine Pelachaud, André-Marie Pez:
My Presenting Avatar. INTETAIN 2011: 240-242 - [c83]Elisabetta Bevacqua, Florian Eyben, Dirk Heylen, Mark ter Maat, Sathish Pammi, Catherine Pelachaud, Marc Schröder, Björn W. Schuller, Etienne de Sevin, Martin Wöllmer:
Interacting with Emotional Virtual Agents. INTETAIN 2011: 243-245 - [c82]Jérémy Rivière, Carole Adam, Sylvie Pesty, Catherine Pelachaud, Nadine Guiraud, Dominique Longin, Emiliano Lorini:
Expressive Multimodal Conversational Acts for SAIBA Agents. IVA 2011: 316-323 - [c81]Mohammad Obaid, Radoslaw Niewiadomski, Catherine Pelachaud:
Perception of Spatial Relations and of Coexistence with Virtual Agents. IVA 2011: 363-369 - [c80]Manoj Kumar Rajagopal, Patrick Horain, Catherine Pelachaud:
Animating a Conversational Agent with User Expressivity. IVA 2011: 464-465 - [c79]Radoslaw Niewiadomski, Mohammad Obaid, Elisabetta Bevacqua, Julian Looser, Le Quoc Anh, Catherine Pelachaud:
Cross-media agent platform. Web3D 2011: 11-19 - [p2]Maja Pantic, Roderick Cowie, Francesca D'Errico, Dirk Heylen, Marc Mehu, Catherine Pelachaud, Isabella Poggi, Marc Schröder, Alessandro Vinciarelli:
Social Signal Processing: The Research Agenda. Visual Analysis of Humans 2011: 511-538 - [e5]Anna Esposito, Alessandro Vinciarelli, Klára Vicsi, Catherine Pelachaud, Anton Nijholt:
Analysis of Verbal and Nonverbal Communication and Enactment. The Processing Issues - COST 2102 International Conference, Budapest, Hungary, September 7-10, 2010, Revised Selected Papers. Lecture Notes in Computer Science 6800, Springer 2011, ISBN 978-3-642-25774-2 [contents] - 2010
- [j23]Stefan Kopp, Ruth Aylett, Jonathan Gratch, Patrick Olivier, Catherine Pelachaud:
Guest editorial of the special issue on intelligent virtual agents. Auton. Agents Multi Agent Syst. 20(1): 1-2 (2010) - [j22]Catherine Pelachaud, Tamy Boubekeur:
Guest Editors' Introduction: Digital Human Faces: From Creation to Emotion. IEEE Computer Graphics and Applications 30(4): 18-19 (2010) - [j21]Radoslaw Niewiadomski, Catherine Pelachaud:
Affect expression in ECAs: Application to politeness displays. Int. J. Hum. Comput. Stud. 68(11): 851-871 (2010) - [j20]Jérôme Urbain, Radoslaw Niewiadomski, Elisabetta Bevacqua, Thierry Dutoit, Alexis Moinet, Catherine Pelachaud, Benjamin Picart, Joëlle Tilmanne, Johannes Wagner:
AVLaughterCycle. J. Multimodal User Interfaces 4(1): 47-58 (2010) - [j19]Etienne de Sevin, Radoslaw Niewiadomski, Elisabetta Bevacqua, André-Marie Pez, Maurizio Mancini, Catherine Pelachaud:
Greta, une plateforme d'agent conversationnel expressif et interactif. Tech. Sci. Informatiques 29(7): 751-776 (2010) - [c78]Rodolphe Gelin, Christophe d'Alessandro, Quoc Anh Le, Olivier Deroo, David Doukhan, Jean-Claude Martin, Catherine Pelachaud, Albert Rilliard, Sophie Rosset:
Towards a Storytelling Humanoid Robot. AAAI Fall Symposium: Dialog with Robots 2010 - [c77]Etienne de Sevin, Sylwia Julia Hyniewska, Catherine Pelachaud:
Influence of Personality Traits on Backchannel Selection. IVA 2010: 187-193 - [c76]Elisabetta Bevacqua, Sathish Pammi, Sylwia Julia Hyniewska, Marc Schröder, Catherine Pelachaud:
Multimodal Backchannels for Embodied Conversational Agents. IVA 2010: 194-200 - [c75]Radoslaw Niewiadomski, Virginie Demeure, Catherine Pelachaud:
Warmth, Competence, Believability and Virtual Agents. IVA 2010: 272-285 - [c74]Magalie Ochs, Radoslaw Niewiadomski, Catherine Pelachaud:
How a Virtual Agent Should Smile? - Morphological and Dynamic Characteristics of Virtual Agent's Smiles. IVA 2010: 427-440 - [c73]Jérôme Urbain, Elisabetta Bevacqua, Thierry Dutoit, Alexis Moinet, Radoslaw Niewiadomski, Catherine Pelachaud, Benjamin Picart, Joëlle Tilmanne, Johannes Wagner:
The AVLaughterCycle Database. LREC 2010 - [c72]Radoslaw Niewiadomski, Ken Prepin, Elisabetta Bevacqua, Magalie Ochs, Catherine Pelachaud:
Towards a smiling ECA: studies on mimicry, timing and types of smiles. SSPW@MM 2010: 65-70 - [e4]Jan M. Allbeck, Norman I. Badler, Timothy W. Bickmore, Catherine Pelachaud, Alla Safonova:
Intelligent Virtual Agents, 10th International Conference, IVA 2010, Philadelphia, PA, USA, September 20-22, 2010. Proceedings. Lecture Notes in Computer Science 6356, Springer 2010, ISBN 978-3-642-15891-9 [contents]
2000 – 2009
- 2009
- [j18]Maurizio Mancini, Catherine Pelachaud:
Generating distinctive behavior for Embodied Conversational Agents. J. Multimodal User Interfaces 3(4): 249-261 (2009) - [j17]Catherine Pelachaud:
Studies on gesture expressivity for a virtual agent. Speech Commun. 51(7): 630-639 (2009) - [c71]Radoslaw Niewiadomski, Sylwia Julia Hyniewska, Catherine Pelachaud:
Evaluation of multimodal sequential expressions of emotions in ECA. ACII 2009: 1-7 - [c70]Marc Schröder, Elisabetta Bevacqua, Florian Eyben, Hatice Gunes, Dirk Heylen, Mark ter Maat, Sathish Pammi, Maja Pantic, Catherine Pelachaud, Björn W. Schuller, Etienne de Sevin, Michel F. Valstar, Martin Wöllmer:
A demonstration of audiovisual sensitive artificial listeners. ACII 2009: 1-2 - [c69]Radoslaw Niewiadomski, Elisabetta Bevacqua, Maurizio Mancini, Catherine Pelachaud:
Greta: an interactive expressive ECA system. AAMAS (2) 2009: 1399-1400 - [c68]Zhenbo Li, Patrick Horain, André-Marie Pez, Catherine Pelachaud:
Statistical Gesture Models for 3D Motion Capture from a Library of Gestures with Variants. Gesture Workshop 2009: 219-230 - [c67]Sames Al Moubayed, Malek Baklouti, Mohamed Chetouani, Thierry Dutoit, Ammar Mahdhaoui, Jean-Claude Martin, Stanislav Ondás, Catherine Pelachaud, Jérôme Urbain, Mehmet Yilmaz:
Generating Robot/Agent backchannels during a storytelling experiment. ICRA 2009: 3749-3754 - [c66]Margaret McRorie, Ian Sneddon, Etienne de Sevin, Elisabetta Bevacqua, Catherine Pelachaud:
A Model of Personality and Emotional Traits. IVA 2009: 27-33 - [c65]Radoslaw Niewiadomski, Sylwia Julia Hyniewska, Catherine Pelachaud:
Modeling Emotional Expressions as Sequences of Behaviors. IVA 2009: 316-322 - [c64]Etienne de Sevin, Catherine Pelachaud:
Real-Time Backchannel Selection for ECAs According to User's Level of Interest. IVA 2009: 494-495 - 2008
- [c63]Magalie Ochs, Catherine Pelachaud, David Sadek:
An empathic virtual dialog agent to improve human-machine interaction. AAMAS (1) 2008: 89-96 - [c62]Maurizio Mancini, Catherine Pelachaud:
Distinctiveness in multimodal behaviors. AAMAS (1) 2008: 159-166 - [c61]Gersende Georg, Catherine Pelachaud, Marc Cavazza:
Emotional reading of medical texts using conversational agents. AAMAS (3) 2008: 1285-1288 - [c60]Gérard Chollet, Anna Esposito, Annie Gentes, Patrick Horain, Walid Karam, Zhenbo Li, Catherine Pelachaud, Patrick Perrot, Dijana Petrovska-Delacrétaz, Dianle Zhou, Leila Zouari:
Multimodal Human Machine Interactions in Virtual and Augmented Reality. COST 2102 School (Vietri) 2008: 1-23 - [c59]Radoslaw Niewiadomski, Magalie Ochs, Catherine Pelachaud:
Expressions of Empathy in ECAs. IVA 2008: 37-44 - [c58]Elisabetta Bevacqua, Maurizio Mancini, Catherine Pelachaud:
A Listening Agent Exhibiting Variable Behaviour. IVA 2008: 262-269 - [c57]Dirk Heylen, Stefan Kopp, Stacy Marsella, Catherine Pelachaud, Hannes Högni Vilhjálmsson:
The Next Step towards a Function Markup Language. IVA 2008: 270-280 - [c56]Gersende Georg, Marc Cavazza, Catherine Pelachaud:
Visualizing the Importance of Medical Recommendations with Conversational Agents. IVA 2008: 380-393 - 2007
- [j16]Nicolas Ech Chafai, Catherine Pelachaud, Danielle Pelé:
A case study of gesture expressivity breaks. Lang. Resour. Evaluation 41(3-4): 341-365 (2007) - [j15]George Caridakis, Amaryllis Raouzaiou, Elisabetta Bevacqua, Maurizio Mancini, Kostas Karpouzis, Lori Malatesta, Catherine Pelachaud:
Virtual agent multimodal mimicry of humans. Lang. Resour. Evaluation 41(3-4): 367-388 (2007) - [j14]Maurizio Mancini, Roberto Bresin, Catherine Pelachaud:
A Virtual Head Driven by Music Expressivity. IEEE Trans. Speech Audio Process. 15(6): 1833-1841 (2007) - [c55]Radoslaw Niewiadomski, Catherine Pelachaud:
Model of Facial Expressions Management for an Embodied Conversational Agent. ACII 2007: 12-23 - [c54]Magalie Ochs, Catherine Pelachaud, David Sadek:
An Empathic Rational Dialog Agent. ACII 2007: 338-349 - [c53]Marc Schröder, Laurence Devillers, Kostas Karpouzis, Jean-Claude Martin, Catherine Pelachaud, Christian Peter, Hannes Pirker, Björn W. Schuller, Jianhua Tao, Ian Wilson:
What Should a Generic Emotion Markup Language Be Able to Represent? ACII 2007: 440-451 - [c52]Maurizio Mancini, Catherine Pelachaud:
Implementing Distinctive Behavior for Conversational Agents. Gesture Workshop 2007: 163-174 - [c51]Nicolas Ech Chafai, Magalie Ochs, Christopher E. Peters, Maurizio Mancini, Elisabetta Bevacqua, Catherine Pelachaud:
Des agents virtuels sociaux et émotionnels pour l'interaction humain-machine. IHM 2007: 207-214 - [c50]Radoslaw Niewiadomski, Catherine Pelachaud:
Fuzzy Similarity of Facial Expressions of Embodied Agents. IVA 2007: 86-98 - [c49]Hannes Högni Vilhjálmsson, Nathan Cantelmo, Justine Cassell, Nicolas Ech Chafai, Michael Kipp, Stefan Kopp, Maurizio Mancini, Stacy Marsella, Andrew N. Marshall, Catherine Pelachaud, Zsófia Ruttkay, Kristinn R. Thórisson, Herwin van Welbergen, Rick J. van der Werf:
The Behavior Markup Language: Recent Developments and Challenges. IVA 2007: 99-111 - [c48]Maurizio Mancini, Catherine Pelachaud:
Dynamic Behavior Qualifiers for Conversational Agents. IVA 2007: 112-124 - [c47]Dirk Heylen, Elisabetta Bevacqua, Marion Tellier, Catherine Pelachaud:
Searching for Prototypical Facial Feedback Signals. IVA 2007: 147-153 - [c46]Nicolas Ech Chafai, Catherine Pelachaud, Danielle Pelé:
Towards the Specification of an ECA with Variants of Gestures. IVA 2007: 366-367 - [c45]Fred Charles, Samuel Lemercier, Thurid Vogt, Nikolaus Bee, Maurizio Mancini, Jérôme Urbain, Marc Price, Elisabeth André, Catherine Pelachaud, Marc Cavazza:
Affective Interactive Narrative in the CALLAS Project. International Conference on Virtual Storytelling 2007: 210-213 - [e3]Catherine Pelachaud, Jean-Claude Martin, Elisabeth André, Gérard Chollet, Kostas Karpouzis, Danielle Pelé:
Intelligent Virtual Agents, 7th International Conference, IVA 2007, Paris, France, September 17-19, 2007, Proceedings. Lecture Notes in Computer Science 4722, Springer 2007, ISBN 978-3-540-74996-7 [contents] - 2006
- [j13]Kristinn R. Thórisson, Hannes Högni Vilhjálmsson, Catherine Pelachaud, Stefan Kopp, Norman I. Badler, W. Lewis Johnson, Stacy Marsella, Brigitte Krenn:
Representations for Multimodal Generation: A Workshop Report. AI Mag. 27(1): 108 (2006) - [j12]Jean-Claude Martin, Radoslaw Niewiadomski, Laurence Devillers, Stéphanie Buisine, Catherine Pelachaud:
Multimodal Complex Emotions: Gesture Expressivity and Blended Facial Expressions. Int. J. Humanoid Robotics 3(3): 269-291 (2006) - [j11]Jean-Claude Martin, Sarkis Abrilian, Laurence Devillers, Myriam Lamolle, Maurizio Mancini, Catherine Pelachaud:
Du corpus vidéo à l'agent expressif. Utilisation des différents niveaux de représentation multimodale et émotionnelle. Rev. d'Intelligence Artif. 20(4-5): 477-498 (2006) - [j10]Magalie Ochs, Radoslaw Niewiadomski, Catherine Pelachaud, David Sadek:
Expressions intelligentes des émotions. Rev. d'Intelligence Artif. 20(4-5): 607-620 (2006) - [j9]Stéphanie Buisine, Björn Hartmann, Maurizio Mancini, Catherine Pelachaud:
Conception et évaluation d'un modèle d'expressivité pour les gestes des agents conversationnels. Rev. d'Intelligence Artif. 20(4-5): 621-638 (2006) - [c44]Stéphanie Buisine, Sarkis Abrilian, Radoslaw Niewiadomski, Jean-Claude Martin, Laurence Devillers, Catherine Pelachaud:
Perception of Blended Emotions: From Video Corpus to Expressive Agent. IVA 2006: 93-106 - [c43]Nicolas Ech Chafai, Catherine Pelachaud, Danielle Pelé, Gaspard Breton:
Gesture Expressivity Modulations in an ECA Application. IVA 2006: 181-192 - [c42]Stefan Kopp, Brigitte Krenn, Stacy Marsella, Andrew N. Marshall, Catherine Pelachaud, Hannes Pirker, Kristinn R. Thórisson, Hannes Högni Vilhjálmsson:
Towards a Common Framework for Multimodal Generation: The Behavior Markup Language. IVA 2006: 205-217 - [c41]Elisabetta Bevacqua, Amaryllis Raouzaiou, Christopher E. Peters, George Caridakis, Kostas Karpouzis, Catherine Pelachaud, Maurizio Mancini:
Multimodal Sensing, Interpretation and Copying of Movements by a Virtual Agent. PIT 2006: 164-174 - [c40]Nasser Rezzoug, Philippe Gorce, Alexis Héloir, Sylvie Gibet, Nicolas Courty, Jean-François Kamp, Franck Multon, Catherine Pelachaud:
Virtual humanoids endowed with expressive communication gestures : the HuGEx project. SMC 2006: 4445-4450 - [c39]Isabella Poggi, Radoslaw Niewiadomski, Catherine Pelachaud:
Facial Deception in Humans and ECAs. ZiF Workshop 2006: 198-221 - [e2]Zsófia Ruttkay, Elisabeth André, W. Lewis Johnson, Catherine Pelachaud:
Evaluating Embodied Conversational Agents, 14.03. - 19.03.2004. Dagstuhl Seminar Proceedings 04121, Internationales Begegnungs- und Forschungszentrum für Informatik (IBFI), Schloss Dagstuhl, Germany 2006 [contents] - 2005
- [c38]Magalie Ochs, Radoslaw Niewiadomski, Catherine Pelachaud, David Sadek:
Intelligent Expressions of Emotions. ACII 2005: 707-714 - [c37]Björn Hartmann, Maurizio Mancini, Stéphanie Buisine, Catherine Pelachaud:
Design and evaluation of expressive gesture synthesis for embodied conversational agents. AAMAS 2005: 1095-1096 - [c36]Myriam Lamolle, Maurizio Mancini, Catherine Pelachaud, Sarkis Abrilian, Jean-Claude Martin, Laurence Devillers:
Contextual Factors and Adaptative Multimodal Human-Computer Interaction: Multi-level Specification of Emotion and Expressivity in Embodied Conversational Agents. CONTEXT 2005: 225-239 - [c35]Björn Hartmann, Maurizio Mancini, Catherine Pelachaud:
Implementing Expressive Gesture Synthesis for Embodied Conversational Agents. Gesture Workshop 2005: 188-199 - [c34]Maurizio Mancini, Roberto Bresin, Catherine Pelachaud:
From Acoustic Cues to an Expressive Agent. Gesture Workshop 2005: 280-291 - [c33]Maurizio Mancini, Björn Hartmann, Catherine Pelachaud, Amaryllis Raouzaiou, Kostas Karpouzis:
Expressive avatars in MPEG-4. ICME 2005: 800-803 - [c32]Christopher E. Peters, Catherine Pelachaud, Elisabetta Bevacqua, Maurizio Mancini, Isabella Poggi:
A Model of Attention and Interest Using Gaze Behavior. IVA 2005: 229-240 - [c31]Jean-Claude Martin, Sarkis Abrilian, Laurence Devillers, Myriam Lamolle, Maurizio Mancini, Catherine Pelachaud:
Levels of Representation in the Annotation of Emotion for the Specification of Expressivity in ECAs. IVA 2005: 405-417 - [c30]Catherine Pelachaud:
Multimodal expressive embodied conversational agents. ACM Multimedia 2005: 683-689 - 2004
- [j8]Elisabetta Bevacqua, Catherine Pelachaud:
Expressive audio-visual speech. Comput. Animat. Virtual Worlds 15(3-4): 297-304 (2004) - [c29]Vincent Maya, Myriam Lamolle, Catherine Pelachaud:
Influences and Embodied Conversational Agents. AAMAS 2004: 1306-1307 - [c28]Vincent Maya, Myriam Lamolle, Catherine Pelachaud:
Embodied Conversational Agents and Influences. ECAI 2004: 1057-1058 - [c27]Walid Karam, Chafic Mokbel, Hanna Greige, Guido Aversano, Catherine Pelachaud, Gérard Chollet:
An Audio-Visual Imposture Scenario by Talking Face Animation. Summer School on Neural Networks 2004: 365-369 - [p1]Berardina De Carolis, Catherine Pelachaud, Isabella Poggi, Mark Steedman:
APML, a Markup Language for Believable Behavior Generation. Life-like characters 2004: 65-86 - [e1]Zsófia Ruttkay, Catherine Pelachaud:
From Brows to Trust - Evaluating Embodied Conversational Agents. Human-Computer Interaction Series 7, Kluwer 2004, ISBN 978-1-4020-2729-1 [contents] - [i1]Zsófia Ruttkay, Elisabeth André, W. Lewis Johnson, Catherine Pelachaud:
04121 Abstracts Collection -- Evaluating Embodied Conversational Agents. Evaluating Embodied Conversational Agents 2004 - 2003
- [j7]Fiorella de Rosis, Catherine Pelachaud, Isabella Poggi, Valeria Carofiglio, Berardina De Carolis:
From Greta's mind to her face: modelling the dynamics of affective states in a conversational embodied agent. Int. J. Hum. Comput. Stud. 59(1-2): 81-118 (2003) - [j6]Colin Matheson, Catherine Pelachaud, Fiorella de Rosis, Thomas Rist:
MagiCster: Believable Agents and Dialogue. Künstliche Intell. 17(4): 24- (2003) - [c26]Catherine Pelachaud, Massimo Bilvi:
Computational Model of Believable Conversational Agents. Communication in Multiagent Systems 2003: 300-317 - [c25]Isabella Poggi, Catherine Pelachaud, Emanuela Magno Caldognetto:
Gestural mind markers in ECAs. AAMAS 2003: 1098-1099 - [c24]Elisabetta Bevacqua, Catherine Pelachaud:
Triphone-based coarticulation model. AVSP 2003: 221-226 - [c23]Thomas Rist, Markus Schmitt, Catherine Pelachaud, Massimo Bilvi:
Towards a Simulation of Conversations with Expressive Embodied Speakers and Listeners. CASA 2003: 5-10 - [c22]Isabella Poggi, Catherine Pelachaud, Emanuela Magno Caldognetto:
Gestural Mind Markers in ECAs. Gesture Workshop 2003: 338-349 - [c21]Alain Goyé, Eric Lecolinet, Shiuan-Sung Lin, Gérard Chollet, Catherine Pelachaud, Xiaoqing Ding, Yang Ni:
Multimodal user interfaces for a travel assistant. IHM 2003: 244-247 - [c20]Catherine Pelachaud, Massimo Bilvi:
Modelling Gaze Behaviour for Conversational Agents. IVA 2003: 93-100 - 2002
- [j5]Catherine Pelachaud, Isabella Poggi:
Subtleties of facial expressions in embodied agents. Comput. Animat. Virtual Worlds 13(5): 301-312 (2002) - [j4]Catherine Pelachaud, Isabella Poggi:
Multimodal embodied agents. Knowl. Eng. Rev. 17(2): 181-196 (2002) - [c19]Catherine Pelachaud, Valeria Carofiglio, Berardina De Carolis, Fiorella de Rosis, Isabella Poggi:
Embodied contextual agent in information delivering application. AAMAS 2002: 758-765 - [c18]Björn Hartmann, Maurizio Mancini, Catherine Pelachaud:
Formational Parameters and Adaptive Prototype Instantiation for MPEG-4 Compliant Gesture Synthesis. CA 2002: 111-119 - [c17]Berardina De Carolis, Valeria Carofiglio, Catherine Pelachaud:
From Discourse Plans to Believable Behavior Generation. INLG 2002: 65-72 - 2001
- [c16]Catherine Pelachaud, Isabella Poggi, Berardina De Carolis, Fiorella de Rosis:
A reflexive, not impulsive agent. Agents 2001: 186-187 - [c15]Catherine Pelachaud, Emanuela Magno Caldognetto, Claudio Zmarich, Piero Cosi:
Modelling an Italian talking head. AVSP 2001: 72-77 - [c14]Berardina De Carolis, Catherine Pelachaud, Isabella Poggi, Fiorella de Rosis:
Behavior Planning for a Reflexive Agent. IJCAI 2001: 1059-1066 - [c13]Catherine Pelachaud, Emanuela Magno Caldognetto, Claudio Zmarich, Piero Cosi:
An approach to an Italian talking head. INTERSPEECH 2001: 1035-1038 - 2000
- [j3]Isabella Poggi, Catherine Pelachaud, Fiorella de Rosis:
Eye Communication in a Conversational 3D Synthetic Agent. AI Commun. 13(3): 169-182 (2000) - [c12]Catherine Pelachaud:
Contextually Embodied Agents. DEFORM/AVATARS 2000: 98-108
1990 – 1999
- 1999
- [c11]Isabella Poggi, Catherine Pelachaud:
Emotional Meaning and Expression in Animated Faces. IWAI 1999: 182-195 - 1998
- [j2]Isabella Poggi, Catherine Pelachaud:
Performative faces. Speech Commun. 26(1-2): 5-21 (1998) - [c10]Catherine Pelachaud, Isabella Poggi:
Multimodal communication between synthetic agents. AVI 1998: 156-163 - 1997
- [c9]Isabella Poggi, Catherine Pelachaud:
Context sensitive faces. AVSP 1997: 17-20 - 1996
- [j1]Catherine Pelachaud, Norman I. Badler, Mark Steedman:
Generating Facial Expressions for Speech. Cogn. Sci. 20(1): 1-46 (1996) - [c8]Catherine Pelachaud:
Simulation of face-to-face interaction. AVI 1996: 269-271 - 1995
- [c7]Catherine Pelachaud, Justine Cassell, Norman I. Badler, Mark Steedman, Scott Prevost, Matthew Stone:
Synthesizing Cooperative Conversation. Multimodal Human-Computer Communication 1995: 68-88 - [c6]Catherine Pelachaud, Scott Prevost:
Coordinating Vocal and Visual Parameters for 3D Virtual Agents. Virtual Environments 1995: 99-106 - 1994
- [c5]Catherine Pelachaud, Cornelius W. A. M. van Overveld, Chin Seah:
Modeling and animating the human tongue during speech production. CA 1994: 40-49 - [c4]Justine Cassell, Catherine Pelachaud, Norman I. Badler, Mark Steedman, Brett Achorn, Tripp Becket, Brett Douville, Scott Prevost, Matthew Stone:
Animated conversation: rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents. SIGGRAPH 1994: 413-420 - [c3]Catherine Pelachaud, Scott Prevost:
Sight and sound: generating facial expressions and spoken intonation from context. SSW 1994: 216-219 - 1993
- [c2]Catherine Pelachaud, Marie-Luce Viaud, Hussein M. Yahia:
Rule-Structured Facial Animation System. IJCAI 1993: 1610-1617 - 1992
- [c1]Catherine Pelachaud:
Functional Decomposition of Facial Expressions for an Animation System. Advanced Visual Interfaces 1992: 26-49
last updated on 2024-10-18 19:32 CEST by the dblp team
all metadata released as open data under CC0 1.0 license