AVSP 2013: Annecy, France
- Slim Ouni, Frédéric Berthommier, Alexandra Jesse:
Auditory-Visual Speech Processing, AVSP 2013, Annecy, France, August 29 - September 1, 2013. ISCA 2013
Invited Papers
- Angelo Cangelosi:
Embodied language learning with the humanoid robot iCub. 1
- Charles Spence:
Audiovisual speech integration: modulatory factors and the link to sound symbolism. 3
Audiovisual Prosody
- Mandy Visser, Emiel Krahmer, Marc Swerts:
Who presents worst? a study on expressions of negative feedback in different intergroup contexts. 5-10
- Adela Barbulescu, Thomas Hueber, Gérard Bailly, Rémi Ronfard:
Audio-visual speaker conversion using prosody features. 11-16
- Gregory Zelic, Jeesun Kim, Chris Davis:
Spontaneous synchronisation between repetitive speech and rhythmic gesture. 17-20
- Phoebe Mui, Martijn Goudbeek, Marc Swerts, Per van der Wijst:
Culture and nonverbal cues: how does power distance influence facial expressions in game contexts? 21-26
- Angelika Hönemann, Diego A. Evin, Alejandro J. Hadad, Hansjörg Mixdorff, Sascha Fagel:
Predicting head motion from prosodic and linguistic features. 27-30
Audiovisual Speech by Machines
- Jakob J. Hollenstein, Michael Pucher, Dietmar Schabus:
Visual control of hidden-semi-Markov-model based acoustic speech synthesis. 31-36
- Dietmar Schabus, Michael Pucher, Gregor Hofer:
Objective and subjective feature evaluation for speaker-adaptive visual speech synthesis. 37-42
- Peng Shen, Satoshi Tamura, Satoru Hayamizu:
Audio-visual interaction in sparse representation features for noise robust audio-visual speech recognition. 43-48
- Paula Dornhofer Paro Costa, José Mario De Martino:
Assessing the visual speech perception of sampled-based talking heads. 49-54
- Ingmar Steiner, Korin Richmond, Slim Ouni:
Speech animation using electromagnetic articulography as motion capture data. 55-60
Development of Audiovisual Speech Perception
- Martijn Baart, Jean Vroomen, Kathleen E. Shaw, Heather Bortfeld:
Phonetic information in audiovisual speech is more important for adults than for infants; preliminary findings. 61-64
- Julia Irwin, Lawrence Brancazio:
Audiovisual speech perception in children with autism spectrum disorders and typical controls. 65-70
- Mathilde Fort, Alexa Weiß, Alexander Martin, Sharon Peperkamp:
Looking for the bouba-kiki effect in prelexical infants. 71-76
- Margriet A. Groen, Alexandra Jesse:
Audiovisual speech perception in children and adolescents with developmental dyslexia: no deficit with McGurk stimuli. 77-80
Audiovisual Speech Perception in Adverse Listening Conditions
- Natalie Fecher, Dominic Watt:
Effects of forensically-realistic facial concealment on auditory-visual consonant recognition in quiet and noise conditions. 81-86
- Clémence Bayard, Cécile Colin, Jacqueline Leybaert:
Impact of cued speech on audio-visual speech integration in deaf and hearing adults. 87-92
- Valérie Hazan, Jeesun Kim:
Acoustic and visual adaptations in speech produced to counter adverse listening conditions. 93-98
- Pascal Barone, Kuzma Strelnikov, Olivier Déguine:
Role of audiovisual plasticity in speech recovery after adult cochlear implantation. 99-104
- Michael Fitzpatrick, Jeesun Kim, Chris Davis:
Auditory and auditory-visual Lombard speech perception by younger and older adults. 105-110
Binding of Audiovisual Speech Information
- Hansjörg Mixdorff, Angelika Hönemann, Sascha Fagel:
Integration of acoustic and visual cues in prominence perception. 111-116
- Chris Davis, Jeesun Kim:
Detecting auditory-visual speech synchrony: how precise? 117-122
- Jeesun Kim, Chris Davis:
How far out? the effect of peripheral visual speech on speech perception. 123-128
- Ragnhild Eg, Dawn M. Behne:
Temporal integration for live conversational speech. 129-134
- Jérémy Miranda, Slim Ouni:
Mixing faces and voices: a study of the influence of faces and voices on audiovisual intelligibility. 135-140
Neuropsychology and Multimodality
- Avril Treille, Camille Cordeboeuf, Coriandre Vilain, Marc Sato:
The touch of your lips: haptic information speeds up auditory speech processing. 141-146
- Jean-Luc Schwartz, Christophe Savariaux:
Data and simulations about audiovisual asynchrony and predictability in speech perception. 147-152
- Kaisa Tiippana, Kaupo Viitanen, Riia Kivimäki:
The effect of musical aptitude on the integration of audiovisual speech and non-speech signals in children. 153-156
- Avril Treille, Coriandre Vilain, Thomas Hueber, Jean-Luc Schwartz, Laurent Lamalle, Marc Sato:
The sight of your tongue: neural correlates of audio-lingual speech perception. 157-162
Poster Sessions
- Shahram Kalantari, Rajitha Navarathna, David Dean, Sridha Sridharan:
Visual front-end wars: Viola-Jones face detector vs Fourier Lucas-Kanade. 163-168
- Simon Alexanderson, David House, Jonas Beskow:
Aspects of co-occurring syllables and head nods in spontaneous dialogue. 169-172
- Sascha Fagel, Andreas Hilbert, Christopher C. Mayer, Martin Morandell, Matthias Gira, Martin Petzold:
Avatar user interfaces in an OSGi-based system for health care services. 173-174
- Utpala Musti, Vincent Colotte, Slim Ouni, Caroline Lavecchia, Brigitte Wrobel-Dautcourt, Marie-Odile Berger:
Automatic feature selection for acoustic-visual concatenative speech synthesis: towards a perceptual objective measure. 175-180
- Olha Nahorna, Ganesh Attigodu Chandrashekara, Frédéric Berthommier, Jean-Luc Schwartz:
Modulating fusion in the McGurk effect by binding processes and contextual noise. 181-186
- Bart Joosten, Eric O. Postma, Emiel Krahmer:
Visual voice activity detection at different speeds. 187-190
- Zuheng Ming, Denis Beautemps, Gang Feng:
GMM mapping of visual features of cued speech from speech spectral features. 191-196
- Dominic Howell, Barry-John Theobald, Stephen J. Cox:
Confusion modelling for automated lip-reading using weighted finite-state transducers. 197-202
- Felix Shaw, Barry-John Theobald:
Transforming neutral visual speech into expressive visual speech. 203-208
- Martin Heckmann, Keisuke Nakamura, Kazuhiro Nakadai:
Differences in the audio-visual detection of word prominence from Japanese and English speakers. 209-214
- Faheem Khan, Ben Milner:
Speaker separation using visually-derived binary masks. 215-220
- Takumi Seko, Naoya Ukai, Satoshi Tamura, Satoru Hayamizu:
Improvement of lipreading performance using discriminative feature and speaker adaptation. 221-226
- Takeshi Saitoh:
Efficient face model for lip reading. 227-232