HCI 2022: Virtual Event - Volume 44
- Masaaki Kurosu, Sakae Yamamoto, Hirohiko Mori, Dylan D. Schmorrow, Cali M. Fidopiastis, Norbert A. Streitz, Shin'ichi Konomi: HCI International 2022 - Late Breaking Papers. Multimodality in Advanced Interaction Environments - 24th International Conference on Human-Computer Interaction, HCII 2022, Virtual Event, June 26 - July 1, 2022, Proceedings. Lecture Notes in Computer Science 13519, Springer 2022, ISBN 978-3-031-17617-3
Multimodal Interaction and Psychophysiological Computing
- Yuta Abe, Takashi Komuro: 3D Hand Pose Recognition Over a Wide Area Using Two Omnidirectional Cameras with Field-of-view Division. 3-17
- Isabel M. Barradas, Reinhard Tschiesner, Angelika Peer: Towards a Dynamic Model for the Prediction of Emotion Intensity from Peripheral Physiological Signals. 18-35
- Cornelia Ebert, Andy Lücking, Alexander Mehler: Introduction to the 2nd Edition of "Semantic, Artificial and Computational Interaction Studies". 36-47
- Miguel Ángel García-Ruíz, Laura S. Gaytán-Lugo, Pedro C. Santana-Mancilla, Raul Aquino-Santos: Towards Efficient Odor Diffusion with an Olfactory Display Using an Electronic Nose. 48-56
- Akira Hashimoto, Jun-Li Lu, Yoichi Ochiai: Rendering Personalized Real-Time Expressions While Speaking Under a Mask. 57-66
- Panikos Heracleous, Satoru Fukayama, Jun Ogata, Yasser Mohammad: Applying Generative Adversarial Networks and Vision Transformers in Speech Emotion Recognition. 67-75
- Yutaka Ishii, Satoshi Kurokawa, Miwako Kitamura, Tomio Watanabe: Development of a Web-Based Interview Support System Using Characters Nodding with Various Movements. 76-87
- Lana Jalal, Angelika Peer: Emotion Recognition from Physiological Signals Using Continuous Wavelet Transform and Deep Learning. 88-99
- Jai Prakash Kushvah, Gerhard Rinkenauer: Surrogate Sensory Feedback of Grip Force in Older and Younger Participants only Influences Fine Motor Control, but Not the Object Weight Perception. 100-112
- Chiara Mazzocconi: Laughter Meaning Construction and Use in Development: Children and Spoken Dialogue Systems. 113-133
- Yuriya Nakamura, Lei Jing: Skeleton-Based Sign Language Recognition with Graph Convolutional Networks on Small Data. 134-142
- Francisco Vinicius Nascimento da Silva, Francisco C. de Mattos Brito Oliveira, Robson de Moraes Alves, Gabriela de Castro Quintinho: Gesture Elicitation for Augmented Reality Environments. 143-159
- Mehdi Ousmer, Arthur Sluÿters, Nathan Magrofuoco, Paolo Roselli, Jean Vanderdonckt: A Systematic Procedure for Comparing Template-Based Gesture Recognizers. 160-179
- Jiayu Su: An Elderly User-Defined Gesture Set for Audio Natural Interaction in Square Dance. 180-191
- Miona Tabuchi, Tetsuya Hirotomi: Using Fiducial Marker for Analyzing Wearable Eye-Tracker Gaze Data Measured While Cooking. 192-204
- Tao Wang, Hanling Zhang: Using Wearable Devices for Emotion Recognition in Mobile Human-Computer Interaction: A Review. 205-227
Human-Robot Interaction
- Katharina Gleichauf, Ramona Schmid, Verena Wagner-Hartl: Human-Robot-Collaboration in the Healthcare Environment: An Exploratory Study. 231-240
- Peicheng Guo, Iskander Smit: Towards an Active Predictive Relation by Reconceptualizing a Vacuum Robot: Research on the Transparency and Acceptance of the Predictive Behaviors. 241-256
- Kuo-Liang Huang, Jinchen Jiang, Yune-Yu Cheng: On Improving the Acceptance of Intelligent Companion Robots Among Chinese Empty-Nesters with the Application of Emotional Design. 257-270
- Masashi Inoue: Human Interpretation of Inter-robot Communication. 271-279
- Kristiina Jokinen: Conversational Agents and Robot Interaction. 280-292
- Marian Obuseh, Vincent G. Duffy: Surgical Human-Robot Interaction: A Bibliometric Review. 293-312
- Luka Orsag, Tomislav Stipancic, Leon Koren, Karlo Posavec: Human Intention Recognition for Safe Robot Action Planning Using Head Pose. 313-327
- José Varela Aldás, Jorge Buele, Santiago Guerrero-Núñez, Víctor H. Andaluz: Mobile Manipulator for Hospital Care Using Firebase. 328-341
- Yoo Jin Won, Seunghee Hwang, Serin Ko, Jung-Mi Park: Iterative Design Process for HRI: Serving Robot in Restaurant. 342-353
Brain-Computer Interfaces
- Julia Elizabeth Calderón-Reyes, Francisco Javier Álvarez Rodríguez, María Lorena Barba-González, Héctor Cardona Reyes: Methodology Design of the Correlation Between EEG Signals and Brain Regions Mapping in Panic Attacks. 357-370
- Elizabeth Clark, Adrienne Czaplewski, Khoa Nguyen, Patrick Pasciucco, Marimar Rios, Milena Korostenskaja: Establishing Clinical Protocols for BCI-Based Motor Rehabilitation in Individuals Post Stroke - The Impact of Feedback Type and Selected Outcome Measures: A Systematic Review. 371-390
- Taisija Demchenko, Milena Korostenskaja: Training CNN to Detect Motor Imagery in ECoG Data Recorded During Dreaming. 391-414
- Guangyao Dou, Zheng Zhou, Xiaodong Qu: Time Majority Voting, a PC-Based EEG Classifier for Non-expert Users. 415-428
- Alexandra Fischmann, Sydney Levy: It's Easy as ABC Framework for User Feedback. 429-441
- Joseph R. Geraghty, George Schoettle: Single-Subject vs. Cross-Subject Motor Imagery Models. 442-452
- Nikita Gordienko, Oleksandr Rokovyi, Yuri G. Gordienko, Sergii G. Stirenko: Hybrid Convolutional, Recurrent and Attention-Based Architectures of Deep Neural Networks for Classification of Human-Computer Interaction by Electroencephalography. 453-468
- Marshall McArthur, Xavier Serrano, Viktoriia Zakharova: Predicting the Future: A ML MI Replication Study. 469-481
- Ian McDiarmid-Sterling, Luca Cerbin: High-Powered Ocular Artifact Detection with C-LSTM-E. 482-496
- Anarsaikhan Tuvshinjargal, Elliot Kim: ML vs DL: Accuracy and Testing Runtime Trade-offs in BCI. 497-511
- Xuduo Wang, Ziji Wang: CNN with Self-attention in EEG Classification. 512-526
- Troy R. Weekes, Thomas C. Eskridge: Design Thinking the Human-AI Experience of Neurotechnology for Knowledge Workers. 527-545
- Yang Windhorse, Nader Almadbooh: Optimizing ML Algorithms Under CSP and Riemannian Covariance in MI-BCIs. 546-556