AISafety@IJCAI 2022: Vienna, Austria
- Gabriel Pedroza, Xin Cynthia Chen, José Hernández-Orallo, Xiaowei Huang, Huáscar Espinoza, Richard Mallah, John A. McDermid, Mauricio Castillo-Effen:
Proceedings of the Workshop on Artificial Intelligence Safety 2022 (AISafety 2022) co-located with the Thirty-First International Joint Conference on Artificial Intelligence and the Twenty-Fifth European Conference on Artificial Intelligence (IJCAI-ECAI-2022), Vienna, Austria, July 24-25, 2022. CEUR Workshop Proceedings 3215, CEUR-WS.org 2022
Session 1: AI Ethics: Fairness, Bias, and Accountability
- Mattias Brännström, Andreas Theodorou, Virginia Dignum:
Let it RAIN for Social Good.
- Palak Malhotra, Amita Misra:
Accountability and Responsibility of Artificial Intelligence Decision-making Models in Indian Policy Landscape.
- Iris Dominguez-Catena, Daniel Paternain, Mikel Galar:
Assessing Demographic Bias Transfer from Dataset to Model: A Case Study in Facial Expression Recognition.
Session 2: Short Presentations - Safety Assessment of AI-enabled systems
- Yi Qi, Philippa Ryan Conmy, Wei Huang, Xingyu Zhao, Xiaowei Huang:
A Hierarchical HAZOP-Like Safety Analysis for Learning-Enabled Systems.
- Jérôme Hugues, Daniela Cancila:
Increasingly Autonomous CPS: Taming Emerging Behaviors from an Architectural Perspective.
- Julien Girard-Satabin, Michele Alberti, François Bobot, Zakaria Chihani, Augustin Lemesle:
CAISAR: A platform for Characterizing Artificial Intelligence Safety and Robustness.
Session 3: Machine Learning for Safety-critical AI
- Patrick Feifel, Benedikt Franke, Arne P. Raulf, Friedhelm Schwenker, Frank Bonarens, Frank Köster:
Revisiting the Evaluation of Deep Neural Networks for Pedestrian Detection.
- Daniel Scholz, Florian Hauer, Klaus Knobloch, Christian Mayr:
Improvement of Rejection for AI Safety through Loss-Based Monitoring.
Session 4: Short Presentations - ML Robustness, Criticality and Uncertainty
- Georg Siedel, Silvia Vock, Andrey Morozov, Stefan Voß:
Utilizing Class Separation Distance for the Evaluation of Corruption Robustness of Machine Learning Classifiers.
- Prajit T. Rajendran, Guillaume Ollier, Huáscar Espinoza, Morayo Adedjouma, Agnès Delaborde, Chokri Mraidha:
Safety-aware Active Learning with Perceptual Ambiguity and Criticality Assessment.
- Juan Shu, Bowei Xi, Charles A. Kamhoua:
Understanding Adversarial Examples Through Deep Neural Network's Classification Boundary and Uncertainty Regions.
Session 5: AI Robustness, Generative Models and Adversarial Learning
- Adrien Le-Coz, Stéphane Herbin, Faouzi Adjed:
Leveraging generative models to characterize the failure conditions of image classifiers.
- Svetlana Pavlitskaya, Bianca-Marina Codau, J. Marius Zöllner:
Feasibility of Inconspicuous GAN-generated Adversarial Patches against Object Detection.
- Jonghu Jeong, Minyong Cho, Philipp Benz, Jinwoo Hwang, Jeewook Kim, Seungkwan Lee, Taehoon Kim:
Privacy Safe Representation Learning via Frequency Filtering Encoder.
- Pol Labarbarie, Adrien Chan-Hon-Tong, Stéphane Herbin, Milad Leyli-Abadi:
Benchmarking and deeper analysis of adversarial patch attack on object detectors.
Session 6: AI Accuracy, Diversity, Causality and Optimization
- Cedrique Rovile Njieutcheu Tassi, Jakob Gawlikowski, Auliya Unnisa Fitri, Rudolph Triebel:
The impact of averaging logits over probabilities on ensembles of neural networks.
- Michal Filipiuk, Vasu Singh:
Exploring Diversity in Neural Architectures for Safety.
- Mohammad Kachuee, Sungjin Lee:
Constrained Policy Optimization for Controlled Contextual Bandit Exploration.
- Francis Rhys Ward, Francesco Belardinelli, Francesca Toni:
A causal perspective on AI deception in games.