Chaowei Xiao
2020 – today
- 2024
- [j8]Jianwei Liu, Yinghui He, Chaowei Xiao, Jinsong Han, Kui Ren:
Time to Think the Security of WiFi-Based Behavior Recognition Systems. IEEE Trans. Dependable Secur. Comput. 21(1): 449-462 (2024) - [j7]Shikun Liu, Linxi Fan, Edward Johns, Zhiding Yu, Chaowei Xiao, Anima Anandkumar:
Prismer: A Vision-Language Model with Multi-Task Experts. Trans. Mach. Learn. Res. 2024 (2024) - [j6]Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, Anima Anandkumar:
Voyager: An Open-Ended Embodied Agent with Large Language Models. Trans. Mach. Learn. Res. 2024 (2024) - [c65]Jiongxiao Wang, Junlin Wu, Muhao Chen, Yevgeniy Vorobeychik, Chaowei Xiao:
RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models. ACL (1) 2024: 2551-2570 - [c64]Chulin Xie, De-An Huang, Wenda Chu, Daguang Xu, Chaowei Xiao, Bo Li, Anima Anandkumar:
Perada: Parameter-Efficient Federated Learning Personalization with Generalization Guarantees. CVPR 2024: 23838-23848 - [c63]Wenhao Ding, Yulong Cao, Ding Zhao, Chaowei Xiao, Marco Pavone:
RealGen: Retrieval Augmented Generation for Controllable Traffic Scenarios. ECCV (62) 2024: 93-110 - [c62]Haizhong Zheng, Jiachen Sun, Shutong Wu, Bhavya Kailkhura, Z. Morley Mao, Chaowei Xiao, Atul Prakash:
Leveraging Hierarchical Feature Sharing for Efficient Dataset Condensation. ECCV (24) 2024: 166-182 - [c61]Shengchao Liu, Jiongxiao Wang, Yijin Yang, Chengpeng Wang, Ling Liu, Hongyu Guo, Chaowei Xiao:
Conversational Drug Editing Using Retrieval and Domain Feedback. ICLR 2024 - [c60]Xiaogeng Liu, Nan Xu, Muhao Chen, Chaowei Xiao:
AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models. ICLR 2024 - [c59]Jiachen Sun, Haizhong Zheng, Qingzhao Zhang, Atul Prakash, Zhuoqing Mao, Chaowei Xiao:
CALICO: Self-Supervised Camera-LiDAR Contrastive Pre-training for BEV Perception. ICLR 2024 - [c58]Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Hanchi Sun, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric P. Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, Joaquin Vanschoren, John C. Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao:
Position: TrustLLM: Trustworthiness in Large Language Models. ICML 2024 - [c57]Yulong Cao, Boris Ivanovic, Chaowei Xiao, Marco Pavone:
Reinforcement Learning with Human Feedback for Realistic Traffic Simulation. ICRA 2024: 14428-14434 - [c56]Qin Liu, Fei Wang, Chaowei Xiao, Muhao Chen:
From Shortcuts to Triggers: Backdoor Defense with Denoised PoE. NAACL-HLT 2024: 483-496 - [c55]Jiazhao Li, Yijin Yang, Zhuofeng Wu, V. G. Vinod Vydiswaran, Chaowei Xiao:
ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger. NAACL-HLT 2024: 2985-3004 - [c54]Jiashu Xu, Mingyu Derek Ma, Fei Wang, Chaowei Xiao, Muhao Chen:
Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models. NAACL-HLT 2024: 3111-3126 - [c53]Jiashu Xu, Fei Wang, Mingyu Derek Ma, Pang Wei Koh, Chaowei Xiao, Muhao Chen:
Instructional Fingerprinting of Large Language Models. NAACL-HLT 2024: 3277-3306 - [c52]Nan Xu, Fei Wang, Ben Zhou, Bangzheng Li, Chaowei Xiao, Muhao Chen:
Cognitive Overload: Jailbreaking Large Language Models with Overloaded Logical Thinking. NAACL-HLT (Findings) 2024: 3526-3548 - [c51]Zhiyuan Yu, Xiaogeng Liu, Shunning Liang, Zach Cameron, Chaowei Xiao, Ning Zhang:
Don't Listen To Me: Understanding and Exploring Jailbreak Prompts of Large Language Models. USENIX Security Symposium 2024 - [c50]Zelun Luo, Yuliang Zou, Yijin Yang, Zane Durante, De-An Huang, Zhiding Yu, Chaowei Xiao, Li Fei-Fei, Animashree Anandkumar:
Differentially Private Video Activity Recognition. WACV 2024: 6643-6653 - [i90]Lichao Sun, Yue Huang, Haoran Wang, Siyuan Wu, Qihui Zhang, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric P. Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, John C. Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yue Zhao:
TrustLLM: Trustworthiness in Large Language Models. CoRR abs/2401.05561 (2024) - [i89]Jiashu Xu, Fei Wang, Mingyu Derek Ma, Pang Wei Koh, Chaowei Xiao, Muhao Chen:
Instructional Fingerprinting of Large Language Models. CoRR abs/2401.12255 (2024) - [i88]Hong Guan, Summer Gautier, Deepti Gupta, Rajan Hari Ambrish, Yancheng Wang, Harsha Lakamsani, Dhanush Giriyan, Saajan Maslanka, Chaowei Xiao, Yingzhen Yang, Jia Zou:
A Learning-based Declarative Privacy-Preserving Framework for Federated Data Management. CoRR abs/2401.12393 (2024) - [i87]Junlin Wu, Jiongxiao Wang, Chaowei Xiao, Chenguang Wang, Ning Zhang, Yevgeniy Vorobeychik:
Preference Poisoning Attacks on Reward Model Learning. CoRR abs/2402.01920 (2024) - [i86]Lingbo Mo, Zeyi Liao, Boyuan Zheng, Yu Su, Chaowei Xiao, Huan Sun:
A Trembling House of Cards? Mapping Adversarial Attacks against Language Agents. CoRR abs/2402.10196 (2024) - [i85]Zizheng Pan, Bohan Zhuang, De-An Huang, Weili Nie, Zhiding Yu, Chaowei Xiao, Jianfei Cai, Anima Anandkumar:
T-Stitch: Accelerating Sampling in Pre-Trained Diffusion Models with Trajectory Stitching. CoRR abs/2402.14167 (2024) - [i84]Jiongxiao Wang, Jiazhao Li, Yiquan Li, Xiangyu Qi, Junjie Hu, Yixuan Li, Patrick McDaniel, Muhao Chen, Bo Li, Chaowei Xiao:
Mitigating Fine-tuning Jailbreak Attack with Backdoor Enhanced Alignment. CoRR abs/2402.14968 (2024) - [i83]Fangzhou Wu, Shutong Wu, Yulong Cao, Chaowei Xiao:
WIPI: A New Web Threat for LLM-Driven Web Agents. CoRR abs/2402.16965 (2024) - [i82]Fangzhou Wu, Ning Zhang, Somesh Jha, Patrick D. McDaniel, Chaowei Xiao:
A New Era in LLM Security: Exploring Security Concerns in Real-World LLM-based Systems. CoRR abs/2402.18649 (2024) - [i81]Xiaogeng Liu, Zhiyuan Yu, Yizhe Zhang, Ning Zhang, Chaowei Xiao:
Automatic and Universal Prompt Injection Attacks against Large Language Models. CoRR abs/2403.04957 (2024) - [i80]Yu Wang, Xiaogeng Liu, Yu Li, Muhao Chen, Chaowei Xiao:
AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting. CoRR abs/2403.09513 (2024) - [i79]Zhiyuan Yu, Xiaogeng Liu, Shunning Liang, Zach Cameron, Chaowei Xiao, Ning Zhang:
Don't Listen To Me: Understanding and Exploring Jailbreak Prompts of Large Language Models. CoRR abs/2403.17336 (2024) - [i78]Weidi Luo, Siyuan Ma, Xiaogeng Liu, Xiaoyu Guo, Chaowei Xiao:
JailBreakV-28K: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks. CoRR abs/2404.03027 (2024) - [i77]Jiachen Sun, Changsheng Wang, Jiongxiao Wang, Yiwei Zhang, Chaowei Xiao:
Safeguarding Vision-Language Models Against Patched Visual Prompt Injectors. CoRR abs/2405.10529 (2024) - [i76]Xiangyu Qi, Yangsibo Huang, Yi Zeng, Edoardo Debenedetti, Jonas Geiping, Luxi He, Kaixuan Huang, Udari Madhushani, Vikash Sehwag, Weijia Shi, Boyi Wei, Tinghao Xie, Danqi Chen, Pin-Yu Chen, Jeffrey Ding, Ruoxi Jia, Jiaqi Ma, Arvind Narayanan, Weijie J. Su, Mengdi Wang, Chaowei Xiao, Bo Li, Dawn Song, Peter Henderson, Prateek Mittal:
AI Risk Management Should Incorporate Both Safety and Security. CoRR abs/2405.19524 (2024) - [i75]Siyuan Ma, Weidi Luo, Yu Wang, Xiaogeng Liu, Muhao Chen, Bo Li, Chaowei Xiao:
Visual-RolePlay: Universal Jailbreak Attack on MultiModal Large Language Models via Role-playing Image Character. CoRR abs/2405.20773 (2024) - [i74]Fei Wang, Xingyu Fu, James Y. Huang, Zekun Li, Qin Liu, Xiaogeng Liu, Mingyu Derek Ma, Nan Xu, Wenxuan Zhou, Kai Zhang, Tianyi Lorena Yan, Wenjie Jacky Mo, Hsiang-Hui Liu, Pan Lu, Chunyuan Li, Chaowei Xiao, Kai-Wei Chang, Dan Roth, Sheng Zhang, Hoifung Poon, Muhao Chen:
MuirBench: A Comprehensive Benchmark for Robust Multi-image Understanding. CoRR abs/2406.09411 (2024) - [i73]Siyuan Wu, Yue Huang, Chujie Gao, Dongping Chen, Qihui Zhang, Yao Wan, Tianyi Zhou, Xiangliang Zhang, Jianfeng Gao, Chaowei Xiao, Lichao Sun:
UniGen: A Unified Framework for Textual Dataset Generation Using Large Language Models. CoRR abs/2406.18966 (2024) - [i72]Yiquan Li, Zhongzhu Chen, Kun Jin, Jiongxiao Wang, Bo Li, Chaowei Xiao:
Consistency Purification: Effective and Efficient Diffusion Purification towards Certified Robustness. CoRR abs/2407.00623 (2024) - [i71]Zhaorun Chen, Zhen Xiang, Chaowei Xiao, Dawn Song, Bo Li:
AgentPoison: Red-teaming LLM Agents via Poisoning Memory or Knowledge Bases. CoRR abs/2407.12784 (2024) - [i70]Canyu Chen, Baixiang Huang, Zekun Li, Zhaorun Chen, Shiyang Lai, Xiongxiao Xu, Jia-Chen Gu, Jindong Gu, Huaxiu Yao, Chaowei Xiao, Xifeng Yan, William Yang Wang, Philip Torr, Dawn Song, Kai Shu:
Can Editing LLMs Inject Harm? CoRR abs/2407.20224 (2024) - [i69]Hong Guan, Yancheng Wang, Lulu Xie, Soham Nag, Rajeev Goel, Niranjan Erappa Narayana Swamy, Yingzhen Yang, Chaowei Xiao, Jonathan Prisby, Ross Maciejewski, Jia Zou:
IDNet: A Novel Dataset for Identity Document Analysis and Fraud Detection. CoRR abs/2408.01690 (2024) - [i68]Zeyi Liao, Lingbo Mo, Chejian Xu, Mintong Kang, Jiawei Zhang, Chaowei Xiao, Yuan Tian, Bo Li, Huan Sun:
EIA: Environmental Injection Attack on Generalist Web Agents for Privacy Leakage. CoRR abs/2409.11295 (2024) - [i67]Xuefeng Du, Chaowei Xiao, Yixuan Li:
HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection. CoRR abs/2409.17504 (2024) - [i66]Fangzhou Wu, Ethan Cecchetti, Chaowei Xiao:
System-Level Defense against Indirect Prompt Injection Attacks: An Information Flow Control Perspective. CoRR abs/2409.19091 (2024) - [i65]Qin Liu, Wenjie Mo, Terry Tong, Jiashu Xu, Fei Wang, Chaowei Xiao, Muhao Chen:
Mitigating Backdoor Threats to Large Language Models: Advancement and Challenges. CoRR abs/2409.19993 (2024)
- 2023
- [j5]Sina Mohseni, Haotao Wang, Chaowei Xiao, Zhiding Yu, Zhangyang Wang, Jay Yadawa:
Taxonomy of Machine Learning Safety: A Survey and Primer. ACM Comput. Surv. 55(8): 157:1-157:38 (2023) - [j4]Maxim Zvyagin, Alexander Brace, Kyle Hippe, Yuntian Deng, Bin Zhang, Cindy Orozco Bohorquez, Austin Clyde, Bharat Kale, Danilo Perez-Rivera, Heng Ma, Carla M. Mann, Michael W. Irvin, Defne G. Ozgulbas, Natalia Vassilieva, J. Gregory Pauloski, Logan T. Ward, Valérie Hayot-Sasson, Murali Emani, Sam Foreman, Zhen Xie, Diangen Lin, Maulik Shukla, Weili Nie, Josh Romero, Christian Dallago, Arash Vahdat, Chaowei Xiao, Thomas Gibbs, Ian T. Foster, James J. Davis, Michael E. Papka, Thomas S. Brettin, Rick Stevens, Anima Anandkumar, Venkatram Vishwanath, Arvind Ramanathan:
GenSLMs: Genome-scale language models reveal SARS-CoV-2 evolutionary dynamics. Int. J. High Perform. Comput. Appl. 37(6): 683-705 (2023) - [j3]Shengchao Liu, Weili Nie, Chengpeng Wang, Jiarui Lu, Zhuoran Qiao, Ling Liu, Jian Tang, Chaowei Xiao, Animashree Anandkumar:
Multi-modal molecule structure-text model for text-based retrieval and editing. Nat. Mac. Intell. 5(12): 1447-1457 (2023) - [j2]Jianwei Liu, Chaowei Xiao, Kaiyan Cui, Jinsong Han, Xian Xu, Kui Ren:
Behavior Privacy Preserving in RF Sensing. IEEE Trans. Dependable Secur. Comput. 20(1): 784-796 (2023) - [c49]Jiazhao Li, Zhuofeng Wu, Wei Ping, Chaowei Xiao, V. G. Vinod Vydiswaran:
Defending against Insertion-based Textual Backdoor Attacks via Attribution. ACL (Findings) 2023: 8818-8833 - [c48]Chenan Wang, Jinhao Duan, Chaowei Xiao, Edward Kim, Matthew C. Stamm, Kaidi Xu:
Semantic Adversarial Attacks via Diffusion Models. BMVC 2023: 271 - [c47]Yiming Li, Zhiding Yu, Christopher B. Choy, Chaowei Xiao, José M. Álvarez, Sanja Fidler, Chen Feng, Anima Anandkumar:
VoxFormer: Sparse Voxel Transformer for Camera-Based 3D Semantic Scene Completion. CVPR 2023: 9087-9098 - [c46]Xiaogeng Liu, Minghui Li, Haoyu Wang, Shengshan Hu, Dengpan Ye, Hai Jin, Libing Wu, Chaowei Xiao:
Detecting Backdoors During the Inference Stage Based on Corruption Robustness Consistency. CVPR 2023: 16363-16372 - [c45]Zhuofeng Wu, Chaowei Xiao, V. G. Vinod Vydiswaran:
HiCL: Hierarchical Contrastive Learning of Unsupervised Sentence Embeddings. EMNLP (Findings) 2023: 2461-2476 - [c44]Boxin Wang, Wei Ping, Peng Xu, Lawrence McAfee, Zihan Liu, Mohammad Shoeybi, Yi Dong, Oleksii Kuchaiev, Bo Li, Chaowei Xiao, Anima Anandkumar, Bryan Catanzaro:
Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study. EMNLP 2023: 7763-7786 - [c43]Zhuolin Yang, Wei Ping, Zihan Liu, Vijay Korthikanti, Weili Nie, De-An Huang, Linxi Fan, Zhiding Yu, Shiyi Lan, Bo Li, Mohammad Shoeybi, Ming-Yu Liu, Yuke Zhu, Bryan Catanzaro, Chaowei Xiao, Anima Anandkumar:
Re-ViLM: Retrieval-Augmented Visual Language Model for Zero and Few-Shot Image Captioning. EMNLP (Findings) 2023: 11844-11857 - [c42]Zichao Wang, Weili Nie, Zhuoran Qiao, Chaowei Xiao, Richard G. Baraniuk, Anima Anandkumar:
Retrieval-based Controllable Molecule Generation. ICLR 2023 - [c41]Shutong Wu, Jiongxiao Wang, Wei Ping, Weili Nie, Chaowei Xiao:
Defending against Adversarial Audio via Diffusion Model. ICLR 2023 - [c40]Chaowei Xiao, Zhongzhu Chen, Kun Jin, Jiongxiao Wang, Weili Nie, Mingyan Liu, Anima Anandkumar, Bo Li, Dawn Song:
DensePure: Understanding Diffusion Models for Adversarial Robustness. ICLR 2023 - [c39]Jiachen Sun, Jiongxiao Wang, Weili Nie, Zhiding Yu, Zhuoqing Mao, Chaowei Xiao:
A Critical Revisit of Adversarial Robustness in 3D Point Cloud Recognition with Diffusion-Driven Purification. ICML 2023: 33100-33114 - [c38]Zhiyuan Yu, Yuhao Wu, Ning Zhang, Chenguang Wang, Yevgeniy Vorobeychik, Chaowei Xiao:
CodeIPPrompt: Intellectual Property Infringement Assessment of Code Language Models. ICML 2023: 40373-40389 - [c37]Manli Shu, Jiongxiao Wang, Chen Zhu, Jonas Geiping, Chaowei Xiao, Tom Goldstein:
On the Exploitability of Instruction Tuning. NeurIPS 2023 - [c36]Zhiyuan Yu, Yuanhaur Chang, Ning Zhang, Chaowei Xiao:
SMACK: Semantically Meaningful Adversarial Audio Attack. USENIX Security Symposium 2023: 3799-3816 - [c35]Jiawei Zhang, Zhongzhu Chen, Huan Zhang, Chaowei Xiao, Bo Li:
DiffSmooth: Certifiably Robust Learning via Diffusion Models and Local Smoothing. USENIX Security Symposium 2023: 4787-4804 - [i64]Shengchao Liu, Yutao Zhu, Jiarui Lu, Zhao Xu, Weili Nie, Anthony Gitter, Chaowei Xiao, Jian Tang, Hongyu Guo, Anima Anandkumar:
A Text-guided Protein Design Framework. CoRR abs/2302.04611 (2023) - [i63]Zhuolin Yang, Wei Ping, Zihan Liu, Vijay Korthikanti, Weili Nie, De-An Huang, Linxi Fan, Zhiding Yu, Shiyi Lan, Bo Li, Ming-Yu Liu, Yuke Zhu, Mohammad Shoeybi, Bryan Catanzaro, Chaowei Xiao, Anima Anandkumar:
Re-ViLM: Retrieval-Augmented Visual Language Model for Zero and Few-Shot Image Captioning. CoRR abs/2302.04858 (2023) - [i62]Chulin Xie, De-An Huang, Wenda Chu, Daguang Xu, Chaowei Xiao, Bo Li, Anima Anandkumar:
PerAda: Parameter-Efficient and Generalizable Federated Learning Personalization with Guarantees. CoRR abs/2302.06637 (2023) - [i61]Yiming Li, Zhiding Yu, Christopher B. Choy, Chaowei Xiao, José M. Álvarez, Sanja Fidler, Chen Feng, Anima Anandkumar:
VoxFormer: Sparse Voxel Transformer for Camera-based 3D Semantic Scene Completion. CoRR abs/2302.12251 (2023) - [i60]Shutong Wu, Jiongxiao Wang, Wei Ping, Weili Nie, Chaowei Xiao:
Defending against Adversarial Audio via Diffusion Model. CoRR abs/2303.01507 (2023) - [i59]Shikun Liu, Linxi Fan, Edward Johns, Zhiding Yu, Chaowei Xiao, Anima Anandkumar:
Prismer: A Vision-Language Model with An Ensemble of Experts. CoRR abs/2303.02506 (2023) - [i58]Ethan Wisdom, Tejas Gokhale, Chaowei Xiao, Yezhou Yang:
Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling. CoRR abs/2303.17080 (2023) - [i57]Xiaogeng Liu, Minghui Li, Haoyu Wang, Shengshan Hu, Dengpan Ye, Hai Jin, Libing Wu, Chaowei Xiao:
Detecting Backdoors During the Inference Stage Based on Corruption Robustness Consistency. CoRR abs/2303.18191 (2023) - [i56]Boxin Wang, Wei Ping, Peng Xu, Lawrence McAfee, Zihan Liu, Mohammad Shoeybi, Yi Dong, Oleksii Kuchaiev, Bo Li, Chaowei Xiao, Anima Anandkumar, Bryan Catanzaro:
Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study. CoRR abs/2304.06762 (2023) - [i55]Jiazhao Li, Yijin Yang, Zhuofeng Wu, V. G. Vinod Vydiswaran, Chaowei Xiao:
ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger. CoRR abs/2304.14475 (2023) - [i54]Jiazhao Li, Zhuofeng Wu, Wei Ping, Chaowei Xiao, V. G. Vinod Vydiswaran:
Defending against Insertion-based Textual Backdoor Attacks via Attribution. CoRR abs/2305.02394 (2023) - [i53]Jiashu Xu, Mingyu Derek Ma, Fei Wang, Chaowei Xiao, Muhao Chen:
Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models. CoRR abs/2305.14710 (2023) - [i52]Qin Liu, Fei Wang, Chaowei Xiao, Muhao Chen:
From Shortcuts to Triggers: Backdoor Defense with Denoised PoE. CoRR abs/2305.14910 (2023) - [i51]Jiongxiao Wang, Zichen Liu, Keun Hee Park, Muhao Chen, Chaowei Xiao:
Adversarial Demonstration Attacks on Large Language Models. CoRR abs/2305.14950 (2023) - [i50]Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, Anima Anandkumar:
Voyager: An Open-Ended Embodied Agent with Large Language Models. CoRR abs/2305.16291 (2023) - [i49]Shengchao Liu, Jiongxiao Wang, Yijin Yang, Chengpeng Wang, Ling Liu, Hongyu Guo, Chaowei Xiao:
ChatGPT-powered Conversational Drug Editing Using Retrieval and Domain Feedback. CoRR abs/2305.18090 (2023) - [i48]Jiachen Sun, Haizhong Zheng, Qingzhao Zhang, Atul Prakash, Z. Morley Mao, Chaowei Xiao:
CALICO: Self-Supervised Camera-LiDAR Contrastive Pre-training for BEV Perception. CoRR abs/2306.00349 (2023) - [i47]Zelun Luo, Yuliang Zou, Yijin Yang, Zane Durante, De-An Huang, Zhiding Yu, Chaowei Xiao, Li Fei-Fei, Animashree Anandkumar:
Differentially Private Video Activity Recognition. CoRR abs/2306.15742 (2023) - [i46]Manli Shu, Jiongxiao Wang, Chen Zhu, Jonas Geiping, Chaowei Xiao, Tom Goldstein:
On the Exploitability of Instruction Tuning. CoRR abs/2306.17194 (2023) - [i45]Jiawei Zhang, Zhongzhu Chen, Huan Zhang, Chaowei Xiao, Bo Li:
DiffSmooth: Certifiably Robust Learning via Diffusion Models and Local Smoothing. CoRR abs/2308.14333 (2023) - [i44]Yulong Cao, Boris Ivanovic, Chaowei Xiao, Marco Pavone:
Reinforcement Learning with Human Feedback for Realistic Traffic Simulation. CoRR abs/2309.00709 (2023) - [i43]Chenan Wang, Jinhao Duan, Chaowei Xiao, Edward Kim, Matthew C. Stamm, Kaidi Xu:
Semantic Adversarial Attacks via Diffusion Models. CoRR abs/2309.07398 (2023) - [i42]Zhuoyuan Wu, Jiachen Sun, Chaowei Xiao:
CSI: Enhancing the Robustness of 3D Point Cloud Recognition against Corruption. CoRR abs/2310.03360 (2023) - [i41]Xiaogeng Liu, Nan Xu, Muhao Chen, Chaowei Xiao:
AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models. CoRR abs/2310.04451 (2023) - [i40]Shuaiwen Leon Song, Bonnie Kruft, Minjia Zhang, Conglong Li, Shiyang Chen, Chengming Zhang, Masahiro Tanaka, Xiaoxia Wu, Jeff Rasley, Ammar Ahmad Awan, Connor Holmes, Martin Cai, Adam Ghanem, Zhongzhu Zhou, Yuxiong He, Pete Luferenko, Divya Kumar, Jonathan A. Weyn, Ruixiong Zhang, Sylwester Klocek, Volodymyr Vragov, Mohammed AlQuraishi, Gustaf Ahdritz, Christina Floristean, Cristina Negri, Rao Kotamarthi, Venkatram Vishwanath, Arvind Ramanathan, Sam Foreman, Kyle Hippe, Troy Arcomano, Romit Maulik, Maxim Zvyagin, Alexander Brace, Bin Zhang, Cindy Orozco Bohorquez, Austin Clyde, Bharat Kale, Danilo Perez-Rivera, Heng Ma, Carla M. Mann, Michael W. Irvin, J. Gregory Pauloski, Logan T. Ward, Valérie Hayot-Sasson, Murali Emani, Zhen Xie, Diangen Lin, Maulik Shukla, Ian T. Foster, James J. Davis, Michael E. Papka, Thomas S. Brettin, Prasanna Balaprakash, Gina Tourassi, John Gounley, Heidi A. Hanson, Thomas E. Potok, Massimiliano Lupo Pasini, Kate Evans, Dan Lu, Dalton D. Lunga, Junqi Yin, Sajal Dash, Feiyi Wang, Mallikarjun Shankar, Isaac Lyngaas, Xiao Wang, Guojing Cong, Pei Zhang, Ming Fan, Siyan Liu, Adolfy Hoisie, Shinjae Yoo, Yihui Ren, William Tang, Kyle Felker, Alexey Svyatkovskiy, Hang Liu, Ashwin M. Aji, Angela Dalton, Michael J. Schulte, Karl Schulz, Yuntian Deng, Weili Nie, Josh Romero, Christian Dallago, Arash Vahdat, Chaowei Xiao, Thomas Gibbs, Anima Anandkumar, Rick Stevens:
DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery through Sophisticated AI System Technologies. CoRR abs/2310.04610 (2023) - [i39]Haizhong Zheng, Jiachen Sun, Shutong Wu, Bhavya Kailkhura, Zhuoqing Mao, Chaowei Xiao, Atul Prakash:
Leveraging Hierarchical Feature Sharing for Efficient Dataset Condensation. CoRR abs/2310.07506 (2023) - [i38]Zhuofeng Wu, Chaowei Xiao, V. G. Vinod Vydiswaran:
HiCL: Hierarchical Contrastive Learning of Unsupervised Sentence Embeddings. CoRR abs/2310.09720 (2023) - [i37]Jiongxiao Wang, Junlin Wu, Muhao Chen, Yevgeniy Vorobeychik, Chaowei Xiao:
On the Exploitability of Reinforcement Learning with Human Feedback for Large Language Models. CoRR abs/2311.09641 (2023) - [i36]Wenjie Mo, Jiashu Xu, Qin Liu, Jiongxiao Wang, Jun Yan, Chaowei Xiao, Muhao Chen:
Test-time Backdoor Mitigation for Black-Box Large Language Models with Defensive Demonstrations. CoRR abs/2311.09763 (2023) - [i35]Nan Xu, Fei Wang, Ben Zhou, Bangzheng Li, Chaowei Xiao, Muhao Chen:
Cognitive Overload: Jailbreaking Large Language Models with Overloaded Logical Thinking. CoRR abs/2311.09827 (2023) - [i34]Yingzi Ma, Yulong Cao, Jiachen Sun, Marco Pavone, Chaowei Xiao:
Dolphins: Multimodal Language Model for Driving. CoRR abs/2312.00438 (2023) - [i33]Fangzhou Wu, Xiaogeng Liu, Chaowei Xiao:
DeceptPrompt: Exploiting LLM-driven Code Generation via Adversarial Natural Language Instructions. CoRR abs/2312.04730 (2023) - [i32]Fangzhou Wu, Qingzhao Zhang, Ati Priya Bajaj, Tiffany Bao, Ning Zhang, Ruoyu Wang, Chaowei Xiao:
Exploring the Limits of ChatGPT in Software Security Applications. CoRR abs/2312.05275 (2023) - [i31]Wenhao Ding, Yulong Cao, Ding Zhao, Chaowei Xiao, Marco Pavone:
RealGen: Retrieval Augmented Generation for Controllable Traffic Scenarios. CoRR abs/2312.13303 (2023)
- 2022
- [c34]Xinlei Pan, Chaowei Xiao, Warren He, Shuang Yang, Jian Peng, Mingjie Sun, Mingyan Liu, Bo Li, Dawn Song:
Characterizing Attacks on Deep Reinforcement Learning. AAMAS 2022: 1010-1018 - [c33]Yulong Cao, Danfei Xu, Xinshuo Weng, Zhuoqing Mao, Anima Anandkumar, Chaowei Xiao, Marco Pavone:
Robust Trajectory Prediction against Adversarial Attacks. CoRL 2022: 128-137 - [c32]Yulong Cao, Chaowei Xiao, Anima Anandkumar, Danfei Xu, Marco Pavone:
AdvDO: Realistic Adversarial Attacks for Trajectory Prediction. ECCV (5) 2022: 36-52 - [c31]Zhuowen Yuan, Fan Wu, Yunhui Long, Chaowei Xiao, Bo Li:
SecretGen: Privacy Recovery on Pre-trained Models via Distribution Discrimination. ECCV (5) 2022: 139-155 - [c30]Xiaojian Ma, Weili Nie, Zhiding Yu, Huaizu Jiang, Chaowei Xiao, Yuke Zhu, Song-Chun Zhu, Anima Anandkumar:
RelViT: Concept-guided Vision Transformer for Visual Relational Reasoning. ICLR 2022 - [c29]Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, Animashree Anandkumar:
Diffusion Models for Adversarial Purification. ICML 2022: 16805-16827 - [c28]Daquan Zhou, Zhiding Yu, Enze Xie, Chaowei Xiao, Animashree Anandkumar, Jiashi Feng, José M. Álvarez:
Understanding The Robustness in Vision Transformers. ICML 2022: 27378-27394 - [c27]Jianwei Liu, Yinghui He, Chaowei Xiao, Jinsong Han, Le Cheng, Kui Ren:
Physical-World Attack towards WiFi-based Behavior Recognition. INFOCOM 2022: 400-409 - [c26]Manli Shu, Weili Nie, De-An Huang, Zhiding Yu, Tom Goldstein, Anima Anandkumar, Chaowei Xiao:
Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models. NeurIPS 2022 - [c25]Boxin Wang, Wei Ping, Chaowei Xiao, Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Bo Li, Anima Anandkumar, Bryan Catanzaro:
Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models. NeurIPS 2022 - [i30]Jiachen Sun, Qingzhao Zhang, Bhavya Kailkhura, Zhiding Yu, Chaowei Xiao, Z. Morley Mao:
Benchmarking Robustness of 3D Point Cloud Recognition Against Common Corruptions. CoRR abs/2201.12296 (2022) - [i29]Boxin Wang, Wei Ping, Chaowei Xiao, Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Bo Li, Anima Anandkumar, Bryan Catanzaro:
Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models. CoRR abs/2202.04173 (2022) - [i28]Xiaojian Ma, Weili Nie, Zhiding Yu, Huaizu Jiang, Chaowei Xiao, Yuke Zhu, Song-Chun Zhu, Anima Anandkumar:
RelViT: Concept-guided Vision Transformer for Visual Relational Reasoning. CoRR abs/2204.11167 (2022) - [i27]Daquan Zhou, Zhiding Yu, Enze Xie, Chaowei Xiao, Anima Anandkumar, Jiashi Feng, José M. Álvarez:
Understanding The Robustness in Vision Transformers. CoRR abs/2204.12451 (2022) - [i26]Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, Anima Anandkumar:
Diffusion Models for Adversarial Purification. CoRR abs/2205.07460 (2022) - [i25]Zhuowen Yuan, Fan Wu, Yunhui Long, Chaowei Xiao, Bo Li:
SecretGen: Privacy Recovery on Pre-Trained Models via Distribution Discrimination. CoRR abs/2207.12263 (2022) - [i24]Yulong Cao, Danfei Xu, Xinshuo Weng, Zhuoqing Mao, Anima Anandkumar, Chaowei Xiao, Marco Pavone:
Robust Trajectory Prediction against Adversarial Attacks. CoRR abs/2208.00094 (2022) - [i23]Jiachen Sun, Weili Nie, Zhiding Yu, Z. Morley Mao, Chaowei Xiao:
PointDP: Diffusion-driven Purification against Adversarial Attacks on 3D Point Cloud Recognition. CoRR abs/2208.09801 (2022) - [i22]Zichao Wang, Weili Nie, Zhuoran Qiao, Chaowei Xiao, Richard G. Baraniuk, Anima Anandkumar:
Retrieval-based Controllable Molecule Generation. CoRR abs/2208.11126 (2022) - [i21]Manli Shu, Weili Nie, De-An Huang, Zhiding Yu, Tom Goldstein, Anima Anandkumar, Chaowei Xiao:
Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models. CoRR abs/2209.07511 (2022) - [i20]Yulong Cao, Chaowei Xiao, Anima Anandkumar, Danfei Xu, Marco Pavone:
AdvDO: Realistic Adversarial Attacks for Trajectory Prediction. CoRR abs/2209.08744 (2022) - [i19]Chaowei Xiao, Zhongzhu Chen, Kun Jin, Jiongxiao Wang, Weili Nie, Mingyan Liu, Anima Anandkumar, Bo Li, Dawn Song:
DensePure: Understanding Diffusion Models towards Adversarial Robustness. CoRR abs/2211.00322 (2022) - [i18]Shengchao Liu, Weili Nie, Chengpeng Wang, Jiarui Lu, Zhuoran Qiao, Ling Liu, Jian Tang, Chaowei Xiao, Anima Anandkumar:
Multi-modal Molecule Structure-text Model for Text-based Retrieval and Editing. CoRR abs/2212.10789 (2022)
- 2021
- [c24]Aria Rezaei, Chaowei Xiao, Jie Gao, Bo Li, Sirajum Munir:
Application-driven Privacy-preserving Data Publishing with Correlated Attributes. EWSN 2021: 91-102 - [c23]Mingjie Sun, Zichao Li, Chaowei Xiao, Haonan Qiu, Bhavya Kailkhura, Mingyan Liu, Bo Li:
Can Shape Structure Features Improve Model Robustness under Diverse Adversarial Settings? ICCV 2021: 7506-7515 - [c22]Jianwei Liu, Chaowei Xiao, Kaiyan Cui, Jinsong Han, Xian Xu, Kui Ren, XuFei Mao:
A Behavior Privacy Preserving Method towards RF Sensing. IWQoS 2021: 1-10 - [c21]Aishan Liu, Xinyun Chen, Yingwei Li, Chaowei Xiao, Xun Yang, Xianglong Liu, Dawn Song, Dacheng Tao, Alan L. Yuille, Anima Anandkumar:
ADVM'21: 1st International Workshop on Adversarial Learning for Multimedia. ACM Multimedia 2021: 5686-5687 - [c20]Haotao Wang, Chaowei Xiao, Jean Kossaifi, Zhiding Yu, Anima Anandkumar, Zhangyang Wang:
AugMax: Adversarial Composition of Random Augmentations for Robust Training. NeurIPS 2021: 237-250 - [c19]Jiachen Sun, Yulong Cao, Christopher B. Choy, Zhiding Yu, Anima Anandkumar, Zhuoqing Morley Mao, Chaowei Xiao:
Adversarially Robust 3D Point Cloud Recognition Using Self-Supervisions. NeurIPS 2021: 15498-15512 - [c18]Chen Zhu, Wei Ping, Chaowei Xiao, Mohammad Shoeybi, Tom Goldstein, Anima Anandkumar, Bryan Catanzaro:
Long-Short Transformer: Efficient Transformers for Language and Vision. NeurIPS 2021: 17723-17736 - [c17]Yulong Cao, Ningfei Wang, Chaowei Xiao, Dawei Yang, Jin Fang, Ruigang Yang, Qi Alfred Chen, Mingyan Liu, Bo Li:
Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks. SP 2021: 176-194 - [e1]Dawn Song, Dacheng Tao, Alan L. Yuille, Anima Anandkumar, Aishan Liu, Xinyun Chen, Yingwei Li, Chaowei Xiao, Xun Yang, Xianglong Liu:
ADVM '21: Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia, Virtual Event, China, 20 October 2021. ACM 2021, ISBN 978-1-4503-8672-2 [contents] - [i17]Sina Mohseni, Haotao Wang, Zhiding Yu, Chaowei Xiao, Zhangyang Wang, Jay Yadawa:
Practical Machine Learning Safety: A Survey and Primer. CoRR abs/2106.04823 (2021) - [i16]Yulong Cao, Ningfei Wang, Chaowei Xiao, Dawei Yang, Jin Fang, Ruigang Yang, Qi Alfred Chen, Mingyan Liu, Bo Li:
Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks. CoRR abs/2106.09249 (2021) - [i15]Chen Zhu, Wei Ping, Chaowei Xiao, Mohammad Shoeybi, Tom Goldstein, Anima Anandkumar, Bryan Catanzaro:
Long-Short Transformer: Efficient Transformers for Language and Vision. CoRR abs/2107.02192 (2021) - [i14]Homanga Bharadhwaj, De-An Huang, Chaowei Xiao, Anima Anandkumar, Animesh Garg:
Auditing AI models for Verified Deployment under Semantic Specifications. CoRR abs/2109.12456 (2021) - [i13]Haotao Wang, Chaowei Xiao, Jean Kossaifi, Zhiding Yu, Anima Anandkumar, Zhangyang Wang:
AugMax: Adversarial Composition of Random Augmentations for Robust Training. CoRR abs/2110.13771 (2021)
- 2020
- [b1]Chaowei Xiao:
Machine Learning in Adversarial Environments. University of Michigan, USA, 2020 - [c16]Haonan Qiu, Chaowei Xiao, Lei Yang, Xinchen Yan, Honglak Lee, Bo Li:
SemanticAdv: Generating Adversarial Examples via Attribute-Conditioned Image Editing. ECCV (14) 2020: 19-37 - [c15]Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo Li, Duane S. Boning, Cho-Jui Hsieh:
Towards Stable and Efficient Training of Verifiably Robust Neural Networks. ICLR 2020 - [c14]Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Mingyan Liu, Duane S. Boning, Cho-Jui Hsieh:
Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations. NeurIPS 2020 - [i12]Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Duane S. Boning, Cho-Jui Hsieh:
Robust Deep Reinforcement Learning against Adversarial Perturbations on Observations. CoRR abs/2003.08938 (2020)
2010 – 2019
- 2019
- [c13]Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, Z. Morley Mao:
Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving. CCS 2019: 2267-2281 - [c12]Chaowei Xiao, Dawei Yang, Bo Li, Jia Deng, Mingyan Liu:
MeshAdv: Adversarial Meshes for Visual Recognition. CVPR 2019: 6898-6907 - [c11]Chaowei Xiao, Ruizhi Deng, Bo Li, Taesung Lee, Benjamin Edwards, Jinfeng Yi, Dawn Song, Mingyan Liu, Ian M. Molloy:
AdvIT: Adversarial Frames Identifier Based on Temporal Consistency in Videos. ICCV 2019: 3967-3976 - [c10]Kin Sum Liu, Chaowei Xiao, Bo Li, Jie Gao:
Performing Co-membership Attacks Against Deep Generative Models. ICDM 2019: 459-467 - [c9]Liang Tong, Bo Li, Chen Hajaj, Chaowei Xiao, Ning Zhang, Yevgeniy Vorobeychik:
Improving Robustness of ML Classifiers against Realizable Evasion Attacks Using Conserved Features. USENIX Security Symposium 2019: 285-302 - [i11]Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Duane S. Boning, Cho-Jui Hsieh:
Towards Stable and Efficient Training of Verifiably Robust Neural Networks. CoRR abs/1906.06316 (2019) - [i10]Haonan Qiu, Chaowei Xiao, Lei Yang, Xinchen Yan, Honglak Lee, Bo Li:
SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing. CoRR abs/1906.07927 (2019) - [i9]Yulong Cao, Chaowei Xiao, Dawei Yang, Jing Fang, Ruigang Yang, Mingyan Liu, Bo Li:
Adversarial Objects Against LiDAR-Based Autonomous Driving Systems. CoRR abs/1907.05418 (2019) - [i8]Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, Z. Morley Mao:
Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving. CoRR abs/1907.06826 (2019) - [i7]Chaowei Xiao, Xinlei Pan, Warren He, Jian Peng, Mingjie Sun, Jinfeng Yi, Mingyan Liu, Bo Li, Dawn Song:
Characterizing Attacks on Deep Reinforcement Learning. CoRR abs/1907.09470 (2019)
- 2018
- [j1]Chenshu Wu, Zheng Yang, Chaowei Xiao:
Automatic Radio Map Adaptation for Indoor Localization Using Smartphones. IEEE Trans. Mob. Comput. 17(3): 517-528 (2018) - [c8]Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song:
Robust Physical-World Attacks on Deep Learning Visual Classification. CVPR 2018: 1625-1634 - [c7]Chaowei Xiao, Ruizhi Deng, Bo Li, Fisher Yu, Mingyan Liu, Dawn Song:
Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation. ECCV (10) 2018: 220-237 - [c6]Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, Dawn Song:
Spatially Transformed Adversarial Examples. ICLR (Poster) 2018 - [c5]Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, Dawn Song:
Generating Adversarial Examples with Adversarial Networks. IJCAI 2018: 3905-3911 - [c4]Chaowei Xiao, Armin Sarabi, Yang Liu, Bo Li, Mingyan Liu, Tudor Dumitras:
From Patching Delays to Infection Symptoms: Using Risk Profiles for an Early Discovery of Vulnerabilities Exploited in the Wild. USENIX Security Symposium 2018: 903-918 - [i6]Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, Dawn Song:
Generating Adversarial Examples with Adversarial Networks. CoRR abs/1801.02610 (2018) - [i5]Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, Dawn Song:
Spatially Transformed Adversarial Examples. CoRR abs/1801.02612 (2018) - [i4]Chaowei Xiao, Ruizhi Deng, Bo Li, Fisher Yu, Mingyan Liu, Dawn Song:
Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation. CoRR abs/1810.05162 (2018) - [i3]Dawei Yang, Chaowei Xiao, Bo Li, Jia Deng, Mingyan Liu:
Realistic Adversarial Examples in 3D Meshes. CoRR abs/1810.05206 (2018) - [i2]Mingjie Sun, Jian Tang, Huichen Li, Bo Li, Chaowei Xiao, Yao Chen, Dawn Song:
Data Poisoning Attack against Unsupervised Node Embedding Methods. CoRR abs/1810.12881 (2018) - [i1]Aria Rezaei, Chaowei Xiao, Jie Gao, Bo Li:
Protecting Sensitive Attributes via Generative Adversarial Networks. CoRR abs/1812.10193 (2018)
- 2017
- [c3]Armin Sarabi, Ziyun Zhu, Chaowei Xiao, Mingyan Liu, Tudor Dumitras:
Patch Me If You Can: A Study on the Effects of Individual User Behavior on the End-Host Vulnerability State. PAM 2017: 113-125
- 2015
- [c2]Chenshu Wu, Zheng Yang, Chaowei Xiao, Chaofan Yang, Yunhao Liu, Mingyan Liu:
Static power of mobile devices: Self-updating radio maps for wireless indoor localization. INFOCOM 2015: 2497-2505
- 2014
- [c1]Lei Yang, Yekui Chen, Xiang-Yang Li, Chaowei Xiao, Mo Li, Yunhao Liu:
Tagoram: real-time tracking of mobile RFID tags to high precision using COTS devices. MobiCom 2014: 237-248