SAME: Sample Reconstruction against Model Extraction Attacks
DOI: https://doi.org/10.1609/aaai.v38i18.29974

Keywords: PEAI: Privacy & Security, CV: Adversarial Attacks & Robustness, CV: Bias, Fairness & Privacy, ML: Privacy, PEAI: Safety, Robustness & Trustworthiness

Abstract
While deep learning models have shown significant performance across various domains, deploying them requires extensive resources and advanced computing infrastructure. As a solution, Machine Learning as a Service (MLaaS) has emerged, lowering the barrier for users to release or productize their deep learning models. However, previous studies have highlighted privacy and security concerns associated with MLaaS, and one primary threat is model extraction attacks. Many defense solutions have been proposed, but they suffer from unrealistic assumptions and generalization issues, making them impractical for reliable protection. Driven by these limitations, we introduce SAME, a novel defense mechanism based on the concept of sample reconstruction. This strategy imposes minimal prerequisites on the defender's capabilities, eliminating the need for auxiliary Out-of-Distribution (OOD) datasets, user query history, white-box model access, and additional intervention during model training, and it is compatible with existing active defense methods. Our extensive experiments corroborate the superior efficacy of SAME over state-of-the-art solutions. Our code is available at https://github.com/xythink/SAME.
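The abstract does not spell out how sample reconstruction is used for detection, so the following is only a minimal illustrative sketch of the general idea: score each incoming query by how well a reconstruction model (here, an assumed autoencoder) can reproduce it, and flag poorly reconstructed queries as potential extraction inputs. The class name, dimensions, and threshold below are all hypothetical, not the paper's implementation.

    import torch
    import torch.nn as nn

    class ReconstructionDetector(nn.Module):
        """Toy autoencoder: queries that reconstruct poorly are treated as
        likely out-of-distribution extraction inputs (illustration only)."""
        def __init__(self, dim=784, hidden=64):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
            self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

        def forward(self, x):
            return self.decoder(self.encoder(x))

        @torch.no_grad()
        def reconstruction_error(self, x):
            # Per-sample mean squared error between a query and its reconstruction.
            return ((self.forward(x) - x) ** 2).mean(dim=1)

    # Hypothetical deployment: after training the detector on the defender's own
    # in-distribution data, flag queries whose error exceeds a calibrated threshold.
    detector = ReconstructionDetector()
    queries = torch.rand(8, 784)      # stand-in for incoming MLaaS queries
    threshold = 0.05                  # assumed, chosen on held-out validation data
    suspicious = detector.reconstruction_error(queries) > threshold
    print(suspicious)

Note that this sketch matches the abstract's stated requirements: it needs no OOD dataset, no user query history, and no changes to the protected model or its training.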
Published
2024-03-24
How to Cite
Xie, Y., Zhang, J., Zhao, S., Zhang, T., & Chen, X. (2024). SAME: Sample Reconstruction against Model Extraction Attacks. Proceedings of the AAAI Conference on Artificial Intelligence, 38(18), 19974-19982. https://doi.org/10.1609/aaai.v38i18.29974
Section
AAAI Technical Track on Philosophy and Ethics of AI