Contrastive Model Inversion for Data-Free Knowledge Distillation
Gongfan Fang, Jie Song, Xinchao Wang, Chengchao Shen, Xingen Wang, Mingli Song
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 2374-2380.
https://doi.org/10.24963/ijcai.2021/327
Model inversion, whose goal is to recover training data from a pre-trained model, has recently been proven feasible. However, existing inversion methods usually suffer from the mode collapse problem, where the synthesized instances are highly similar to each other and thus show limited effectiveness for downstream tasks such as knowledge distillation. In this paper, we propose Contrastive Model Inversion (CMI), in which data diversity is explicitly modeled as an optimizable objective to alleviate the mode collapse issue. Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination. To this end, we introduce in CMI a contrastive learning objective that encourages the instances being synthesized to be distinguishable from those already synthesized in previous batches. Experiments with models pre-trained on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI not only generates more visually plausible instances than the state of the art, but also achieves significantly superior performance when the generated data are used for knowledge distillation. Code is available at https://github.com/zju-vipa/DataFree.
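To make the contrastive objective described above concrete, the following is a minimal PyTorch sketch of an InfoNCE-style loss in the spirit of CMI. It assumes two augmented views of each newly synthesized image form a positive pair, while embeddings of previously synthesized instances stored in a memory bank act as negatives; the function name, the temperature value, and this particular positive/negative construction are illustrative assumptions, not the paper's exact implementation (see the released code for that).

```python
import torch
import torch.nn.functional as F

def cmi_contrastive_loss(feats_a, feats_b, bank, tau=0.1):
    """Hypothetical InfoNCE objective for contrastive model inversion.

    feats_a, feats_b: (B, D) embeddings of two augmented views of the
                      current synthetic batch (positive pairs).
    bank:             (M, D) embeddings of instances synthesized in
                      previous batches (negatives).
    """
    a = F.normalize(feats_a, dim=1)
    b = F.normalize(feats_b, dim=1)
    n = F.normalize(bank, dim=1)
    # Similarity of each instance to its own other view (positive) ...
    pos = (a * b).sum(dim=1, keepdim=True) / tau        # (B, 1)
    # ... and to every previously synthesized instance (negatives).
    neg = a @ n.t() / tau                               # (B, M)
    logits = torch.cat([pos, neg], dim=1)               # (B, 1 + M)
    # Index 0 (the positive) is the correct "class" for every row.
    labels = torch.zeros(len(a), dtype=torch.long, device=a.device)
    return F.cross_entropy(logits, labels)
```

Minimizing this loss pulls the two views of each synthesized instance together while pushing the instance away from the memory bank, which is one way to operationalize "higher data diversity implies stronger instance discrimination."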
Keywords:
Machine Learning: Deep Learning
Machine Learning: Explainable/Interpretable Machine Learning
Machine Learning: Transfer, Adaptation, Multi-task Learning