Q: How to Specialize Large Vision-Language Models to Data-Scarce VQA Tasks? A: Self-Train on Unlabeled Images!
Proceedings of the IEEE/CVF Conference on Computer Vision and …, 2023
Abstract
Finetuning a large vision-language model (VLM) on a target dataset after large-scale pretraining is a dominant paradigm in visual question answering (VQA). Datasets for specialized tasks, such as knowledge-based VQA or VQA in non-natural-image domains, are orders of magnitude smaller than those for general-purpose VQA. While collecting additional labels for specialized tasks or domains can be challenging, unlabeled images are often available. We introduce SelTDA (Self-Taught Data Augmentation), a strategy for finetuning large VLMs on small-scale VQA datasets. SelTDA uses the VLM and the target dataset to build a teacher model that can generate question-answer pseudolabels directly conditioned on an image alone, allowing us to pseudolabel unlabeled images. SelTDA then finetunes the initial VLM on the original dataset augmented with the freshly pseudolabeled images. We describe a series of experiments showing that our self-taught data augmentation increases robustness to adversarially searched questions, counterfactual examples, and rephrasings; improves domain generalization; and yields greater retention of numerical reasoning skills. The proposed strategy requires no additional annotations or architectural modifications, and is compatible with any modern encoder-decoder multimodal transformer. Code is available at https://github.com/codezakh/SelTDA.
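
The abstract describes a three-step loop: build a teacher that generates question-answer pairs from images alone, pseudolabel unlabeled images with it, then finetune the initial VLM on the combined data. The Python sketch below illustrates that flow only at a high level; the helper callables (finetune_qa_generation, generate_qa, finetune_vqa) and the VQAExample container are hypothetical placeholders, not names from the released SelTDA code.

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class VQAExample:
    # One VQA training triple: an image plus a question-answer annotation.
    image_path: str
    question: str
    answer: str


def self_taught_data_augmentation(
    pretrained_vlm,
    labeled_data: List[VQAExample],
    unlabeled_images: List[str],
    finetune_qa_generation: Callable,              # hypothetical: trains the VLM to emit QA pairs given only an image
    generate_qa: Callable[..., Tuple[str, str]],   # hypothetical: samples one (question, answer) pair for an image
    finetune_vqa: Callable,                        # hypothetical: ordinary VQA finetuning on (image, question, answer) data
):
    """Return a VLM finetuned on the labeled data plus self-generated pseudolabels."""
    # Step 1: build the teacher by finetuning the VLM on the small target dataset
    # so that it generates question-answer pseudolabels conditioned on an image alone.
    teacher = finetune_qa_generation(pretrained_vlm, labeled_data)

    # Step 2: pseudolabel the unlabeled images by sampling QA pairs from the teacher.
    pseudolabeled = [
        VQAExample(image_path, *generate_qa(teacher, image_path))
        for image_path in unlabeled_images
    ]

    # Step 3: finetune the initial VLM (not the teacher) on the original dataset
    # augmented with the freshly pseudolabeled images.
    return finetune_vqa(pretrained_vlm, labeled_data + pseudolabeled)

As in the abstract, the only inputs beyond the pretrained VLM are the small labeled target dataset and a pool of unlabeled images; no extra annotations or architectural changes are assumed.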