Semi-supervised Domain Adaptation via Prototype-based Multi-level Learning
Xinyang Huang
Chuang Zhu*
Wenkai Chen
[Paper]
[GitHub]


Abstract

In semi-supervised domain adaptation (SSDA), a few labeled target samples of each class help the model transfer knowledge from a fully labeled source domain to the target domain. Many existing methods ignore the benefits of fully exploiting these labeled target samples at multiple levels. To make better use of this additional data, we propose a novel Prototype-based Multi-level Learning (ProML) framework that better taps the potential of labeled target samples. At the intra-domain level, we introduce a pseudo-label aggregation based on intra-domain optimal transport to help the model align the feature distribution of unlabeled target samples with the prototypes. At the inter-domain level, we propose a cross-domain alignment loss that lets the model use the target prototypes for cross-domain knowledge transfer. At the batch level, we further propose a dual consistency based on prototype similarity and a linear classifier to promote discriminative learning of compact target feature representations. Extensive experiments on three datasets, DomainNet, VisDA2017, and Office-Home, demonstrate that our proposed method achieves state-of-the-art performance in SSDA.
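To make the intra-domain step concrete, below is a minimal PyTorch sketch (not the released implementation) of prototype construction from the few labeled target samples and entropy-regularized optimal transport (Sinkhorn iterations) between unlabeled target features and those prototypes, from which pseudo-labels are aggregated. All tensor shapes and the hyperparameters eps and n_iters are illustrative assumptions.

import torch
import torch.nn.functional as F


def class_prototypes(feats, labels, num_classes):
    """Per-class mean feature of the labeled target samples (C x D).

    Assumes every class has at least one labeled target sample,
    which is the standard SSDA setting.
    """
    protos = torch.stack(
        [feats[labels == c].mean(dim=0) for c in range(num_classes)]
    )
    return F.normalize(protos, dim=1)


def sinkhorn_plan(cost, eps=0.05, n_iters=10):
    """Entropy-regularized OT plan with uniform marginals (N x C)."""
    K = torch.exp(-cost / eps)                       # Gibbs kernel
    a = torch.full((cost.size(0),), 1.0 / cost.size(0))
    b = torch.full((cost.size(1),), 1.0 / cost.size(1))
    u = torch.ones_like(a)
    for _ in range(n_iters):                         # alternating scaling updates
        v = b / (K.t() @ u)
        u = a / (K @ v)
    return u.unsqueeze(1) * K * v.unsqueeze(0)


def ot_pseudo_labels(unlabeled_feats, protos):
    """Aggregate pseudo-labels from the transport plan to the prototypes."""
    feats = F.normalize(unlabeled_feats, dim=1)
    cost = 1.0 - feats @ protos.t()                  # cosine distance to each prototype
    plan = sinkhorn_plan(cost)
    return plan.argmax(dim=1)                        # hard pseudo-label per sample

The uniform column marginal encodes a balanced-class assumption over the unlabeled batch; the paper's actual transport formulation may differ.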


Overall Framework

The structure of our ProML framework. First, the target samples are weakly and strongly augmented and then passed through the classifier, together with the source samples, to compute the base loss. At the intra-domain level, the weakly augmented target samples generate pseudo-labels from the optimal transport plan computed with the target prototypes, and a consistency loss is computed against the strongly augmented samples. At the inter-domain level, a similarity loss between source samples and the target prototypes of the corresponding classes is computed to achieve cross-domain knowledge transfer. Finally, at the batch level, a dual consistency loss between the two augmented views in each mini-batch is computed from the perspective of both the linear and the prototype-based classifier, as sketched below.
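The following sketch illustrates the inter-domain and batch-level losses described above; it is an assumption-laden illustration rather than the released code. Source features are pulled toward the target prototype of their own class, and the two augmented views of each target sample must agree under both the linear classifier and a prototype-similarity classifier. The KL-based consistency and the temperature tau are our assumptions.

import torch
import torch.nn.functional as F


def cross_domain_alignment_loss(src_feats, src_labels, target_protos):
    """Similarity loss between source samples and same-class target prototypes."""
    feats = F.normalize(src_feats, dim=1)
    matched = target_protos[src_labels]              # prototype of each sample's class
    return (1.0 - (feats * matched).sum(dim=1)).mean()


def dual_consistency_loss(logits_w, logits_s, feats_w, feats_s, protos, tau=0.1):
    """Weak/strong view agreement under linear and prototype-based classifiers."""
    # Linear-classifier consistency: strong view matches detached weak view.
    linear_cons = F.kl_div(F.log_softmax(logits_s, dim=1),
                           F.softmax(logits_w, dim=1).detach(),
                           reduction="batchmean")
    # Prototype-based consistency: cosine similarity to prototypes as logits.
    proto_w = F.softmax(F.normalize(feats_w, dim=1) @ protos.t() / tau, dim=1)
    proto_s = F.log_softmax(F.normalize(feats_s, dim=1) @ protos.t() / tau, dim=1)
    proto_cons = F.kl_div(proto_s, proto_w.detach(), reduction="batchmean")
    return linear_cons + proto_cons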

Motivation

The differences between our approach and existing work, and our motivations.


Citation

@misc{huang2023semisupervised,
  title={Semi-supervised Domain Adaptation via Prototype-based Multi-level Learning},
  author={Xinyang Huang and Chuang Zhu and Wenkai Chen},
  year={2023},
  eprint={2305.02693},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}


Results

Comparison with state-of-the-art methods.