Counterfactual Knowledge Maintenance for Unsupervised Domain Adaptation
Abstract
Traditional unsupervised domain adaptation (UDA) struggles to extract rich semantics due to the limited capacity of its backbones. Recent large-scale pre-trained vision-language models (VLMs) have shown strong zero-shot capabilities on UDA tasks. However, directly using VLMs yields representations that entangle semantic and domain-specific information, complicating knowledge transfer. Complex scenes with subtle semantic differences are prone to misclassification, which in turn can erase features that are crucial for distinguishing between classes. To address these challenges, we propose a novel counterfactual knowledge maintenance UDA framework. Specifically, we employ counterfactual disentanglement to separate semantic information from domain features, thereby reducing domain bias. Furthermore, to clarify class-specific yet ambiguous visual information, we maintain the discriminative knowledge of both the visual and textual modalities. This approach synergistically leverages multimodal information to preserve modality-specific distinguishable features. Extensive experiments on several public datasets demonstrate the effectiveness of our method. The source code is available at https://github.com/LiYaolab/CMKUDA.
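The abstract does not spell out the formulation, but the counterfactual-disentanglement idea can be illustrated with a minimal sketch. The sketch below assumes a frozen VLM image encoder producing 512-dimensional embeddings; the `CounterfactualDisentangler` module, the `counterfactual_invariance_loss` function, the two linear projection heads, and the additive recombination of semantic and domain parts are all hypothetical choices for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CounterfactualDisentangler(nn.Module):
    """Hypothetical sketch: split a VLM image embedding into a
    semantic component and a domain component via two heads."""
    def __init__(self, dim=512):
        super().__init__()
        self.semantic_head = nn.Linear(dim, dim)
        self.domain_head = nn.Linear(dim, dim)

    def forward(self, z):
        return self.semantic_head(z), self.domain_head(z)

def counterfactual_invariance_loss(classifier, z_sem, z_dom):
    # Counterfactual intervention: pair each sample's semantic part with
    # a randomly permuted (swapped-in) domain part. If the semantic part
    # truly carries the class information, predictions should not change,
    # so we penalize the divergence between the two predictions.
    perm = torch.randperm(z_dom.size(0))
    logits_factual = classifier(z_sem + z_dom)
    logits_counterfactual = classifier(z_sem + z_dom[perm])
    return F.kl_div(
        F.log_softmax(logits_counterfactual, dim=-1),
        F.softmax(logits_factual, dim=-1),
        reduction="batchmean",
    )

# Toy usage with random stand-ins for frozen VLM features.
dim, num_classes, batch = 512, 10, 8
disentangler = CounterfactualDisentangler(dim)
classifier = nn.Linear(dim, num_classes)
z = torch.randn(batch, dim)  # would come from the VLM image encoder
z_sem, z_dom = disentangler(z)
loss = counterfactual_invariance_loss(classifier, z_sem, z_dom)
loss.backward()
```

In this reading, the invariance loss encourages the semantic head to absorb all class-relevant information, leaving domain style to the other head; the paper's knowledge-maintenance terms over visual and textual features would be added on top of such a disentangled representation.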
Cite
Text
Li et al. "Counterfactual Knowledge Maintenance for Unsupervised Domain Adaptation." International Joint Conference on Artificial Intelligence, 2025. doi:10.24963/IJCAI.2025/165Markdown
[Li et al. "Counterfactual Knowledge Maintenance for Unsupervised Domain Adaptation." International Joint Conference on Artificial Intelligence, 2025.](https://mlanthology.org/ijcai/2025/li2025ijcai-counterfactual/) doi:10.24963/IJCAI.2025/165BibTeX
@inproceedings{li2025ijcai-counterfactual,
  title     = {{Counterfactual Knowledge Maintenance for Unsupervised Domain Adaptation}},
  author    = {Li, Yao and Zhou, Yong and Zhao, Jiaqi and Du, Wen-Liang and Yao, Rui and Liu, Bing},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {1476--1484},
  doi       = {10.24963/IJCAI.2025/165},
  url       = {https://mlanthology.org/ijcai/2025/li2025ijcai-counterfactual/}
}