UMDATrack: Unified Multi-Domain Adaptive Tracking Under Adverse Weather Conditions
Abstract
Visual object tracking has made promising progress over the past decades. Most existing approaches focus on learning target representations from well-conditioned daytime data, whereas in unconstrained real-world scenarios with adverse weather conditions, e.g., nighttime or foggy environments, the severe domain shift leads to significant performance degradation. In this paper, we propose UMDATrack, which maintains high-quality target state prediction under various adverse weather conditions within a unified domain adaptation framework. Specifically, we first use a controllable scenario generator to synthesize a small amount of unlabeled video (less than 2% of the frames in the source daytime datasets) in multiple weather conditions under the guidance of different text prompts. Afterwards, we design a simple yet effective domain-customized adapter (DCA), allowing the target object representations to rapidly adapt to various weather conditions without redundant model updating. Furthermore, to enhance the localization consistency between the source and target domains, we propose a target-aware confidence alignment module (TCA) based on optimal transport theory. Extensive experiments demonstrate that UMDATrack surpasses existing advanced visual trackers and sets new state-of-the-art performance by a significant margin.
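The abstract describes two main components: a lightweight domain-customized adapter (DCA) attached to a frozen tracker backbone, and a target-aware confidence alignment module (TCA) built on optimal transport. The paper itself does not provide code here, so the following is a minimal, hypothetical sketch of how these two ideas could look in PyTorch; all module names, shapes, and hyperparameters (bottleneck width, entropic regularization, iteration count) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): a bottleneck adapter for
# domain-specific feature correction and a Sinkhorn-based optimal-transport
# loss as a stand-in for confidence alignment across domains.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DomainCustomizedAdapter(nn.Module):
    """Residual bottleneck adapter added to a frozen tracker backbone so that
    target representations can adapt to a new weather domain with few parameters."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # start as identity (residual branch is zero)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen features plus a small domain-specific correction.
        return x + self.up(F.gelu(self.down(x)))


def sinkhorn_alignment(source_conf: torch.Tensor,
                       target_conf: torch.Tensor,
                       eps: float = 0.05,
                       iters: int = 50) -> torch.Tensor:
    """Entropic optimal-transport distance between two confidence maps,
    used here as a simplified proxy for aligning localization confidence
    between the source (daytime) and target (adverse-weather) domains."""
    # Normalize the confidence maps into probability distributions.
    p = source_conf.flatten().clamp_min(1e-8)
    q = target_conf.flatten().clamp_min(1e-8)
    p, q = p / p.sum(), q / q.sum()

    # Simplified cost: squared distance between normalized 1-D positions.
    n, m = p.numel(), q.numel()
    xs = torch.linspace(0, 1, n).unsqueeze(1)
    xt = torch.linspace(0, 1, m).unsqueeze(0)
    cost = (xs - xt) ** 2

    # Standard Sinkhorn iterations.
    K = torch.exp(-cost / eps)
    u = torch.ones_like(p)
    for _ in range(iters):
        v = q / (K.t() @ u)
        u = p / (K @ v)
    transport = u.unsqueeze(1) * K * v.unsqueeze(0)
    return (transport * cost).sum()


if __name__ == "__main__":
    feats = torch.randn(2, 196, 256)          # e.g. ViT patch tokens from a frozen backbone
    adapter = DomainCustomizedAdapter(256)
    adapted = adapter(feats)                  # domain-adapted features

    src = torch.rand(16, 16)                  # daytime confidence map
    tgt = torch.rand(16, 16)                  # synthesized-weather confidence map
    loss = sinkhorn_alignment(src, tgt)
    print(adapted.shape, loss.item())
```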
Cite
Text
Yao et al. "UMDATrack: Unified Multi-Domain Adaptive Tracking Under Adverse Weather Conditions." International Conference on Computer Vision, 2025.
Markdown
[Yao et al. "UMDATrack: Unified Multi-Domain Adaptive Tracking Under Adverse Weather Conditions." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/yao2025iccv-umdatrack/)
BibTeX
@inproceedings{yao2025iccv-umdatrack,
title = {{UMDATrack: Unified Multi-Domain Adaptive Tracking Under Adverse Weather Conditions}},
author = {Yao, Siyuan and Zhu, Rui and Wang, Ziqi and Ren, Wenqi and Yan, Yanyang and Cao, Xiaochun},
booktitle = {International Conference on Computer Vision},
year = {2025},
pages = {6466-6475},
url = {https://mlanthology.org/iccv/2025/yao2025iccv-umdatrack/}
}