\textsc{PGO-BEN}: Proxy-Guided Orthogonalization and Beta Ensembling for Few-Shot Domain-Incremental Learning
Abstract
Continual adaptation to evolving domains with minimal supervision is essential for the real-world deployment of machine learning systems. We formalize this objective as \textbf{Few-Shot Domain-Incremental Learning (FSDIL)}, in which a model must adapt to each new domain using only a few labeled samples while retaining prior knowledge without access to previous data. This setting mirrors practical constraints in areas such as autonomous driving and medical imaging, where annotations are expensive and data retention is restricted by privacy regulations. Pre-trained vision–language models such as CLIP provide a strong initialization for FSDIL thanks to their transferable multi-modal representations. However, adapting CLIP incrementally under domain shift remains challenging: few-shot updates often cause \emph{catastrophic forgetting} of earlier domains while offering insufficient \emph{plasticity} for new ones. To address these challenges, we introduce \textbf{\textsc{PGO-BEn}} (\textit{Proxy-Guided Orthogonalization and Beta Ensembling}), a rehearsal-free framework that leverages CLIP’s semantic priors via prompt learning while preserving prior domain knowledge through two key mechanisms. (1) \textbf{Proxy-Guided Orthogonalization (PGO)} identifies conflicts between current gradients and proxy representations of past knowledge, inferred from current samples, and projects conflicting updates onto a subspace orthogonal to those proxies to prevent knowledge degradation. (2) \textbf{Beta Ensembling (BEn)} is a Beta-function-based temporal ensembling strategy that adaptively balances stability and plasticity, outperforming conventional exponential-moving-average (EMA) approaches in retaining early-domain knowledge. We extensively evaluate \textsc{PGO-BEn} on three diverse benchmarks (\textbf{DomainNet}, \textbf{CORe50}, and \textbf{CDDB-Hard}) and demonstrate consistent improvements over state-of-the-art domain-incremental and few-shot learning methods across all supervision levels.
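To make the PGO step concrete, below is a minimal sketch of the conflict-aware projection the abstract describes, assuming a flattened gradient and a single proxy vector are already available. The helper `orthogonalize_update` and its interface are hypothetical, and how the paper infers proxies from current samples is not reproduced here.

```python
import torch

def orthogonalize_update(grad: torch.Tensor, proxy: torch.Tensor) -> torch.Tensor:
    """Project a (flattened) gradient onto the subspace orthogonal to a proxy
    direction whenever the two conflict, i.e. their inner product is negative.

    `proxy` stands in for an inferred representation of past-domain knowledge;
    this helper is an illustrative sketch, not the paper's implementation.
    """
    dot = torch.dot(grad, proxy)
    if dot < 0:  # update points against past knowledge: remove that component
        grad = grad - (dot / proxy.dot(proxy).clamp_min(1e-12)) * proxy
    return grad

# Toy usage: a gradient that conflicts with the proxy loses exactly its
# component along the proxy direction.
g = torch.tensor([1.0, -2.0])
p = torch.tensor([0.0, 1.0])
print(orthogonalize_update(g, p))  # tensor([1., 0.])
```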
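For the BEn component, here is a hedged sketch of Beta-weighted temporal ensembling, under the assumption that checkpoint weights come from a Beta density over the normalized checkpoint index; the parameters `a` and `b` and the function names are illustrative, and the paper's actual parameterization may differ. The contrast with EMA is that EMA's geometric weights vanish for early checkpoints, whereas a Beta density with mass near 0 keeps early-domain snapshots in the ensemble.

```python
import math
import torch

def beta_weights(num_steps: int, a: float = 0.5, b: float = 0.5) -> torch.Tensor:
    """Normalized ensembling weights from a Beta(a, b) density evaluated at
    interior positions of the normalized checkpoint index. With a = b = 0.5,
    both early and late checkpoints are up-weighted, unlike EMA's geometric
    decay, which forgets early ones. Illustrative, not the paper's scheme.
    """
    # Evaluate the Beta log-density at interior points to avoid endpoint poles.
    xs = torch.linspace(0, 1, num_steps + 2)[1:-1]
    log_beta_fn = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    logpdf = (a - 1) * xs.log() + (b - 1) * (1 - xs).log() - log_beta_fn
    w = logpdf.exp()
    return w / w.sum()

def ensemble(params_history: list[torch.Tensor], a: float = 0.5, b: float = 0.5) -> torch.Tensor:
    """Weighted average of parameter snapshots with Beta-based weights."""
    w = beta_weights(len(params_history), a, b)
    return sum(wi * p for wi, p in zip(w, params_history))
```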
Cite
Text
Mukherjee et al. "\textsc{PGO-BEN}: Proxy-Guided Orthogonalization and Beta Ensembling for Few-Shot Domain-Incremental Learning." Transactions on Machine Learning Research, 2026.
Markdown
[Mukherjee et al. "\textsc{PGO-BEN}: Proxy-Guided Orthogonalization and Beta Ensembling for Few-Shot Domain-Incremental Learning." Transactions on Machine Learning Research, 2026.](https://mlanthology.org/tmlr/2026/mukherjee2026tmlr-pgoben/)
BibTeX
@article{mukherjee2026tmlr-pgoben,
  title = {{\textsc{PGO-BEN}: Proxy-Guided Orthogonalization and Beta Ensembling for Few-Shot Domain-Incremental Learning}},
  author = {Mukherjee, Samrat and Venkateswaran, Thivyanth and Coleman, Eric Nuertey and Quarantiello, Luigi and Hurtado, Julio and Lomonaco, Vincenzo and Roig, Gemma and Chaudhuri, Subhasis and Banerjee, Biplab},
  journal = {Transactions on Machine Learning Research},
  year = {2026},
  url = {https://mlanthology.org/tmlr/2026/mukherjee2026tmlr-pgoben/}
}