Meta-OLE: Meta-Learned Orthogonal Low-Rank Embedding
Abstract
We introduce Meta-OLE, a geometry-regularized method for fast adaptation to novel tasks in few-shot image classification. For each few-shot classification task, the proposed method learns to adapt a feature space with simultaneous inter-class orthogonality and intra-class low-rankness. Specifically, a deep feature extractor is trained by explicitly imposing orthogonal low-rank subspace structures on the features of different classes within a given task. To adapt to novel tasks with unseen categories, we further meta-learn a lightweight transformation that enhances the inter-class margins. As an additional benefit, this lightweight transformation lets us exploit the query data and propagate labels from labeled to unlabeled samples without any auxiliary network components. The explicitly geometry-regularized feature subspaces allow the classifiers for novel tasks to be inferred in closed form, with an adaptive subspace truncation that selectively discards non-discriminative dimensions. Experiments on standard few-shot image classification benchmarks show performance superior to state-of-the-art meta-learning methods.
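The abstract does not spell out the loss or the truncation rule, so the sketch below is one plausible reading rather than the paper's method: an OLE-style nuclear-norm penalty (after the Orthogonal Low-rank Embedding loss of Lezama et al., CVPR 2018, which the name suggests this work builds on) and a closed-form per-class subspace classifier with energy-based truncation. The names `ole_style_loss` and `subspace_classify`, the margin `delta`, and the `energy` threshold are all illustrative assumptions.

```python
import torch


def ole_style_loss(features, labels, delta=1.0):
    """OLE-style geometry loss (assumed form): per-class nuclear norms
    (intra-class low-rankness) minus the nuclear norm of the whole batch
    (inter-class orthogonality). `delta` floors each class term so an
    already-collapsed class stops shrinking."""
    total = torch.linalg.matrix_norm(features, ord='nuc')
    per_class = features.new_zeros(())
    for c in labels.unique():
        class_nuc = torch.linalg.matrix_norm(features[labels == c], ord='nuc')
        per_class = per_class + torch.clamp(class_nuc, min=delta)
    return per_class - total


def subspace_classify(support, support_labels, query, energy=0.95):
    """Closed-form classifier sketch: SVD each class's support features,
    keep right singular vectors up to an `energy` fraction of spectral
    mass (a stand-in for the paper's adaptive truncation), and assign
    each query to the subspace with the smallest projection residual."""
    scores = []
    for c in support_labels.unique():
        Xc = support[support_labels == c]                  # (n_c, d)
        _, S, Vh = torch.linalg.svd(Xc, full_matrices=False)
        keep = (S**2).cumsum(0) / (S**2).sum() <= energy   # drop weak directions
        keep[0] = True                                     # keep at least one
        B = Vh[keep]                                       # (r, d) orthonormal rows
        residual = (query - query @ B.T @ B).norm(dim=1)   # distance to subspace
        scores.append(-residual)
    return torch.stack(scores, dim=1).argmax(dim=1)        # index into unique labels
```

In a 5-way task, `support` and `query` would be backbone features of the shot and query images, and the returned indices map into `support_labels.unique()`.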
Cite
Text
Wang et al. "Meta-OLE: Meta-Learned Orthogonal Low-Rank Embedding." Winter Conference on Applications of Computer Vision, 2023.
BibTeX
@inproceedings{wang2023wacv-metaole,
  title     = {{Meta-OLE: Meta-Learned Orthogonal Low-Rank Embedding}},
  author    = {Wang, Ze and Lu, Yue and Qiu, Qiang},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2023},
  pages     = {5305--5314},
  url       = {https://mlanthology.org/wacv/2023/wang2023wacv-metaole/}
}