Cross-Dataset Learning for Generalizable Land Use Scene Classification
Abstract
Few-shot and cross-domain land use scene classification methods propose solutions to classify unseen classes or unseen visual distributions, but are hardly applicable to real-world situations due to restrictive assumptions. Few-shot methods involve episodic training on restrictive training subsets with small feature extractors, while cross-domain methods are only applied to common classes. The underlying challenge remains open: can we accurately classify new scenes on new datasets? In this paper, we propose a new framework for few-shot, cross-domain classification. Our retrieval-inspired approach exploits the interrelations in both the training and testing data to output class labels using compact descriptors. Results show that our method can accurately produce land-use predictions on unseen datasets and unseen classes, going beyond the traditional few-shot or cross-domain formulation, and allowing cross-dataset training.
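To illustrate the retrieval-inspired idea in the abstract, below is a minimal sketch (not the authors' implementation) of how class labels can be produced from compact descriptors: query images are labelled by a majority vote over their nearest neighbours in a labelled gallery of descriptors. The descriptor dimensionality, the value of k, and the cosine-similarity choice are illustrative assumptions.

```python
# Hypothetical sketch of retrieval-based classification with compact descriptors:
# assign each query the majority label of its k nearest gallery descriptors.
import numpy as np

def retrieve_labels(gallery_desc, gallery_labels, query_desc, k=5):
    """Predict a label per query via k-NN majority vote using cosine similarity
    on L2-normalised descriptors."""
    g = gallery_desc / np.linalg.norm(gallery_desc, axis=1, keepdims=True)
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    sims = q @ g.T                                 # (n_query, n_gallery) similarities
    topk = np.argsort(-sims, axis=1)[:, :k]        # indices of the k best matches
    preds = []
    for row in topk:
        votes = gallery_labels[row]
        preds.append(np.bincount(votes).argmax())  # majority vote among neighbours
    return np.array(preds)

# Toy usage with random 128-d descriptors and 3 placeholder scene classes.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(300, 128)).astype(np.float32)
labels = rng.integers(0, 3, size=300)
queries = rng.normal(size=(10, 128)).astype(np.float32)
print(retrieve_labels(gallery, labels, queries))
```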
Cite
Text
Gominski et al. "Cross-Dataset Learning for Generalizable Land Use Scene Classification." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022. doi:10.1109/CVPRW56347.2022.00144
Markdown
[Gominski et al. "Cross-Dataset Learning for Generalizable Land Use Scene Classification." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022.](https://mlanthology.org/cvprw/2022/gominski2022cvprw-crossdataset/) doi:10.1109/CVPRW56347.2022.00144
BibTeX
@inproceedings{gominski2022cvprw-crossdataset,
title = {{Cross-Dataset Learning for Generalizable Land Use Scene Classification}},
author = {Gominski, Dimitri and Gouet-Brunet, Valérie and Chen, Liming},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2022},
pages = {1381-1390},
doi = {10.1109/CVPRW56347.2022.00144},
url = {https://mlanthology.org/cvprw/2022/gominski2022cvprw-crossdataset/}
}