HOMO-Feature: Cross-Arbitrary-Modal Image Matching with Homomorphism of Organized Major Orientation

Abstract

This paper explores invariant feature extraction and matching for arbitrary cross-modal image pairs, proposing a purely handcrafted full-chain algorithm, Homomorphism of Organized Major Orientation (HOMO). Instead of relying on deep models for data-driven black-box learning, we introduce a Major Orientation Map (MOM) that effectively combats modal differences between images. To handle the rotation, scale, and texture diversity of cross-modal images, HOMO incorporates a novel, universally designed Generalized-Polar descriptor (GPolar) and a Multi-scale Strategy (MsS), giving it well-rounded matching capability. HOMO achieves the best overall feature-matching performance on several general cross-modal datasets, compared against a set of state-of-the-art methods comprising 7 traditional algorithms and 10 deep network models. We also propose a dataset named General Cross-modal Zone (GCZ), which demonstrates practical value. Code and datasets are available at https://github.com/MrPingQi/HOMO_Feature_ImgMatching.
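The paper's Major Orientation Map is not specified in this abstract, so the following is only a hypothetical sketch of the general idea behind dominant-orientation representations for cross-modal matching: gradient orientations are folded into [0, π) so that contrast inversion between modalities (e.g., optical vs. infrared) maps to the same orientation, and each pixel takes the magnitude-weighted dominant orientation bin within a local window. The function name, bin count, and window size are illustrative assumptions, not the authors' method.

```python
import numpy as np

def major_orientation_map(img, n_bins=8, win=5):
    """Illustrative (not HOMO's actual) per-pixel dominant-orientation map.

    Orientations are taken modulo pi, so a bright-on-dark edge and the
    same edge with inverted contrast fall into the same bin, which is
    the kind of invariance useful across imaging modalities.
    """
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)                  # image gradients (rows, cols)
    mag = np.hypot(gx, gy)                     # gradient magnitude
    theta = np.mod(np.arctan2(gy, gx), np.pi)  # fold polarity into [0, pi)
    bins = np.minimum((theta / np.pi * n_bins).astype(int), n_bins - 1)

    h, w = img.shape
    r = win // 2
    pad_bins = np.pad(bins, r, mode="edge")
    pad_mag = np.pad(mag, r, mode="edge")

    # Magnitude-weighted orientation histogram accumulated per pixel
    # over a win x win neighborhood.
    hist = np.zeros((n_bins, h, w))
    ii, jj = np.mgrid[0:h, 0:w]
    for dy in range(win):
        for dx in range(win):
            b = pad_bins[dy:dy + h, dx:dx + w]
            m = pad_mag[dy:dy + h, dx:dx + w]
            np.add.at(hist, (b, ii, jj), m)

    return np.argmax(hist, axis=0)  # dominant orientation bin per pixel
```

With 8 bins, a vertical edge (gradient along x, orientation 0) lands in bin 0 and a horizontal edge (orientation π/2) in bin 4, regardless of which side of the edge is brighter.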

Cite

Text

Gao et al. "HOMO-Feature: Cross-Arbitrary-Modal Image Matching with Homomorphism of Organized Major Orientation." International Conference on Computer Vision, 2025.

Markdown

[Gao et al. "HOMO-Feature: Cross-Arbitrary-Modal Image Matching with Homomorphism of Organized Major Orientation." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/gao2025iccv-homofeature/)

BibTeX

@inproceedings{gao2025iccv-homofeature,
  title     = {{HOMO-Feature: Cross-Arbitrary-Modal Image Matching with Homomorphism of Organized Major Orientation}},
  author    = {Gao, Chenzhong and Li, Wei and Weng, Desheng},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {10538--10548},
  url       = {https://mlanthology.org/iccv/2025/gao2025iccv-homofeature/}
}