Learning Fine-Grained Domain Generalization via Hyperbolic State Space Hallucination

Abstract

Fine-grained domain generalization (FGDG) aims to learn a fine-grained representation that generalizes well to unseen target domains when trained only on source-domain data. Compared with generic domain generalization, FGDG is particularly challenging in that fine-grained categories can only be discerned by subtle, tiny patterns. Such patterns are especially fragile under cross-domain style shifts caused by illumination, color, and the like. To push this frontier, this paper presents a novel Hyperbolic State Space Hallucination (HSSH) method. It consists of two key components, namely, state space hallucination (SSH) and hyperbolic manifold consistency (HMC). SSH enriches the style diversity of the state embeddings by first extrapolating and then hallucinating the styles of the source images. The pre- and post-hallucination state embeddings are then projected into the hyperbolic manifold. The hyperbolic state space models high-order statistics and allows better discernment of the fine-grained patterns. Finally, the hyperbolic distance is minimized, so that the impact of style variation on the fine-grained patterns is eliminated. Experiments on three FGDG benchmarks demonstrate its state-of-the-art performance.
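The hyperbolic manifold consistency idea in the abstract can be illustrated with a minimal sketch, not the authors' implementation: project the pre- and post-hallucination embeddings onto a Poincaré ball via the exponential map at the origin and penalize their geodesic distance. The curvature value (c = 1), function names, and loss form below are assumptions for illustration only.

```python
import torch

def expmap0(v, c=1.0, eps=1e-6):
    """Exponential map at the origin of the Poincare ball with curvature -c."""
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(c ** 0.5 * norm) * v / (c ** 0.5 * norm)

def poincare_dist(x, y, c=1.0, eps=1e-6):
    """Geodesic distance between points x, y inside the Poincare ball."""
    sq = ((x - y) ** 2).sum(-1)
    denom = ((1 - c * (x ** 2).sum(-1)).clamp_min(eps)
             * (1 - c * (y ** 2).sum(-1)).clamp_min(eps))
    return torch.acosh(1 + 2 * c * sq / denom) / c ** 0.5

def hyperbolic_consistency_loss(z_orig, z_halluc, c=1.0):
    """Mean hyperbolic distance between pre- and post-hallucination embeddings
    (a hypothetical stand-in for the paper's HMC objective)."""
    return poincare_dist(expmap0(z_orig, c), expmap0(z_halluc, c), c).mean()
```

Minimizing such a distance encourages the style-hallucinated embedding to stay close to its original counterpart in the hyperbolic space, which matches the consistency goal described in the abstract at a high level.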

Cite

Text

Bi et al. "Learning Fine-Grained Domain Generalization via Hyperbolic State Space Hallucination." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I2.32180

Markdown

[Bi et al. "Learning Fine-Grained Domain Generalization via Hyperbolic State Space Hallucination." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/bi2025aaai-learning/) doi:10.1609/AAAI.V39I2.32180

BibTeX

@inproceedings{bi2025aaai-learning,
  title     = {{Learning Fine-Grained Domain Generalization via Hyperbolic State Space Hallucination}},
  author    = {Bi, Qi and Yi, Jingjun and Zhan, Haolan and Ji, Wei and Xia, Gui-Song},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {1853--1861},
  doi       = {10.1609/AAAI.V39I2.32180},
  url       = {https://mlanthology.org/aaai/2025/bi2025aaai-learning/}
}