Enhancing Target-Unspecific Tasks Through a Features Matrix

Abstract

Recent developments in prompt learning for large Vision-Language Models (VLMs) have significantly improved performance on target-specific tasks. However, these prompting methods often struggle to tackle target-unspecific, or generalizable, tasks effectively. This may be attributed to overfitting during training, which causes the model to forget its general knowledge, knowledge that strongly benefits target-unspecific tasks. To alleviate this issue, we propose a novel Features Matrix (FM) approach designed to enhance these models on target-unspecific tasks. Our method extracts and leverages general knowledge by shaping a Features Matrix (FM). Specifically, the FM captures the semantics of diverse inputs from a deep and fine-grained perspective, preserving essential general knowledge and thereby mitigating the risk of overfitting. Representative evaluations demonstrate that: 1) the FM is compatible with existing frameworks as a generic and flexible module, and 2) the FM is highly effective on target-unspecific tasks (base-to-novel generalization, domain generalization, and cross-dataset generalization), achieving state-of-the-art performance.
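The abstract does not spell out how the Features Matrix is built, but the core idea, distilling general knowledge from diverse inputs into a fixed matrix that anchors prompt learning, can be sketched. The snippet below is a minimal illustration, assuming a frozen CLIP text encoder and a handful of hand-written prompt templates; the names (`templates`, `fm_regularizer`, `lam`) and the cosine-anchoring loss are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a "features matrix" regularizer, NOT the authors'
# exact method: encode each class under several diverse templates with a
# frozen CLIP text encoder, stack the results into a matrix of general
# knowledge, and pull learned prompt features toward it during training.
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/16", device=device)  # frozen backbone

class_names = ["cat", "dog", "car"]  # example classes (assumption)
templates = [                        # diverse inputs per class (assumption)
    "a photo of a {}.",
    "a drawing of a {}.",
    "a close-up photo of a {}.",
]

with torch.no_grad():
    rows = []
    for name in class_names:
        tokens = clip.tokenize([t.format(name) for t in templates]).to(device)
        feats = model.encode_text(tokens)                    # (T, D)
        feats = feats / feats.norm(dim=-1, keepdim=True)     # unit-normalize
        rows.append(feats)
    features_matrix = torch.stack(rows)                      # (C, T, D)

def fm_regularizer(prompt_features, features_matrix, lam=1.0):
    """Penalize learned per-class prompt features (C, D) for drifting away
    from the frozen general-knowledge matrix (cosine distance to the
    per-class template average)."""
    prompt_features = prompt_features / prompt_features.norm(dim=-1, keepdim=True)
    anchor = features_matrix.mean(dim=1)                     # (C, D)
    anchor = anchor / anchor.norm(dim=-1, keepdim=True)
    return lam * (1 - (prompt_features * anchor).sum(dim=-1)).mean()
```

In such a setup the regularizer would simply be added to the usual prompt-learning objective, so that fitting the base classes cannot push the prompts arbitrarily far from the frozen general-knowledge features.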

Cite

Text

Cui et al. "Enhancing Target-Unspecific Tasks Through a Features Matrix." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Cui et al. "Enhancing Target-Unspecific Tasks Through a Features Matrix." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/cui2025icml-enhancing/)

BibTeX

@inproceedings{cui2025icml-enhancing,
  title     = {{Enhancing Target-Unspecific Tasks Through a Features Matrix}},
  author    = {Cui, Fangming and Zhang, Yonggang and Wang, Xuan and Tian, Xinmei and Yu, Jun},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {11649--11661},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/cui2025icml-enhancing/}
}