Reproducibility Study of “LICO: Explainable Models with Language-Image Consistency”

Abstract

The growing reproducibility crisis in machine learning has heightened the need for careful examination of research findings. This paper investigates the claims made by Lei et al. (2023) regarding their proposed method, LICO, for enhancing post-hoc interpretability techniques and improving image classification performance. LICO leverages natural language supervision from a vision-language model to enrich feature representations and guide the learning process. We conduct a comprehensive reproducibility study, employing (Wide) ResNets and established interpretability methods such as Grad-CAM and RISE. We were mostly unable to reproduce the authors' results. In particular, we did not find that LICO consistently improved classification performance or quantitative and qualitative measures of interpretability. Our findings thus highlight the importance of rigorous evaluation and transparent reporting in interpretability research.
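For context, Grad-CAM (one of the post-hoc interpretability methods evaluated in the study) weights a convolutional feature map by the gradients of the target class score, then applies a ReLU to obtain a class-specific saliency map. Below is a minimal PyTorch sketch of that idea; the `resnet18`/`layer4` choices and the random input are illustrative assumptions, not the authors' evaluation setup or code.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Illustrative model choice; in practice, load pretrained weights.
model = resnet18(weights=None).eval()

feats, grads = {}, {}

def hook(_module, _inputs, output):
    feats["value"] = output
    # Capture gradients flowing back through this feature map.
    output.register_hook(lambda g: grads.update({"value": g}))

# The last convolutional stage is the usual Grad-CAM target for ResNets.
model.layer4.register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed image
logits = model(x)
cls = logits.argmax(dim=1).item()      # explain the predicted class
model.zero_grad()
logits[0, cls].backward()

# Channel weights: globally averaged gradients of the class score.
w = grads["value"].mean(dim=(2, 3), keepdim=True)            # (1, C, 1, 1)
cam = F.relu((w * feats["value"]).sum(dim=1, keepdim=True))  # (1, 1, h, w)
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                    align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # [0, 1] heatmap
```

Registering the gradient hook on the forward output tensor (rather than a module backward hook) is a common Grad-CAM pattern that sidesteps issues with in-place operations inside ResNet blocks.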

Cite

Text

Fletcher et al. "Reproducibility Study of “LICO: Explainable Models with Language-Image Consistency”." Transactions on Machine Learning Research, 2024.

Markdown

[Fletcher et al. "Reproducibility Study of “LICO: Explainable Models with Language-Image Consistency”." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/fletcher2024tmlr-reproducibility/)

BibTeX

@article{fletcher2024tmlr-reproducibility,
  title     = {{Reproducibility Study of ``LICO: Explainable Models with Language-Image Consistency''}},
  author    = {Fletcher, Luan and van der Klis, Robert and Sedláček, Martin and Vasilev, Stefan and Athanasiadis, Christos},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/fletcher2024tmlr-reproducibility/}
}