The FIX Benchmark: Extracting Features Interpretable to eXperts
Abstract
Feature-based methods are commonly used to explain model predictions, but these methods often implicitly assume that interpretable features are readily available. However, this is often not the case for high-dimensional data, and it can be hard even for domain experts to mathematically specify which features are important. Can we instead automatically extract collections or groups of features that are aligned with expert knowledge? To address this gap, we present FIX (Features Interpretable to eXperts), a benchmark for measuring how well a collection of features aligns with expert knowledge. In collaboration with domain experts, we propose FIXScore, a unified expert alignment measure applicable to diverse real-world settings in cosmology, psychology, and medicine, spanning vision, language, and time series data modalities. With FIXScore, we find that popular feature-based explanation methods have poor alignment with expert-specified knowledge, highlighting the need for new methods that can better identify features interpretable to experts.
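To make the idea of an expert alignment measure concrete, the sketch below shows one simple way such a score could be computed: each proposed feature group is matched to its best-overlapping expert-annotated group via intersection-over-union, and the overlaps are averaged. This is only a hypothetical illustration of the general notion described in the abstract; the function names and the IoU-based formula are assumptions, not the paper's actual FIXScore definition.

```python
# Hypothetical sketch (NOT the paper's actual FIXScore): score proposed feature
# groups against expert-annotated feature groups using best IoU overlap.
from typing import FrozenSet, Iterable, List


def group_alignment(group: FrozenSet[int], expert_groups: Iterable[FrozenSet[int]]) -> float:
    """Best intersection-over-union between one proposed group and any expert group."""
    best = 0.0
    for expert in expert_groups:
        union = group | expert
        if union:
            best = max(best, len(group & expert) / len(union))
    return best


def alignment_score(proposed: Iterable[FrozenSet[int]], expert_groups: List[FrozenSet[int]]) -> float:
    """Average best-overlap alignment across all proposed feature groups."""
    proposed = list(proposed)
    if not proposed:
        return 0.0
    return sum(group_alignment(g, expert_groups) for g in proposed) / len(proposed)


# Example: raw features (pixels, words, time steps) indexed 0..9,
# with two expert-annotated groups and two proposed groups.
expert = [frozenset({0, 1, 2}), frozenset({7, 8, 9})]
proposed = [frozenset({0, 1}), frozenset({4, 5, 6})]
print(alignment_score(proposed, expert))  # ~0.33: one group aligns well, the other not at all
```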
Cite
Text
Jin et al. "The FIX Benchmark: Extracting Features Interpretable to eXperts." Data-centric Machine Learning Research, 2025.
Markdown
[Jin et al. "The FIX Benchmark: Extracting Features Interpretable to eXperts." Data-centric Machine Learning Research, 2025.](https://mlanthology.org/dmlr/2025/jin2025dmlr-fix/)
BibTeX
@article{jin2025dmlr-fix,
  title   = {{The FIX Benchmark: Extracting Features Interpretable to eXperts}},
  author  = {Jin, Helen and Havaldar, Shreya and Kim, Chaehyeon and Xue, Anton and You, Weiqiu and Qu, Helen and Gatti, Marco and Hashimoto, Daniel A. and Jain, Bhuvnesh and Madani, Amin and Sako, Masao and Ungar, Lyle and Wong, Eric},
  journal = {Data-centric Machine Learning Research},
  year    = {2025},
  pages   = {1--43},
  volume  = {2},
  url     = {https://mlanthology.org/dmlr/2025/jin2025dmlr-fix/}
}