A Unifying Information-Theoretic Perspective on Evaluating Generative Models

Abstract

Given the difficulty of interpreting generative model output, there is significant current research on designing meaningful evaluation metrics. Several recent approaches utilize "precision" and "recall," borrowed from the classification domain, to quantify output fidelity (realism) and output diversity (coverage of the real data's variation), respectively. With the growing number of metric proposals, a unifying perspective is needed to allow easier comparison and clearer explanation of each metric's benefits and drawbacks. To this end, we unify a class of kth-nearest-neighbor (kNN)-based metrics under an information-theoretic lens using approaches from kNN density estimation. Additionally, we propose a tri-dimensional metric composed of Precision Cross-Entropy (PCE), Recall Cross-Entropy (RCE), and Recall Entropy (RE), which separately measure fidelity and two distinct aspects of diversity: inter-class and intra-class. Our domain-agnostic metric, derived from the information-theoretic concepts of entropy and cross-entropy, can be dissected for both sample- and mode-level analysis. Detailed experimental results demonstrate the sensitivity of each metric component to its respective quality and reveal undesirable behaviors of other metrics.
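The kNN density-estimation machinery the abstract refers to can be illustrated with the classical Kozachenko–Leonenko kNN entropy estimator and its cross-entropy analog. The sketch below is a generic illustration of these estimators, not the paper's exact PCE/RCE/RE definitions; the function names and the choice of k are ours.

```python
import numpy as np
from scipy.special import digamma, gammaln

def _log_unit_ball_volume(d):
    # log volume of the unit ball in R^d: pi^(d/2) / Gamma(d/2 + 1)
    return (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)

def knn_entropy(x, k=5):
    """Kozachenko-Leonenko kNN estimate of H(p) from samples x ~ p (in nats)."""
    n, d = x.shape
    dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # exclude each point from its own neighbors
    r_k = np.sort(dists, axis=1)[:, k - 1]   # distance to k-th nearest neighbor in x
    return digamma(n) - digamma(k) + _log_unit_ball_volume(d) + (d / n) * np.log(r_k).sum()

def knn_cross_entropy(x, y, k=5):
    """kNN estimate of the cross-entropy H(p, q) from x ~ p and y ~ q (in nats)."""
    n, d = x.shape
    m = y.shape[0]
    dists = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    s_k = np.sort(dists, axis=1)[:, k - 1]   # distance to k-th nearest neighbor in y
    return np.log(m) - digamma(k) + _log_unit_ball_volume(d) + (d / n) * np.log(s_k).sum()

rng = np.random.default_rng(0)
x = rng.standard_normal((2000, 2))
y = rng.standard_normal((2000, 2))
# For a 2-D standard Gaussian the true entropy is log(2*pi*e) ~ 2.84 nats,
# and since p = q here the cross-entropy should come out close to the entropy.
print(knn_entropy(x), knn_cross_entropy(x, y))
```

In this framing, a fidelity (precision-like) score penalizes generated samples that land in low-density regions of the real data, while recall-like scores penalize real samples that are poorly covered by the generated distribution; both reduce to cross-entropy-style quantities estimated with kNN distances as above.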

Cite

Text

Fox et al. "A Unifying Information-Theoretic Perspective on Evaluating Generative Models." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I16.33827

Markdown

[Fox et al. "A Unifying Information-Theoretic Perspective on Evaluating Generative Models." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/fox2025aaai-unifying/) doi:10.1609/AAAI.V39I16.33827

BibTeX

@inproceedings{fox2025aaai-unifying,
  title     = {{A Unifying Information-Theoretic Perspective on Evaluating Generative Models}},
  author    = {Fox, Alexis and Swarup, Samarth and Adiga, Abhijin},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {16630--16638},
  doi       = {10.1609/AAAI.V39I16.33827},
  url       = {https://mlanthology.org/aaai/2025/fox2025aaai-unifying/}
}