Harmonic Loss Trains Interpretable AI Models
Abstract
In this paper, we introduce harmonic loss as an alternative supervisory signal for training neural networks and large language models (LLMs). Harmonic loss differs from standard cross-entropy loss in two ways: (a) it replaces the usual SoftMax normalization with a scale-invariant HarMax function, and (b) it computes logits via Euclidean distance rather than a dot product. Harmonic loss enables improved interpretability and faster convergence, owing to its scale invariance and a finite convergence point that, by design, can be interpreted as a class center. We first validate the performance of harmonic models across algorithmic, vision, and language datasets. Through extensive experiments, we demonstrate that models trained with harmonic loss outperform standard models by (a) enhancing interpretability (i.e., the geometry of representations), (b) requiring less data for generalization, and (c) reducing grokking. Moreover, we compare a GPT-2 model trained with harmonic loss to the standard GPT-2, showing that the harmonic model develops more interpretable representations. We hope our work will inspire future research on methods that improve the geometry of representations, paving the way toward more interpretable AI models.
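The abstract's two ingredients, distance-based logits and a scale-invariant HarMax normalization, can be made concrete with a short sketch. Below is a minimal PyTorch implementation assuming the formulation d_j = ||x - w_j||_2 with class probabilities p_j ∝ 1/d_j^n; the exponent n (and its default value) and the eps guard are illustrative assumptions, not details fixed by the abstract.

```python
import torch

def harmonic_logits(x, w):
    """Distance-based 'logits': d[i, j] = ||x_i - w_j||_2.

    x: (batch, dim) input embeddings; w: (num_classes, dim) class centers.
    Smaller distance means stronger class evidence, replacing x @ w.T.
    """
    return torch.cdist(x, w)

def harmonic_loss(x, w, targets, n=2.0, eps=1e-9):
    """Negative log-likelihood under a HarMax-style rule: p_j ∝ 1 / d_j**n.

    n (a 'harmonic exponent') and eps are assumed, illustrative choices.
    Rescaling all distances by a constant multiplies every 1/d**n by the
    same factor, so the normalized probabilities are scale-invariant.
    """
    d = harmonic_logits(x, w).clamp_min(eps)       # guard the d -> 0 limit
    log_p = -n * torch.log(d)                      # unnormalized log-probs
    log_p = log_p - torch.logsumexp(log_p, dim=-1, keepdim=True)
    return -log_p.gather(1, targets.unsqueeze(1)).mean()

# Toy usage: 8 samples, 16-dim embeddings, 5 classes.
x = torch.randn(8, 16)
w = torch.randn(5, 16, requires_grad=True)
targets = torch.randint(0, 5, (8,))
loss = harmonic_loss(x, w, targets)
loss.backward()    # gradients pull each class center toward its samples
```

This also illustrates the finite convergence point the abstract mentions: the per-sample loss is minimized when an embedding coincides with its class center (d → 0), so training drives representations toward a concrete, interpretable target, whereas cross-entropy over dot-product logits can always be reduced further by growing the logits without bound.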
Cite

Baek et al. "Harmonic Loss Trains Interpretable AI Models." Transactions on Machine Learning Research, 2025. https://mlanthology.org/tmlr/2025/baek2025tmlr-harmonic/

BibTeX:
@article{baek2025tmlr-harmonic,
title = {{Harmonic Loss Trains Interpretable AI Models}},
author = {Baek, David D. and Liu, Ziming and Tyagi, Riya and Tegmark, Max},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/baek2025tmlr-harmonic/}
}