Improving Deep Regression with Tightness
Abstract
For deep regression, preserving the ordinality of the targets with respect to the feature representation improves performance across various tasks. However, a theoretical explanation for the benefits of ordinality is still lacking. This work shows that preserving ordinality reduces the conditional entropy $H(Z|Y)$ of the representation $Z$ given the target $Y$. We further find that typical regression losses fail to sufficiently reduce $H(Z|Y)$, despite its crucial role in generalization performance. Motivated by this, we introduce an optimal transport-based regularizer that preserves the similarity relationships of the targets in the feature space, thereby reducing $H(Z|Y)$. Additionally, we introduce a simple yet efficient strategy of duplicating the regressor targets, also with the aim of reducing $H(Z|Y)$. Experiments on three real-world regression tasks verify the effectiveness of our strategies in improving deep regression. Code: https://github.com/needylove/Regression_tightness
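The abstract only names the regularizer at a high level; the paper's actual formulation lives in the linked repository. Below is a minimal sketch, assuming a PyTorch training loop, of one way a similarity-preserving, OT-weighted penalty could look: an entropic (Sinkhorn) transport plan is built from pairwise target distances and used to weight pairwise feature distances, so that features of samples with similar targets are pulled together. All function names, hyperparameters, and design choices here are illustrative assumptions, not the authors' implementation.

```python
import torch

def sinkhorn_plan(cost, eps=0.1, n_iters=50):
    """Entropic-regularized OT plan between two uniform marginals (assumed setup)."""
    B = cost.size(0)
    K = torch.exp(-cost / eps)                         # Gibbs kernel
    a = torch.full((B,), 1.0 / B, device=cost.device)  # uniform source marginal
    b = torch.full((B,), 1.0 / B, device=cost.device)  # uniform target marginal
    u = torch.ones(B, device=cost.device)
    v = torch.ones(B, device=cost.device)
    for _ in range(n_iters):                           # Sinkhorn iterations
        u = a / (K @ v + 1e-9)
        v = b / (K.t() @ u + 1e-9)
    return u.unsqueeze(1) * K * v.unsqueeze(0)         # transport plan (B x B)

def similarity_preserving_penalty(z, y, eps=0.1):
    """Weight pairwise feature distances by an OT plan derived from target distances.

    z: features, shape (B, D); y: regression targets, shape (B, 1).
    The plan places more mass on pairs with small target distance, so the penalty
    pulls together the features of samples whose targets are similar.
    """
    dz = torch.cdist(z, z)            # pairwise feature distances
    dy = torch.cdist(y, y)            # pairwise target distances, used as OT cost
    plan = sinkhorn_plan(dy, eps=eps)
    return (plan * dz).sum()

# Hypothetical usage inside a training step:
# loss = mse_loss(pred, y) + lambda_reg * similarity_preserving_penalty(z, y)
```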
Cite
Text
Zhang et al. "Improving Deep Regression with Tightness." International Conference on Learning Representations, 2025.
Markdown
[Zhang et al. "Improving Deep Regression with Tightness." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/zhang2025iclr-improving/)
BibTeX
@inproceedings{zhang2025iclr-improving,
title = {{Improving Deep Regression with Tightness}},
author = {Zhang, Shihao and Yan, Yuguang and Yao, Angela},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/zhang2025iclr-improving/}
}