Sign-in to the Lottery: Reparameterizing Sparse Training
Abstract
The performance gap between training sparse neural networks from scratch (pruning at initialization, PaI) and dense-to-sparse training presents a major roadblock for efficient deep learning. According to the Lottery Ticket Hypothesis, PaI hinges on finding a problem-specific parameter initialization. As we show, determining the correct parameter signs is sufficient to this end. Yet, correct signs remain elusive to PaI. To address this issue, we propose Sign-In, which employs a dynamic reparameterization that provably induces sign flips. Such sign flips are complementary to the ones that dense-to-sparse training can accomplish, rendering Sign-In an orthogonal method. While our experiments and theory suggest performance improvements for PaI, they also carve out the main open challenge to close the gap between PaI and dense-to-sparse training.
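The abstract does not spell out the form of the dynamic reparameterization. As a rough illustration only, the sketch below assumes a hypothetical element-wise product form w = a ⊙ b under a fixed PaI mask; this is one standard way a reparameterization can let gradient descent flip the sign of a masked weight, since sign(w) = sign(a)·sign(b) changes whenever either factor crosses zero. The class name `ProductReparamLinear` and all details are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch, NOT the paper's method: we assume an element-wise
# product reparameterization w = a * b under a fixed sparsity mask, so the
# effective sign of each weight can flip when either factor crosses zero.
import torch
import torch.nn as nn


class ProductReparamLinear(nn.Module):
    """Sparse linear layer with weights reparameterized as w = a * b (hypothetical)."""

    def __init__(self, in_features: int, out_features: int, mask: torch.Tensor):
        super().__init__()
        # Two trainable factors replace the single weight tensor.
        self.a = nn.Parameter(0.1 * torch.randn(out_features, in_features))
        self.b = nn.Parameter(0.1 * torch.randn(out_features, in_features))
        # Fixed binary sparsity pattern, as in pruning at initialization (PaI).
        self.register_buffer("mask", mask)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight: sign(w) = sign(a) * sign(b), so gradient descent
        # on (a, b) can change the sign of w even though the mask is fixed.
        w = self.a * self.b * self.mask
        return nn.functional.linear(x, w, self.bias)
```

Under a direct parameterization, a masked weight must itself cross zero to change sign, which plain sparse training rarely achieves; the point of a product-style reparameterization is that the gradient dynamics of the factors make such crossings far more likely, matching the abstract's claim that the reparameterization provably induces sign flips.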
Cite
Text
Gadhikar et al. "Sign-in to the Lottery: Reparameterizing Sparse Training." Advances in Neural Information Processing Systems, 2025.

Markdown

[Gadhikar et al. "Sign-in to the Lottery: Reparameterizing Sparse Training." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/gadhikar2025neurips-signin/)

BibTeX
@inproceedings{gadhikar2025neurips-signin,
  title = {{Sign-in to the Lottery: Reparameterizing Sparse Training}},
  author = {Gadhikar, Advait and Jacobs, Tom and Zhou, Chao and Burkholz, Rebekka},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2025},
  url = {https://mlanthology.org/neurips/2025/gadhikar2025neurips-signin/}
}