A Renormalization Group Framework for Scale-Invariant Feature Learning in Deep Neural Networks (Student Abstract)
Abstract
We propose a framework that uses renormalization group (RG) theory from statistical physics to analyze and optimize hierarchical feature learning in deep neural networks. In this view, the layer-wise transformations of a deep network are analogous to RG transformations, with each layer implementing a coarse-graining operation that extracts increasingly abstract features. We present an approach for enforcing scale invariance in neural networks, introduce scale-aware activation functions, and derive RG flow equations for the network parameters. We show that this approach yields fixed points corresponding to scale-invariant feature representations. Finally, we propose an RG-guided training procedure that converges to these fixed points while minimizing the loss function.
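The abstract does not spell out the form of the coarse-graining step or the scale-aware activations, so the following is only a minimal sketch of the layer-as-RG-step analogy under stated assumptions: one "RG step" is taken to be block averaging (analogous to block-spin decimation) followed by a positively homogeneous nonlinearity, which makes the feature map scale-equivariant, f(λx) = λf(x) for λ > 0. The function names and the specific activation are hypothetical choices for illustration, not the paper's implementation.

```python
# Minimal sketch (assumption, not the paper's method): one layer as an RG-style
# coarse-graining step plus a degree-1 positively homogeneous activation, with a
# numerical check that the resulting feature map is scale-equivariant.
import numpy as np


def coarse_grain(x: np.ndarray, block: int = 2) -> np.ndarray:
    """Block-average a 1-D signal, halving its resolution (RG-style decimation)."""
    n = (len(x) // block) * block
    return x[:n].reshape(-1, block).mean(axis=1)


def homogeneous_activation(x: np.ndarray) -> np.ndarray:
    """A degree-1 positively homogeneous nonlinearity (a leaky rectifier here),
    so that act(lam * x) == lam * act(x) for any lam > 0."""
    return np.where(x > 0, x, 0.1 * x)


def rg_layer(x: np.ndarray) -> np.ndarray:
    """One illustrative 'RG step': coarse-grain, then apply the activation."""
    return homogeneous_activation(coarse_grain(x))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(64)
    lam = 3.7  # arbitrary positive rescaling of the input
    deviation = np.max(np.abs(rg_layer(lam * x) - lam * rg_layer(x)))
    print("max deviation from scale-equivariance:", deviation)  # ~1e-16
```

Because block averaging is linear and the rectifier is positively homogeneous, rescaling the input by λ rescales the layer output by exactly λ; stacking such layers preserves the property, which is one way to read the abstract's claim about scale-invariant feature representations.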
Cite

Text

Liaw. "A Renormalization Group Framework for Scale-Invariant Feature Learning in Deep Neural Networks (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I28.35269

Markdown

[Liaw. "A Renormalization Group Framework for Scale-Invariant Feature Learning in Deep Neural Networks (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/liaw2025aaai-renormalization/) doi:10.1609/AAAI.V39I28.35269

BibTeX
@inproceedings{liaw2025aaai-renormalization,
title = {{A Renormalization Group Framework for Scale-Invariant Feature Learning in Deep Neural Networks (Student Abstract)}},
author = {Liaw, Sarah},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {29410-29411},
doi = {10.1609/AAAI.V39I28.35269},
url = {https://mlanthology.org/aaai/2025/liaw2025aaai-renormalization/}
}