Learning Curves Theory for Hierarchically Compositional Data with Power-Law Distributed Features
Abstract
Recent theories suggest that Neural Scaling Laws arise whenever the task is linearly decomposed into units that are power-law distributed. Alternatively, scaling laws also emerge when data exhibit a hierarchically compositional structure, as is thought to occur in language and images. To unify these views, we consider classification and next-token prediction tasks based on probabilistic context-free grammars—probabilistic models that generate data via a hierarchy of production rules. For classification, we show that having power-law distributed production rules results in a power-law learning curve with an exponent depending on the rules’ distribution and a large multiplicative constant that depends on the hierarchical structure. By contrast, for next-token prediction, the distribution of production rules controls the fine details of the learning curve, but not the exponent describing the large-scale behaviour.
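The data model described in the abstract can be illustrated with a minimal sketch: a depth-L probabilistic context-free grammar whose production rules are drawn from a power-law (Zipf-like) distribution. All symbol counts, the depth, and the exponent below are hypothetical illustrative choices, not values or code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

v = 8        # vocabulary size per level (hypothetical)
m = 4        # number of production rules per nonterminal (hypothetical)
s = 2        # branching factor: each symbol expands into s symbols
L = 3        # depth of the hierarchy (hypothetical)
alpha = 1.5  # power-law exponent of the rule distribution (hypothetical)

# Rule probabilities ~ k^(-alpha), normalised (Zipf-like distribution).
rule_probs = np.arange(1, m + 1, dtype=float) ** (-alpha)
rule_probs /= rule_probs.sum()

# For every level and every symbol, fix m random expansions of length s.
rules = {
    (level, sym): rng.integers(0, v, size=(m, s))
    for level in range(L) for sym in range(v)
}

def generate(symbol: int, level: int = 0) -> list[int]:
    """Expand `symbol` down the hierarchy and return the leaf-level sequence."""
    if level == L:
        return [symbol]
    k = rng.choice(m, p=rule_probs)       # pick a production rule (power law)
    children = rules[(level, symbol)][k]  # its s children at the next level
    out = []
    for child in children:
        out.extend(generate(int(child), level + 1))
    return out

# A sequence of length s**L generated from root symbol 0.
print(generate(0))
```

In this sketch, classification would amount to predicting the root symbol from the leaf sequence, and next-token prediction to predicting the final leaf from the preceding ones; the power-law over `rule_probs` is the ingredient whose effect on the learning curve the paper analyses.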
Cite
Text
Cagnetta et al. "Learning Curves Theory for Hierarchically Compositional Data with Power-Law Distributed Features." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Cagnetta et al. "Learning Curves Theory for Hierarchically Compositional Data with Power-Law Distributed Features." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/cagnetta2025icml-learning/)
BibTeX
@inproceedings{cagnetta2025icml-learning,
title = {{Learning Curves Theory for Hierarchically Compositional Data with Power-Law Distributed Features}},
author = {Cagnetta, Francesco and Kang, Hyunmo and Wyart, Matthieu},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {6149--6164},
volume = {267},
url = {https://mlanthology.org/icml/2025/cagnetta2025icml-learning/}
}