Learning Polynomial Problems with SL(2)-Equivariance
Abstract
We introduce a set of polynomial learning problems that are equivariant to the non-compact group $SL(2,\mathbb{R})$. The group $SL(2,\mathbb{R})$ consists of the area-preserving linear transformations of the plane, and it captures the symmetries of a variety of polynomial-based problems not previously studied in the machine learning community, such as verifying positivity (as in, e.g., sum-of-squares optimization) and minimization. While compact groups admit many architectural building blocks, such as group convolutions, non-compact groups do not fit within this paradigm and are therefore more challenging. We consider several equivariance-based learning approaches for solving polynomial problems, including both data augmentation and a fully $SL(2,\mathbb{R})$-equivariant architecture. In experiments, we broadly demonstrate that machine learning provides a promising alternative to traditional SDP-based baselines, achieving tenfold speedups while retaining high accuracy. Surprisingly, the most successful approaches incorporate only a well-conditioned subset of $SL(2,\mathbb{R})$, rather than the entire group. This provides a rare example of a symmetric problem where data augmentation outperforms full equivariance, and offers interesting lessons for other problems with non-compact symmetries.
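To make the data-augmentation approach concrete, here is a minimal sketch of $SL(2,\mathbb{R})$ augmentation for homogeneous polynomials in two variables (binary forms). It samples a well-conditioned group element via the Iwasawa decomposition, keeping the non-compact factors bounded as the abstract suggests, and pulls a form's coefficient vector back along that element. The function names (`random_sl2`, `act`), coefficient convention, and the cutoffs `tau`/`sigma` are our own illustrative assumptions, not details from the paper.

```python
import numpy as np
from math import comb

def random_sl2(tau=0.5, sigma=0.5, rng=None):
    """Sample a well-conditioned element of SL(2, R) via the Iwasawa
    decomposition A = K @ A_t @ N_s. The rotation K is the compact part;
    bounding t and s bounds the condition number. (Cutoffs are
    illustrative, not taken from the paper.)"""
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.uniform(0.0, 2.0 * np.pi)   # compact rotation part
    t = rng.uniform(-tau, tau)              # bounded diagonal scaling
    s = rng.uniform(-sigma, sigma)          # bounded shear
    K = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    A_t = np.diag([np.exp(t), np.exp(-t)])
    N_s = np.array([[1.0, s], [0.0, 1.0]])
    return K @ A_t @ N_s                    # det = 1 by construction

def act(coeffs, A):
    """Pull back a binary form p(x, y) = sum_k coeffs[k] * x^(d-k) * y^k
    along A, returning the coefficients of p(a*x + b*y, c*x + e*y)."""
    d = len(coeffs) - 1
    (a, b), (c, e) = A
    out = np.zeros(d + 1)
    for k, ck in enumerate(coeffs):
        # Expand (a*x + b*y)^(d-k) and (c*x + e*y)^k via the binomial
        # theorem, then convolve the two coefficient vectors.
        u = np.array([comb(d - k, i) * a**(d - k - i) * b**i
                      for i in range(d - k + 1)])
        v = np.array([comb(k, j) * c**(k - j) * e**j
                      for j in range(k + 1)])
        out += ck * np.convolve(u, v)
    return out
```

Because positivity of a form is invariant under any invertible linear change of variables, `act(coeffs, random_sl2())` yields a new training example with the same positivity label, which is what makes this augmentation scheme label-preserving.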
Cite
Text
Lawrence and Harris. "Learning Polynomial Problems with SL(2)-Equivariance." ICML 2023 Workshops: TAGML, 2023.
Markdown
[Lawrence and Harris. "Learning Polynomial Problems with SL(2)-Equivariance." ICML 2023 Workshops: TAGML, 2023.](https://mlanthology.org/icmlw/2023/lawrence2023icmlw-learning/)
BibTeX
@inproceedings{lawrence2023icmlw-learning,
title = {{Learning Polynomial Problems with SL(2)-Equivariance}},
author = {Lawrence, Hannah and Harris, Mitchell Tong},
booktitle = {ICML 2023 Workshops: TAGML},
year = {2023},
url = {https://mlanthology.org/icmlw/2023/lawrence2023icmlw-learning/}
}