Learning Constant-Depth Circuits in Malicious Noise Models
Abstract
The seminal work of Linial, Mansour, and Nisan gave a quasipolynomial-time algorithm for learning constant-depth circuits ($\mathsf{AC}^0$) with respect to the uniform distribution on the hypercube. Extending their algorithm to the setting of malicious noise, where both covariates and labels can be adversarially corrupted, has remained open. Here we achieve such a result, inspired by recent work on learning with distribution shift. Our running time essentially matches that of their algorithm, which is known to be optimal assuming various cryptographic primitives. Our proof uses a simple outlier-removal method combined with Braverman's theorem for fooling constant-depth circuits. We attain the best possible dependence on the noise rate and succeed in the harshest possible noise model (i.e., contamination or so-called "nasty noise").
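The Linial–Mansour–Nisan approach the abstract builds on can be sketched as follows: estimate every low-degree Fourier coefficient of the target from uniform examples, then predict with the sign of the resulting low-degree polynomial. This is a minimal illustrative sketch (the function names and parameters are my own); it omits the outlier-removal step the paper adds to handle malicious noise.

```python
import itertools
import random


def chi(S, x):
    """Parity character chi_S(x) = prod_{i in S} x_i, for x in {-1, +1}^n."""
    p = 1
    for i in S:
        p *= x[i]
    return p


def learn_low_degree(samples, n, d):
    """LMN-style low-degree algorithm (sketch): from labeled uniform examples
    over {-1, +1}^n, estimate each Fourier coefficient of degree <= d by its
    empirical average, then output the sign of the low-degree polynomial."""
    coeffs = {}
    for k in range(d + 1):
        for S in itertools.combinations(range(n), k):
            coeffs[S] = sum(y * chi(S, x) for x, y in samples) / len(samples)

    def hypothesis(x):
        val = sum(c * chi(S, x) for S, c in coeffs.items())
        return 1 if val >= 0 else -1

    return hypothesis


# Usage: learn a degree-1 target (a single coordinate) from uniform samples.
random.seed(0)
n, d = 4, 1
target = lambda x: x[0]
samples = [
    (tuple(random.choice([-1, 1]) for _ in range(n)),) * 0 or None
    for _ in range(0)
]  # placeholder removed below
samples = []
for _ in range(200):
    x = tuple(random.choice([-1, 1]) for _ in range(n))
    samples.append((x, target(x)))
h = learn_low_degree(samples, n, d)
```

The quasipolynomial running time in the LMN result comes from taking $d = \mathrm{polylog}(n)$, so the number of coefficient estimates is $n^{O(d)}$; the sketch above enumerates them directly via `itertools.combinations`.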
Cite
Klivans et al. "Learning Constant-Depth Circuits in Malicious Noise Models." Proceedings of Thirty Eighth Conference on Learning Theory, 2025.
@inproceedings{klivans2025colt-learning,
title = {{Learning Constant-Depth Circuits in Malicious Noise Models}},
author = {Klivans, Adam and Stavropoulos, Konstantinos and Vasilyan, Arsen},
booktitle = {Proceedings of Thirty Eighth Conference on Learning Theory},
year = {2025},
pages = {3253--3263},
volume = {291},
url = {https://mlanthology.org/colt/2025/klivans2025colt-learning/}
}