Algorithms for Sparse LPN and LSPN Against Low-Noise (extended Abstract)
Abstract
We consider sparse variants of the classical Learning Parities with Noise (LPN) problem. Our main contribution is a new algorithmic framework that provides learning algorithms against low noise for both the Learning Sparse Parities with Noise (LSPN) problem and the sparse LPN problem. Unlike previous approaches for LSPN and sparse LPN, this framework has a simple structure without fast matrix multiplication or tensor methods, so its algorithms are easy to implement and run in polynomial space. Let $n$ be the dimension, $k$ the sparsity, and $\eta$ the noise rate, so that each label is flipped with probability $\eta$.

As a fundamental problem in computational learning theory, Learning Sparse Parities with Noise (LSPN) assumes that the hidden parity is $k$-sparse rather than a potentially dense vector. While the simple enumeration algorithm takes ${n \choose k}=O(n/k)^k$ time, previously known results still require at least ${n \choose k/2} = \Omega(n/k)^{k/2}$ time for any noise rate $\eta$. Our framework provides an LSPN algorithm that runs in time $O(\eta \cdot n/k)^k$ for any noise rate $\eta$, which improves the state of the art for LSPN whenever $\eta \in (k/n,\sqrt{k/n})$.

The sparse LPN problem is closely related to the classical problem of refuting random $k$-CSPs and has been widely used in cryptography as a hardness assumption. Unlike standard LPN, which samples random vectors in $\mathbf{F}_2^n$, it samples random $k$-sparse vectors. Because the number of $k$-sparse vectors is ${n \choose k}<n^k$, sparse LPN admits polynomial-time learning algorithms when the number of samples is $m>n^{k/2}$. However, much less is known about learning algorithms for a constant $k$, such as 3, with $m<n^{k/2}$ samples, beyond the Gaussian elimination algorithm and sum-of-squares algorithms. Our framework provides a learning algorithm running in time $e^{\tilde{O}(\eta \cdot n^{\frac{\delta+1}{2}})}$ for any $\delta \in (0,1)$, given $m=\max\{1,\frac{\eta \cdot n^{\frac{\delta+1}{2}}}{k^2}\} \cdot n^{1+(1-\delta)\cdot \frac{k-1}{2}}$ samples. This improves upon previous learning algorithms: for example, in the classical setting of $k=3$ and $m=n^{1.4}$, our algorithm is faster than previous approaches for any $\eta<n^{-0.7}$.
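As an informal illustration of how these bounds compare, the following Python sketch (not code from the paper; the concrete values of $n$, $k$, and $\eta$ are hypothetical) evaluates the logarithms of the enumeration bound $(n/k)^k$, the previous bound $(n/k)^{k/2}$, and the new LSPN bound $O(\eta \cdot n/k)^k$, and recovers $\delta = 0.6$ for the $k=3$, $m=n^{1.4}$ sparse LPN example.

```python
# Illustrative sketch (not from the paper): numerically compare the exponents
# of the running-time bounds quoted in the abstract. The values of n, k, and
# eta below are hypothetical and chosen only for display.
import math


def log_enumeration_time(n: int, k: int) -> float:
    """Logarithm of the brute-force enumeration bound (n/k)^k."""
    return k * math.log(n / k)


def log_previous_lspn_time(n: int, k: int) -> float:
    """Logarithm of the previously best-known LSPN bound (n/k)^(k/2)."""
    return (k / 2) * math.log(n / k)


def log_new_lspn_time(n: int, k: int, eta: float) -> float:
    """Logarithm of the new LSPN bound O(eta * n / k)^k (constants ignored)."""
    return k * math.log(eta * n / k)


def sparse_lpn_delta(k: int, m_exponent: float) -> float:
    """Solve m = n^{1 + (1 - delta)(k - 1)/2} for delta, assuming the
    max{...} factor in the abstract's sample bound equals 1."""
    return 1 - 2 * (m_exponent - 1) / (k - 1)


n, k = 10**6, 10
for eta in (k / n, (k / n) ** 0.75, (k / n) ** 0.5):
    # The abstract claims an improvement exactly when eta lies strictly
    # between k/n and sqrt(k/n); the two endpoints recover the trivial bound
    # and the previous (n/k)^(k/2) bound, respectively.
    print(f"eta = {eta:.2e}:"
          f" enumeration exponent = {log_enumeration_time(n, k):6.1f},"
          f" previous exponent = {log_previous_lspn_time(n, k):6.1f},"
          f" new exponent = {log_new_lspn_time(n, k, eta):6.1f}")

# Sparse LPN example from the abstract: k = 3 and m = n^{1.4} correspond to
# delta = 0.6, i.e. a running time of exp(O~(eta * n^{0.8})).
print("delta for k = 3, m = n^1.4:", sparse_lpn_delta(3, 1.4))
```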
Cite
Text
Chen et al. "Algorithms for Sparse LPN and LSPN Against Low-Noise (extended Abstract)." Proceedings of Thirty Eighth Conference on Learning Theory, 2025.Markdown
[Chen et al. "Algorithms for Sparse LPN and LSPN Against Low-Noise (extended Abstract)." Proceedings of Thirty Eighth Conference on Learning Theory, 2025.](https://mlanthology.org/colt/2025/chen2025colt-algorithms/)BibTeX
@inproceedings{chen2025colt-algorithms,
title = {{Algorithms for Sparse LPN and LSPN Against Low-Noise (extended Abstract)}},
author = {Chen, Xue and Shu, Wenxuan and Zhou, Zhaienhe},
booktitle = {Proceedings of Thirty Eighth Conference on Learning Theory},
year = {2025},
pages = {1091--1093},
volume = {291},
url = {https://mlanthology.org/colt/2025/chen2025colt-algorithms/}
}