Non-Parametric Classification via Expand-and-Sparsify Representation
Abstract
In the *expand-and-sparsify* (EaS) representation, a data point in $\mathcal{S}^{d-1}$ is first randomly mapped to a higher-dimensional space $\mathbb{R}^m$, where $m>d$, followed by a sparsification operation in which the $k \ll m$ most informative of the $m$ coordinates are set to one and the rest to zero. We propose two algorithms for non-parametric classification using such EaS representations. For our first algorithm, we use the *winners-take-all* operation for the sparsification step, show that the proposed classifier admits the form of a locally weighted average classifier, and establish its consistency via Stone's Theorem. Further, assuming that the conditional probability function $P(y=1|x)=\eta(x)$ is Hölder continuous, we show that for an optimal choice of $m$ the convergence rate of this classifier is minimax-optimal. For our second algorithm, we use the *empirical $k$-thresholding* operation for the sparsification step and, under the assumption that the data lie on a low-dimensional manifold of dimension $d_0\ll d$, show that the convergence rate of this classifier depends only on $d_0$ and is again minimax-optimal. Empirical evaluations on real-world datasets corroborate our theoretical results.
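To make the pipeline in the abstract concrete, here is a minimal Python sketch of both sparsification variants and of the resulting locally weighted average classifier. The Gaussian expansion matrix `W`, the code-overlap weights in `predict`, and the quantile-based thresholds in `fit_thresholds` are illustrative assumptions for this sketch, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def eas_wta(x, W, k):
    """Expand-and-sparsify with winners-take-all sparsification:
    expand x from R^d to R^m via the random map W, then set the
    k largest coordinates to one and the rest to zero."""
    a = W @ x                                # expansion step
    z = np.zeros(W.shape[0])
    z[np.argpartition(a, -k)[-k:]] = 1.0     # exactly k ones
    return z

def fit_thresholds(X, W, k):
    """Illustrative per-coordinate thresholds at the (1 - k/m)-quantile
    of the empirical activations, so each coordinate fires at rate ~k/m."""
    A = X @ W.T                              # (n, m) activations
    return np.quantile(A, 1.0 - k / W.shape[0], axis=0)

def eas_threshold(x, W, thresholds):
    """Empirical k-thresholding variant (as assumed here): a coordinate
    fires when its activation exceeds its data-driven threshold, so
    roughly k of the m coordinates are active on average."""
    return (W @ x > thresholds).astype(float)

def predict(x, X_train, y_train, W, k):
    """Locally weighted average classifier: weight each training label by
    the overlap of its binary EaS code with the query's code, then
    threshold the weighted estimate of eta(x) = P(y=1|x) at 1/2."""
    z = eas_wta(x, W, k)
    Z = np.stack([eas_wta(xi, W, k) for xi in X_train])
    w = Z @ z                                # code overlaps as local weights
    eta_hat = w @ y_train / max(w.sum(), 1.0)
    return int(eta_hat > 0.5)

# Toy usage: points on S^{d-1}, with m > d and k << m.
d, m, k, n = 10, 200, 8, 100
W = rng.standard_normal((m, d))              # random expansion map
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # place data on the sphere
y = (X[:, 0] > 0).astype(float)              # a simple planted label rule
print(predict(X[0], X, y, W, k))

thr = fit_thresholds(X, W, k)
z_thr = eas_threshold(X[0], W, thr)          # alternative sparsification
```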
Cite
Text
Sinha. "Non-Parametric Classification via Expand-and-Sparsify Representation." Neural Information Processing Systems, 2024. doi:10.52202/079017-0933
Markdown
[Sinha. "Non-Parametric Classification via Expand-and-Sparsify Representation." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/sinha2024neurips-nonparametric/) doi:10.52202/079017-0933
BibTeX
@inproceedings{sinha2024neurips-nonparametric,
  title     = {{Non-Parametric Classification via Expand-and-Sparsify Representation}},
  author    = {Sinha, Kaushik},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0933},
  url       = {https://mlanthology.org/neurips/2024/sinha2024neurips-nonparametric/}
}