HARD: Hyperplane ARrangement Descent

Abstract

The problem of clustering points on a union of subspaces finds numerous applications in machine learning and computer vision, and it has been extensively studied over the past two decades. When the subspaces are low-dimensional, the problem can be formulated as a convex sparse optimization problem, for which numerous accurate, efficient and robust methods exist. When the subspaces are of high relative dimension (e.g., hyperplanes), the problem is intrinsically non-convex, and existing methods either lack theory, are computationally costly, lack robustness to outliers, or learn hyperplanes one at a time. In this paper, we propose Hyperplane ARrangement Descent (HARD), a method that robustly learns all the hyperplanes simultaneously by solving a novel non-convex non-smooth $\ell_1$ minimization problem. We provide geometric conditions under which the ground-truth hyperplane arrangement is a coordinate-wise minimizer of our objective. Furthermore, we devise efficient algorithms and give conditions under which they converge to coordinate-wise minimizers. We provide empirical evidence that HARD surpasses state-of-the-art methods, and we further present an experiment on clustering deep features on CIFAR-10.
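
To give a sense of what such an objective might look like, the following display is an illustrative reconstruction based on the abstract's description (simultaneous hyperplane learning via non-convex non-smooth $\ell_1$ minimization), not a formula quoted from the paper: given data points $x_1, \dots, x_N \in \mathbb{R}^D$ and unit normal vectors $b_1, \dots, b_K$ parameterizing $K$ candidate hyperplanes through the origin, a natural simultaneous $\ell_1$ formulation is

$$\min_{b_1, \dots, b_K \in \mathbb{S}^{D-1}} \; \sum_{j=1}^{N} \; \min_{k \in \{1, \dots, K\}} \bigl| b_k^\top x_j \bigr|,$$

where the inner minimum assigns each point to its nearest hyperplane. A problem of this form is non-convex (because of the sphere constraints and the inner minimum) and non-smooth (because of the absolute value), and its block structure over the normals $b_1, \dots, b_K$ suggests why the analysis targets coordinate-wise minimizers.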

Cite

Text

Ding et al. "HARD: Hyperplane ARrangement Descent." Conference on Parsimony and Learning, 2024.

Markdown

[Ding et al. "HARD: Hyperplane ARrangement Descent." Conference on Parsimony and Learning, 2024.](https://mlanthology.org/cpal/2024/ding2024cpal-hard/)

BibTeX

@inproceedings{ding2024cpal-hard,
  title     = {{HARD: Hyperplane ARrangement Descent}},
  author    = {Ding, Tianjiao and Peng, Liangzu and Vidal, Ren{\'e}},
  booktitle = {Conference on Parsimony and Learning},
  year      = {2024},
  pages     = {134--158},
  volume    = {234},
  url       = {https://mlanthology.org/cpal/2024/ding2024cpal-hard/}
}