Beyond L1: Faster and Better Sparse Models with Skglm

Abstract

We propose a new fast algorithm to estimate any sparse generalized linear model with convex or non-convex separable penalties. Our algorithm is able to solve problems with millions of samples and features in seconds, by relying on coordinate descent, working sets, and Anderson acceleration. It handles previously unaddressed models, and is shown through extensive experiments to improve on state-of-the-art algorithms. We provide a flexible, scikit-learn-compatible package, which easily handles customized datafits and penalties.
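Since the abstract highlights that the package combines arbitrary datafits and penalties behind a scikit-learn interface, a minimal usage sketch may help. The class names below (GeneralizedLinearEstimator, Quadratic, MCPenalty) follow the skglm documentation; exact constructor signatures may differ across package versions, so treat this as illustrative rather than definitive.

# Minimal sketch: fit a sparse linear model with a non-convex MCP penalty
# using skglm. Class names follow the skglm documentation; signatures may
# vary across versions.
import numpy as np
from skglm import GeneralizedLinearEstimator
from skglm.datafits import Quadratic      # least-squares datafit
from skglm.penalties import MCPenalty     # non-convex MCP penalty

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 1000))      # n_samples x n_features design
w_true = np.zeros(1000)
w_true[:10] = 1.0                         # 10 informative features
y = X @ w_true + 0.1 * rng.standard_normal(100)

# scikit-learn-compatible estimator: any datafit/penalty pair can be combined.
model = GeneralizedLinearEstimator(
    datafit=Quadratic(),
    penalty=MCPenalty(alpha=0.1, gamma=3.0),
)
model.fit(X, y)
print("nonzero coefficients:", np.sum(model.coef_ != 0))

Because the estimator follows the scikit-learn API (fit, predict, coef_), it can be dropped into pipelines and model-selection tools such as cross-validation without extra glue code.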

Cite

Text

Bertrand et al. "Beyond L1: Faster and Better Sparse Models with Skglm." Neural Information Processing Systems, 2022.

Markdown

[Bertrand et al. "Beyond L1: Faster and Better Sparse Models with Skglm." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/bertrand2022neurips-beyond/)

BibTeX

@inproceedings{bertrand2022neurips-beyond,
  title     = {{Beyond L1: Faster and Better Sparse Models with Skglm}},
  author    = {Bertrand, Quentin and Klopfenstein, Quentin and Bannier, Pierre-Antoine and Gidel, Gauthier and Massias, Mathurin},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/bertrand2022neurips-beyond/}
}