Trimmed Maximum Likelihood Estimation for Robust Generalized Linear Model
Abstract
We study the problem of learning generalized linear models under adversarial corruptions. We analyze a classical heuristic called the *iterative trimmed maximum likelihood estimator*, which is known to be effective against *label corruptions* in practice. Under label corruptions, we prove that this simple estimator achieves minimax near-optimal risk on a wide range of generalized linear models, including Gaussian regression, Poisson regression and Binomial regression. Finally, we extend the estimator to the much more challenging setting of *label and covariate corruptions* and demonstrate its robustness and optimality in that setting as well.
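The iterative trimmed MLE alternates between fitting the maximum likelihood estimate on a retained subset of the data and re-selecting the samples with the smallest per-sample loss. Below is a minimal sketch of this idea for the Gaussian regression case (where the per-sample negative log-likelihood reduces to the squared residual); the function name `trimmed_mle_gaussian` and parameters such as `corruption_frac` are illustrative assumptions, not the paper's exact algorithm or notation.

```python
import numpy as np

def trimmed_mle_gaussian(X, y, corruption_frac=0.1, n_iters=20):
    """Sketch of an iterative trimmed MLE for Gaussian linear regression:
    alternately fit the MLE on the retained samples and re-select the
    samples with the smallest per-sample loss (squared residuals here)."""
    n = len(y)
    keep = int((1.0 - corruption_frac) * n)   # number of samples retained per round
    idx = np.arange(n)                        # start by keeping all samples
    theta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        # MLE on the retained subset (ordinary least squares for the Gaussian GLM)
        theta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        # Per-sample loss on the full data; trim the largest losses
        losses = (y - X @ theta) ** 2
        idx = np.argsort(losses)[:keep]
    return theta

# Toy usage with a small fraction of corrupted labels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
theta_true = rng.normal(size=5)
y = X @ theta_true + 0.1 * rng.normal(size=200)
y[:20] += 10.0                                # adversarially shifted labels
theta_hat = trimmed_mle_gaussian(X, y, corruption_frac=0.15)
print(np.linalg.norm(theta_hat - theta_true))
```

For other generalized linear models (e.g. Poisson or Binomial regression), the squared residual would be replaced by the model's per-sample negative log-likelihood and the least-squares fit by the corresponding MLE solver.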
Cite
Text
Awasthi et al. "Trimmed Maximum Likelihood Estimation for Robust Generalized Linear Model." Neural Information Processing Systems, 2022.

Markdown

[Awasthi et al. "Trimmed Maximum Likelihood Estimation for Robust Generalized Linear Model." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/awasthi2022neurips-trimmed/)

BibTeX
@inproceedings{awasthi2022neurips-trimmed,
  title     = {{Trimmed Maximum Likelihood Estimation for Robust Generalized Linear Model}},
  author    = {Awasthi, Pranjal and Das, Abhimanyu and Kong, Weihao and Sen, Rajat},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/awasthi2022neurips-trimmed/}
}