Robust Regression Using Biased Objectives

Abstract

For the regression task in a non-parametric setting, designing the objective function to be minimized by the learner is critical. In this paper we propose a principled method for constructing and minimizing robust losses, which are resilient to errant observations even under small samples. Existing proposals typically utilize very strong estimates of the true risk, but in doing so require a priori information that is not available in practice. By abandoning direct approximation of the risk, we gain substantial stability at a tolerable price in bias, while circumventing the computational issues of existing procedures. We analyze existence and convergence conditions, provide practical computational routines, and show empirically that the proposed method realizes superior robustness across a wide range of data classes without prior knowledge assumptions.
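The abstract does not spell out the paper's loss construction, but the general flavor of robust regression via a bounded-influence objective can be sketched with a standard Huber-type M-estimator (chosen here purely for illustration; the paper's actual objective may differ):

```python
import numpy as np

def huber_grad(r, delta=1.0):
    # Gradient of the Huber loss w.r.t. the residual r:
    # quadratic near zero, linear in the tails, so each
    # observation's influence on the fit is bounded by delta.
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def robust_fit(X, y, delta=1.0, lr=0.1, steps=2000):
    # Linear regression by gradient descent on the Huber objective.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        r = X @ w - y
        w -= lr * X.T @ huber_grad(r, delta) / len(y)
    return w

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(-1, 1, 50)])
w_true = np.array([0.5, 2.0])
y = X @ w_true + 0.1 * rng.standard_normal(50)
y[:3] += 25.0  # a few errant observations

w_ls = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary least squares
w_rob = robust_fit(X, y)
```

On this small sample, the least-squares coefficients are pulled away from `w_true` by the three corrupted labels, while the bounded-influence fit stays close; this is the kind of small-sample resilience the paper targets, albeit via its own loss construction rather than the textbook Huber loss used above.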

Cite

Text

Holland and Ikeda. "Robust Regression Using Biased Objectives." Machine Learning, 2017. doi:10.1007/s10994-017-5653-5

Markdown

[Holland and Ikeda. "Robust Regression Using Biased Objectives." Machine Learning, 2017.](https://mlanthology.org/mlj/2017/holland2017mlj-robust/) doi:10.1007/s10994-017-5653-5

BibTeX

@article{holland2017mlj-robust,
  title     = {{Robust Regression Using Biased Objectives}},
  author    = {Holland, Matthew J. and Ikeda, Kazushi},
  journal   = {Machine Learning},
  year      = {2017},
  pages     = {1643--1679},
  doi       = {10.1007/s10994-017-5653-5},
  volume    = {106},
  url       = {https://mlanthology.org/mlj/2017/holland2017mlj-robust/}
}