Smooth ε-Insensitive Regression by Loss Symmetrization

Abstract

We describe a framework for solving regression problems by reduction to classification. Our reduction is based on symmetrization of margin-based loss functions commonly used in boosting algorithms, namely, the logistic loss and the exponential loss. Our construction yields a smooth version of the ε-insensitive hinge loss that is used in support vector regression. A byproduct of this construction is a new simple form of regularization for boosting-based classification and regression algorithms. We present two parametric families of batch learning algorithms for minimizing these losses. The first family employs a log-additive update and is based on recent boosting algorithms, while the second family uses a new form of additive update. We also describe and analyze online gradient descent (GD) and exponentiated gradient (EG) algorithms for the ε-insensitive logistic loss. Our regression framework also has implications for classification algorithms, namely, a new additive batch algorithm for the log-loss and exp-loss used in boosting.
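The symmetrized loss at the heart of the abstract can be illustrated with a short sketch. Summing two mirrored logistic losses shifted by ε gives log(1 + e^{(ŷ−y)−ε}) + log(1 + e^{(y−ŷ)−ε}), a loss that is nearly flat inside the ε-tube and asymptotically linear outside it. The comparison against SVR's ε-insensitive hinge loss below is an illustration of this smoothing, not the paper's batch or online algorithms:

```python
import math

def softplus(z):
    """Numerically stable log(1 + e^z)."""
    return z + math.log1p(math.exp(-z)) if z > 0 else math.log1p(math.exp(z))

def eps_insensitive_log_loss(y, y_hat, eps):
    """Smooth eps-insensitive loss obtained by symmetrizing the logistic
    loss: log(1 + e^{(y_hat - y) - eps}) + log(1 + e^{(y - y_hat) - eps}).
    Nearly flat inside the eps-tube, asymptotically linear outside it."""
    delta = y - y_hat
    return softplus(delta - eps) + softplus(-delta - eps)

def eps_insensitive_hinge(y, y_hat, eps):
    """The non-smooth eps-insensitive hinge loss of support vector
    regression, shown for comparison: max(0, |y - y_hat| - eps)."""
    return max(0.0, abs(y - y_hat) - eps)
```

Far outside the tube the smooth loss approaches the hinge loss |y − ŷ| − ε, while inside the tube it stays small but strictly positive, which is what makes it differentiable everywhere.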

Cite

Text

Dekel et al. "Smooth ε-Insensitive Regression by Loss Symmetrization." Annual Conference on Computational Learning Theory, 2003. doi:10.1007/978-3-540-45167-9_32

Markdown

[Dekel et al. "Smooth ε-Insensitive Regression by Loss Symmetrization." Annual Conference on Computational Learning Theory, 2003.](https://mlanthology.org/colt/2003/dekel2003colt-smooth/) doi:10.1007/978-3-540-45167-9_32

BibTeX

@inproceedings{dekel2003colt-smooth,
  title     = {{Smooth $\epsilon$-Insensitive Regression by Loss Symmetrization}},
  author    = {Dekel, Ofer and Shalev-Shwartz, Shai and Singer, Yoram},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2003},
  pages     = {433--447},
  doi       = {10.1007/978-3-540-45167-9_32},
  url       = {https://mlanthology.org/colt/2003/dekel2003colt-smooth/}
}