Orthogonal Statistical Learning with Self-Concordant Loss
Abstract
Orthogonal statistical learning and double machine learning have emerged as general frameworks for two-stage statistical prediction in the presence of a nuisance component. We establish non-asymptotic bounds on the excess risk of orthogonal statistical learning methods with a loss function satisfying a self-concordance property. Our bounds improve upon existing bounds by a dimension factor while lifting the assumption of strong convexity. We illustrate the results with examples from multiple treatment effect estimation and generalized partially linear modeling.
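To make the two-stage setup concrete, below is a minimal sketch (not the paper's estimator) of orthogonal/double machine learning for a partially linear model Y = θ·D + g(X) + ε: nuisance components E[D|X] and E[Y|X] are fit in a first stage with cross-fitting, and the target parameter is recovered from a Neyman-orthogonal residual-on-residual moment in the second stage. The simulated data and the random-forest nuisance learners are illustrative assumptions.

```python
# Minimal sketch of two-stage orthogonal learning (double ML) for a
# partially linear model Y = theta*D + g(X) + eps, with cross-fitting.
# Data-generating process and model choices are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p, theta = 2000, 5, 0.5
X = rng.normal(size=(n, p))
D = np.sin(X[:, 0]) + rng.normal(scale=0.5, size=n)          # treatment depends on X (confounding)
Y = theta * D + np.cos(X[:, 1]) + rng.normal(scale=0.5, size=n)

num, den = 0.0, 0.0
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # Stage 1: nuisance estimates E[D|X] and E[Y|X] fit on the training folds.
    m_hat = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[train], D[train])
    l_hat = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[train], Y[train])
    # Stage 2: Neyman-orthogonal residual-on-residual moment on the held-out fold.
    d_res = D[test] - m_hat.predict(X[test])
    y_res = Y[test] - l_hat.predict(X[test])
    num += d_res @ y_res
    den += d_res @ d_res

print(f"estimated treatment effect: {num / den:.3f} (true value {theta})")
```

Cross-fitting keeps the nuisance fits and the second-stage fit on disjoint folds, which is what allows flexible first-stage learners without biasing the target estimate.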
Cite
Text
Liu et al. "Orthogonal Statistical Learning with Self-Concordant Loss." Conference on Learning Theory, 2022.

Markdown

[Liu et al. "Orthogonal Statistical Learning with Self-Concordant Loss." Conference on Learning Theory, 2022.](https://mlanthology.org/colt/2022/liu2022colt-orthogonal/)

BibTeX
@inproceedings{liu2022colt-orthogonal,
title = {{Orthogonal Statistical Learning with Self-Concordant Loss}},
author = {Liu, Lang and Cinelli, Carlos and Harchaoui, Zaid},
booktitle = {Conference on Learning Theory},
year = {2022},
pages = {5253-5277},
volume = {178},
url = {https://mlanthology.org/colt/2022/liu2022colt-orthogonal/}
}