Selecting Optimal Decisions via Distributionally Robust Nearest-Neighbor Regression
Abstract
This paper develops a prediction-based prescriptive model for optimal decision making that (i) predicts the outcome under each action using a robust nonlinear model, and (ii) adopts a randomized prescriptive policy determined by the predicted outcomes. The predictive model combines a new regularized regression technique, developed using Distributionally Robust Optimization (DRO) with an ambiguity set constructed from the Wasserstein metric, with K-Nearest Neighbors (K-NN) regression, which captures the nonlinearity embedded in the data. We establish theoretical guarantees on the out-of-sample performance of the predictive model, and prove the optimality of the randomized policy in terms of the expected true future outcome. We demonstrate the proposed methodology on a hypertension dataset, showing that our prescribed treatment leads to a larger reduction in systolic blood pressure compared to a series of alternatives. A clinically meaningful threshold level used to activate the randomized policy is also derived under a sub-Gaussian assumption on the predicted outcome.
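To make the two-stage idea in the abstract concrete, below is a minimal, hypothetical Python sketch: for each candidate action, the outcome is predicted by a K-NN-localized, norm-regularized regression (used here as a simple stand-in for the paper's Wasserstein-DRO regression), and the prescriptive rule randomizes among actions whose predicted outcomes fall within a threshold of the best. All names, the synthetic data, and the ridge-style regularization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)


def knn_dro_predict(x_query, X, y, k=15, lam=0.1):
    """Predict the outcome at x_query from the k nearest training points,
    using a ridge-style regularized linear fit as a stand-in for the
    Wasserstein-DRO regularized regression described in the paper."""
    dist = np.linalg.norm(X - x_query, axis=1)
    idx = np.argsort(dist)[:k]
    Xk = np.hstack([X[idx], np.ones((k, 1))])  # add intercept column
    yk = y[idx]
    # Regularized least squares: (Xk^T Xk + lam I) beta = Xk^T yk
    beta = np.linalg.solve(Xk.T @ Xk + lam * np.eye(Xk.shape[1]), Xk.T @ yk)
    return np.append(x_query, 1.0) @ beta


def prescribe(x_query, data_by_action, threshold=0.5):
    """Randomized prescriptive rule: favor the action with the best (lowest)
    predicted outcome; if several actions are within `threshold` of the best,
    randomize uniformly among them."""
    preds = {a: knn_dro_predict(x_query, X, y)
             for a, (X, y) in data_by_action.items()}
    best = min(preds.values())
    candidates = [a for a, p in preds.items() if p - best <= threshold]
    return rng.choice(candidates), preds


# Synthetic example: two actions with different outcome models.
n, p = 200, 3
X0 = rng.normal(size=(n, p))
y0 = X0 @ np.array([1.0, -0.5, 0.2]) + rng.normal(scale=0.3, size=n)
X1 = rng.normal(size=(n, p))
y1 = X1 @ np.array([0.2, 0.8, -1.0]) + 0.5 + rng.normal(scale=0.3, size=n)

action, preds = prescribe(np.array([0.1, -0.2, 0.3]), {0: (X0, y0), 1: (X1, y1)})
print("predicted outcomes per action:", preds, "-> prescribed action:", action)
```

The threshold that triggers randomization plays the role of the clinically meaningful activation level mentioned in the abstract; in the paper it is derived under a sub-Gaussian assumption on the predicted outcome, whereas here it is simply a fixed constant for illustration.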
Cite
Text
Chen and Paschalidis. "Selecting Optimal Decisions via Distributionally Robust Nearest-Neighbor Regression." Neural Information Processing Systems, 2019.
Markdown
[Chen and Paschalidis. "Selecting Optimal Decisions via Distributionally Robust Nearest-Neighbor Regression." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/chen2019neurips-selecting-a/)
BibTeX
@inproceedings{chen2019neurips-selecting-a,
  title     = {{Selecting Optimal Decisions via Distributionally Robust Nearest-Neighbor Regression}},
  author    = {Chen, Ruidi and Paschalidis, Ioannis},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {749-759},
  url       = {https://mlanthology.org/neurips/2019/chen2019neurips-selecting-a/}
}