Learning Strategy-Aware Linear Classifiers

Abstract

We address the question of repeatedly learning linear classifiers against agents who are *strategically* trying to *game* the deployed classifiers, and we use the *Stackelberg regret* to measure the performance of our algorithms. First, we show that Stackelberg and external regret for the problem of strategic classification are *strongly incompatible*: i.e., there exist worst-case scenarios where *any* sequence of actions providing *sublinear* external regret might result in *linear* Stackelberg regret, and vice versa. Second, we present a strategy-aware algorithm for minimizing the Stackelberg regret, for which we prove nearly matching upper and lower regret bounds. Finally, we provide simulations to complement our theoretical analysis. Our results advance the growing literature on learning from revealed preferences, which has so far focused on "smoother" assumptions from the perspectives of the learner and the agents, respectively.
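
To make the incompatibility claim concrete, here is a minimal sketch of the two regret notions in the online strategic-classification setting; the notation ($\mathcal{F}$ for the classifier class, $x_t$ for the round-$t$ agent's true features, $\Delta$ for the agents' best-response map, $\ell$ for the loss) is illustrative and not taken verbatim from the paper.

```latex
% Hedged sketch of the two regret notions compared in the abstract.
% Notation (\mathcal{F}, x_t, \Delta, \ell) is illustrative, not the paper's own.
% At round t the learner commits to f_t \in \mathcal{F}; the agent with true
% features x_t best-responds with manipulated features \Delta(x_t, f_t).
\[
  \mathrm{Reg}_{\mathrm{Stackelberg}}(T)
    = \sum_{t=1}^{T} \ell\bigl(f_t, \Delta(x_t, f_t)\bigr)
    - \min_{f \in \mathcal{F}} \sum_{t=1}^{T} \ell\bigl(f, \Delta(x_t, f)\bigr)
\]
\[
  \mathrm{Reg}_{\mathrm{external}}(T)
    = \sum_{t=1}^{T} \ell\bigl(f_t, \Delta(x_t, f_t)\bigr)
    - \min_{f \in \mathcal{F}} \sum_{t=1}^{T} \ell\bigl(f, \Delta(x_t, f_t)\bigr)
\]
```

The two benchmarks differ only inside the $\min$: the Stackelberg benchmark lets agents re-best-respond to the comparator classifier $f$, while the external benchmark scores $f$ against the responses induced by the classifiers $f_t$ that were actually deployed. Because strategic agents move the data whenever the classifier moves, the two comparators can diverge, which is the intuition behind the strong incompatibility result.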

Cite

Text

Chen et al. "Learning Strategy-Aware Linear Classifiers." Neural Information Processing Systems, 2020.

Markdown

[Chen et al. "Learning Strategy-Aware Linear Classifiers." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/chen2020neurips-learning-a/)

BibTeX

@inproceedings{chen2020neurips-learning-a,
  title     = {{Learning Strategy-Aware Linear Classifiers}},
  author    = {Chen, Yiling and Liu, Yang and Podimata, Chara},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/chen2020neurips-learning-a/}
}