Incentive-Aware PAC Learning
Abstract
We study PAC learning in the presence of strategic manipulation, where data points may modify their features in certain predefined ways in order to receive a better outcome. We show that the vanilla ERM principle fails to achieve any nontrivial guarantee in this context. Instead, we propose an incentive-aware version of the ERM principle which has asymptotically optimal sample complexity. We then focus our attention on incentive-compatible classifiers, which provably prevent any kind of strategic manipulation. We give a sample complexity bound that is, curiously, independent of the hypothesis class, for the ERM principle restricted to incentive-compatible classifiers. This suggests that incentive compatibility alone can act as an effective means of regularization. We further show that it is without loss of generality to consider only incentive-compatible classifiers when opportunities for strategic manipulation satisfy a transitivity condition. As a consequence, in such cases, our hypothesis-class-independent sample complexity bound applies even without incentive compatibility. Our results lay the foundations of incentive-aware PAC learning.
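To give a concrete feel for the setting the abstract describes, below is a minimal Python sketch under an assumed toy model: each feature value can be "manipulated" to any point in a predefined reachability set, and every agent prefers a positive label. The functions best_response, is_incentive_compatible, and strategic_error, and the toy data, are hypothetical illustrations of strategic manipulation and incentive compatibility, not the paper's formal model or algorithm.

```python
# Toy model (an assumption for illustration): features are nodes of a directed
# "manipulation" graph; an agent at x may report any x' reachable from x, and
# prefers being classified as positive (1) over negative (0).

def best_response(x, reachable, h):
    """Report a reachable point labeled positive if one exists, else stay at x."""
    for x_prime in reachable[x]:
        if h(x_prime) == 1:
            return x_prime
    return x

def is_incentive_compatible(points, reachable, h):
    """h is incentive-compatible if no agent gains by misreporting: whenever some
    reachable x' is labeled positive, x itself is already labeled positive."""
    return all(
        h(x) == 1 or all(h(x_prime) == 0 for x_prime in reachable[x])
        for x in points
    )

def strategic_error(points, labels, reachable, h):
    """Empirical error when every point best-responds before being classified."""
    mistakes = sum(
        h(best_response(x, reachable, h)) != y
        for x, y in zip(points, labels)
    )
    return mistakes / len(points)

# Tiny example: feature values 0, 1, 2, where an agent may move one step up.
points = [0, 1, 2]
labels = [0, 0, 1]
reachable = {0: [0, 1], 1: [1, 2], 2: [2]}

threshold_at_2 = lambda x: int(x >= 2)  # a vanilla ERM-style threshold

print(is_incentive_compatible(points, reachable, threshold_at_2))  # False: x=1 gains by moving to 2
print(strategic_error(points, labels, reachable, threshold_at_2))  # 1/3: x=1 manipulates and is misclassified
```

In this toy example the threshold at 2 has zero error on the unmanipulated data, yet once agents best-respond it misclassifies the point at 1, which is the kind of failure of vanilla ERM the abstract alludes to; a classifier that never rewards moving within a reachability set would be incentive-compatible in this sketch's sense.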
Cite
Text
Zhang and Conitzer. "Incentive-Aware PAC Learning." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I6.16726

Markdown
[Zhang and Conitzer. "Incentive-Aware PAC Learning." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/zhang2021aaai-incentive/) doi:10.1609/AAAI.V35I6.16726

BibTeX
@inproceedings{zhang2021aaai-incentive,
title = {{Incentive-Aware PAC Learning}},
author = {Zhang, Hanrui and Conitzer, Vincent},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2021},
pages = {5797-5804},
doi = {10.1609/AAAI.V35I6.16726},
url = {https://mlanthology.org/aaai/2021/zhang2021aaai-incentive/}
}