Online Strategic Classification with Noise and Partial Feedback
Abstract
In this paper, we study an online strategic classification problem in which a principal aims to learn an accurate binary linear classifier from sequentially arriving agents. For each agent, the principal announces a classifier, and the agent can strategically apply costly manipulations to their features in order to be classified as the favorable positive class. The principal does not know the true feature-label distribution; it observes all reported features but only the labels of positively classified agents. We assume that the true feature-label distribution is given by a halfspace model subject to arbitrary feature-dependent bounded label noise (i.e., Massart noise). This problem faces the combined challenges of agents' strategic feature manipulations, partial label observations, and label noise. We tackle these challenges with a novel learning algorithm. We show that the proposed algorithm yields classifiers that converge to the clairvoyant optimal one and attains a regret of $O(\sqrt{T})$ over $T$ cycles, up to poly-logarithmic and constant factors.
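For readers unfamiliar with the noise model, the Massart (bounded) noise condition referenced above can be stated as follows; the symbols $w^*$, $\eta(x)$, and $\eta$ are standard notation introduced here for illustration and are not taken from the paper itself.

\[
\exists\, w^* \in \mathbb{R}^d,\ \eta \in \bigl[0, \tfrac{1}{2}\bigr):\qquad
\Pr\bigl[\, y \neq \operatorname{sign}(\langle w^*, x \rangle) \;\big|\; x \,\bigr] \;=\; \eta(x) \;\le\; \eta
\quad \text{for almost every } x .
\]

In words, each label agrees with an optimal halfspace $w^*$ except for a feature-dependent flip probability $\eta(x)$ that stays uniformly bounded below $1/2$.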
Cite

BibTeX
@inproceedings{zhao2025neurips-online,
title = {{Online Strategic Classification with Noise and Partial Feedback}},
author = {Zhao, Tianrun and Mao, Xiaojie and Liang, Yong},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/zhao2025neurips-online/}
}