State Dependent Performative Prediction with Stochastic Approximation
Abstract
This paper studies the performative prediction problem, in which one optimizes a stochastic loss function whose data distribution depends on the decision variable. We consider a setting where the agents provide samples adapted to both the learner's and the agents' previous states. The learner then uses these samples to update its state so as to optimize the loss function. We study this closed-loop update dynamics as a state dependent stochastic approximation (SA) algorithm, which is shown to find a fixed point known as the performative stable solution. Our setting captures agents' unforgetful nature and their reliance on past experiences. Our contributions are three-fold. First, we present a framework for state dependent performative prediction with biased stochastic gradients driven by a controlled Markov chain whose transition probability depends on the learner's state. Second, we present a new finite-time performance analysis of the SA algorithm. We show that the expected squared distance to the performative stable solution decreases as O(1/k), where k is the iteration number. Third, numerical experiments verify our findings.
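To make the closed-loop dynamics concrete, below is a minimal, hypothetical sketch of a state dependent SA loop in the spirit of the abstract: the agent draws a sample from a Markov chain whose transition depends on the learner's current state, and the learner takes a stochastic gradient step with a diminishing step size. The toy kernel, the quadratic loss, and the function names (`sample_from_agent`, `stochastic_gradient`) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical illustration of state dependent stochastic approximation (SA).
# The sampling kernel and loss are toy choices, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

def sample_from_agent(theta, z_prev):
    """Agent draws the next sample from a controlled Markov chain whose
    transition depends on the learner's state theta and the previous sample."""
    # Toy AR(1)-style kernel: the sample drifts toward a state-dependent mean.
    return 0.5 * z_prev + 0.5 * theta + rng.normal(scale=0.1, size=theta.shape)

def stochastic_gradient(theta, z):
    """Stochastic gradient of a toy quadratic loss evaluated at the current
    sample z (a placeholder for the performative loss in the paper)."""
    return theta - z

theta = np.zeros(2)        # learner's state (decision variable)
z = rng.normal(size=2)     # agent's initial sample / state
for k in range(1, 5001):
    z = sample_from_agent(theta, z)      # agent adapts to both previous states
    gamma_k = 1.0 / (k + 10)             # diminishing step size, order 1/k
    theta = theta - gamma_k * stochastic_gradient(theta, z)

print("approximate performative stable point:", theta)
```

Under this kind of step-size schedule, the abstract's result says the expected squared distance between the iterate and the performative stable solution shrinks at rate O(1/k).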
Cite
Text
Li and Wai. "State Dependent Performative Prediction with Stochastic Approximation." Artificial Intelligence and Statistics, 2022.

Markdown
[Li and Wai. "State Dependent Performative Prediction with Stochastic Approximation." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/li2022aistats-state/)

BibTeX
@inproceedings{li2022aistats-state,
title = {{State Dependent Performative Prediction with Stochastic Approximation}},
author = {Li, Qiang and Wai, Hoi-To},
booktitle = {Artificial Intelligence and Statistics},
year = {2022},
pages = {3164--3186},
volume = {151},
url = {https://mlanthology.org/aistats/2022/li2022aistats-state/}
}