Learning Fair Division from Bandit Feedback

Abstract

This work studies online fair division under uncertainty, where a central planner sequentially allocates items without precise knowledge of agents' utilities. Unlike conventional online algorithms, which assume values are known in advance, the planner here observes only noisy, estimated values after allocating each item. We introduce wrapper algorithms based on dual averaging that gradually learn both the type distribution of arriving items and agents' values from bandit feedback. This enables the algorithms to asymptotically achieve the optimal Nash social welfare in linear Fisher markets with agents having additive utilities. We also empirically verify the performance of the proposed algorithms on synthetic and empirical datasets.
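The abstract's setup can be illustrated with a minimal sketch. This is not the paper's algorithm; it is an assumed toy simulation combining a pacing-style (dual-averaging-flavored) allocation rule with bandit feedback: each round an item type arrives, the planner allocates it to the agent with the highest estimated value scaled by inverse average utility, and only the chosen agent's noisy value is observed and folded into the running estimates. All names, constants, and update rules below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents, n_item_types, T = 3, 4, 20000
# Hypothetical ground-truth additive values and item-type distribution,
# both unknown to the planner.
true_values = rng.uniform(0.2, 1.0, size=(n_agents, n_item_types))
item_probs = rng.dirichlet(np.ones(n_item_types))

# Bandit-feedback statistics: per (agent, item-type) running means of noisy values.
value_sums = np.zeros((n_agents, n_item_types))
counts = np.zeros((n_agents, n_item_types))
est_values = np.ones((n_agents, n_item_types))  # optimistic init encourages exploration

# Time-averaged utility per agent, used as an inverse "pacing" weight;
# balancing value/utility ratios is what pushes toward Nash social welfare.
util_avg = np.ones(n_agents)

for t in range(1, T + 1):
    k = rng.choice(n_item_types, p=item_probs)  # item type drawn from unknown distribution
    # Allocate to the agent with the highest estimated value per unit of average utility.
    scores = est_values[:, k] / util_avg
    i = int(np.argmax(scores))
    # Bandit feedback: a noisy value is observed only for the chosen agent.
    noisy_v = true_values[i, k] + 0.1 * rng.standard_normal()
    value_sums[i, k] += noisy_v
    counts[i, k] += 1
    est_values[i, k] = value_sums[i, k] / counts[i, k]
    # Update time-averaged utilities; only the winner gains utility this round.
    gains = np.zeros(n_agents)
    gains[i] = est_values[i, k]
    util_avg = ((t - 1) * util_avg + gains) / t
    util_avg = np.clip(util_avg, 1e-3, None)  # keep the pacing weights bounded away from zero
```

The key features the sketch shares with the paper's setting are that values are learned only through post-allocation noisy observations, and that the allocation rule equalizes value-to-utility ratios rather than greedily maximizing value.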

Cite

Text

Yamada et al. "Learning Fair Division from Bandit Feedback." Artificial Intelligence and Statistics, 2024.

Markdown

[Yamada et al. "Learning Fair Division from Bandit Feedback." Artificial Intelligence and Statistics, 2024.](https://mlanthology.org/aistats/2024/yamada2024aistats-learning/)

BibTeX

@inproceedings{yamada2024aistats-learning,
  title     = {{Learning Fair Division from Bandit Feedback}},
  author    = {Yamada, Hakuei and Komiyama, Junpei and Abe, Kenshi and Iwasaki, Atsushi},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2024},
  pages     = {3106--3114},
  volume    = {238},
  url       = {https://mlanthology.org/aistats/2024/yamada2024aistats-learning/}
}