Manipulation-Robust Selection of Citizens' Assemblies
Abstract
Among the recent work on designing algorithms for selecting citizens' assembly participants, one key property of these algorithms has not yet been studied: their manipulability. Strategic manipulation is a concern because these algorithms must satisfy representation constraints according to volunteers' self-reported features; misreporting these features could thereby increase a volunteer's chance of being selected, decrease someone else's chance, and/or increase the expected number of seats given to their group. Strikingly, we show that Leximin — an algorithm that is widely used for its fairness — is highly manipulable in this way. We then introduce a new class of selection algorithms that use Lp norms as objective functions. We show that the manipulability of the Lp-based algorithm decreases in O(1/n^(1-1/p)) as the number of volunteers n grows, approaching the optimal rate of O(1/n) as p approaches infinity. These theoretical results are confirmed via experiments on eight real-world datasets.
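The intuition behind using Lp norms as objectives can be seen with a toy example (this is an illustration of the norm's equalizing effect, not the authors' full algorithm): for selection-probability vectors with the same total, the Lp norm is smaller when probabilities are spread more evenly, and as p grows the norm approaches the maximum entry.

```python
def lp_norm(probs, p):
    """Lp norm of a vector of selection probabilities (illustrative)."""
    return sum(x ** p for x in probs) ** (1.0 / p)

# Two hypothetical probability vectors with the same total (same expected seats).
uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.70, 0.10, 0.10, 0.10]

# Minimizing the Lp norm favors the more equal vector...
print(lp_norm(uniform, 3) < lp_norm(skewed, 3))   # the uniform vector has a smaller norm

# ...and for large p the norm is dominated by the largest probability,
# mirroring the p -> infinity regime discussed in the abstract.
print(round(lp_norm(skewed, 50), 4))
```

The toy vectors and the choice p = 3 here are arbitrary assumptions for the sketch; the paper's guarantees concern how the algorithm's manipulability scales with the number of volunteers n.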
Cite
Text
Flanigan et al. "Manipulation-Robust Selection of Citizens' Assemblies." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I9.28827
Markdown
[Flanigan et al. "Manipulation-Robust Selection of Citizens' Assemblies." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/flanigan2024aaai-manipulation/) doi:10.1609/AAAI.V38I9.28827
BibTeX
@inproceedings{flanigan2024aaai-manipulation,
title = {{Manipulation-Robust Selection of Citizens' Assemblies}},
author = {Flanigan, Bailey and Liang, Jennifer and Procaccia, Ariel D. and Wang, Sven},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {9696-9703},
doi = {10.1609/AAAI.V38I9.28827},
url = {https://mlanthology.org/aaai/2024/flanigan2024aaai-manipulation/}
}