Dependent Randomized Rounding for Clustering and Partition Systems with Knapsack Constraints
Abstract
Clustering problems are fundamental to unsupervised learning. There is an increased emphasis on \emph{fairness} in machine learning and AI; one representative notion of fairness is that no single demographic group should be over-represented among the cluster centers. This problem, and much more general clustering problems, can be formulated with "knapsack" and "partition" constraints. We develop new randomized rounding algorithms targeting such problems, and study two in particular: multi-knapsack median and multi-knapsack center. Our rounding algorithms give new approximation and pseudo-approximation algorithms for these problems. One key technical tool we develop and use, which may be of independent interest, is a new tail bound, analogous to Feige (2006), for sums of random variables with unbounded variances. Such bounds are very useful in inferring properties of large networks using few samples.
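To illustrate the core rounding primitive the abstract refers to, here is a minimal sketch of classic pair-wise dependent rounding (in the style of Gandhi, Khuller, Parthasarathy, and Srinivasan). It rounds a fractional vector in $[0,1]^n$ to an integral one while preserving each coordinate's marginal in expectation and keeping the coordinate sum fixed throughout. This is only an illustrative sketch of the generic technique, not the paper's full procedure for partition systems with knapsack constraints; the function name and tolerance parameter are ours.

```python
import random

def dependent_round(x, eps=1e-9):
    """Pair-wise dependent rounding (illustrative sketch).

    Repeatedly picks two fractional coordinates and randomly shifts mass
    between them so that (a) their sum is unchanged and (b) each coordinate's
    expectation is preserved. Terminates when at most one coordinate remains
    fractional; if sum(x) is integral, the output is fully 0/1.
    """
    x = list(x)
    while True:
        frac = [i for i, v in enumerate(x) if eps < v < 1 - eps]
        if len(frac) < 2:
            break
        i, j = frac[0], frac[1]
        a = min(1 - x[i], x[j])  # largest mass movable from j to i
        b = min(x[i], 1 - x[j])  # largest mass movable from i to j
        # Probabilities chosen so E[x[i]] and E[x[j]] are unchanged.
        if random.random() < b / (a + b):
            x[i], x[j] = x[i] + a, x[j] - a
        else:
            x[i], x[j] = x[i] - b, x[j] + b
    # Snap values that are within eps of 0 or 1 (floating-point cleanup).
    return [0.0 if v <= eps else 1.0 if v >= 1 - eps else v for v in x]
```

For example, rounding `[0.5, 0.5, 0.3, 0.7]` (whose sum is 2) always produces a 0/1 vector with exactly two ones, regardless of the random choices; which coordinates are set to 1 varies, with marginal probabilities matching the input fractions.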
Cite

Text:
Harris et al. "Dependent Randomized Rounding for Clustering and Partition Systems with Knapsack Constraints." Artificial Intelligence and Statistics, 2020.

BibTeX:
@inproceedings{harris2020aistats-dependent,
title = {{Dependent Randomized Rounding for Clustering and Partition Systems with Knapsack Constraints}},
author = {Harris, David and Pensyl, Thomas and Srinivasan, Aravind and Trinh, Khoa},
booktitle = {Artificial Intelligence and Statistics},
year = {2020},
pages = {2273--2283},
volume = {108},
url = {https://mlanthology.org/aistats/2020/harris2020aistats-dependent/}
}