Generalizing the Theory of Cooperative Inference
Abstract
Cooperative information sharing is important to theories of human learning and has potential implications for machine learning. Prior work derived conditions for achieving optimal Cooperative Inference under strong, relatively restrictive assumptions. We relax these assumptions by demonstrating convergence for any discrete joint distribution, robustness through equivalence classes and stability under perturbation, and effectiveness by deriving bounds from structural properties of the original joint distribution. We provide geometric interpretations, connections to and implications for optimal transport, connections to importance sampling, and conclude by outlining open questions and challenges to realizing the promise of Cooperative Inference.
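The convergence results concern the fixed point of alternating teacher/learner updates, which for a discrete joint distribution amounts to Sinkhorn-style scaling of the joint matrix: the teacher normalizes over data given each hypothesis, and the learner normalizes over hypotheses given each datum. Below is a minimal sketch of that iteration, assuming a square, strictly positive joint matrix with rows indexing data and columns indexing hypotheses; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def cooperative_inference(M, tol=1e-10, max_iter=10_000):
    """Sinkhorn-style alternating normalization of a joint matrix M.

    Rows index data, columns index hypotheses. Normalizing each column
    plays the role of the teacher's distribution over data given a
    hypothesis; normalizing each row plays the role of the learner's
    posterior over hypotheses given a datum. For a square, strictly
    positive M this iteration converges to a doubly stochastic matrix.
    """
    M = np.asarray(M, dtype=float)
    for _ in range(max_iter):
        M = M / M.sum(axis=0, keepdims=True)  # teacher step: columns sum to 1
        M = M / M.sum(axis=1, keepdims=True)  # learner step: rows sum to 1
        if np.max(np.abs(M.sum(axis=0) - 1.0)) < tol:  # fixed point reached
            break
    return M

# Example: scale a random 3x3 joint distribution over (data, hypothesis) pairs.
rng = np.random.default_rng(0)
M_star = cooperative_inference(rng.random((3, 3)))
print(np.round(M_star, 4))
```

The paper's contribution is to characterize when and how fast such iterations converge beyond the positive square case, including joint distributions with zeros, so this sketch covers only the simplest setting.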
Cite

Text
Wang et al. "Generalizing the Theory of Cooperative Inference." Artificial Intelligence and Statistics, 2019.

Markdown
[Wang et al. "Generalizing the Theory of Cooperative Inference." Artificial Intelligence and Statistics, 2019.](https://mlanthology.org/aistats/2019/wang2019aistats-generalizing/)

BibTeX
@inproceedings{wang2019aistats-generalizing,
title = {{Generalizing the Theory of Cooperative Inference}},
author = {Wang, Pei and Paranamana, Pushpi and Shafto, Patrick},
booktitle = {Artificial Intelligence and Statistics},
year = {2019},
pages = {1841--1850},
volume = {89},
url = {https://mlanthology.org/aistats/2019/wang2019aistats-generalizing/}
}