Submodular Meets Structured: Finding Diverse Subsets in Exponentially-Large Structured Item Sets
Abstract
To cope with the high level of ambiguity faced in domains such as computer vision or natural language processing, robust prediction methods often search for a diverse set of high-quality candidate solutions or proposals. In structured prediction problems, this becomes a daunting task, as the solution space (image labelings, sentence parses, etc.) is exponentially large. We study greedy algorithms for finding a diverse subset of solutions in structured-output spaces by drawing new connections between submodular functions over combinatorial item sets and High-Order Potentials (HOPs) studied for graphical models. Specifically, we show via examples that when marginal gains of submodular diversity functions allow structured representations, this enables efficient (sub-linear time) approximate maximization by reducing the greedy augmentation step to inference in a factor graph with appropriately constructed HOPs. We discuss benefits and tradeoffs, and show that our constructions lead to significantly better proposals.
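The greedy scheme the abstract refers to can be illustrated on a small, explicit item set. The sketch below is a generic marginal-gain greedy for a submodular objective (here a toy facility-location-style coverage function, an illustrative assumption, not the paper's objective); the paper's contribution is performing the `max` step over an exponentially large structured space via HOP inference in a factor graph, which this toy version does not implement.

```python
def greedy_max(candidates, F, k):
    """Greedily select k items, each time adding the item with the
    largest marginal gain F(S + [y]) - F(S). For submodular F this
    enumeration-based argmax is what the paper replaces with
    factor-graph inference over structured outputs."""
    S = []
    for _ in range(k):
        best = max(
            (y for y in candidates if y not in S),
            key=lambda y: F(S + [y]) - F(S),
        )
        S.append(best)
    return S

# Toy facility-location objective (monotone submodular):
# F(S) = sum over ground elements g of the best coverage max_{y in S} sim[y][g].
sim = {
    "a": [0.9, 0.1, 0.0],
    "b": [0.0, 0.8, 0.1],
    "c": [0.5, 0.5, 0.5],
}

def F(S):
    return sum(max((sim[y][g] for y in S), default=0.0) for g in range(3))

print(greedy_max(list(sim), F, 2))  # picks "c" first (gain 1.5), then "a"
```

Each greedy step here scans all remaining candidates; with an exponentially large structured candidate pool that scan is infeasible, which is exactly the gap the HOP construction closes.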
Cite
Text
Prasad et al. "Submodular Meets Structured: Finding Diverse Subsets in Exponentially-Large Structured Item Sets." Neural Information Processing Systems, 2014.
Markdown
[Prasad et al. "Submodular Meets Structured: Finding Diverse Subsets in Exponentially-Large Structured Item Sets." Neural Information Processing Systems, 2014.](https://mlanthology.org/neurips/2014/prasad2014neurips-submodular/)
BibTeX
@inproceedings{prasad2014neurips-submodular,
  title = {{Submodular Meets Structured: Finding Diverse Subsets in Exponentially-Large Structured Item Sets}},
  author = {Prasad, Adarsh and Jegelka, Stefanie and Batra, Dhruv},
  booktitle = {Neural Information Processing Systems},
  year = {2014},
  pages = {2645--2653},
  url = {https://mlanthology.org/neurips/2014/prasad2014neurips-submodular/}
}