UViM: A Unified Modeling Approach for Vision with Learned Guiding Codes

Abstract

We introduce UViM, a unified approach capable of modeling a wide range of computer vision tasks. In contrast to previous models, UViM has the same functional form for all tasks; it requires no task-specific modifications that demand extensive human expertise. The approach involves two components: (I) a base model (feed-forward), which is trained to directly predict raw vision outputs, guided by a learned discrete code, and (II) a language model (autoregressive), which is trained to generate the guiding code. These components complement each other: the language model is well-suited to modeling structured, interdependent data, while the base model is efficient at dealing with high-dimensional outputs. We demonstrate the effectiveness of UViM on three diverse and challenging vision tasks: panoptic segmentation, depth prediction, and image colorization, where we achieve competitive and near state-of-the-art results. Our experimental results suggest that UViM is a promising candidate for a unified modeling approach in computer vision.
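The two-stage structure described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the model names, weight shapes, and greedy decoding loop below are all hypothetical stand-ins (the real system uses large transformer networks, longer codes, and bigger vocabularies). It only shows the control flow: an autoregressive model emits a short discrete guiding code, which a feed-forward base model consumes alongside the image features to produce a dense output in a single pass.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8                 # toy image-feature dimension (hypothetical)
CODE_LEN, VOCAB = 4, 16  # toy code length / vocabulary (the paper uses far larger values)

# Random weights standing in for trained parameters.
W_lm = rng.normal(size=(CODE_LEN, D, VOCAB))  # per-position next-token scorer
E = rng.normal(size=(VOCAB, D))               # code-token embeddings
W_out = rng.normal(size=(32, 2 * D))          # base-model output head

def language_model(image_feat, prefix):
    """Toy stand-in for the autoregressive LM: greedily picks the next code token."""
    logits = image_feat @ W_lm[len(prefix)]   # (VOCAB,)
    return int(np.argmax(logits))

def base_model(image_feat, code):
    """Toy stand-in for the feed-forward base model: maps (image, code) to a dense output."""
    code_emb = E[code].mean(axis=0)           # pool code-token embeddings to (D,)
    return W_out @ np.concatenate([image_feat, code_emb])

def predict(image_feat):
    # Stage 1: the language model autoregressively generates the discrete guiding code.
    code = []
    for _ in range(CODE_LEN):
        code.append(language_model(image_feat, code))
    # Stage 2: the base model decodes the high-dimensional output in one feed-forward pass.
    return base_model(image_feat, np.array(code))

output = predict(rng.normal(size=D))
```

The division of labor is the point: the sequential loop only runs over the short discrete code, while the expensive high-dimensional output is produced by a single non-autoregressive pass.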

Cite

Text

Kolesnikov et al. "UViM: A Unified Modeling Approach for Vision with Learned Guiding Codes." Neural Information Processing Systems, 2022.

Markdown

[Kolesnikov et al. "UViM: A Unified Modeling Approach for Vision with Learned Guiding Codes." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/kolesnikov2022neurips-uvim/)

BibTeX

@inproceedings{kolesnikov2022neurips-uvim,
  title     = {{UViM: A Unified Modeling Approach for Vision with Learned Guiding Codes}},
  author    = {Kolesnikov, Alexander and Pinto, André Susano and Beyer, Lucas and Zhai, Xiaohua and Harmsen, Jeremiah and Houlsby, Neil},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/kolesnikov2022neurips-uvim/}
}