Learning Generic Prior Models for Visual Computation

Abstract

This paper presents a novel theory for learning generic prior models from a set of observed natural images, based on a minimax entropy theory that the authors studied in modeling textures. We start by studying the statistics of natural images, including their scale-invariant properties; generic prior models are then learned to duplicate the observed statistics. The learned Gibbs distributions confirm and improve the forms of existing prior models. More interestingly, inverted potentials are found to be necessary; such potentials form patterns and enhance preferred image features. The learned model is compared with existing prior models in image restoration experiments.
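The learning scheme described in the abstract can be illustrated with a toy sketch (not the paper's full algorithm): potentials on a marginal histogram of filter responses are adjusted by gradient ascent on the log-likelihood until the Gibbs model reproduces the observed histogram. The state space, features, step size, and the peaked "observed" histogram below are all illustrative assumptions; the paper itself works with natural-image statistics and MCMC sampling, whereas here the state space is small enough to compute model expectations exactly.

```python
import itertools
import math

# Toy minimax-entropy-style learning sketch (illustrative assumptions):
# learn potentials lam[d] on the histogram of nearest-neighbour differences
# so that p(x) ∝ exp(-sum_d lam[d] * h_d(x)) matches an observed histogram.
# Length-4 signals over {0,1,2} give only 81 states, so expectations are
# exact instead of the MCMC estimates a real image model would need.

VALUES = (0, 1, 2)
LENGTH = 4
DIFFS = tuple(range(-2, 3))  # possible neighbour differences

def diff_hist(x):
    """Normalized histogram of neighbour differences for one signal."""
    h = {d: 0.0 for d in DIFFS}
    for a, b in zip(x, x[1:]):
        h[b - a] += 1.0 / (LENGTH - 1)
    return h

STATES = list(itertools.product(VALUES, repeat=LENGTH))
HISTS = [diff_hist(x) for x in STATES]  # precomputed once

def model_hist(lam):
    """Exact expected histogram under the current Gibbs model."""
    weights = [math.exp(-sum(lam[d] * h[d] for d in DIFFS)) for h in HISTS]
    z = sum(weights)
    return {d: sum(w * h[d] for w, h in zip(weights, HISTS)) / z for d in DIFFS}

def learn(h_obs, steps=3000, eta=0.5):
    """Gradient ascent on log-likelihood: lam[d] += eta*(E_model[h_d] - h_obs[d])."""
    lam = {d: 0.0 for d in DIFFS}
    for _ in range(steps):
        h_mod = model_hist(lam)
        for d in DIFFS:
            lam[d] += eta * (h_mod[d] - h_obs[d])
    return lam

# A peaked "observed" histogram: mostly flat signals, loosely mimicking the
# heavy concentration of natural-image derivative statistics near zero.
h_obs = {-2: 0.02, -1: 0.08, 0: 0.80, 1: 0.08, 2: 0.02}
lam = learn(h_obs)
```

After learning, `model_hist(lam)` matches `h_obs` closely, and the potential at difference 0 comes out lowest, i.e. the model assigns low energy to (prefers) flat regions, the toy analogue of potentials that enhance preferred image features.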

Cite

Text

Zhu and Mumford. "Learning Generic Prior Models for Visual Computation." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1997. doi:10.1109/CVPR.1997.609366

Markdown

[Zhu and Mumford. "Learning Generic Prior Models for Visual Computation." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1997.](https://mlanthology.org/cvpr/1997/zhu1997cvpr-learning/) doi:10.1109/CVPR.1997.609366

BibTeX

@inproceedings{zhu1997cvpr-learning,
  title     = {{Learning Generic Prior Models for Visual Computation}},
  author    = {Zhu, Song Chun and Mumford, David},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {1997},
  pages     = {463--469},
  doi       = {10.1109/CVPR.1997.609366},
  url       = {https://mlanthology.org/cvpr/1997/zhu1997cvpr-learning/}
}