Learning Low-Level Vision
Abstract
We present a learning-based method for low-level vision problems: estimating scenes from images. We generate a synthetic world of scenes and their corresponding rendered images. We model that world with a Markov network, learning the network parameters from the examples. Bayesian belief propagation allows us to efficiently find a local maximum of the posterior probability for the scene, given the image. We call this approach VISTA: Vision by Image/Scene TrAining. We apply VISTA to the "super-resolution" problem (estimating high-frequency details from a low-resolution image), showing good results. For the motion estimation problem, we show figure/ground discrimination, solution of the aperture problem, and filling-in, all arising from the same probabilistic machinery.
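As a minimal illustration of the inference step the abstract describes (and not the paper's actual network, potentials, or training data), the following sketch runs max-product belief propagation on a simple chain-structured Markov network to find a MAP state sequence; the `unary` and `pairwise` compatibility tables are hypothetical stand-ins for learned scene/image compatibilities.

```python
import numpy as np

def map_chain(unary, pairwise):
    """Max-product belief propagation (Viterbi) on a chain MRF.

    unary:    (T, S) array; unary[t, s] = compatibility of state s at node t
    pairwise: (S, S) array; pairwise[a, b] = compatibility of states a, b
              at adjacent nodes
    Returns the MAP state sequence as a list of state indices.
    """
    T, S = unary.shape
    msg = np.ones(S)            # message arriving at node 0
    back = []                   # backpointers for decoding
    for t in range(T - 1):
        # scores[a, b]: best score ending in state a at t, moving to b at t+1
        scores = (msg * unary[t])[:, None] * pairwise
        back.append(scores.argmax(axis=0))
        msg = scores.max(axis=0)
    # pick the best final state, then follow backpointers to the start
    states = [int((msg * unary[-1]).argmax())]
    for bp in reversed(back):
        states.append(int(bp[states[-1]]))
    return states[::-1]
```

On a loopy network like the paper's image-patch lattice, the same message updates are applied iteratively and yield an approximate (local) maximum of the posterior, as noted in the abstract.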
Cite
Text
Freeman and Pasztor. "Learning Low-Level Vision." IEEE/CVF International Conference on Computer Vision, 1999. doi:10.1109/ICCV.1999.790414
Markdown
[Freeman and Pasztor. "Learning Low-Level Vision." IEEE/CVF International Conference on Computer Vision, 1999.](https://mlanthology.org/iccv/1999/freeman1999iccv-learning/) doi:10.1109/ICCV.1999.790414
BibTeX
@inproceedings{freeman1999iccv-learning,
title = {{Learning Low-Level Vision}},
author = {Freeman, William T. and Pasztor, Egon C.},
booktitle = {IEEE/CVF International Conference on Computer Vision},
year = {1999},
pages = {1182--1189},
doi = {10.1109/ICCV.1999.790414},
url = {https://mlanthology.org/iccv/1999/freeman1999iccv-learning/}
}