Shape Recipes: Scene Representations That Refer to the Image

Abstract

The goal of low-level vision is to estimate an underlying scene, given an observed image. Real-world scenes (e.g., albedos or shapes) can be very complex, conventionally requiring high dimensional representations which are hard to estimate and store. We propose a low-dimensional representation, called a scene recipe, that relies on the image itself to describe the complex scene configurations. Shape recipes are an example: these are the regression coefficients that predict the bandpassed shape from image data. We describe the benefits of this representation, and show two uses illustrating their properties: (1) we improve stereo shape estimates by learning shape recipes at low resolution and applying them at full resolution; (2) shape recipes implicitly contain information about lighting and materials, and we use them for material segmentation.
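The abstract's core idea, learning per-band regression coefficients that map bandpassed image data to bandpassed shape at low resolution and then applying them at full resolution, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a difference-of-Gaussians bandpass as a stand-in for the paper's subband filters, a single global coefficient per band rather than local regression coefficients, and synthetic placeholder data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass(x, lo_sigma, hi_sigma):
    # Difference-of-Gaussians bandpass; a stand-in for the paper's subband decomposition.
    return gaussian_filter(x, lo_sigma) - gaussian_filter(x, hi_sigma)

def learn_recipe(image_band, shape_band):
    # Least-squares fit of a single gain k minimizing ||shape_band - k * image_band||^2.
    # The paper's recipes are local and per-subband; one global gain is the simplest case.
    num = np.sum(image_band * shape_band)
    den = np.sum(image_band ** 2) + 1e-12
    return num / den

# Hypothetical data: a coarse shape/image pair (e.g., from low-resolution stereo)
# and a full-resolution image to which the learned recipe is later applied.
rng = np.random.default_rng(0)
shape_lo = gaussian_filter(rng.standard_normal((64, 64)), 2.0)    # placeholder coarse shape
image_lo = 0.7 * shape_lo + 0.05 * rng.standard_normal((64, 64))  # placeholder rendered image
image_hi = rng.standard_normal((256, 256))                        # placeholder full-res image

# Learn the recipe from the low-resolution pair ...
k = learn_recipe(bandpass(image_lo, 4.0, 1.0), bandpass(shape_lo, 4.0, 1.0))

# ... and apply it to the corresponding band of the full-resolution image
# to predict the fine-scale shape band.
shape_band_estimate = k * bandpass(image_hi, 4.0, 1.0)
```

In the paper's setting this regression would be repeated for each subband and image region, with the coarse shape supplied by a stereo algorithm; the sketch above only conveys the learn-at-low-resolution, apply-at-high-resolution structure.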

Cite

Text

Freeman and Torralba. "Shape Recipes: Scene Representations That Refer to the Image." Neural Information Processing Systems, 2002.

Markdown

[Freeman and Torralba. "Shape Recipes: Scene Representations That Refer to the Image." Neural Information Processing Systems, 2002.](https://mlanthology.org/neurips/2002/freeman2002neurips-shape/)

BibTeX

@inproceedings{freeman2002neurips-shape,
  title     = {{Shape Recipes: Scene Representations That Refer to the Image}},
  author    = {Freeman, William T. and Torralba, Antonio},
  booktitle = {Neural Information Processing Systems},
  year      = {2002},
  pages     = {1359--1366},
  url       = {https://mlanthology.org/neurips/2002/freeman2002neurips-shape/}
}