Visual Analogies: A Framework for Defining Aspect Categorization
Abstract
Analogies are a common, simple kind of word problem (calf is to cow as x is to sheep), and we use the same structure to identify analogies between images. Let $\mathcal{I}[\mathcal{A},\theta]$ be an image of object $\mathcal{A}$ at view $\theta$. We show how to learn to choose an image $\mathcal{I}$ such that $\mathcal{I}[\mathcal{A},\phi]$ is to $\mathcal{I}[\mathcal{A},\theta]$ as $\mathcal{I}$ is to $\mathcal{I}[\mathcal{B},\theta]$. We introduce a framework for identifying an image of a familiar object at an unfamiliar view, and we extend the method to handle unfamiliar objects. In doing so, we identify pairs of objects that are good at predicting new views of one another. This yields an operational notion of aspectual equivalence: two objects are equivalent if each can predict the other's appearance well.
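The analogy $\mathcal{I}[\mathcal{A},\phi] : \mathcal{I}[\mathcal{A},\theta] :: \mathcal{I} : \mathcal{I}[\mathcal{B},\theta]$ can be illustrated with a minimal sketch. The snippet below is not the paper's learned scoring function; it assumes (hypothetically) a feature space in which a view change acts roughly additively, so the missing image's feature should lie near $f(\mathcal{B},\theta) + \big(f(\mathcal{A},\phi) - f(\mathcal{A},\theta)\big)$, and completes the analogy by nearest-neighbor retrieval over candidate features:

```python
import numpy as np

def complete_analogy(f_A_phi, f_A_theta, f_B_theta, candidates):
    """Pick the candidate feature that best completes the analogy
    I[A, phi] : I[A, theta] :: ? : I[B, theta].

    Assumes a feature space where the view change theta -> phi is
    approximately an additive offset (an illustrative assumption,
    not the paper's method). `candidates` is an (n, d) array of
    features for the n candidate images.
    """
    # Transfer the view-change vector from object A onto object B.
    target = f_B_theta + (f_A_phi - f_A_theta)
    # Retrieve the candidate closest to the predicted feature.
    dists = np.linalg.norm(candidates - target, axis=1)
    return int(np.argmin(dists))
```

Under this sketch, aspectual equivalence becomes measurable: objects $\mathcal{A}$ and $\mathcal{B}$ are equivalent to the extent that the transferred view-change vector retrieves the correct new view in both directions.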
Cite

Text

Tsatsoulis et al. "Visual Analogies: A Framework for Defining Aspect Categorization." European Conference on Computer Vision Workshops, 2016. doi:10.1007/978-3-319-49409-8_47

Markdown

[Tsatsoulis et al. "Visual Analogies: A Framework for Defining Aspect Categorization." European Conference on Computer Vision Workshops, 2016.](https://mlanthology.org/eccvw/2016/tsatsoulis2016eccvw-visual/) doi:10.1007/978-3-319-49409-8_47

BibTeX
@inproceedings{tsatsoulis2016eccvw-visual,
title = {{Visual Analogies: A Framework for Defining Aspect Categorization}},
author = {Tsatsoulis, P. Daphne and Plummer, Bryan A. and Forsyth, David A.},
booktitle = {European Conference on Computer Vision Workshops},
year = {2016},
  pages = {540--547},
doi = {10.1007/978-3-319-49409-8_47},
url = {https://mlanthology.org/eccvw/2016/tsatsoulis2016eccvw-visual/}
}