Dealing with Small Data and Training Blind Spots in the Manhattan World
Abstract
Leveraging the Manhattan assumption, we generate metrically rectified novel views from a single image, even in non-box scenarios. These novel views enable already-trained classifiers to handle views missing from the training data (blind spots) without additional training. We demonstrate this on end-to-end scene-text spotting under perspective. Additionally, using our fronto-parallel views, we discover, without supervision, invariant mid-level patches from only a few widely separated training examples (the small-data domain). These invariant patches outperform various baselines on a small-data image-retrieval challenge.
Cite
Text
Hussain et al. "Dealing with Small Data and Training Blind Spots in the Manhattan World." IEEE/CVF Winter Conference on Applications of Computer Vision, 2016. doi:10.1109/WACV.2016.7477649
Markdown
[Hussain et al. "Dealing with Small Data and Training Blind Spots in the Manhattan World." IEEE/CVF Winter Conference on Applications of Computer Vision, 2016.](https://mlanthology.org/wacv/2016/hussain2016wacv-dealing/) doi:10.1109/WACV.2016.7477649
BibTeX
@inproceedings{hussain2016wacv-dealing,
title = {{Dealing with Small Data and Training Blind Spots in the Manhattan World}},
author = {Hussain, Wajahat and Civera, Javier and Montano, Luis and Hebert, Martial},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
year = {2016},
pages = {1-9},
doi = {10.1109/WACV.2016.7477649},
url = {https://mlanthology.org/wacv/2016/hussain2016wacv-dealing/}
}