Cosine Meets SoftMax: A Tough-to-Beat Baseline for Visual Grounding
Abstract
In this paper, we present a simple baseline for visual grounding for autonomous driving that outperforms state-of-the-art methods while retaining minimal design choices. Our framework minimizes a cross-entropy loss over the cosine distances between multiple image ROI features and a text embedding (representing the given sentence/phrase). We use pre-trained networks to obtain the initial embeddings and learn a transformation layer on top of the text embedding. We perform experiments on the Talk2Car dataset and achieve 68.7% AP50 accuracy, improving upon the previous state of the art by 8.6%. By showing the promise of simpler alternatives, our investigation calls for a reconsideration of approaches that employ sophisticated attention mechanisms, multi-stage reasoning, or complex metric-learning loss functions.
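The objective described in the abstract can be sketched in a few lines: cosine similarities between each candidate ROI feature and the projected text embedding serve as logits for a softmax cross-entropy over the candidate regions. The PyTorch sketch below is illustrative only, assuming pre-extracted ROI features and a frozen sentence embedding; the names `TextProjection` and `cosine_softmax_loss`, the feature dimensions, and the temperature `tau` are hypothetical and not taken from the paper.

```python
# Minimal sketch of a cosine-similarity + softmax cross-entropy objective.
# All names and hyperparameters here are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextProjection(nn.Module):
    """Learned transformation applied on top of the frozen text embedding."""
    def __init__(self, text_dim: int, roi_dim: int):
        super().__init__()
        self.proj = nn.Linear(text_dim, roi_dim)

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        return self.proj(text_emb)

def cosine_softmax_loss(roi_feats: torch.Tensor,  # (N, d) candidate ROI features
                        text_emb: torch.Tensor,   # (d,) projected text embedding
                        target: torch.Tensor,     # () index of the ground-truth ROI
                        tau: float = 0.1) -> torch.Tensor:
    # Cosine similarity between every ROI feature and the text embedding.
    sims = F.cosine_similarity(roi_feats, text_emb.unsqueeze(0), dim=1)  # (N,)
    # Softmax cross-entropy over the N candidate regions; tau is an assumed
    # temperature, since raw cosine logits lie in [-1, 1].
    return F.cross_entropy(sims.unsqueeze(0) / tau, target.unsqueeze(0))

# Usage on dummy data: 32 candidate ROIs with 2048-d features, 768-d text.
proj = TextProjection(text_dim=768, roi_dim=2048)
rois = torch.randn(32, 2048)
text = proj(torch.randn(768))
loss = cosine_softmax_loss(rois, text, target=torch.tensor(5))
loss.backward()
```

Only the projection layer carries gradients here, matching the abstract's setup of frozen pre-trained embeddings with a single learned transformation on the text side.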
Cite
Text
Rufus et al. "Cosine Meets SoftMax: A Tough-to-Beat Baseline for Visual Grounding." European Conference on Computer Vision Workshops, 2020. doi:10.1007/978-3-030-66096-3_4
Markdown
[Rufus et al. "Cosine Meets SoftMax: A Tough-to-Beat Baseline for Visual Grounding." European Conference on Computer Vision Workshops, 2020.](https://mlanthology.org/eccvw/2020/rufus2020eccvw-cosine/) doi:10.1007/978-3-030-66096-3_4
BibTeX
@inproceedings{rufus2020eccvw-cosine,
title = {{Cosine Meets SoftMax: A Tough-to-Beat Baseline for Visual Grounding}},
author = {Rufus, Nivedita and Nair, Unni Krishnan R. and Krishna, K. Madhava and Gandhi, Vineet},
booktitle = {European Conference on Computer Vision Workshops},
year = {2020},
pages = {39--50},
doi = {10.1007/978-3-030-66096-3_4},
url = {https://mlanthology.org/eccvw/2020/rufus2020eccvw-cosine/}
}