Dive Deeper into Box for Object Detection
Abstract
Anchor-free methods have defined the new frontier of state-of-the-art research in object detection, in which accurate bounding box estimation is key to their success. However, even the bounding box with the highest confidence score can still be far from perfect at localization. This motivates us to investigate a box reorganization method (DDBNet), which dives deeper into the box to strive for more accurate localization. Specifically, boxes are manipulated via a surgical operation named D&R, which denotes box decomposition and recombination toward tightening instances more precisely. It should be noted that this D&R operation acts at the level of the IoU loss. Experimental results show that our method is effective and leads to state-of-the-art performance for one-stage object detection.
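Since the abstract states that the D&R operation is applied at the IoU loss, a minimal sketch of the underlying IoU loss for axis-aligned boxes may help situate the method. This is only the standard baseline loss that DDBNet builds on, not the paper's D&R implementation; the box format `(x1, y1, x2, y2)` and function names here are illustrative assumptions.

```python
import math

def iou(box_a, box_b):
    # Boxes in (x1, y1, x2, y2) corner format (an assumed convention).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def iou_loss(pred, target, eps=1e-7):
    # Standard -ln(IoU) loss; the paper's D&R decomposes and recombines
    # box boundaries before a loss of this kind is computed (sketch only).
    return -math.log(max(iou(pred, target), eps))
```

For a perfect match the loss is zero (IoU = 1), and it grows as the predicted box drifts from the ground truth, which is the quantity D&R is designed to tighten.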
Cite
Text
Chen et al. "Dive Deeper into Box for Object Detection." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58542-6_25
Markdown
[Chen et al. "Dive Deeper into Box for Object Detection." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/chen2020eccv-dive/) doi:10.1007/978-3-030-58542-6_25
BibTeX
@inproceedings{chen2020eccv-dive,
title = {{Dive Deeper into Box for Object Detection}},
author = {Chen, Ran and Liu, Yong and Zhang, Mengdan and Liu, Shu and Yu, Bei and Tai, Yu-Wing},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58542-6_25},
url = {https://mlanthology.org/eccv/2020/chen2020eccv-dive/}
}