SwiftNet: Real-Time Video Object Segmentation
Abstract
In this work we present SwiftNet for real-time semi-supervised video object segmentation (one-shot VOS), which reports 77.8% J&F at 70 FPS on the DAVIS 2017 validation set, leading all existing solutions in overall accuracy and speed. We achieve this by carefully compressing spatiotemporal redundancy in matching-based VOS via Pixel-Adaptive Memory (PAM). Temporally, PAM adaptively triggers memory updates only on frames where objects display noteworthy inter-frame variation. Spatially, PAM performs memory update and matching only on dynamic pixels while ignoring static ones, significantly reducing redundant computation wasted on segmentation-irrelevant pixels. To promote efficient reference encoding, SwiftNet also introduces a light-aggregation encoder that deploys reversed sub-pixel operations. We hope SwiftNet can serve as a strong and efficient baseline for real-time VOS and facilitate its application in mobile vision. The source code of SwiftNet can be found at https://github.com/haochenheheda/SwiftNet.
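The gating idea behind PAM described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, thresholds, and the use of a per-pixel L2 feature difference are all assumptions made here for clarity; the actual PAM operates on learned encoder features with its own variation criteria.

```python
import numpy as np

def dynamic_pixel_mask(prev_feat, cur_feat, pixel_thresh=0.5):
    """Mark pixels whose features changed notably between frames.

    prev_feat, cur_feat: (H, W, C) feature maps (hypothetical stand-ins
    for the encoder features used in the paper).
    """
    # Per-pixel L2 magnitude of the inter-frame feature difference.
    diff = np.linalg.norm(cur_feat - prev_feat, axis=-1)
    return diff > pixel_thresh

def maybe_update_memory(memory, prev_feat, cur_feat,
                        pixel_thresh=0.5, frame_thresh=0.1):
    """Temporal + spatial gating sketch in the spirit of PAM.

    Temporally: skip the update entirely when too few pixels changed.
    Spatially: when an update fires, write only the dynamic pixels.
    Returns (new_memory, updated_flag).
    """
    mask = dynamic_pixel_mask(prev_feat, cur_feat, pixel_thresh)
    if mask.mean() < frame_thresh:
        # Frame is mostly static: no memory update is triggered.
        return memory, False
    updated = memory.copy()
    updated[mask] = cur_feat[mask]  # update dynamic pixels only
    return updated, True
```

Under this sketch, a nearly static frame is skipped outright, and a frame with sufficient motion rewrites only the changed pixels, which is where the computational savings over updating and matching the full frame would come from.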
Cite
Text
Wang et al. "SwiftNet: Real-Time Video Object Segmentation." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00135
Markdown
[Wang et al. "SwiftNet: Real-Time Video Object Segmentation." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/wang2021cvpr-swiftnet/) doi:10.1109/CVPR46437.2021.00135
BibTeX
@inproceedings{wang2021cvpr-swiftnet,
title = {{SwiftNet: Real-Time Video Object Segmentation}},
author = {Wang, Haochen and Jiang, Xiaolong and Ren, Haibing and Hu, Yao and Bai, Song},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2021},
pages = {1296-1305},
doi = {10.1109/CVPR46437.2021.00135},
url = {https://mlanthology.org/cvpr/2021/wang2021cvpr-swiftnet/}
}