Video Text Detection and Recognition: Dataset and Benchmark

Abstract

This paper focuses on the problem of text detection and recognition in videos. Even though text detection and recognition in images have seen much progress in recent years, relatively little work has been done to extend these solutions to the video domain. In this work, we extend an existing end-to-end solution for text recognition in natural images to video. We explore a variety of methods for training local character models and investigate ways to capitalize on the temporal redundancy of text in video. We present detection performance using the Video Analysis and Content Extraction (VACE) benchmarking framework on the ICDAR 2013 Robust Reading Challenge 3 video dataset and on a new video text dataset. We also propose a new performance metric based on precision-recall curves to measure the performance of text recognition in videos. Using this metric, we provide early video text recognition results on the above-mentioned datasets.

Cite

Text

Nguyen et al. "Video Text Detection and Recognition: Dataset and Benchmark." IEEE/CVF Winter Conference on Applications of Computer Vision, 2014. doi:10.1109/WACV.2014.6836024

Markdown

[Nguyen et al. "Video Text Detection and Recognition: Dataset and Benchmark." IEEE/CVF Winter Conference on Applications of Computer Vision, 2014.](https://mlanthology.org/wacv/2014/nguyen2014wacv-video/) doi:10.1109/WACV.2014.6836024

BibTeX

@inproceedings{nguyen2014wacv-video,
  title     = {{Video Text Detection and Recognition: Dataset and Benchmark}},
  author    = {Nguyen, Phuc Xuan and Wang, Kai and Belongie, Serge J.},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
  year      = {2014},
  pages     = {776--783},
  doi       = {10.1109/WACV.2014.6836024},
  url       = {https://mlanthology.org/wacv/2014/nguyen2014wacv-video/}
}