Temporal Convolutional Networks: A Unified Approach to Action Segmentation
Abstract
The dominant paradigm for video-based action segmentation is composed of two steps: first, compute low-level features for each frame using Dense Trajectories or a Convolutional Neural Network to encode local spatiotemporal information, and second, input these features into a classifier such as a Recurrent Neural Network (RNN) that captures high-level temporal relationships. While often effective, this decoupling requires specifying two separate models, each with their own complexities, and prevents capturing more nuanced long-range spatiotemporal relationships. We propose a unified approach, as demonstrated by our Temporal Convolutional Network (TCN), that hierarchically captures relationships at low-, intermediate-, and high-level time-scales. Our model achieves superior or competitive performance using video or sensor data on three public action segmentation datasets and can be trained in a fraction of the time it takes to train an RNN.
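The abstract describes a single hierarchical model built from temporal convolutions over per-frame features. As a rough, hypothetical sketch of that encoder-decoder idea (not the authors' released implementation), a PyTorch-style model could look like the following; the `SimpleTCN` name, layer widths, and kernel size are illustrative assumptions, and details such as the paper's normalized activations are omitted.

```python
import torch
import torch.nn as nn

class SimpleTCN(nn.Module):
    """Sketch of an encoder-decoder temporal convolutional network:
    conv + pooling stages capture progressively longer time-scales,
    and upsampling + conv stages restore per-frame resolution."""
    def __init__(self, in_dim, n_classes, hidden=(64, 96), kernel_size=25):
        super().__init__()
        pad = kernel_size // 2
        # Encoder: each stage halves the temporal resolution.
        self.enc1 = nn.Sequential(
            nn.Conv1d(in_dim, hidden[0], kernel_size, padding=pad),
            nn.ReLU(), nn.MaxPool1d(2))
        self.enc2 = nn.Sequential(
            nn.Conv1d(hidden[0], hidden[1], kernel_size, padding=pad),
            nn.ReLU(), nn.MaxPool1d(2))
        # Decoder: upsample back to the original frame rate.
        self.dec1 = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv1d(hidden[1], hidden[0], kernel_size, padding=pad),
            nn.ReLU())
        self.dec2 = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv1d(hidden[0], hidden[0], kernel_size, padding=pad),
            nn.ReLU())
        # 1x1 convolution produces a class score for every frame.
        self.classifier = nn.Conv1d(hidden[0], n_classes, 1)

    def forward(self, x):              # x: (batch, features, time)
        z = self.enc2(self.enc1(x))
        z = self.dec2(self.dec1(z))
        return self.classifier(z)      # (batch, n_classes, time)

# Example usage (hypothetical shapes): 128-dim frame features,
# 400 frames, 10 action classes.
# model = SimpleTCN(in_dim=128, n_classes=10)
# logits = model(torch.randn(2, 128, 400))   # -> (2, 10, 400)
```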
Cite
Text
Lea et al. "Temporal Convolutional Networks: A Unified Approach to Action Segmentation." European Conference on Computer Vision Workshops, 2016. doi:10.1007/978-3-319-49409-8_7
Markdown
[Lea et al. "Temporal Convolutional Networks: A Unified Approach to Action Segmentation." European Conference on Computer Vision Workshops, 2016.](https://mlanthology.org/eccvw/2016/lea2016eccvw-temporal/) doi:10.1007/978-3-319-49409-8_7
BibTeX
@inproceedings{lea2016eccvw-temporal,
title = {{Temporal Convolutional Networks: A Unified Approach to Action Segmentation}},
author = {Lea, Colin and Vidal, René and Reiter, Austin and Hager, Gregory D.},
booktitle = {European Conference on Computer Vision Workshops},
year = {2016},
pages = {47--54},
doi = {10.1007/978-3-319-49409-8_7},
url = {https://mlanthology.org/eccvw/2016/lea2016eccvw-temporal/}
}