MASTAF: A Model-Agnostic Spatio-Temporal Attention Fusion Network for Few-Shot Video Classification
Abstract
We propose MASTAF, a Model-Agnostic Spatio-Temporal Attention Fusion network for few-shot video classification. MASTAF takes as input a general video spatio-temporal representation, e.g., from a 2D CNN, 3D CNN, or Video Transformer. Then, to make the most of such representations, we use self- and cross-attention models to highlight the critical spatio-temporal regions, increasing inter-class variation and decreasing intra-class variation. Last, MASTAF applies a lightweight fusion network and a nearest neighbor classifier to classify each query video. We demonstrate that MASTAF improves the state-of-the-art performance on three few-shot video classification benchmarks (UCF101, HMDB51, and Something-Something-V2), e.g., achieving 91.6%, 69.5%, and 60.7% for five-way one-shot video classification, respectively.
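To make the abstract's pipeline concrete, below is a minimal PyTorch sketch of the described flow: backbone spatio-temporal tokens, self- and cross-attention, a lightweight fusion step, and nearest-neighbor classification against class prototypes. This is not the authors' implementation; all module names, dimensions, and the cosine-similarity choice are illustrative assumptions.

# Minimal sketch (assumed names and shapes, not the authors' code) of the
# pipeline described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        # Self-attention highlights critical spatio-temporal regions within a video.
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Cross-attention relates query-video tokens to support-video tokens.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Lightweight fusion collapsing the attended tokens into one embedding.
        self.fuse = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim))

    def forward(self, query_tokens, support_tokens):
        # query_tokens/support_tokens: (B, T, dim) spatio-temporal tokens from
        # any backbone (2D CNN, 3D CNN, or Video Transformer).
        q, _ = self.self_attn(query_tokens, query_tokens, query_tokens)
        q, _ = self.cross_attn(q, support_tokens, support_tokens)
        return self.fuse(q.mean(dim=1))  # (B, dim) fused video embedding

def nearest_neighbor_classify(query_emb, prototypes):
    # Assign each query to the most similar class prototype (N, dim).
    sims = F.cosine_similarity(query_emb.unsqueeze(1), prototypes.unsqueeze(0), dim=-1)
    return sims.argmax(dim=1)

# Toy 5-way 1-shot episode; random tensors stand in for backbone features.
dim, n_way = 256, 5
fusion = AttentionFusion(dim)
support = torch.randn(n_way, 8, dim)   # one support video per class, 8 tokens each
queries = torch.randn(3, 8, dim)       # three query videos
prototypes = torch.stack([fusion(s.unsqueeze(0), s.unsqueeze(0)).squeeze(0)
                          for s in support])
query_emb = fusion(queries, support.reshape(1, -1, dim).expand(3, -1, -1))
print(nearest_neighbor_classify(query_emb, prototypes))  # 3 predicted class indices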
Cite
Text
Liu et al. "MASTAF: A Model-Agnostic Spatio-Temporal Attention Fusion Network for Few-Shot Video Classification." Winter Conference on Applications of Computer Vision, 2023.
Markdown
[Liu et al. "MASTAF: A Model-Agnostic Spatio-Temporal Attention Fusion Network for Few-Shot Video Classification." Winter Conference on Applications of Computer Vision, 2023.](https://mlanthology.org/wacv/2023/liu2023wacv-mastaf/)
BibTeX
@inproceedings{liu2023wacv-mastaf,
title = {{MASTAF: A Model-Agnostic Spatio-Temporal Attention Fusion Network for Few-Shot Video Classification}},
author = {Liu, Xin and Zhang, Huanle and Pirsiavash, Hamed and Liu, Xin},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2023},
pages = {2508--2517},
url = {https://mlanthology.org/wacv/2023/liu2023wacv-mastaf/}
}