Exploring Fisher Vector and Deep Networks for Action Spotting
Abstract
This paper describes our method and submission for track 2 of the ChaLearn Looking at People (LAP) challenge 2015. Our approach utilizes Fisher vectors computed on iDT features for action spotting, and improves their performance in two ways: (i) we incorporate interaction labels into the training process; (ii) by visualizing our results on the validation set, we found that our previous method [10] is weak at detecting action class 2, and improved it by introducing multiple thresholds. Moreover, we exploit deep neural networks to extract both appearance and motion representations for this task. However, our current deep network fails to yield better performance than our Fisher vector based approach and may need further exploration. For this reason, we submitted the results obtained by our Fisher vector approach, which achieves a Jaccard index of 0.5385 and ranks first in track 2.
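The Fisher vector encoding mentioned in the abstract aggregates local descriptors (here, iDT features) into a fixed-length representation via the gradients of a Gaussian mixture model's log-likelihood. The sketch below is not the authors' code; it is a minimal NumPy illustration of the standard improved Fisher vector (mean and variance gradients with power and L2 normalisation), assuming a diagonal-covariance GMM has already been fitted and its parameters are supplied as arrays.

```python
import numpy as np

def fisher_vector(X, weights, means, sigmas):
    """Encode local descriptors X (N, D) against a K-component
    diagonal-covariance GMM given by weights (K,), means (K, D),
    and standard deviations sigmas (K, D).

    Returns a 2*K*D vector (gradients w.r.t. means and standard
    deviations), power- and L2-normalised.
    """
    N, _ = X.shape
    # Posterior responsibilities gamma (N, K), computed in log space
    # for numerical stability.
    diff = X[:, None, :] - means[None, :, :]                  # (N, K, D)
    log_p = (-0.5 * np.sum((diff / sigmas) ** 2, axis=2)
             - np.sum(np.log(sigmas), axis=1)
             + np.log(weights))
    log_p -= log_p.max(axis=1, keepdims=True)
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)

    u = diff / sigmas                                         # standardised diffs
    g_mu = np.einsum('nk,nkd->kd', gamma, u) / (N * np.sqrt(weights)[:, None])
    g_sig = (np.einsum('nk,nkd->kd', gamma, u ** 2 - 1)
             / (N * np.sqrt(2 * weights)[:, None]))

    fv = np.concatenate([g_mu.ravel(), g_sig.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                    # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)                  # L2 normalisation
```

In practice the encoded vectors would then be fed to a linear classifier per action class, with per-class detection thresholds as the abstract describes; those details are specific to the paper and not reproduced here.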
Cite
Text
Wang et al. "Exploring Fisher Vector and Deep Networks for Action Spotting." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2015. doi:10.1109/CVPRW.2015.7301330
Markdown
[Wang et al. "Exploring Fisher Vector and Deep Networks for Action Spotting." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2015.](https://mlanthology.org/cvprw/2015/wang2015cvprw-exploring/) doi:10.1109/CVPRW.2015.7301330
BibTeX
@inproceedings{wang2015cvprw-exploring,
title = {{Exploring Fisher Vector and Deep Networks for Action Spotting}},
author = {Wang, Zhe and Wang, Limin and Du, Wenbin and Qiao, Yu},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2015},
pages = {10-14},
doi = {10.1109/CVPRW.2015.7301330},
url = {https://mlanthology.org/cvprw/2015/wang2015cvprw-exploring/}
}