Back to MLP: A Simple Baseline for Human Motion Prediction
Abstract
This paper tackles the problem of human motion prediction, i.e., forecasting future body poses from historically observed sequences. State-of-the-art approaches provide good results; however, they rely on deep learning architectures of arbitrary complexity, such as Recurrent Neural Networks (RNN), Transformers or Graph Convolutional Networks (GCN), typically requiring multiple training stages and more than 2 million parameters. In this paper, we show that, when combined with a series of standard practices, such as applying the Discrete Cosine Transform (DCT), predicting residual displacements of joints and optimizing velocity as an auxiliary loss, a light-weight network based on multi-layer perceptrons (MLPs) with only 0.14 million parameters can surpass the state-of-the-art performance. An exhaustive evaluation on the Human3.6M, AMASS, and 3DPW datasets shows that our method, named siMLPe, consistently outperforms all other approaches. We hope that our simple method can serve as a strong baseline for the community and encourage re-thinking of the human motion prediction problem. The code is publicly available at https://github.com/dulucas/siMLPe.
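The abstract names the main ingredients of the method: a DCT along the temporal axis, fully-connected layers that mix frames, residual prediction with respect to the last observed pose, and an auxiliary velocity loss. The PyTorch sketch below illustrates these ideas only; the sequence length, layer sizes, pose dimension, and loss weight are illustrative assumptions and do not reproduce the exact siMLPe architecture or its 0.14-million-parameter configuration (see the linked repository for the authors' implementation).

import math

import torch
import torch.nn as nn


def dct_matrix(n: int) -> torch.Tensor:
    # Orthonormal DCT-II basis of size n x n; its transpose is the inverse transform.
    k = torch.arange(n, dtype=torch.float32).unsqueeze(1)   # frequency index (rows)
    t = torch.arange(n, dtype=torch.float32)                 # time index (columns)
    m = math.sqrt(2.0 / n) * torch.cos(math.pi * (2 * t + 1) * k / (2 * n))
    m[0] = m[0] / math.sqrt(2.0)
    return m


class SimpleMotionMLP(nn.Module):
    # Minimal MLP baseline sketch: DCT along time, fully-connected layers mixing frames,
    # inverse DCT, and a residual added to the last observed pose.
    def __init__(self, num_frames: int = 50, num_layers: int = 4):
        super().__init__()
        dct = dct_matrix(num_frames)
        self.register_buffer("dct", dct)        # (T, T) forward transform
        self.register_buffer("idct", dct.t())   # (T, T) inverse transform
        blocks = []
        for _ in range(num_layers):
            blocks += [nn.Linear(num_frames, num_frames), nn.GELU()]
        self.frame_mixer = nn.Sequential(*blocks)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, D) observed poses, D = flattened joint coordinates.
        last = x[:, -1:, :]                                        # base pose for the residual
        u = torch.einsum("ft,btd->bfd", self.dct, x)               # DCT over the temporal axis
        u = self.frame_mixer(u.transpose(1, 2)).transpose(1, 2)    # mix frames per coordinate
        y = torch.einsum("tf,bfd->btd", self.idct, u)              # back to the time domain
        return last + y                                            # predicted future poses


def motion_loss(pred: torch.Tensor, target: torch.Tensor, w_vel: float = 1.0) -> torch.Tensor:
    # L2 loss on poses plus an auxiliary L2 loss on frame-to-frame velocities
    # (the weight w_vel is a placeholder, not the paper's value).
    pose_term = (pred - target).norm(dim=-1).mean()
    vel_term = ((pred[:, 1:] - pred[:, :-1]) - (target[:, 1:] - target[:, :-1])).norm(dim=-1).mean()
    return pose_term + w_vel * vel_term


model = SimpleMotionMLP(num_frames=50)
observed = torch.randn(8, 50, 66)   # 8 sequences of 50 frames, 22 joints x 3D = 66 values
future = model(observed)            # (8, 50, 66) residual-based prediction
loss = motion_loss(future, torch.randn(8, 50, 66))

Mixing frames with plain fully-connected layers, rather than recurrence, attention, or graph convolutions, is what keeps the parameter count small; the sketch follows that spirit without matching the paper's exact configuration.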
Cite
Text
Guo et al. "Back to MLP: A Simple Baseline for Human Motion Prediction." Winter Conference on Applications of Computer Vision, 2023.

Markdown
[Guo et al. "Back to MLP: A Simple Baseline for Human Motion Prediction." Winter Conference on Applications of Computer Vision, 2023.](https://mlanthology.org/wacv/2023/guo2023wacv-back/)

BibTeX
@inproceedings{guo2023wacv-back,
title = {{Back to MLP: A Simple Baseline for Human Motion Prediction}},
author = {Guo, Wen and Du, Yuming and Shen, Xi and Lepetit, Vincent and Alameda-Pineda, Xavier and Moreno-Noguer, Francesc},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2023},
pages = {4809--4819},
url = {https://mlanthology.org/wacv/2023/guo2023wacv-back/}
}