Human Motion Instruction Tuning

Abstract

This paper presents LLaMo (Large Language and Human Motion Assistant), a multimodal framework for human motion instruction tuning. In contrast to conventional instruction-tuning approaches that convert non-linguistic inputs, such as video or motion sequences, into language tokens, LLaMo retains motion in its native form throughout instruction tuning. This preserves motion-specific details that are often lost during tokenization, improving the model's ability to interpret complex human behaviors. By processing both video and motion data alongside textual inputs, LLaMo enables flexible, human-centric analysis. Experimental evaluations across high-complexity domains, including human behaviors and professional activities, indicate that LLaMo effectively captures domain-specific knowledge, enhancing comprehension and prediction in motion-intensive scenarios. We hope LLaMo offers a foundation for future multimodal AI systems with broad applications, from sports analytics to behavioral prediction.
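The core design point, keeping motion continuous rather than quantizing it into discrete language-style tokens, can be illustrated with a minimal sketch. The following is not the paper's implementation: the MLP projector, the HumanML3D-style 263-dimensional motion features, the 4096-dimensional LLM embedding space, and the prefix-concatenation fusion are all illustrative assumptions.

import torch
import torch.nn as nn

class MotionProjector(nn.Module):
    """Map raw per-frame motion features into the language model's
    embedding space, keeping the motion signal continuous instead of
    quantizing it into a discrete codebook of motion tokens."""

    def __init__(self, motion_dim: int = 263, llm_dim: int = 4096):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(motion_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, motion: torch.Tensor) -> torch.Tensor:
        # motion: (batch, frames, motion_dim) continuous features
        # returns: (batch, frames, llm_dim) "soft" motion embeddings
        return self.encoder(motion)


# Hypothetical usage: prepend projected motion embeddings to the
# embedded text prompt before the LLM's transformer layers.
if __name__ == "__main__":
    projector = MotionProjector()
    motion = torch.randn(2, 60, 263)        # 2 clips, 60 frames each
    motion_embeds = projector(motion)       # (2, 60, 4096)
    text_embeds = torch.randn(2, 32, 4096)  # stand-in for embedded prompt
    llm_inputs = torch.cat([motion_embeds, text_embeds], dim=1)
    print(llm_inputs.shape)                 # torch.Size([2, 92, 4096])

Under these assumptions, each frame contributes a full-precision embedding to the language model, which is the contrast the abstract draws against pipelines that first discretize motion into a fixed token vocabulary.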

Cite

Text

Li et al. "Human Motion Instruction Tuning." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.01638

Markdown

[Li et al. "Human Motion Instruction Tuning." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/li2025cvpr-human/) doi:10.1109/CVPR52734.2025.01638

BibTeX

@inproceedings{li2025cvpr-human,
  title     = {{Human Motion Instruction Tuning}},
  author    = {Li, Lei and Jia, Sen and Wang, Jianhao and Jiang, Zhongyu and Zhou, Feng and Dai, Ju and Zhang, Tianfang and Wu, Zongkai and Hwang, Jenq-Neng},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {17582--17591},
  doi       = {10.1109/CVPR52734.2025.01638},
  url       = {https://mlanthology.org/cvpr/2025/li2025cvpr-human/}
}