Advancing Semantic Future Prediction Through Multimodal Visual Sequence Transformers
Abstract
Semantic future prediction is important for autonomous systems navigating dynamic environments. This paper introduces FUTURIST, a method for multimodal future semantic prediction that uses a unified and efficient visual sequence transformer architecture. Our approach incorporates a multimodal masked visual modeling objective and a novel masking mechanism designed for multimodal training. This allows the model to effectively integrate visible information from various modalities, improving prediction accuracy. Additionally, we propose a VAE-free hierarchical tokenization process, which reduces computational complexity, streamlines the training pipeline, and enables end-to-end training with high-resolution, multimodal inputs. We validate FUTURIST on the Cityscapes dataset, demonstrating state-of-the-art performance in future semantic segmentation for both short- and mid-term forecasting. We provide the implementation code and model weights at https://github.com/Sta8is/FUTURIST.
Cite
Text
Karypidis et al. "Advancing Semantic Future Prediction Through Multimodal Visual Sequence Transformers." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.00359

Markdown
[Karypidis et al. "Advancing Semantic Future Prediction Through Multimodal Visual Sequence Transformers." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/karypidis2025cvpr-advancing/) doi:10.1109/CVPR52734.2025.00359

BibTeX
@inproceedings{karypidis2025cvpr-advancing,
title = {{Advancing Semantic Future Prediction Through Multimodal Visual Sequence Transformers}},
author = {Karypidis, Efstathios and Kakogeorgiou, Ioannis and Gidaris, Spyros and Komodakis, Nikos},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2025},
pages = {3793--3803},
doi = {10.1109/CVPR52734.2025.00359},
url = {https://mlanthology.org/cvpr/2025/karypidis2025cvpr-advancing/}
}