Toward Interpretable Time Series Modeling: A Kernel Representation Perspective
Abstract
Time series modeling is essential in finance, healthcare, and environmental science, yet nonlinear patterns, noise, and concept drift pose persistent challenges. Although deep learning models, such as Transformer-based and recent pre-trained models, have achieved strong performance across various time series tasks, they often lack interpretability, especially for co-evolving time series. This work introduces a kernel representation learning (KRL) perspective, rethinking time series modeling through kernel-induced self-representation to capture temporal structures and dynamic transitions. Additionally, we establish theoretical connections between KRL and advanced deep-network models, demonstrating how kernel methods provide a principled approach to capturing complex time series behaviors.
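The abstract's central idea, kernel-induced self-representation, can be illustrated with a minimal sketch: embed time series segments via a kernel, then express each segment as a combination of the others. The sketch below is not the paper's actual method; it uses a standard ridge-penalized self-representation with an RBF kernel, whose closed-form solution in kernel space is C = (K + λI)⁻¹K. The window length, stride, and hyperparameters are illustrative choices.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Pairwise squared distances between rows of X, then RBF map
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def kernel_self_representation(X, gamma=0.5, lam=0.1):
    """Solve min_C ||phi(X) - phi(X) C||_F^2 + lam ||C||_F^2.

    The closed form never needs phi explicitly: C = (K + lam*I)^{-1} K,
    where K is the Gram matrix of the kernel.
    """
    K = rbf_kernel(X, gamma)
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), K)

# Toy co-evolving series: two coupled noisy sinusoids, cut into
# overlapping windows so each row is one temporal segment.
t = np.linspace(0, 4 * np.pi, 200)
series = np.stack([np.sin(t), np.cos(t)], axis=1)
windows = np.array([series[i:i + 20].ravel() for i in range(0, 180, 10)])

C = kernel_self_representation(windows)
# |C[i, j]| measures how strongly segment j explains segment i -- an
# interpretable affinity matrix; block structure or abrupt changes in
# its rows can signal regime transitions.
```

Because the representation lives entirely in the Gram matrix, the same sketch works for any positive-definite kernel, which is what makes the kernel view a convenient lens for relating self-representation to deeper models.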
Cite
Text
Xu, Kunpeng. "Toward Interpretable Time Series Modeling: A Kernel Representation Perspective." International Joint Conference on Artificial Intelligence, 2025, pp. 10981-10982. doi:10.24963/IJCAI.2025/1245
BibTeX
@inproceedings{xu2025ijcai-interpretable,
title = {{Toward Interpretable Time Series Modeling: A Kernel Representation Perspective}},
author = {Xu, Kunpeng},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2025},
pages = {10981-10982},
doi = {10.24963/IJCAI.2025/1245},
url = {https://mlanthology.org/ijcai/2025/xu2025ijcai-interpretable/}
}