Learning with Expected Signatures: Theory and Applications
Abstract
The expected signature maps a collection of data streams to a lower-dimensional representation, with a remarkable property: the resulting feature tensor can fully characterize the data-generating distribution. This "model-free" embedding has been successfully leveraged to build multiple domain-agnostic machine learning (ML) algorithms for time series and sequential data. The convergence results proved in this paper bridge the gap between the expected signature's empirical discrete-time estimator and its theoretical continuous-time value, allowing for a more complete probabilistic interpretation of expected signature-based ML methods. Moreover, when the data generating process is a martingale, we suggest a simple modification of the expected signature estimator with significantly lower mean squared error and empirically demonstrate how it can be effectively applied to improve predictive performance.
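The empirical estimator discussed in the abstract averages truncated path signatures over the observed streams. The following is a minimal sketch of that idea, assuming piecewise-linear sample paths and truncation at level 2; the function names are illustrative and not taken from the paper.

```python
# Minimal sketch of the empirical expected-signature estimator: average the
# truncated path signature over a collection of sampled streams.
# Assumptions: piecewise-linear paths, truncation at signature level 2.
import numpy as np

def signature_level2(path):
    """Level-1 and level-2 signature terms of a piecewise-linear path.

    path : array of shape (n_points, d)
    Returns (level1, level2) with shapes (d,) and (d, d).
    """
    increments = np.diff(path, axis=0)      # segment increments, shape (n_steps, d)
    level1 = increments.sum(axis=0)         # total increment X_T - X_0
    d = path.shape[1]
    level2 = np.zeros((d, d))
    running = np.zeros(d)                   # X_{t_k} - X_0 at the segment start
    for dx in increments:
        # Exact iterated integral over one linear segment:
        # integral of (X_s - X_0) (x) dX_s = (X_{t_k} - X_0) (x) dx + 0.5 dx (x) dx
        level2 += np.outer(running, dx) + 0.5 * np.outer(dx, dx)
        running += dx
    return level1, level2

def expected_signature(paths):
    """Empirical expected signature: the sample mean over observed paths."""
    sigs = [signature_level2(p) for p in paths]
    mean1 = np.mean([s[0] for s in sigs], axis=0)
    mean2 = np.mean([s[1] for s in sigs], axis=0)
    return mean1, mean2
```

As a sanity check, for a straight-line path with total increment a the level-2 term equals ½·a⊗a, regardless of how finely the segment is subdivided.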
Cite
Text
Lucchese et al. "Learning with Expected Signatures: Theory and Applications." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Lucchese et al. "Learning with Expected Signatures: Theory and Applications." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/lucchese2025icml-learning/)
BibTeX
@inproceedings{lucchese2025icml-learning,
title = {{Learning with Expected Signatures: Theory and Applications}},
author = {Lucchese, Lorenzo and Pakkanen, Mikko S. and Veraart, Almut E. D.},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {40995--41055},
volume = {267},
url = {https://mlanthology.org/icml/2025/lucchese2025icml-learning/}
}