Uncertainty Quantification for Data-Driven Change-Point Learning via Cross-Validation
Abstract
Accurately detecting multiple change-points is critical for various applications, but determining the optimal number of change-points remains a challenge. Existing approaches based on information criteria attempt to balance goodness-of-fit and model complexity, but their performance varies depending on the model. Recently, data-driven selection criteria based on cross-validation have been proposed, but these methods can be prone to slight overfitting in finite samples. In this paper, we introduce a method that controls the probability of overestimation and provides uncertainty quantification for learning multiple change-points via cross-validation. We frame this problem as a sequence of model comparison problems and leverage high-dimensional inferential procedures. We demonstrate the effectiveness of our approach through experiments on finite-sample data, showing superior uncertainty quantification for overestimation compared to existing methods. Our approach has broad applicability and can be used in diverse change-point models.
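To make the cross-validation baseline concrete, the sketch below illustrates one common sample-splitting scheme for choosing the number of change-points in a piecewise-constant mean model: odd-indexed observations estimate segment boundaries and means for each candidate count, and even-indexed observations score the fit. This is a rough illustration of the kind of data-driven criterion the abstract refers to, not the paper's procedure; the model, the odd/even split, and the function names (`best_segmentation`, `cv_loss`) are all assumptions made here for exposition.

```python
# Sketch of change-point order selection via sample-splitting CV
# (illustrative only; not the authors' inference procedure).
import numpy as np

def best_segmentation(y, k):
    """Exact least-squares segmentation of y into k+1 constant segments
    via dynamic programming; returns the k change-point indices."""
    n = len(y)
    csum = np.concatenate(([0.0], np.cumsum(y)))
    csum2 = np.concatenate(([0.0], np.cumsum(y ** 2)))

    def sse(i, j):  # SSE of a constant fit to y[i..j], inclusive
        s = csum[j + 1] - csum[i]
        return (csum2[j + 1] - csum2[i]) - s * s / (j - i + 1)

    # dp[m][j] = min SSE for y[0..j] split into m+1 segments
    dp = np.full((k + 1, n), np.inf)
    arg = np.zeros((k + 1, n), dtype=int)
    for j in range(n):
        dp[0][j] = sse(0, j)
    for m in range(1, k + 1):
        for j in range(m, n):
            for t in range(m - 1, j):
                c = dp[m - 1][t] + sse(t + 1, j)
                if c < dp[m][j]:
                    dp[m][j], arg[m][j] = c, t
    # Backtrack change-point locations (last index of each segment).
    cps, j = [], n - 1
    for m in range(k, 0, -1):
        j = arg[m][j]
        cps.append(j)
    return sorted(cps)

def cv_loss(y, k):
    """Fit k change-points on odd-indexed points, score on even-indexed."""
    train, test = y[1::2], y[0::2]
    cps = best_segmentation(train, k)
    bounds = [0] + [c + 1 for c in cps] + [len(train)]
    loss = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mu = train[lo:hi].mean()          # segment mean from training half
        loss += np.sum((test[lo:hi] - mu) ** 2)  # validation error
    return loss

rng = np.random.default_rng(0)
# Two true change-points: mean shifts 0 -> 2 -> -1.
y = np.concatenate([rng.normal(m, 1.0, 100) for m in (0.0, 2.0, -1.0)])
losses = {k: cv_loss(y, k) for k in range(5)}
k_hat = min(losses, key=losses.get)
print("selected number of change-points:", k_hat)
```

Simply taking the argmin of the CV loss, as above, is exactly the step that can slightly overestimate the number of change-points in finite samples; the paper's contribution is to replace this with a sequence of model comparisons backed by high-dimensional inference, so that the probability of overestimation is explicitly controlled.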
Cite
Text
Chen et al. "Uncertainty Quantification for Data-Driven Change-Point Learning via Cross-Validation." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I10.29008
Markdown
[Chen et al. "Uncertainty Quantification for Data-Driven Change-Point Learning via Cross-Validation." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/chen2024aaai-uncertainty-a/) doi:10.1609/AAAI.V38I10.29008
BibTeX
@inproceedings{chen2024aaai-uncertainty-a,
title = {{Uncertainty Quantification for Data-Driven Change-Point Learning via Cross-Validation}},
author = {Chen, Hui and Jia, Yinxu and Wang, Guanghui and Zou, Changliang},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {11294-11301},
doi = {10.1609/AAAI.V38I10.29008},
url = {https://mlanthology.org/aaai/2024/chen2024aaai-uncertainty-a/}
}