MLDemon: Deployment Monitoring for Machine Learning Systems

Abstract

Post-deployment monitoring of ML systems is critical for ensuring reliability, especially as new user inputs can differ from the training distribution. Here we propose a novel approach, MLDemon, for ML DEployment MONitoring. MLDemon integrates both unlabeled data and a small amount of on-demand labels to produce a real-time estimate of the ML model’s current performance on a given data stream. Subject to budget constraints, MLDemon decides when to acquire additional, potentially costly, expert supervised labels to verify the model. On temporal datasets with diverse distribution drifts and models, MLDemon outperforms existing approaches. Moreover, we provide theoretical analysis to show that MLDemon is minimax rate optimal for a broad class of distribution drifts.
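To make the abstract concrete, below is a minimal, hypothetical sketch of a deployment-monitoring loop in the spirit described above: it maintains a running accuracy estimate from a small number of expert labels and uses an unlabeled drift signal to decide when to spend the label budget. This is not the MLDemon algorithm itself; the drift signal, thresholds, and query rule are illustrative assumptions only.

```python
# Hypothetical sketch of a label-efficient monitoring loop.
# NOT the MLDemon algorithm; the drift signal and query rule are assumptions.
from collections import deque


def monitor(stream, model, expert, budget, window=50, drift_threshold=0.3):
    """Yield a running accuracy estimate for `model` on `stream`,
    querying `expert` for labels only when an unlabeled drift signal
    is large and the label budget allows it."""
    recent_labeled = deque(maxlen=window)   # correctness flags from labeled points
    recent_scores = deque(maxlen=window)    # unlabeled confidence scores
    baseline_conf = None
    acc_estimate = 1.0

    for x in stream:
        pred, conf = model(x)               # prediction and confidence score
        recent_scores.append(conf)

        # Unlabeled drift signal: drop in average confidence vs. a baseline.
        avg_conf = sum(recent_scores) / len(recent_scores)
        if baseline_conf is None and len(recent_scores) == window:
            baseline_conf = avg_conf
        drift = (baseline_conf - avg_conf) if baseline_conf is not None else 0.0

        # Query a costly expert label only when drift is large and budget remains.
        if drift > drift_threshold and budget > 0:
            y = expert(x)
            budget -= 1
            recent_labeled.append(pred == y)

        if recent_labeled:
            acc_estimate = sum(recent_labeled) / len(recent_labeled)

        yield acc_estimate
```

In this sketch `model` and `expert` are user-supplied callables; the actual paper studies how to make the label-query policy provably budget-efficient under distribution drift.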

Cite

Text

Ginart et al. "MLDemon: Deployment Monitoring for Machine Learning Systems." Artificial Intelligence and Statistics, 2022.

Markdown

[Ginart et al. "MLDemon: Deployment Monitoring for Machine Learning Systems." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/ginart2022aistats-mldemon/)

BibTeX

@inproceedings{ginart2022aistats-mldemon,
  title     = {{MLDemon: Deployment Monitoring for Machine Learning Systems}},
  author    = {Ginart, Tony and Zhang, Martin Jinye and Zou, James},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2022},
  pages     = {3962--3997},
  volume    = {151},
  url       = {https://mlanthology.org/aistats/2022/ginart2022aistats-mldemon/}
}