Probabilistic Robustness Quantification of Neural Networks

Abstract

Safety properties of neural networks are critical to their application in safety-critical domains, and quantifying their robustness against uncertainties is an emerging area of research. In this work, we propose an approach for providing probabilistic guarantees on the performance of a trained neural network. We present two novel metrics for probabilistic verification: one on the training data distribution and one on a test dataset. First, given a trained neural network, we quantify the probability that the model makes an error on a random sample drawn from the training data distribution. Second, from the output logits of a sample test point, we measure its p-value under the learned logit distribution to quantify the model's confidence at that test point. We compare our results against a softmax-based metric using black-box adversarial attacks on a simple CNN architecture trained for MNIST digit classification.
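The second metric described above can be illustrated with a minimal sketch: collect a reference distribution of logit scores on training data, then report the empirical p-value of a test point's score under that distribution. This is an assumption-laden toy (the max-logit score, the empirical left-tail p-value, and the synthetic Gaussian stand-in for the learned logit distribution are all illustrative choices, not the paper's exact construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for max-logit scores collected from a trained classifier on its
# training data; in practice these come from a forward pass over the dataset.
train_logits = rng.normal(loc=5.0, scale=1.0, size=10_000)

def logit_p_value(test_score: float, reference_scores: np.ndarray) -> float:
    """Empirical left-tail p-value: fraction of reference scores <= test_score.

    A small value means the test point's logit score is unusually low
    relative to the training distribution, i.e. low model confidence there.
    """
    return float(np.mean(reference_scores <= test_score))

# A typical in-distribution point lands in the bulk of the distribution,
# while an outlying (e.g. adversarial) point lands far in the left tail.
p_typical = logit_p_value(5.0, train_logits)
p_outlier = logit_p_value(1.0, train_logits)
print(p_typical, p_outlier)
```

Under this toy distribution, `p_typical` is near 0.5 and `p_outlier` is near zero, so thresholding the p-value gives a simple confidence-based rejection rule for suspicious inputs.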

Cite

Text

Kishan. "Probabilistic Robustness Quantification of Neural Networks." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I18.17979

Markdown

[Kishan. "Probabilistic Robustness Quantification of Neural Networks." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/kishan2021aaai-probabilistic/) doi:10.1609/AAAI.V35I18.17979

BibTeX

@inproceedings{kishan2021aaai-probabilistic,
  title     = {{Probabilistic Robustness Quantification of Neural Networks}},
  author    = {Kishan, Gopi},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {15966--15967},
  doi       = {10.1609/AAAI.V35I18.17979},
  url       = {https://mlanthology.org/aaai/2021/kishan2021aaai-probabilistic/}
}