Safety Analysis of Deep Neural Networks

Abstract

Deep Neural Networks (DNNs) are popular machine learning models that have been successfully applied across many domains of computer science. Nevertheless, providing formal guarantees on the behaviour of neural networks is hard, so their reliability in safety-critical domains remains a concern. Verification and repair have emerged as promising ways to address this issue. In the following, I present some of my recent efforts in this area.
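As a minimal illustration of what "verification" means here (a sketch of interval bound propagation, one common bound-based technique; the network weights, input box, and safety threshold below are made-up values, not taken from the paper):

```python
# Interval bound propagation through a tiny ReLU network:
# given an input box [lower, upper], compute a box guaranteed to
# contain every possible output, then check a safety property on it.

def affine_bounds(W, b, lower, upper):
    """Propagate the box [lower, upper] through y = W x + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        # A positive weight pairs with the input's lower bound for the
        # output's lower bound, and vice versa for a negative weight.
        lo = bias + sum(w * (lower[j] if w >= 0 else upper[j])
                        for j, w in enumerate(row))
        hi = bias + sum(w * (upper[j] if w >= 0 else lower[j])
                        for j, w in enumerate(row))
        out_lo.append(lo)
        out_hi.append(hi)
    return out_lo, out_hi

def relu_bounds(lower, upper):
    """ReLU is monotone, so it maps boxes to boxes exactly."""
    return [max(0.0, l) for l in lower], [max(0.0, u) for u in upper]

# Hypothetical 2-2-1 ReLU network with illustrative weights.
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, -0.25]
W2, b2 = [[1.0, 1.0]], [0.0]

lo, hi = affine_bounds(W1, b1, [0.0, 0.0], [1.0, 1.0])
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(W2, b2, lo, hi)

# The property "output never exceeds 2.0" is certified for the whole
# input box whenever the computed upper bound is at most 2.0.
print(hi[0] <= 2.0)  # → True (upper bound here is 1.75)
```

Real verifiers tighten these bounds considerably (e.g. with linear relaxations or exact SMT/MILP encodings), but the soundness argument is the same: the computed box over-approximates the network's reachable outputs.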

Cite

Text

Guidotti. "Safety Analysis of Deep Neural Networks." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/675

Markdown

[Guidotti. "Safety Analysis of Deep Neural Networks." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/guidotti2021ijcai-safety/) doi:10.24963/IJCAI.2021/675

BibTeX

@inproceedings{guidotti2021ijcai-safety,
  title     = {{Safety Analysis of Deep Neural Networks}},
  author    = {Guidotti, Dario},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {4887--4888},
  doi       = {10.24963/IJCAI.2021/675},
  url       = {https://mlanthology.org/ijcai/2021/guidotti2021ijcai-safety/}
}