Verifying Robustness of Gradient Boosted Models
Abstract
Gradient boosted models are a fundamental machine learning technique. Robustness to small perturbations of the input is an important quality measure for machine learning models, but the literature lacks a method to prove the robustness of gradient boosted models. This work introduces VERIGB, a tool for quantifying the robustness of gradient boosted models. VERIGB encodes the model and the robustness property as an SMT formula, which enables state-of-the-art verification tools to prove the model’s robustness. We extensively evaluate VERIGB on publicly available datasets and demonstrate a capability for verifying large models. Finally, we show that some model configurations tend to be inherently more robust than others.
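To make the verified property concrete: local robustness asks whether, for every input within an ε-ball (here, an L∞ box) around a point x, the ensemble's predicted class is unchanged. The sketch below is not VERIGB's SMT encoding; it is a minimal, exact enumeration check for a toy ensemble of depth-1 regression trees ("stumps"), where each tree's output depends on a single feature threshold, so only finitely many candidate inputs inside the box need testing. All names, thresholds, and leaf values are illustrative assumptions.

```python
import math
from itertools import product

def stump(feature, threshold, left_val, right_val):
    """One regression stump: left_val if x[feature] <= threshold, else right_val."""
    return lambda x: left_val if x[feature] <= threshold else right_val

# Toy two-feature gradient-boosted ensemble: the score is the sum of stump outputs,
# and the predicted class is the sign of the score. (Illustrative values.)
ensemble = [
    stump(0, 0.5, -1.0, +1.0),
    stump(1, 0.3, -0.5, +0.5),
]

def score(x):
    return sum(t(x) for t in ensemble)

def robust(x, eps, thresholds):
    """Exact local-robustness check for stump ensembles: inside the eps-box,
    the score only changes when a coordinate crosses a split threshold, so it
    suffices to test the box corners and both sides of each reachable split.
    `thresholds` maps feature index -> list of split thresholds on that feature."""
    base = score(x) > 0
    cand_per_feature = []
    for i, xi in enumerate(x):
        vals = {xi - eps, xi + eps}
        for t in thresholds.get(i, []):
            if xi - eps <= t <= xi + eps:
                vals.add(t)                                        # "<=" side of the split
                vals.add(min(math.nextafter(t, math.inf), xi + eps))  # ">" side, still in the box
        cand_per_feature.append(sorted(vals))
    return all((score(p) > 0) == base for p in product(*cand_per_feature))

splits = {0: [0.5], 1: [0.3]}
print(robust([0.0, 0.0], 0.1, splits))  # True: no split reachable within eps
print(robust([0.0, 0.0], 0.6, splits))  # False: both splits reachable, class flips
```

An SMT encoding of the same property would instead assert the negation (exists x' in the box with a different predicted class) and ask a solver such as Z3 for a satisfying assignment; UNSAT then proves robustness, which is what lets such tools scale to deep trees where enumeration is infeasible.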
Cite
Text
Einziger et al. "Verifying Robustness of Gradient Boosted Models." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33012446
Markdown
[Einziger et al. "Verifying Robustness of Gradient Boosted Models." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/einziger2019aaai-verifying/) doi:10.1609/AAAI.V33I01.33012446
BibTeX
@inproceedings{einziger2019aaai-verifying,
title = {{Verifying Robustness of Gradient Boosted Models}},
author = {Einziger, Gil and Goldstein, Maayan and Sa'ar, Yaniv and Segall, Itai},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2019},
pages = {2446-2453},
doi = {10.1609/AAAI.V33I01.33012446},
url = {https://mlanthology.org/aaai/2019/einziger2019aaai-verifying/}
}