Explaining Tree Model Decisions in Natural Language for Network Intrusion Detection
Abstract
Network intrusion detection (NID) systems that leverage machine learning have shown strong performance in practice at detecting malicious network traffic. Decision trees in particular offer a good balance between performance and simplicity, but interpreting them requires users of NID systems to have background knowledge in machine learning. Moreover, decision trees cannot supply outside information about why certain features may be important for classification. In this work, we explore the use of large language models (LLMs) to provide explanations and additional background knowledge for decision tree NID systems. Further, we introduce a new human evaluation framework for decision tree explanations, which leverages automatically generated quiz questions that measure human evaluators' understanding of decision tree inference. Finally, we show that LLM-generated decision tree explanations correlate highly with human ratings of readability, quality, and use of background knowledge, while simultaneously providing a better understanding of decision boundaries.
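To make the approach concrete, below is a minimal sketch of the kind of pipeline the abstract describes: it walks the root-to-leaf decision path of a trained scikit-learn decision tree for a single network flow and formats it into a prompt asking an LLM to explain the verdict with networking background knowledge. The feature names, toy data, and prompt wording are illustrative assumptions, not the paper's actual setup.

# Sketch: render a decision tree's path for one flow as readable rules,
# then build an LLM prompt from it. Feature names are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["duration", "src_bytes", "dst_bytes", "num_failed_logins"]

def decision_path_text(tree: DecisionTreeClassifier, x: np.ndarray) -> str:
    """Render the root-to-leaf path for sample `x` as one rule per line."""
    t = tree.tree_
    node_ids = tree.decision_path(x.reshape(1, -1)).indices
    steps = []
    for node in node_ids:
        if t.children_left[node] == t.children_right[node]:
            continue  # leaf node: no split to describe
        feat, thr = FEATURES[t.feature[node]], t.threshold[node]
        op = "<=" if x[t.feature[node]] <= thr else ">"
        steps.append(f"{feat} = {x[t.feature[node]]:.2f} {op} {thr:.2f}")
    return "\n".join(steps)

def build_prompt(path: str, label: str) -> str:
    # Ask the LLM for a plain-language explanation plus background knowledge.
    return (
        "A decision tree intrusion detector classified a network flow as "
        f"'{label}' using these feature comparisons:\n{path}\n"
        "Explain in plain language why these features suggest this verdict, "
        "adding relevant networking background knowledge."
    )

# Usage on toy data (illustrative only):
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = (X[:, 1] > 0.5).astype(int)  # pretend: attack if src_bytes is high
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
sample = X[0]
label = "attack" if clf.predict(sample.reshape(1, -1))[0] == 1 else "benign"
print(build_prompt(decision_path_text(clf, sample), label))

Passing the rendered path rather than the raw model keeps the prompt short and grounds the LLM's explanation in the specific comparisons the tree actually made.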
Cite
Text
Ziems et al. "Explaining Tree Model Decisions in Natural Language for Network Intrusion Detection." NeurIPS 2023 Workshops: XAIA, 2023.Markdown
[Ziems et al. "Explaining Tree Model Decisions in Natural Language for Network Intrusion Detection." NeurIPS 2023 Workshops: XAIA, 2023.](https://mlanthology.org/neuripsw/2023/ziems2023neuripsw-explaining/)BibTeX
@inproceedings{ziems2023neuripsw-explaining,
  title = {{Explaining Tree Model Decisions in Natural Language for Network Intrusion Detection}},
  author = {Ziems, Noah and Liu, Gang and Flanagan, John and Jiang, Meng},
  booktitle = {NeurIPS 2023 Workshops: XAIA},
  year = {2023},
  url = {https://mlanthology.org/neuripsw/2023/ziems2023neuripsw-explaining/}
}