Scalable Partial Explainability in Neural Networks via Flexible Activation Functions (Student Abstract)
Abstract
Current state-of-the-art neural network explanation methods (e.g., saliency maps, DeepLIFT, LIME) focus on the direct relationship between NN outputs and inputs rather than on the NN's structure and operations themselves, so uncertainty remains over the exact role played by individual neurons. In this paper, we propose a novel neural network structure with a topology based on the Kolmogorov-Arnold superposition theorem and Gaussian-process-based flexible activation functions to achieve partial explainability of a neuron's inner reasoning. The model's feasibility is verified in a case study on binary classification of banknotes.
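As context for the abstract, a minimal sketch of the Kolmogorov-Arnold superposition topology it refers to (not the authors' implementation): the theorem writes any continuous f on [0,1]^n as f(x) = Σ_{q=0}^{2n} Φ_q(Σ_{p=1}^{n} φ_{q,p}(x_p)), i.e. a two-layer network whose per-edge "weights" are learnable 1-D functions. Here each 1-D function is a small random sinusoidal feature expansion standing in for the paper's Gaussian-process-based flexible activations; all names and parameter choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_1d_function(n_features=8):
    """A random smooth 1-D function x -> sum_k a_k * sin(w_k * x + b_k).
    Stand-in for a learned flexible activation (the paper uses GPs)."""
    w = rng.normal(size=n_features)
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    a = rng.normal(scale=1.0 / n_features, size=n_features)
    return lambda x: np.sin(np.outer(x, w) + b) @ a

def kst_network(n_inputs):
    """KST topology: 2n+1 inner sums of per-input 1-D maps phi_{q,p},
    each passed through an outer 1-D map Phi_q and summed."""
    inner = [[make_1d_function() for _ in range(n_inputs)]
             for _ in range(2 * n_inputs + 1)]
    outer = [make_1d_function() for _ in range(2 * n_inputs + 1)]

    def forward(X):  # X: (batch, n_inputs)
        out = np.zeros(X.shape[0])
        for q in range(2 * n_inputs + 1):
            s = sum(inner[q][p](X[:, p]) for p in range(n_inputs))
            out += outer[q](s)
        return out
    return forward

f = kst_network(n_inputs=4)        # e.g. the 4 features of the banknote dataset
scores = f(rng.uniform(size=(5, 4)))   # one scalar score per sample
print(scores.shape)
```

Because every edge carries an explicit 1-D function, each neuron's input-output behavior can be plotted and inspected directly, which is the structural source of the partial explainability the abstract claims.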
Cite
Text
Sun et al. "Scalable Partial Explainability in Neural Networks via Flexible Activation Functions (Student Abstract)." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I18.17946
Markdown
[Sun et al. "Scalable Partial Explainability in Neural Networks via Flexible Activation Functions (Student Abstract)." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/sun2021aaai-scalable/) doi:10.1609/AAAI.V35I18.17946
BibTeX
@inproceedings{sun2021aaai-scalable,
title = {{Scalable Partial Explainability in Neural Networks via Flexible Activation Functions (Student Abstract)}},
author = {Sun, Schyler Chengyao and Li, Chen and Wei, Zhuangkun and Tsourdos, Antonios and Guo, Weisi},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2021},
pages = {15899-15900},
doi = {10.1609/AAAI.V35I18.17946},
url = {https://mlanthology.org/aaai/2021/sun2021aaai-scalable/}
}