Run like a Neural Network, Explain like K-Nearest Neighbor
Abstract
Deep neural networks have achieved remarkable performance across a variety of applications. However, their decision-making processes are opaque. In contrast, k-nearest neighbor (k-NN) provides interpretable predictions by relying on similar cases, but it lacks important capabilities of neural networks. The neural network k-nearest neighbor (NN-kNN) model is designed to bridge this gap, combining the benefits of neural networks with the instance-based interpretability of k-NN. However, the initial formulation of NN-kNN had limitations, including scalability issues, reliance on surface-level features, and an excessive number of parameters. This paper improves NN-kNN by enhancing its scalability, parameter efficiency, ease of integration with feature extractors, and training simplicity. An evaluation of the revised architecture on image and language classification tasks illustrates its promise as a flexible and interpretable method.
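To make the general idea concrete, the sketch below shows one way a k-NN-style classifier can be built on top of a neural feature extractor: class scores are computed from learned similarities to stored training cases, so each prediction can be traced back to the cases that activated most strongly. This is a minimal illustrative sketch, not the paper's NN-kNN architecture; the class name `KNNStyleClassifier`, the softmax-over-distances kernel, and the `temperature` parameter are assumptions introduced here for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class KNNStyleClassifier(nn.Module):
    """Predicts by aggregating learned similarities to stored training cases."""

    def __init__(self, feature_extractor, stored_inputs, stored_labels,
                 num_classes, temperature=10.0):
        super().__init__()
        self.feature_extractor = feature_extractor             # any differentiable encoder
        self.register_buffer("stored_inputs", stored_inputs)   # (N, ...) stored training cases
        self.register_buffer("stored_labels", stored_labels)   # (N,) integer class labels
        self.num_classes = num_classes
        # learnable sharpness of the similarity kernel (illustrative choice)
        self.log_temperature = nn.Parameter(torch.log(torch.tensor(temperature)))

    def forward(self, x):
        q = self.feature_extractor(x)                           # (B, D) query embeddings
        k = self.feature_extractor(self.stored_inputs)          # (N, D) case embeddings
        # soft "neighbor activations": closer stored cases activate more strongly
        activations = torch.softmax(
            -torch.cdist(q, k) * self.log_temperature.exp(), dim=1)   # (B, N)
        # sum activations per class to produce the class scores
        votes = F.one_hot(self.stored_labels, self.num_classes).float()  # (N, C)
        return activations @ votes, activations                 # scores (B, C), explanation (B, N)
```

The second return value, the per-case activation weights, identifies for each query which stored training instances most influenced the prediction, which is the kind of case-based explanation the abstract describes.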
Cite
Text
Ye et al. "Run like a Neural Network, Explain like K-Nearest Neighbor." International Joint Conference on Artificial Intelligence, 2025. doi:10.24963/IJCAI.2025/763

Markdown
[Ye et al. "Run like a Neural Network, Explain like K-Nearest Neighbor." International Joint Conference on Artificial Intelligence, 2025.](https://mlanthology.org/ijcai/2025/ye2025ijcai-run/) doi:10.24963/IJCAI.2025/763

BibTeX
@inproceedings{ye2025ijcai-run,
title = {{Run like a Neural Network, Explain like K-Nearest Neighbor}},
author = {Ye, Xiaomeng and Leake, David and Wang, Yu and Crandall, David},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2025},
pages = {6857--6865},
doi = {10.24963/IJCAI.2025/763},
url = {https://mlanthology.org/ijcai/2025/ye2025ijcai-run/}
}