Discriminative Direction for Kernel Classifiers
Abstract
In many scientific and engineering applications, detecting and understanding differences between two groups of examples can be reduced to a classical problem of training a classifier for labeling new examples while making as few mistakes as possible. In the traditional classification setting, the resulting classifier is rarely analyzed in terms of the properties of the input data captured by the discriminative model. However, such analysis is crucial if we want to understand and visualize the detected differences. We propose an approach to interpretation of the statistical model in the original feature space that allows us to argue about the model in terms of the relevant changes to the input vectors. For each point in the input space, we define a discriminative direction to be the direction that moves the point towards the other class while introducing as little irrelevant change as possible with respect to the classifier function. We derive the discriminative direction for kernel-based classifiers, demonstrate the technique on several examples, and briefly discuss its use in statistical shape analysis, an application that originally motivated this work.
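For a kernel classifier f(x) = Σᵢ aᵢ K(x, sᵢ) + b, the discriminative direction described in the abstract is closely tied to the gradient of f: moving along it changes the classifier output as much as possible per unit of displacement. The sketch below computes that gradient for an RBF-kernel decision function; the support vectors, dual coefficients, and bias are made-up values for illustration, not taken from the paper.

```python
import math

# Hypothetical RBF-kernel classifier: f(x) = sum_i a_i * exp(-gamma * ||x - s_i||^2) + b.
# All numbers below are illustrative placeholders, not fitted values.
SUPPORT = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.5)]   # support vectors s_i
DUAL = [1.0, -1.0, 0.5]                          # a_i = alpha_i * y_i
GAMMA = 0.5
BIAS = -0.1

def f(x):
    """Classifier output at point x."""
    val = BIAS
    for a, s in zip(DUAL, SUPPORT):
        d2 = sum((xj - sj) ** 2 for xj, sj in zip(x, s))
        val += a * math.exp(-GAMMA * d2)
    return val

def discriminative_direction(x):
    """Gradient of f at x. For the RBF kernel,
    grad f(x) = sum_i a_i * (-2*gamma) * (x - s_i) * exp(-gamma * ||x - s_i||^2).
    Stepping along -sign(f(x)) * grad moves x toward the decision boundary,
    i.e. toward the other class."""
    grad = [0.0] * len(x)
    for a, s in zip(DUAL, SUPPORT):
        d2 = sum((xj - sj) ** 2 for xj, sj in zip(x, s))
        w = a * (-2.0 * GAMMA) * math.exp(-GAMMA * d2)
        for j in range(len(x)):
            grad[j] += w * (x[j] - s[j])
    return grad

if __name__ == "__main__":
    x = (0.5, 0.4)
    g = discriminative_direction(x)
    # Sanity check the analytic gradient against central finite differences.
    eps = 1e-6
    for j in range(2):
        xp, xm = list(x), list(x)
        xp[j] += eps
        xm[j] -= eps
        num = (f(xp) - f(xm)) / (2 * eps)
        assert abs(num - g[j]) < 1e-5
    print("f(x) =", f(x), "discriminative direction =", g)
```

This gradient view matches the abstract's intent only up to the paper's additional constraint of minimizing irrelevant change; the full derivation in the paper refines this direction for general kernels.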
Cite
Golland. "Discriminative Direction for Kernel Classifiers." Neural Information Processing Systems, 2001. https://mlanthology.org/neurips/2001/golland2001neurips-discriminative/

BibTeX
@inproceedings{golland2001neurips-discriminative,
title = {{Discriminative Direction for Kernel Classifiers}},
author = {Golland, Polina},
booktitle = {Neural Information Processing Systems},
year = {2001},
pages = {745-752},
url = {https://mlanthology.org/neurips/2001/golland2001neurips-discriminative/}
}