Support Vector Method for Novelty Detection
Abstract
Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified ν between 0 and 1. We propose a method to approach this problem by trying to estimate a function f which is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. We provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabelled data.
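The sketch below illustrates the single-class formulation described in the abstract, using scikit-learn's OneClassSVM, which follows this kernel-based novelty-detection approach. The synthetic data, kernel choice, and parameter values (gamma, nu) are illustrative assumptions, not settings from the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
X_train = 0.3 * rng.randn(200, 2)                        # "normal" training data drawn from P
X_test = np.vstack([0.3 * rng.randn(20, 2),              # points resembling the training data
                    rng.uniform(-4, 4, size=(20, 2))])   # potential novelties

# nu plays the role of the a priori specified fraction: it upper-bounds the
# fraction of training points allowed to fall outside the estimated region S.
clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1)
clf.fit(X_train)

# predict returns +1 for points inside the estimated region S and -1 outside;
# decision_function gives the signed value of the learned function f.
labels = clf.predict(X_test)
scores = clf.decision_function(X_test)
print("predicted inliers:", int((labels == 1).sum()), "of", len(X_test))
```

The learned function f is a kernel expansion over a (potentially small) subset of the training points, the support vectors, so evaluation at test time only involves those points.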
Cite
Text
Schölkopf et al. "Support Vector Method for Novelty Detection." Neural Information Processing Systems, 1999.
Markdown
[Schölkopf et al. "Support Vector Method for Novelty Detection." Neural Information Processing Systems, 1999.](https://mlanthology.org/neurips/1999/scholkopf1999neurips-support/)
BibTeX
@inproceedings{scholkopf1999neurips-support,
title = {{Support Vector Method for Novelty Detection}},
author = {Schölkopf, Bernhard and Williamson, Robert C. and Smola, Alex J. and Shawe-Taylor, John and Platt, John C.},
booktitle = {Neural Information Processing Systems},
year = {1999},
pages = {582-588},
url = {https://mlanthology.org/neurips/1999/scholkopf1999neurips-support/}
}