The VC Dimension for Mixtures of Binary Classifiers

Abstract

The mixture-of-experts (ME) methodology provides a tool for classification in which experts, given by logistic regression or Bernoulli models, are combined according to a set of local weights. We show that the Vapnik-Chervonenkis (VC) dimension of the ME architecture is bounded below by the number of experts m and above by O(m⁴s²), where s is the dimension of the input. For mixtures of Bernoulli experts with a scalar input, we show that the lower bound m is attained, so that the VC dimension is exactly equal to the number of experts.
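The bounds stated above can be illustrated numerically; a minimal sketch, assuming a hypothetical constant C = 1 for the O(m⁴s²) upper bound (the paper gives only the order, not the constant):

```python
def vc_dimension_bounds(m, s, C=1):
    """Return (lower, upper) bounds on the VC dimension of an ME
    architecture with m experts and input dimension s.

    The constant C is an assumption for illustration; the paper
    states only that the upper bound is of order O(m^4 s^2)."""
    lower = m                 # lower bound: the number of experts
    upper = C * m**4 * s**2   # order of the stated upper bound
    return lower, upper

# Example: 3 experts, 2-dimensional input
lo, hi = vc_dimension_bounds(3, 2)
print(lo, hi)  # 3 324

# Scalar-input Bernoulli case (s = 1): the paper shows the lower
# bound is attained, so the VC dimension equals m exactly.
```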

Cite

Text

Jiang. "The VC Dimension for Mixtures of Binary Classifiers." Neural Computation, 2000. doi:10.1162/089976600300015367

Markdown

[Jiang. "The VC Dimension for Mixtures of Binary Classifiers." Neural Computation, 2000.](https://mlanthology.org/neco/2000/jiang2000neco-vc/) doi:10.1162/089976600300015367

BibTeX

@article{jiang2000neco-vc,
  title     = {{The VC Dimension for Mixtures of Binary Classifiers}},
  author    = {Jiang, Wenxin},
  journal   = {Neural Computation},
  year      = {2000},
  pages     = {1293--1301},
  doi       = {10.1162/089976600300015367},
  volume    = {12},
  url       = {https://mlanthology.org/neco/2000/jiang2000neco-vc/}
}