Competitive Classification and Closeness Testing

Abstract

We study the problems of *classification* and *closeness testing*. A *classifier* associates a test sequence with the one of two training sequences that was generated by the same distribution. A *closeness test* determines whether two sequences were generated by the same or by different distributions. For both problems, all natural algorithms are *symmetric*: they make the same decision under all symbol relabelings. With no assumptions on the distributions' support size or relative distance, we construct a classifier and closeness test that require at most O(n^{3/2}) samples to attain the n-sample accuracy of the best symmetric classifier or closeness test designed with knowledge of the underlying distributions. Both algorithms run in time linear in the number of samples. Conversely, we also show that for any classifier or closeness test, there are distributions that require Ω(n^{7/6}) samples to achieve the n-sample accuracy of the best symmetric algorithm that knows the underlying distributions.
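To make the notion of symmetry concrete, here is a minimal sketch of a symmetric closeness test. It uses a classic ℓ2-style collision statistic (not the paper's algorithm): the statistic depends only on the multiset of per-symbol count pairs, so relabeling the symbols cannot change its value or the decision. The function names and the threshold are illustrative choices, not from the paper.

```python
from collections import Counter

def closeness_stat(seq1, seq2):
    """Symmetric closeness statistic over two sample sequences.

    Depends only on the per-symbol count pairs, so it is invariant
    under any relabeling of the symbol alphabet. This is the
    standard l2-style collision statistic, shown here only to
    illustrate symmetry; it is not the paper's competitive test.
    """
    c1, c2 = Counter(seq1), Counter(seq2)
    symbols = set(c1) | set(c2)
    # Per symbol, (x - y)^2 - x - y has expectation proportional to
    # the squared difference of the two probabilities, so a large
    # total suggests the sequences came from different distributions.
    return sum((c1[s] - c2[s]) ** 2 - c1[s] - c2[s] for s in symbols)

def closeness_test(seq1, seq2, threshold=0.0):
    # Hypothetical threshold choice, purely for illustration.
    return "same" if closeness_stat(seq1, seq2) <= threshold else "different"
```

For example, the statistic is unchanged if every `a` is renamed to `x`, every `b` to `y`, and so on, which is exactly the symmetry property the abstract refers to.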

Cite

Text

Acharya et al. "Competitive Classification and Closeness Testing." Proceedings of the 25th Annual Conference on Learning Theory, 2012.

Markdown

[Acharya et al. "Competitive Classification and Closeness Testing." Proceedings of the 25th Annual Conference on Learning Theory, 2012.](https://mlanthology.org/colt/2012/acharya2012colt-competitive/)

BibTeX

@inproceedings{acharya2012colt-competitive,
  title     = {{Competitive Classification and Closeness Testing}},
  author    = {Acharya, Jayadev and Das, Hirakendu and Jafarpour, Ashkan and Orlitsky, Alon and Pan, Shengjun and Suresh, Ananda},
  booktitle = {Proceedings of the 25th Annual Conference on Learning Theory},
  year      = {2012},
  pages     = {22.1--22.18},
  volume    = {23},
  url       = {https://mlanthology.org/colt/2012/acharya2012colt-competitive/}
}