Improved Guarantees for Agnostic Learning of Disjunctions

Abstract

Given an arbitrary distribution $D$ over $\{0,1\}^n$ and an arbitrary target function $c^*$, the problem of agnostic learning of disjunctions is to achieve an error rate comparable to the error $\mathrm{OPT}_{\mathrm{disj}}$ of the best disjunction with respect to $(D, c^*)$. Achieving error $O(n \cdot \mathrm{OPT}_{\mathrm{disj}}) + \varepsilon$ is trivial, and Winnow [13] achieves error $O(r \cdot \mathrm{OPT}_{\mathrm{disj}}) + \varepsilon$, where $r$ is the number of relevant variables in the best disjunction. In recent work, Peleg [14] shows how to achieve a bound of $\tilde{O}(\sqrt{n} \cdot \mathrm{OPT}_{\mathrm{disj}}) + \varepsilon$ in polynomial time. In this paper we improve on Peleg's bound, giving a polynomial-time algorithm achieving a bound of $O(n^{1/3+\alpha} \cdot \mathrm{OPT}_{\mathrm{disj}}) + \varepsilon$ for any constant $\alpha > 0$. The heart of the algorithm is a method for weak learning when $\mathrm{OPT}_{\mathrm{disj}} = O(1/n^{1/3+\alpha})$, which can then be fed into existing agnostic boosting procedures to achieve the desired guarantee.
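The Winnow algorithm [13] cited in the abstract is Littlestone's mistake-driven multiplicative-update learner. As background (this is standard material, not the paper's weak learner), a minimal sketch of the promotion/elimination variant for monotone disjunctions in the realizable case might look like the following; the function names and the choice of threshold $n$ are illustrative assumptions.

```python
import random

def winnow_train(examples, n, passes=50):
    """Winnow1 sketch for a monotone disjunction over {0,1}^n.

    Weights start at 1; predict positive when the total weight of active
    variables reaches the threshold n.  On a false negative, double the
    weights of active variables (promotion); on a false positive, zero
    them out (elimination).  In the realizable case the mistake bound is
    O(r log n) for an r-literal target disjunction, which is the source
    of Winnow's O(r * OPT) + eps flavor of guarantee.
    """
    w = [1.0] * n
    for _ in range(passes):
        mistakes = 0
        for x, y in examples:
            pred = 1 if sum(w[i] for i in range(n) if x[i]) >= n else 0
            if pred == 0 and y == 1:      # false negative: promotion
                for i in range(n):
                    if x[i]:
                        w[i] *= 2
                mistakes += 1
            elif pred == 1 and y == 0:    # false positive: elimination
                for i in range(n):
                    if x[i]:
                        w[i] = 0.0
                mistakes += 1
        if mistakes == 0:                 # consistent with the sample
            break
    return w

def winnow_predict(w, x):
    n = len(w)
    return 1 if sum(w[i] for i in range(n) if x[i]) >= n else 0
```

Note that relevant variables are never eliminated (a false positive has label 0, so no relevant variable is active on it), which is why the multiplicative updates converge quickly in the realizable case; the agnostic setting studied in the paper is precisely where this clean argument breaks down.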

Cite

Text

Awasthi et al. "Improved Guarantees for Agnostic Learning of Disjunctions." Annual Conference on Computational Learning Theory, 2010.

Markdown

[Awasthi et al. "Improved Guarantees for Agnostic Learning of Disjunctions." Annual Conference on Computational Learning Theory, 2010.](https://mlanthology.org/colt/2010/awasthi2010colt-improved/)

BibTeX

@inproceedings{awasthi2010colt-improved,
  title     = {{Improved Guarantees for Agnostic Learning of Disjunctions}},
  author    = {Awasthi, Pranjal and Blum, Avrim and Sheffet, Or},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2010},
  pages     = {359--367},
  url       = {https://mlanthology.org/colt/2010/awasthi2010colt-improved/}
}