Hypothesis Classes with a Unique Persistence Diagram Are NOT Nonuniformly Learnable

Abstract

*We have since shown that these results are incorrect. Please see the PDF for details.* Persistence-based summaries are increasingly integrated into deep learning through topological loss functions or regularisers. The implicit role of a topological term in a loss function is to restrict the class of functions over which we learn (the hypothesis class) to those with a specific topology. Although this restriction has had empirical success, to the best of our knowledge no result in the literature theoretically justifies it. For a binary classifier in the plane with a Morse-like decision boundary, we prove that restricting the topology of the possible decision boundaries to those with a unique persistence diagram yields a nonuniformly learnable hypothesis class. In doing so, we provide a statistical-learning-theoretic justification for the use of persistence-based summaries in loss functions.
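The abstract's premise, that a topological penalty steers learning towards decision boundaries with a prescribed persistence diagram, can be made concrete with a small sketch. The code below is not the paper's construction: it is a generic illustration of the standard elder-rule union-find computation of 0-dimensional sublevel-set persistence for a function sampled on a 1-D grid, together with a total-persistence penalty of the kind topological loss terms typically use. The names `sublevel_persistence_0d`, `total_persistence`, and `topological_loss` are placeholders of ours, and in practice a differentiable persistence layer would replace the NumPy computation.

```python
import numpy as np

def sublevel_persistence_0d(f):
    """0-dimensional persistence pairs of the sublevel-set filtration of a
    function sampled on a 1-D grid, via the elder rule and union-find."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    parent = [-1] * n          # -1 marks vertices not yet in the filtration
    birth = [0.0] * n          # birth value stored at each component root
    pairs = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in np.argsort(f):    # sweep vertices by increasing function value
        parent[i] = i
        birth[i] = f[i]        # a local minimum creates a new component
        for j in (i - 1, i + 1):
            if 0 <= j < n and parent[j] != -1:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # elder rule: the younger component (larger birth) dies
                    young, old = (ri, rj) if birth[ri] > birth[rj] else (rj, ri)
                    if birth[young] < f[i]:   # skip zero-length bars
                        pairs.append((birth[young], f[i]))
                    parent[young] = old
    pairs.append((float(f.min()), np.inf))    # the essential class never dies
    return pairs

def total_persistence(pairs):
    """Sum of finite bar lengths: a simple scalar topological penalty."""
    return sum(d - b for b, d in pairs if np.isfinite(d))

def topological_loss(task_loss, f_samples, lam=0.1):
    """Hypothetical combined objective: task loss plus a penalty that is
    small exactly when the sampled function has few short-lived features."""
    return task_loss + lam * total_persistence(sublevel_persistence_0d(f_samples))
```

For instance, `sublevel_persistence_0d([0.2, 1.0, 0.1, 0.8, 0.05, 0.9])` returns `[(0.1, 0.8), (0.2, 1.0), (0.05, inf)]`, and `total_persistence` of those pairs is `1.5`. Penalising this value pushes the two finite bars, the spurious local minima at 0.1 and 0.2, towards zero lifetime, which is the sense in which a topological term implicitly restricts the set of admissible functions, i.e. the restriction whose learnability the paper analyses.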

Cite

Text

Bishop et al. "Hypothesis Classes with a Unique Persistence Diagram Are NOT Nonuniformly Learnable." NeurIPS 2020 Workshops: TDA_and_Beyond, 2020.

Markdown

[Bishop et al. "Hypothesis Classes with a Unique Persistence Diagram Are NOT Nonuniformly Learnable." NeurIPS 2020 Workshops: TDA_and_Beyond, 2020.](https://mlanthology.org/neuripsw/2020/bishop2020neuripsw-hypothesis/)

BibTeX

@inproceedings{bishop2020neuripsw-hypothesis,
  title     = {{Hypothesis Classes with a Unique Persistence Diagram Are NOT Nonuniformly Learnable}},
  author    = {Bishop, Nicholas George and Davies, Thomas and Tran-Thanh, Long},
  booktitle = {NeurIPS 2020 Workshops: TDA_and_Beyond},
  year      = {2020},
  url       = {https://mlanthology.org/neuripsw/2020/bishop2020neuripsw-hypothesis/}
}