Optimal Teaching Curricula with Compositional Simplicity Priors
Abstract
Machine teaching under strong simplicity priors can teach any concept in universal languages. Remarkably, recent experiments suggest that the teaching sets are shorter than the concept description itself. This raises many important questions about the complexity of concepts and their teaching size, especially when concepts are taught incrementally. In this paper we put a bound on these surprising experimental findings and reconnect teaching size and concept complexity: complex concepts do require large teaching sets. We also analyse teaching curricula and find a new interposition phenomenon: the teaching size of a concept can increase because examples are captured by simpler concepts built on previously acquired knowledge. We provide a procedure that not only avoids interposition but also builds an optimal curriculum. These results suggest novel curriculum design strategies for humans and machines.
Cite
Text
Garcia-Piqueras and Hernández-Orallo. "Optimal Teaching Curricula with Compositional Simplicity Priors." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2021. doi:10.1007/978-3-030-86486-6_43
Markdown
[Garcia-Piqueras and Hernández-Orallo. "Optimal Teaching Curricula with Compositional Simplicity Priors." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2021.](https://mlanthology.org/ecmlpkdd/2021/garciapiqueras2021ecmlpkdd-optimal/) doi:10.1007/978-3-030-86486-6_43
BibTeX
@inproceedings{garciapiqueras2021ecmlpkdd-optimal,
title = {{Optimal Teaching Curricula with Compositional Simplicity Priors}},
author = {Garcia-Piqueras, Manuel and Hernández-Orallo, José},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2021},
pages = {705--721},
doi = {10.1007/978-3-030-86486-6_43},
url = {https://mlanthology.org/ecmlpkdd/2021/garciapiqueras2021ecmlpkdd-optimal/}
}