One-Shot Learners Using Negative Counterexamples and Nearest Positive Examples
Abstract
As some cognitive research suggests, in the process of learning languages, in addition to overt explicit negative evidence, a child often receives covert explicit evidence in the form of corrected or rephrased sentences. In this paper, we suggest an approach to formalizing overt and covert evidence within the framework of one-shot learners via subset and membership queries to a teacher (oracle). We compare and explore the general capabilities of our models, as well as the complexity advantages of learnability models of one type over models of other types, where complexity is measured in terms of the number of queries. In particular, we establish that "correcting" positive examples sometimes give a learner more power than just negative (counter)examples and access to full positive data.
Cite
Text
Jain and Kinber. "One-Shot Learners Using Negative Counterexamples and Nearest Positive Examples." International Conference on Algorithmic Learning Theory, 2007. doi:10.1007/978-3-540-75225-7_22
Markdown
[Jain and Kinber. "One-Shot Learners Using Negative Counterexamples and Nearest Positive Examples." International Conference on Algorithmic Learning Theory, 2007.](https://mlanthology.org/alt/2007/jain2007alt-oneshot/) doi:10.1007/978-3-540-75225-7_22
BibTeX
@inproceedings{jain2007alt-oneshot,
title = {{One-Shot Learners Using Negative Counterexamples and Nearest Positive Examples}},
author = {Jain, Sanjay and Kinber, Efim B.},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2007},
pages = {257--271},
doi = {10.1007/978-3-540-75225-7_22},
url = {https://mlanthology.org/alt/2007/jain2007alt-oneshot/}
}