Combating False Negatives in Adversarial Imitation Learning (Student Abstract)
Abstract
We define the False Negatives problem and show that it is a significant limitation in adversarial imitation learning. We propose a method that solves the problem by leveraging the nature of goal-conditioned tasks. The method, dubbed Fake Conditioning, is tested on instruction-following tasks in BabyAI environments, where it improves sample efficiency over the baselines by at least an order of magnitude.
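The abstract only states that Fake Conditioning exploits the goal-conditioned structure of the tasks; the sketch below is one plausible reading of that idea, not the paper's implementation. The assumption (labeled as such in the comments) is that negatives for the goal-conditioned discriminator are built by pairing expert trajectories with mismatched goals, so that successful agent episodes are never mislabeled as negatives. All names and data shapes are illustrative.

```python
import random

def make_discriminator_batch(expert_demos):
    """Build a labeled batch for a goal-conditioned discriminator.

    ASSUMPTION (not from the abstract): instead of labeling possibly
    successful agent episodes as negatives, pair each expert trajectory
    with a goal drawn from a *different* demonstration. Such a pair is
    guaranteed not to satisfy its instruction, giving clean negatives.
    `expert_demos` is a list of (trajectory, goal) pairs.
    """
    batch = []
    # Positives: each expert trajectory with its true goal.
    for traj, goal in expert_demos:
        batch.append((traj, goal, 1))
    # Negatives ("fake conditioning"): each expert trajectory paired
    # with a goal sampled from another demonstration.
    goals = [g for _, g in expert_demos]
    for traj, goal in expert_demos:
        other_goals = [g for g in goals if g != goal]
        batch.append((traj, random.choice(other_goals), 0))
    return batch
```

In a full adversarial imitation loop these labeled pairs would train the discriminator, whose output then serves as a reward for the policy; that loop is omitted here.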
Cite
Text
Zolna et al. "Combating False Negatives in Adversarial Imitation Learning (Student Abstract)." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I10.7272

Markdown
[Zolna et al. "Combating False Negatives in Adversarial Imitation Learning (Student Abstract)." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/zolna2020aaai-combating/) doi:10.1609/AAAI.V34I10.7272

BibTeX
@inproceedings{zolna2020aaai-combating,
  title = {{Combating False Negatives in Adversarial Imitation Learning (Student Abstract)}},
  author = {Zolna, Konrad and Saharia, Chitwan and Boussioux, Léonard and Hui, David Yu-Tung and Chevalier-Boisvert, Maxime and Bahdanau, Dzmitry and Bengio, Yoshua},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year = {2020},
  pages = {13999--14000},
  doi = {10.1609/AAAI.V34I10.7272},
  url = {https://mlanthology.org/aaai/2020/zolna2020aaai-combating/}
}