Learning Implicitly with Noisy Data in Linear Arithmetic
Abstract
Robust learning in expressive languages with real-world data continues to be a challenging task. Numerous conventional methods appeal to heuristics without any assurances of robustness. While probably approximately correct (PAC) semantics offers strong guarantees, learning explicit representations is not tractable, even in propositional logic. However, recent work on so-called "implicit" learning has shown tremendous promise in terms of obtaining polynomial-time results for fragments of first-order logic. In this work, we extend implicit learning in PAC semantics to handle noisy data in the form of intervals and threshold uncertainty in the language of linear arithmetic. We prove that our extended framework retains the existing polynomial-time complexity guarantees. Furthermore, we provide the first empirical investigation of this hitherto purely theoretical framework. Using benchmark problems, we show that our implicit approach to learning optimal linear programming objective constraints significantly outperforms an explicit approach in practice.
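The sketch below illustrates the flavor of the implicit approach described in the abstract: rather than learning an explicit formula, a query is decided directly against noisy, interval-valued examples by checking, per example, whether the query constraint holds on every point of the observed box, and accepting when it is witnessed on a (1 - epsilon) fraction of the sample. This is a minimal illustration, not the authors' algorithm: the function names, the box-corner entailment check, and the toy data are all assumptions made for exposition, and the paper's method embeds such checks in a decision procedure with proven polynomial-time guarantees.

```python
import random

def entails_on_box(c, b, lower, upper):
    """Check whether the query constraint c . x <= b holds for every x in
    the interval box [lower, upper]. The maximum of a linear function over
    a box is attained at a corner, so no LP solver is needed here."""
    max_val = sum(ci * (u if ci > 0 else l)
                  for ci, l, u in zip(c, lower, upper))
    return max_val <= b

def implicitly_accept(query_c, query_b, noisy_examples, epsilon=0.1):
    """Accept the query if it is witnessed on at least a (1 - epsilon)
    fraction of the noisy interval-valued examples, in the spirit of
    deciding validity under PAC semantics (illustrative only)."""
    hits = sum(entails_on_box(query_c, query_b, lo, hi)
               for lo, hi in noisy_examples)
    return hits >= (1 - epsilon) * len(noisy_examples)

# Toy data: points near the plane x1 + x2 = 1, each coordinate observed
# only up to +/-0.05 interval noise (hypothetical data, for illustration).
random.seed(0)
examples = []
for _ in range(200):
    x1 = random.uniform(0, 1)
    x2 = 1 - x1 + random.gauss(0, 0.01)
    examples.append(([x1 - 0.05, x2 - 0.05], [x1 + 0.05, x2 + 0.05]))

# Query: does x1 + x2 <= 1.2 hold on almost all observed worlds?
print(implicitly_accept([1.0, 1.0], 1.2, examples, epsilon=0.1))
```

On this toy data the query is accepted, since even the worst corner of each noisy box satisfies x1 + x2 <= 1.2 for nearly all examples.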
Cite
Text
Rader et al. "Learning Implicitly with Noisy Data in Linear Arithmetic." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/195
Markdown
[Rader et al. "Learning Implicitly with Noisy Data in Linear Arithmetic." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/rader2021ijcai-learning/) doi:10.24963/IJCAI.2021/195
BibTeX
@inproceedings{rader2021ijcai-learning,
title = {{Learning Implicitly with Noisy Data in Linear Arithmetic}},
author = {Rader, Alexander Philipp and Mocanu, Ionela G. and Belle, Vaishak and Juba, Brendan},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2021},
pages = {1410--1417},
doi = {10.24963/IJCAI.2021/195},
url = {https://mlanthology.org/ijcai/2021/rader2021ijcai-learning/}
}