Causal Learnability

Abstract

The ability to predict, or at least recognize, the state of the world that an action brings about is a central feature of autonomous agents. We propose herein a formal framework within which we investigate whether this ability can be autonomously learned. The framework makes explicit certain premises that we contend are central to such a learning task: (i) slow sensors may prevent the sensing of an action's direct effects during learning; (ii) predictions need to be made reliably in future and novel situations. We initiate in this work a thorough investigation of the conditions under which learning is or is not feasible. Despite the very strong negative learnability results that we obtain, we also identify interesting special cases where learning is feasible and useful.

Cite

Text

Loizos Michael. "Causal Learnability." International Joint Conference on Artificial Intelligence, 2011. doi:10.5591/978-1-57735-516-8/IJCAI11-174

Markdown

[Loizos Michael. "Causal Learnability." International Joint Conference on Artificial Intelligence, 2011.](https://mlanthology.org/ijcai/2011/michael2011ijcai-causal/) doi:10.5591/978-1-57735-516-8/IJCAI11-174

BibTeX

@inproceedings{michael2011ijcai-causal,
  title     = {{Causal Learnability}},
  author    = {Michael, Loizos},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2011},
  pages     = {1014--1020},
  doi       = {10.5591/978-1-57735-516-8/IJCAI11-174},
  url       = {https://mlanthology.org/ijcai/2011/michael2011ijcai-causal/}
}