Learning to Reason About and to Act on Physical Cascading Events
Abstract
Reasoning about and interacting with dynamic environments is a fundamental problem in AI, but it becomes extremely challenging when actions can trigger cascades of cross-dependent events. We introduce a new learning setup called Cascade where an agent is shown a video of a simulated physical dynamic scene, and is asked to intervene and trigger a cascade of events, such that the system reaches a "counterfactual" goal. For instance, the agent may be asked to "Make the blue ball hit the red one, by pushing the green ball". The problem is very challenging because agent interventions are drawn from a continuous space, and cascades of events make the dynamics highly non-linear. We combine semantic tree search with an event-driven forward model and devise an algorithm that learns to search in semantic trees in continuous spaces. We demonstrate that our approach learns to effectively follow instructions to intervene in previously unseen complex scenes. Interestingly, it can use the observed cascade of events to reason about alternative counterfactual outcomes.
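To give a concrete sense of the setup, the sketch below shows one generic way to search a continuous intervention space with a learned forward model and an event-level goal score. It is not the paper's semantic tree search algorithm; the names (`search_intervention`, `score_goal`, the 2D intervention parameterization) are hypothetical and chosen only for illustration.

```python
# Illustrative sketch only: a sampling-based search over candidate interventions
# using a learned forward model that predicts the resulting cascade of events.
# All names and the cross-entropy-style update are assumptions, not the paper's method.
import numpy as np

def score_goal(events, goal):
    # Hypothetical scoring: 1.0 if the requested event (e.g. "blue hits red")
    # appears in the predicted cascade, 0.0 otherwise.
    return float(goal in events)

def search_intervention(forward_model, scene, goal,
                        n_candidates=256, n_elite=16, n_iters=5):
    """Search a continuous intervention space (here a 2D push vector).

    forward_model(scene, intervention) -> list of predicted event descriptions
    """
    mean, std = np.zeros(2), np.ones(2)
    for _ in range(n_iters):
        # Sample candidate interventions around the current estimate.
        candidates = np.random.randn(n_candidates, 2) * std + mean
        scores = np.array([
            score_goal(forward_model(scene, c), goal) for c in candidates
        ])
        # Refit the sampling distribution to the best-scoring candidates.
        elite = candidates[np.argsort(scores)[-n_elite:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean  # best intervention found
```

A real agent in this setting would replace `score_goal` with a goal-conditioned measure over predicted event sequences and would structure the search over semantic event trees rather than flat samples, as the abstract describes.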
Cite
Text
Atzmon et al. "Learning to Reason About and to Act on Physical Cascading Events." ICLR 2022 Workshops: OSC, 2022.
Markdown
[Atzmon et al. "Learning to Reason About and to Act on Physical Cascading Events." ICLR 2022 Workshops: OSC, 2022.](https://mlanthology.org/iclrw/2022/atzmon2022iclrw-learning/)
BibTeX
@inproceedings{atzmon2022iclrw-learning,
title = {{Learning to Reason About and to Act on Physical Cascading Events}},
author = {Atzmon, Yuval and Meirom, Eli and Mannor, Shie and Chechik, Gal},
booktitle = {ICLR 2022 Workshops: OSC},
year = {2022},
url = {https://mlanthology.org/iclrw/2022/atzmon2022iclrw-learning/}
}