Testing Causal Hypotheses Through Hierarchical Reinforcement Learning
Abstract
A goal of AI research is to develop agentic systems capable of operating in open-ended environments with autonomy and adaptability akin to a scientist's---generating hypotheses, empirically testing them, and drawing conclusions about how the world works. We propose Structural Causal Models (SCMs) as a formalization of the space of hypotheses, and hierarchical reinforcement learning (HRL) as a key ingredient for building agents that can systematically discover the correct SCM. This provides a framework for constructing agent behavior that generates and tests hypotheses, enabling transferable learning about the world. Finally, we discuss practical implementation strategies.
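To make the idea of SCMs as a hypothesis space concrete, here is a minimal illustrative sketch (not from the paper; all names are hypothetical). Two candidate causal hypotheses over binary variables X and Y are distinguished by an intervention, mirroring how an agent could empirically test which causal structure the environment follows:

```python
# Hypothetical sketch: two candidate SCM hypotheses over variables X, Y.
# Hypothesis "X->Y": intervening on Y should NOT change X.
# Hypothesis "Y->X": intervening on Y SHOULD change X.
# The "environment" below is a toy ground truth in which X causes Y.

def environment_do_y(y):
    """Simulate the intervention do(Y=y) in the true world.

    X is exogenous (fixed to 0 here for simplicity) and is unaffected
    by setting Y, because in the ground truth X causes Y, not vice versa.
    """
    return {"X": 0, "Y": y}

def test_hypotheses():
    """Run a discriminating experiment and return the supported hypothesis."""
    # do(Y=1) separates the hypotheses: under "Y->X" we would observe
    # X follow Y (X=1); under "X->Y", X remains at its exogenous value.
    outcome = environment_do_y(1)
    if outcome["X"] == 1:
        return "Y->X"
    return "X->Y"
```

In this toy setting, `test_hypotheses()` selects `"X->Y"`, since the intervention on Y leaves X unchanged. The paper's framing generalizes this: an HRL agent would learn option-like policies whose executions realize such interventions at scale.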
Cite
GX-Chen et al. "Testing Causal Hypotheses Through Hierarchical Reinforcement Learning." NeurIPS 2024 Workshops: IMOL, 2024.
@inproceedings{gxchen2024neuripsw-testing,
title = {{Testing Causal Hypotheses Through Hierarchical Reinforcement Learning}},
author = {GX-Chen, Anthony and Lin, Dongyan and Samiei, Mandana},
booktitle = {NeurIPS 2024 Workshops: IMOL},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/gxchen2024neuripsw-testing/}
}