Gradient-Based Neural DAG Learning
Abstract
We propose a novel score-based approach to learning a directed acyclic graph (DAG) from observational data. We adapt a recently proposed continuous constrained optimization formulation to allow for nonlinear relationships between variables using neural networks. This extension allows us to model complex interactions while searching more globally than greedy approaches. In addition to comparing our method to existing continuous optimization methods, we provide missing empirical comparisons to nonlinear greedy search methods. On both synthetic and real-world data sets, this new method outperforms current continuous methods on most tasks, while remaining competitive with existing greedy search methods on metrics important for causal inference.
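To make the "continuous constrained optimization formulation" in the abstract concrete, below is a minimal PyTorch sketch of the general idea: one small network per variable, a connectivity matrix extracted from the networks' weights, and a NOTEARS-style acyclicity constraint h(C) = tr(e^C) − d. The function names and toy architecture are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the authors' code): one MLP per variable j models
# x_j given the other variables; acyclicity is enforced on a connectivity
# matrix C derived from the networks' weights.

def connectivity_matrix(mlps):
    """C[i, j] > 0 iff input i can reach the output of network j through
    nonzero weights (product of |W| over the linear layers)."""
    d = len(mlps)
    cols = []
    for net in mlps:
        path = None
        for layer in net:
            if isinstance(layer, nn.Linear):
                w = layer.weight.abs()           # shape (out, in)
                path = w if path is None else w @ path
        cols.append(path.sum(dim=0))             # influence of each input on x_j
    C = torch.stack(cols, dim=1)                 # rows: inputs i, columns: outputs j
    return C * (1.0 - torch.eye(d))              # assume variable j's own input is masked

def acyclicity(C):
    """NOTEARS-style h(C) = tr(exp(C)) - d; zero iff the nonnegative
    weighted graph encoded by C has no cycles."""
    return torch.trace(torch.matrix_exp(C)) - C.shape[0]

d = 5
mlps = [nn.Sequential(nn.Linear(d, 16), nn.Sigmoid(), nn.Linear(16, 1))
        for _ in range(d)]
h = acyclicity(connectivity_matrix(mlps))        # driven toward 0 during training
```

In the paper's approach, this constraint is enforced with an augmented Lagrangian while fitting the networks to the data; the sketch omits that training loop.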
Cite
Text
Lachapelle et al. "Gradient-Based Neural DAG Learning." NeurIPS 2019 Workshops: Deep_Inverse, 2019.

Markdown
[Lachapelle et al. "Gradient-Based Neural DAG Learning." NeurIPS 2019 Workshops: Deep_Inverse, 2019.](https://mlanthology.org/neuripsw/2019/lachapelle2019neuripsw-gradientbased/)

BibTeX
@inproceedings{lachapelle2019neuripsw-gradientbased,
title = {{Gradient-Based Neural DAG Learning}},
author = {Lachapelle, Sébastien and Brouillard, Philippe and Deleu, Tristan and Lacoste-Julien, Simon},
booktitle = {NeurIPS 2019 Workshops: Deep_Inverse},
year = {2019},
url = {https://mlanthology.org/neuripsw/2019/lachapelle2019neuripsw-gradientbased/}
}