DAG Learning on the Permutahedron
Abstract
We propose a continuous optimization framework for discovering a latent directed acyclic graph (DAG) from observational data. Our approach optimizes over the polytope of permutation vectors, the so-called Permutahedron, to learn a topological ordering. Edges can be optimized jointly, or learned conditional on the ordering via a non-differentiable subroutine. Compared to existing continuous optimization approaches, our formulation has a number of advantages: (1) validity: it optimizes over exact DAGs, as opposed to other relaxations that optimize over approximate DAGs; (2) modularity: it accommodates any edge-optimization procedure, edge structural parameterization, and optimization loss; (3) end-to-end: it either alternates between node-ordering and edge-optimization, or optimizes them jointly. We demonstrate, on real-world problems in protein-signaling and transcriptional network discovery, that our approach lies on the Pareto frontier of two key metrics, the structural intervention distance (SID) and the structural Hamming distance (SHD).
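To make the "edges conditional on the ordering" idea concrete, here is a minimal, hypothetical sketch, not the authors' implementation: given a topological order, edges may only point from earlier to later nodes, so each node can be fit by regressing it on its predecessors, and acyclicity holds by construction (the "validity" property above). For tiny graphs the ordering itself can be found by brute-force search over permutations, whereas the paper instead relaxes this search to the Permutahedron. All helper names below are invented for illustration.

```python
# Hypothetical sketch of order-conditional edge fitting for a linear SEM.
# Helper names are illustrative; the paper relaxes the ordering search to
# the Permutahedron rather than enumerating permutations as done here.
import itertools
import numpy as np

def fit_edges_given_order(X, order, lam=1e-3):
    """Ridge-regularized least-squares weights W (d x d), with edges
    allowed only from earlier to later nodes in `order`."""
    d = X.shape[1]
    W = np.zeros((d, d))
    for pos, j in enumerate(order):
        parents = list(order[:pos])  # candidate parents precede j
        if parents:
            A = X[:, parents]
            coef = np.linalg.solve(A.T @ A + lam * np.eye(len(parents)),
                                   A.T @ X[:, j])
            W[parents, j] = coef
    return W

def score_order(X, order, lam=1e-3):
    """Residual sum of squares of the linear SEM under this ordering."""
    W = fit_edges_given_order(X, order, lam)
    return float(np.sum((X - X @ W) ** 2))

def best_order_bruteforce(X, lam=1e-3):
    """Exhaustive search over orderings (feasible only for tiny d)."""
    d = X.shape[1]
    return min(itertools.permutations(range(d)),
               key=lambda o: score_order(X, o, lam))

# Toy chain x0 -> x1 -> x2; the search should typically recover (0, 1, 2).
rng = np.random.default_rng(0)
x0 = rng.normal(size=500)
x1 = 2.0 * x0 + 0.3 * rng.normal(size=500)
x2 = -1.5 * x1 + 0.3 * rng.normal(size=500)
X = np.column_stack([x0, x1, x2])
order = best_order_bruteforce(X)
print("recovered order:", order)
print("edge weights:\n", np.round(fit_edges_given_order(X, order), 2))
```

Because any weight matrix constrained this way is strictly upper-triangular under the learned permutation, every candidate in the search space is an exact DAG; no post-hoc acyclicity penalty or thresholding is needed.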
Cite
Text
Zantedeschi et al. "DAG Learning on the Permutahedron." International Conference on Learning Representations, 2023.

Markdown

[Zantedeschi et al. "DAG Learning on the Permutahedron." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/zantedeschi2023iclr-dag/)

BibTeX
@inproceedings{zantedeschi2023iclr-dag,
  title     = {{DAG Learning on the Permutahedron}},
  author    = {Zantedeschi, Valentina and Franceschi, Luca and Kaddour, Jean and Kusner, Matt and Niculae, Vlad},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/zantedeschi2023iclr-dag/}
}