Learning Structure-Aware Representations of Dependent Types
Abstract
Agda is a dependently-typed programming language and a proof assistant, pivotal in proof formalization and programming language theory. This paper extends the Agda ecosystem into machine learning territory, and, vice versa, makes Agda-related resources available to machine learning practitioners. We introduce and release a novel dataset of Agda program-proofs that is elaborate and extensive enough to support various machine learning applications -- the first of its kind. Leveraging the dataset's ultra-high resolution, which details proof states at the sub-type level, we propose a novel neural architecture targeted at faithfully representing dependently-typed programs on the basis of structural rather than nominal principles. We instantiate and evaluate our architecture in a premise selection setup, where it achieves promising initial results, surpassing strong baselines.
Cite
Text
Kogkalidis et al. "Learning Structure-Aware Representations of Dependent Types." Neural Information Processing Systems, 2024. doi:10.52202/079017-2079
Markdown
[Kogkalidis et al. "Learning Structure-Aware Representations of Dependent Types." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/kogkalidis2024neurips-learning/) doi:10.52202/079017-2079
BibTeX
@inproceedings{kogkalidis2024neurips-learning,
title = {{Learning Structure-Aware Representations of Dependent Types}},
author = {Kogkalidis, Konstantinos and Melkonian, Orestis and Bernardy, Jean-Philippe},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-2079},
url = {https://mlanthology.org/neurips/2024/kogkalidis2024neurips-learning/}
}