Learning to Represent Edits

Abstract

We introduce the problem of learning distributed representations of edits. By combining a "neural editor" with an "edit encoder", our models learn to represent the salient information of an edit and can be used to apply edits to new inputs. We experiment on natural language and source code edit data. Our evaluation yields promising results that suggest that our neural network models learn to capture the structure and semantics of edits. We hope that this interesting task and data source will inspire other researchers to work further on this problem.
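To make the editor/encoder split described in the abstract concrete, below is a minimal PyTorch sketch of the idea: an edit encoder maps an (original, edited) pair to a fixed-size edit vector, and a neural editor decodes the edited version from the original conditioned on that vector. All class names, layer choices, dimensions, and the aligned-pair simplification are illustrative assumptions for this sketch, not the paper's implementation.

    import torch
    import torch.nn as nn

    class EditEncoder(nn.Module):
        """Encodes an (original, edited) pair into a fixed-size edit vector."""
        def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, edit_dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            # Bidirectional GRU over the token-wise concatenated pair;
            # a simple stand-in for the paper's edit encoders.
            self.rnn = nn.GRU(2 * embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
            self.proj = nn.Linear(2 * hidden_dim, edit_dim)

        def forward(self, x_before, x_after):
            # Simplifying assumption: x_before and x_after are aligned (same length).
            pair = torch.cat([self.embed(x_before), self.embed(x_after)], dim=-1)
            _, h = self.rnn(pair)                    # h: (2, batch, hidden_dim)
            h = torch.cat([h[0], h[1]], dim=-1)      # merge the two directions
            return self.proj(h)                      # (batch, edit_dim)

    class NeuralEditor(nn.Module):
        """Decodes the edited sequence from the original and the edit vector."""
        def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, edit_dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
            self.decoder = nn.GRU(embed_dim + edit_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, x_before, edit_vec, x_target):
            _, h = self.encoder(self.embed(x_before))   # summary of the original
            # Condition every decoding step on the edit representation.
            edit = edit_vec.unsqueeze(1).expand(-1, x_target.size(1), -1)
            dec_in = torch.cat([self.embed(x_target), edit], dim=-1)
            out, _ = self.decoder(dec_in, h)
            return self.out(out)                        # per-step vocabulary logits

    # Tiny smoke test with random token ids (vocabulary size 100, length 7).
    vocab = 100
    enc, ed = EditEncoder(vocab), NeuralEditor(vocab)
    x_before = torch.randint(0, vocab, (2, 7))
    x_after = torch.randint(0, vocab, (2, 7))
    logits = ed(x_before, enc(x_before, x_after), x_after)
    loss = nn.functional.cross_entropy(logits.reshape(-1, vocab), x_after.reshape(-1))
    loss.backward()

Applying a learned edit to a new input then amounts to encoding an edit from one pair and feeding the resulting vector to the editor together with a different original sequence.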

Cite

Text

Yin et al. "Learning to Represent Edits." International Conference on Learning Representations, 2019.

Markdown

[Yin et al. "Learning to Represent Edits." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/yin2019iclr-learning/)

BibTeX

@inproceedings{yin2019iclr-learning,
  title     = {{Learning to Represent Edits}},
  author    = {Yin, Pengcheng and Neubig, Graham and Allamanis, Miltiadis and Brockschmidt, Marc and Gaunt, Alexander L.},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://mlanthology.org/iclr/2019/yin2019iclr-learning/}
}