AutoCoder: Leveraging Transformers for Automatic Code Synthesis

Abstract

Synthesizing programs from natural language descriptions is a challenging task. In this paper, we leverage transformer-based language models for program synthesis. We experiment with two transformer variants and show that they outperform existing state-of-the-art (SOTA) models. We also discuss qualitative differences in the learned representations of the two variants. Finally, we compare both models through the lens of their "degree of memorization" and demonstrate that the vanilla transformer has a higher tendency to memorize the training data than the other variant.

Cite

Text

Anand et al. "AutoCoder: Leveraging Transformers for Automatic Code Synthesis." NeurIPS 2021 Workshops: AIPLANS, 2021.

Markdown

[Anand et al. "AutoCoder: Leveraging Transformers for Automatic Code Synthesis." NeurIPS 2021 Workshops: AIPLANS, 2021.](https://mlanthology.org/neuripsw/2021/anand2021neuripsw-autocoder/)

BibTeX

@inproceedings{anand2021neuripsw-autocoder,
  title     = {{AutoCoder: Leveraging Transformers for Automatic Code Synthesis}},
  author    = {Anand, Mrinal and Kayal, Pratik and Singh, Mayank},
  booktitle = {NeurIPS 2021 Workshops: AIPLANS},
  year      = {2021},
  url       = {https://mlanthology.org/neuripsw/2021/anand2021neuripsw-autocoder/}
}