BabelTower: Learning to Auto-Parallelized Program Translation
Abstract
GPUs have become the dominant computing platform for many applications, yet programming them with the widely used CUDA parallel programming model remains difficult. Since sequential C code is relatively easy to obtain, either from legacy repositories or by manual implementation, automatically translating C to its parallel CUDA counterpart is a promising way to relieve the burden of GPU programming. However, because of the large gap between the sequential C and parallel CUDA programming models, existing approaches fail at this challenging auto-parallelized program translation. In this paper, we propose a learning-based framework, BabelTower, to address this problem. We first create a large-scale dataset consisting of compute-intensive function-level monolingual corpora. We further propose using back-translation with a discriminative reranker to cope with unpaired corpora and parallel semantic conversion. Experimental results show that BabelTower outperforms the state of the art by 1.79, 6.09, and 9.39 points in terms of BLEU, CodeBLEU, and the specifically designed ParaBLEU, respectively. The CUDA code generated by BabelTower attains a speedup of up to 347x over the sequential C code, and developer productivity is improved by up to 3.8x.
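To make the C-to-CUDA translation task concrete, below is a minimal illustrative pair (our own sketch, not drawn from the paper's dataset): a sequential C loop and the kind of hand-parallelized CUDA kernel that BabelTower aims to produce automatically.

// Sequential C: element-wise vector addition
void vec_add(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}

// Illustrative CUDA counterpart: one thread computes one element
__global__ void vec_add_kernel(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

// Example launch with 256 threads per block (device pointers d_a, d_b, d_c assumed):
// vec_add_kernel<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);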
Cite
Text
Wen et al. "BabelTower: Learning to Auto-Parallelized Program Translation." International Conference on Machine Learning, 2022.
Markdown
[Wen et al. "BabelTower: Learning to Auto-Parallelized Program Translation." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/wen2022icml-babeltower/)
BibTeX
@inproceedings{wen2022icml-babeltower,
title = {{BabelTower: Learning to Auto-Parallelized Program Translation}},
author = {Wen, Yuanbo and Guo, Qi and Fu, Qiang and Li, Xiaqing and Xu, Jianxing and Tang, Yanlin and Zhao, Yongwei and Hu, Xing and Du, Zidong and Li, Ling and Wang, Chao and Zhou, Xuehai and Chen, Yunji},
booktitle = {International Conference on Machine Learning},
year = {2022},
pages = {23685--23700},
volume = {162},
url = {https://mlanthology.org/icml/2022/wen2022icml-babeltower/}
}