AdaTune: Adaptive Tensor Program Compilation Made Efficient

Abstract

Deep learning models are computationally intensive, and implementations often have to be highly optimized by experts or hardware vendors to be usable in practice. The DL compiler, together with learning-to-compile techniques, has proven to be a powerful approach for optimizing tensor programs. However, a limitation of this approach is that it still suffers from prohibitively long overall optimization time.
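To make concrete what "learning to compile" a tensor program involves, below is a minimal sketch of search-based auto-tuning using TVM's AutoTVM API, the kind of system the paper targets. It follows the standard AutoTVM tutorial pattern (TVM releases around 0.6 to 0.8); the matmul template, the task name "example/matmul", and the trial budget are illustrative assumptions, not the paper's AdaTune method. Each trial compiles a candidate schedule and measures it on real hardware, which is why the overall optimization time the abstract criticizes grows so large.

# Sketch of AutoTVM-style tensor program tuning (illustrative, not AdaTune itself).
# Requires tvm and xgboost to be installed.
import tvm
from tvm import te, autotvm

@autotvm.template("example/matmul")  # hypothetical task name for this sketch
def matmul(N, L, M, dtype):
    # Declare the computation: C = A @ B.
    A = te.placeholder((N, L), name="A", dtype=dtype)
    B = te.placeholder((L, M), name="B", dtype=dtype)
    k = te.reduce_axis((0, L), name="k")
    C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
    s = te.create_schedule(C.op)

    # Define a tunable search space over loop tilings.
    y, x = s[C].op.axis
    cfg = autotvm.get_config()
    cfg.define_split("tile_y", y, num_outputs=2)
    cfg.define_split("tile_x", x, num_outputs=2)
    yo, yi = cfg["tile_y"].apply(s, C, y)
    xo, xi = cfg["tile_x"].apply(s, C, x)
    s[C].reorder(yo, xo, yi, xi)
    return s, [A, B, C]

task = autotvm.task.create("example/matmul", args=(512, 512, 512, "float32"), target="llvm")

# Every candidate is built and benchmarked on the target hardware;
# this measurement loop dominates the end-to-end tuning cost.
measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=5, repeat=1),
)

# "Learning to compile": a cost model (here XGBoost) is fit on measured
# configurations and used to pick the next candidates to try.
tuner = autotvm.tuner.XGBTuner(task)
tuner.tune(
    n_trial=64,  # illustrative budget; real workloads often need far more trials
    measure_option=measure_option,
    callbacks=[autotvm.callback.log_to_file("matmul.log")],
)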

Cite

Text

Li et al. "AdaTune: Adaptive Tensor Program Compilation Made Efficient." Neural Information Processing Systems, 2020.

Markdown

[Li et al. "AdaTune: Adaptive Tensor Program Compilation Made Efficient." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/li2020neurips-adatune/)

BibTeX

@inproceedings{li2020neurips-adatune,
  title     = {{AdaTune: Adaptive Tensor Program Compilation Made Efficient}},
  author    = {Li, Menghao and Zhang, Minjia and Wang, Chi and Li, Mingqin},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/li2020neurips-adatune/}
}