Knowledge Distillation for Fast and Accurate DNA Sequence Correction

Abstract

Accurate genome sequencing can improve our understanding of biology and the genetic basis of disease. The standard approach for generating DNA sequences from PacBio instruments relies on HMM-based models. Here, we introduce Distilled DeepConsensus, a distilled transformer-encoder model for sequence correction that improves upon HMM-based methods while being designed with runtime constraints in mind. Distilled DeepConsensus is 1.3x faster and 1.5x smaller than its larger counterpart while improving the yield of high-quality reads (Q30) over the HMM-based method by 1.69x (vs. 1.73x for the larger model). With improved accuracy of genomic sequences, Distilled DeepConsensus improves downstream applications of genomic sequence analysis, such as reducing variant calling errors by 39% (34% for the larger model) and improving genome assembly quality by 3.8% (4.2% for the larger model). We show that the representations learned by the distilled model are similar to those learned by the larger model.
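
The abstract does not spell out the distillation objective. A common recipe for knowledge distillation (Hinton et al., 2015) trains the student on a blend of softened teacher outputs and hard labels; the sketch below illustrates that idea in PyTorch for per-base classification logits. The shapes, temperature, and mixing weight alpha are illustrative assumptions, not values or code from the paper.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Soft-label distillation loss (Hinton et al., 2015).

    student_logits, teacher_logits: (batch, seq_len, vocab) per-base logits.
    labels: (batch, seq_len) ground-truth base indices.
    temperature and alpha are hypothetical hyperparameters for illustration.
    """
    vocab = student_logits.size(-1)
    s = student_logits.reshape(-1, vocab)
    t = teacher_logits.reshape(-1, vocab)
    y = labels.reshape(-1)

    # Soften both distributions with the temperature.
    soft_teacher = F.softmax(t / temperature, dim=-1)
    log_soft_student = F.log_softmax(s / temperature, dim=-1)

    # KL divergence from the student to the teacher's soft targets;
    # the T^2 factor keeps gradient magnitudes comparable across temperatures.
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2

    # Ordinary cross-entropy against the hard labels.
    ce = F.cross_entropy(s, y)

    return alpha * kd + (1 - alpha) * ce

# Example: a batch of 2 sequences of length 10 over a 5-symbol vocabulary.
student = torch.randn(2, 10, 5)
teacher = torch.randn(2, 10, 5)
labels = torch.randint(0, 5, (2, 10))
loss = distillation_loss(student, teacher, labels)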

Cite

Text

Belyaeva et al. "Knowledge Distillation for Fast and Accurate DNA Sequence Correction." NeurIPS 2022 Workshops: LMRL, 2022.

Markdown

[Belyaeva et al. "Knowledge Distillation for Fast and Accurate DNA Sequence Correction." NeurIPS 2022 Workshops: LMRL, 2022.](https://mlanthology.org/neuripsw/2022/belyaeva2022neuripsw-knowledge/)

BibTeX

@inproceedings{belyaeva2022neuripsw-knowledge,
  title     = {{Knowledge Distillation for Fast and Accurate DNA Sequence Correction}},
  author    = {Belyaeva, Anastasiya and Shor, Joel and Cook, Daniel E. and Shafin, Kishwar and Liu, Daniel and Töpfer, Armin and Wenger, Aaron M. and Rowell, William J. and Yang, Howard and Kolesnikov, Alexey and McLean, Cory Y. and Nattestad, Maria and Carroll, Andrew and Chang, Pi-Chuan},
  booktitle = {NeurIPS 2022 Workshops: LMRL},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/belyaeva2022neuripsw-knowledge/}
}