Transformer Model for Genome Sequence Analysis

Abstract

One major challenge of applying machine learning in genomics is the scarcity of labeled data, which often requires expensive and time-consuming physical experimentation under laboratory conditions to obtain. However, the advent of high-throughput sequencing has made large quantities of unlabeled genome data available, which can be leveraged for semi-supervised learning through representation learning. In this paper, we investigate the use of a popular and well-established language model, namely BERT [Devlin et al., 2018], for genome sequence analysis. Specifically, we adapt DNABERT [Ji et al., 2021] into GenomeNet-BERT to produce useful representations for downstream tasks such as classification and semi-supervised learning. We explore different pretraining setups and compare their performance on a virus genome classification task against strictly supervised training and baselines across different training set sizes. Our experiments show that this architecture improves performance over existing methods at the cost of more resource-intensive training.
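For readers unfamiliar with the DNABERT-style setup the abstract builds on, the following is a minimal, hypothetical sketch (not the authors' code) of the general idea: DNA sequences are split into overlapping k-mers, mapped to a vocabulary, and fed to a BERT-style classifier that can be fine-tuned for a task such as virus genome classification. The k-mer length, model size, and vocabulary layout below are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's implementation): k-mer tokenization
# of DNA plus one fine-tuning step of a small BERT-style sequence classifier.
from itertools import product
import torch
from transformers import BertConfig, BertForSequenceClassification

K = 6  # DNABERT-style k-mer length (assumption)
special = {"[PAD]": 0, "[UNK]": 1, "[CLS]": 2, "[SEP]": 3, "[MASK]": 4}
vocab = {"".join(p): i + len(special) for i, p in enumerate(product("ACGT", repeat=K))}

def tokenize(seq: str, max_len: int = 512) -> torch.Tensor:
    """Split a DNA sequence into overlapping k-mers and map them to token ids."""
    kmers = [seq[i:i + K] for i in range(len(seq) - K + 1)]
    ids = [special["[CLS]"]]
    ids += [vocab.get(k, special["[UNK]"]) for k in kmers][: max_len - 2]
    ids += [special["[SEP]"]]
    ids += [special["[PAD]"]] * (max_len - len(ids))  # pad to fixed length
    return torch.tensor(ids)

# Small, illustrative model configuration for binary classification (e.g. virus vs. non-virus).
config = BertConfig(
    vocab_size=len(vocab) + len(special),
    hidden_size=256,
    num_hidden_layers=4,
    num_attention_heads=4,
    num_labels=2,
)
model = BertForSequenceClassification(config)

# One fine-tuning step on a toy batch of sequences with labels.
batch = torch.stack([tokenize("ACGT" * 100), tokenize("TTGACA" * 60)])
labels = torch.tensor([0, 1])
loss = model(input_ids=batch, labels=labels).loss
loss.backward()
```

In the semi-supervised setting described in the abstract, the encoder weights would first be pretrained with masked language modeling on unlabeled genome data before this supervised fine-tuning step.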

Cite

Text

Hurmer et al. "Transformer Model for Genome Sequence Analysis." NeurIPS 2022 Workshops: LMRL, 2022.

Markdown

[Hurmer et al. "Transformer Model for Genome Sequence Analysis." NeurIPS 2022 Workshops: LMRL, 2022.](https://mlanthology.org/neuripsw/2022/hurmer2022neuripsw-transformer/)

BibTeX

@inproceedings{hurmer2022neuripsw-transformer,
  title     = {{Transformer Model for Genome Sequence Analysis}},
  author    = {Hurmer, Noah and To, Xiao-Yin and Binder, Martin and Gündüz, Hüseyin Anil and Münch, Philipp C. and Mreches, René and McHardy, Alice C. and Bischl, Bernd and Rezaei, Mina},
  booktitle = {NeurIPS 2022 Workshops: LMRL},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/hurmer2022neuripsw-transformer/}
}