Extending Prot2Token: Aligning Protein Language Models for Unified and Diverse Protein Prediction Tasks
Abstract
Comprehensive protein function and property prediction remains a major challenge due to the vast diversity of sequences, structural variation, and limited labeled data. Existing models are often task-specific, each requiring independent training, which limits scalability. To address this, we extend Prot2Token, a unified autoregressive framework that focuses on the post-training alignment of pre-trained protein language models (PLMs), to new applications. Our approach casts a diverse set of protein-prediction tasks as next-token prediction, including protein-protein structure similarity, 3D structure prediction, mutation stability, post-translational modifications (PTMs), substrate-kinase phosphorylation sites, protein-protein affinity, and protein-ion binding sites. We introduce a self-supervised pre-training stage for the decoder, improving model initialization and downstream predictions. By integrating a causal autoregressive transformer with a pre-trained ESM-2 encoder, our model effectively aligns diverse protein tasks within a single framework. Additionally, we discuss the opportunities and limitations of this approach, providing insights for future research on optimizing PLMs as a general tool for broader biological applications.
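As a rough illustration of the encoder-decoder setup the abstract describes (a pre-trained ESM-2 encoder whose per-residue embeddings are consumed by a causal autoregressive decoder that emits a task token followed by label tokens), the following is a minimal PyTorch sketch. It is not the authors' implementation; the class name, hidden sizes, vocabulary layout, and the use of a generic `nn.TransformerDecoder` are assumptions made for clarity.

```python
import torch
import torch.nn as nn

class Prot2TokenSketch(nn.Module):
    """Hypothetical sketch: per-residue embeddings from a (frozen) protein
    language model such as ESM-2 are cross-attended by a causal transformer
    decoder that autoregressively emits a task token and then label tokens."""

    def __init__(self, encoder_dim=1280, vocab_size=1000, d_model=512,
                 n_heads=8, n_layers=4, max_len=512):
        super().__init__()
        # Project encoder states (e.g. ESM-2 650M hidden size 1280) to decoder width.
        self.enc_proj = nn.Linear(encoder_dim, d_model)
        self.tok_emb = nn.Embedding(vocab_size, d_model)   # task + label vocabulary (assumed layout)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, encoder_states, target_tokens):
        # encoder_states: (B, L_seq, encoder_dim) residue embeddings from the PLM
        # target_tokens:  (B, L_tgt) = [<task>, label tokens ...], shifted right for teacher forcing
        memory = self.enc_proj(encoder_states)
        pos = torch.arange(target_tokens.size(1), device=target_tokens.device)
        x = self.tok_emb(target_tokens) + self.pos_emb(pos)
        causal_mask = nn.Transformer.generate_square_subsequent_mask(
            target_tokens.size(1)).to(target_tokens.device)
        h = self.decoder(x, memory, tgt_mask=causal_mask)
        return self.lm_head(h)  # next-token logits over the label vocabulary
```

In this reading, a single decoder is shared across tasks, and the leading task token selects which label sequence (e.g. PTM sites, binding sites, or a scalar encoded as digit tokens) the model should generate.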
Cite
Text
Pourmirzaei et al. "Extending Prot2Token: Aligning Protein Language Models for Unified and Diverse Protein Prediction Tasks." ICLR 2025 Workshops: LMRL, 2025.
Markdown
[Pourmirzaei et al. "Extending Prot2Token: Aligning Protein Language Models for Unified and Diverse Protein Prediction Tasks." ICLR 2025 Workshops: LMRL, 2025.](https://mlanthology.org/iclrw/2025/pourmirzaei2025iclrw-extending/)
BibTeX
@inproceedings{pourmirzaei2025iclrw-extending,
title = {{Extending Prot2Token: Aligning Protein Language Models for Unified and Diverse Protein Prediction Tasks}},
author = {Pourmirzaei, Mahdi and Han, Ye and Esmaili, Farzaneh and Pourmirzaeioliaei, Mohammadreza and Alqarghuli, Salhuldin and Chen, Kai and Xu, Dong},
booktitle = {ICLR 2025 Workshops: LMRL},
year = {2025},
url = {https://mlanthology.org/iclrw/2025/pourmirzaei2025iclrw-extending/}
}