Preference Optimization of Protein Language Models as a Multi-Objective Binder Design Paradigm
Abstract
We present a multi-objective binder design paradigm based on instruction fine-tuning and direct preference optimization (DPO) of autoregressive protein language models (pLMs). Multiple design objectives are encoded in the language model through direct optimization on expert-curated preference sequence datasets comprising preferred and dispreferred distributions. We show that the proposed alignment strategy enables ProtGPT2 to effectively design binders conditioned on specified receptors and a drug developability criterion. Generated binder samples demonstrate median isoelectric point (pI) improvements of 17%-60%.
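The preference optimization step described above follows the standard DPO objective: given a preferred and a dispreferred sequence, the policy is pushed to widen its log-probability margin over a frozen reference model. A minimal sketch of that per-pair loss (an illustration of the generic DPO formula, not the authors' implementation; the function name and the β value are assumptions):

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Hypothetical per-pair DPO loss sketch.

    logp_w, logp_l:         summed log-probs of the preferred / dispreferred
                            sequence under the policy being tuned
    ref_logp_w, ref_logp_l: the same quantities under the frozen reference model
    beta:                   illustrative temperature controlling the KL penalty
    """
    # Margin by which the policy favors the winner more than the reference does
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log sigmoid(margin): small when the policy already prefers the winner
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy and reference agree exactly, the margin is zero and the loss equals log 2; any positive margin drives the loss below that baseline.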
Cite
Text
Mistani and Mysore. "Preference Optimization of Protein Language Models as a Multi-Objective Binder Design Paradigm." ICLR 2024 Workshops: GEM, 2024.
Markdown
[Mistani and Mysore. "Preference Optimization of Protein Language Models as a Multi-Objective Binder Design Paradigm." ICLR 2024 Workshops: GEM, 2024.](https://mlanthology.org/iclrw/2024/mistani2024iclrw-preference/)
BibTeX
@inproceedings{mistani2024iclrw-preference,
  title     = {{Preference Optimization of Protein Language Models as a Multi-Objective Binder Design Paradigm}},
  author    = {Mistani, Pouria and Mysore, Venkatesh},
  booktitle = {ICLR 2024 Workshops: GEM},
  year      = {2024},
  url       = {https://mlanthology.org/iclrw/2024/mistani2024iclrw-preference/}
}