SmileyLlama: Modifying Large Language Models for Directed Chemical Space Exploration
Abstract
Here we show that a Large Language Model (LLM) can serve as a foundation model for a Chemical Language Model (CLM), performing at or above the level of CLMs trained solely on chemical SMILES string data. Using supervised fine-tuning (SFT) and direct preference optimization (DPO) on the open-source Llama LLM, we demonstrate that an LLM can be trained to respond to prompts, for example by generating molecules with properties of interest to drug development. This framework allows an LLM to act not merely as a chatbot client for chemistry and materials tasks, but to be adapted into a CLM that generates molecules with user-specified properties.
Cite
Text
Cavanagh et al. "SmileyLlama: Modifying Large Language Models for Directed Chemical Space Exploration." NeurIPS 2024 Workshops: AIDrugX, 2024.
Markdown
[Cavanagh et al. "SmileyLlama: Modifying Large Language Models for Directed Chemical Space Exploration." NeurIPS 2024 Workshops: AIDrugX, 2024.](https://mlanthology.org/neuripsw/2024/cavanagh2024neuripsw-smileyllama/)
BibTeX
@inproceedings{cavanagh2024neuripsw-smileyllama,
title = {{SmileyLlama: Modifying Large Language Models for Directed Chemical Space Exploration}},
author = {Cavanagh, Joe and Sun, Kunyang and Gritsevskiy, Andrew and Bagni, Dorian and Head-Gordon, Teresa and Bannister, Thomas D.},
booktitle = {NeurIPS 2024 Workshops: AIDrugX},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/cavanagh2024neuripsw-smileyllama/}
}