Uncovering Neural Scaling Laws in Molecular Representation Learning

Abstract

Molecular Representation Learning (MRL) has emerged as a powerful tool for drug and materials discovery in a variety of tasks such as virtual screening and inverse design. While there has been a surge of interest in advancing model-centric techniques, the influence of both data quantity and quality on molecular representations is not yet clearly understood within this field. In this paper, we delve into the neural scaling behaviors of MRL from a data-centric viewpoint, examining four key dimensions: (1) data modalities, (2) dataset splitting, (3) the role of pre-training, and (4) model capacity. Our empirical studies confirm a consistent power-law relationship between data volume and MRL performance across these dimensions. Additionally, through detailed analysis, we identify potential avenues for improving learning efficiency. To challenge these scaling laws, we adapt seven popular data pruning strategies to molecular data and benchmark their performance. Our findings underline the importance of data-centric MRL and highlight possible directions for future research.
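The abstract's central quantitative claim is a power-law relationship between training-set size and model performance. As a minimal sketch of how such a relationship is typically fit, the snippet below assumes the standard neural-scaling form error ≈ a · N^(−b) and regresses it as a line in log-log space; the data points are hypothetical illustrations, not results from the paper, and the paper's exact parameterization may differ.

```python
# Sketch: fitting a power-law scaling curve, error ≈ a * N^(-b),
# where N is the number of training molecules. The (N, error) pairs
# below are hypothetical and only illustrate the fitting procedure.
import numpy as np

N = np.array([1_000, 4_000, 16_000, 64_000, 256_000], dtype=float)
err = np.array([0.52, 0.38, 0.27, 0.20, 0.14])

# Taking logs turns the power law into a line:
#   log(err) = log(a) - b * log(N)
slope, intercept = np.polyfit(np.log(N), np.log(err), deg=1)
a = np.exp(intercept)
b = -slope  # scaling exponent

print(f"error ~= {a:.3f} * N^(-{b:.3f})")
```

A fitted exponent b > 0 indicates that error keeps falling as data volume grows, which is the behavior the paper reports across modalities, splits, pre-training settings, and model capacities.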

Cite

Text

Chen et al. "Uncovering Neural Scaling Laws in Molecular Representation Learning." Neural Information Processing Systems, 2023.

Markdown

[Chen et al. "Uncovering Neural Scaling Laws in Molecular Representation Learning." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/chen2023neurips-uncovering/)

BibTeX

@inproceedings{chen2023neurips-uncovering,
  title     = {{Uncovering Neural Scaling Laws in Molecular Representation Learning}},
  author    = {Chen, Dingshuo and Zhu, Yanqiao and Zhang, Jieyu and Du, Yuanqi and Li, Zhixun and Liu, Qiang and Wu, Shu and Wang, Liang},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/chen2023neurips-uncovering/}
}