Pretrained Language Models to Solve Graph Tasks in Natural Language
Abstract
Pretrained large language models (LLMs) are powerful learners on a wide variety of language tasks. We explore whether LLMs can learn from graph-structured data when the graphs are described in natural language. We further investigate data augmentation and pretraining specific to the graph domain, and show that LLMs such as GPT-2 and GPT-3 are promising alternatives to graph neural networks.
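The core idea is to serialize a graph into plain English so that a pretrained language model can answer questions about it. The sketch below is a hypothetical illustration of this general recipe, not the paper's exact prompt format: it turns an edge list into a textual description followed by a task question that could be passed to GPT-2 or GPT-3.

```python
# Minimal sketch (assumed prompt format, not the one used in the paper):
# describe a graph task in natural language so a pretrained LLM can answer it.

def graph_to_prompt(edges, question):
    """Serialize an undirected edge list into a plain-English description."""
    lines = ["In an undirected graph, the following edges exist:"]
    for u, v in edges:
        lines.append(f"Node {u} is connected to node {v}.")
    lines.append(question)
    return "\n".join(lines)

edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
prompt = graph_to_prompt(edges, "Question: How many neighbors does node 1 have?")
print(prompt)
# The resulting prompt can then be fed to a pretrained language model,
# e.g., GPT-2 via Hugging Face transformers or GPT-3 via an API.
```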
Cite
Text
Wenkel et al. "Pretrained Language Models to Solve Graph Tasks in Natural Language." ICML 2023 Workshops: SPIGM, 2023.
Markdown
[Wenkel et al. "Pretrained Language Models to Solve Graph Tasks in Natural Language." ICML 2023 Workshops: SPIGM, 2023.](https://mlanthology.org/icmlw/2023/wenkel2023icmlw-pretrained/)
BibTeX
@inproceedings{wenkel2023icmlw-pretrained,
  title     = {{Pretrained Language Models to Solve Graph Tasks in Natural Language}},
  author    = {Wenkel, Frederik and Wolf, Guy and Knyazev, Boris},
  booktitle = {ICML 2023 Workshops: SPIGM},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/wenkel2023icmlw-pretrained/}
}