Position: Key Claims in LLM Research Have a Long Tail of Footnotes
Abstract
Much of the recent discourse within the ML community has been centered around Large Language Models (LLMs), their functionality and potential – yet not only do we not have a working definition of LLMs, but much of this discourse relies on claims and assumptions that are worth re-examining. We contribute a definition of LLMs, critically examine five common claims regarding their properties (including 'emergent properties'), and conclude with suggestions for future research directions and their framing.
Cite
Text
Rogers and Luccioni. "Position: Key Claims in LLM Research Have a Long Tail of Footnotes." International Conference on Machine Learning, 2024.
Markdown
[Rogers and Luccioni. "Position: Key Claims in LLM Research Have a Long Tail of Footnotes." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/rogers2024icml-position/)
BibTeX
@inproceedings{rogers2024icml-position,
  title     = {{Position: Key Claims in LLM Research Have a Long Tail of Footnotes}},
  author    = {Rogers, Anna and Luccioni, Sasha},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {42647--42665},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/rogers2024icml-position/}
}