Beltagy, Iz

8 publications

NeurIPS 2024 — Paloma: A Benchmark for Evaluating Language Model Fit. Ian Magnusson, Akshita Bhagia, Valentin Hofmann, Luca Soldaini, Ananya Harsh Jha, Oyvind Tafjord, Dustin Schwenk, Evan Pete Walsh, Yanai Elazar, Kyle Lo, Dirk Groeneveld, Iz Beltagy, Hannaneh Hajishirzi, Noah A. Smith, Kyle Richardson, Jesse Dodge

ICLRW 2024 — Source-Aware Training Enables Knowledge Attribution in Language Models. Muhammad Khalifa, David Wadden, Emma Strubell, Honglak Lee, Lu Wang, Iz Beltagy, Hao Peng

NeurIPS 2023 — How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources. Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Chandu, David Wadden, Kelsey MacMillan, Noah A. Smith, Iz Beltagy, Hannaneh Hajishirzi

ICLRW 2023 — Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations. Xinxi Lyu, Sewon Min, Iz Beltagy, Luke Zettlemoyer, Hannaneh Hajishirzi

ICML 2022 — Staged Training for Transformer Language Models. Sheng Shen, Pete Walsh, Kurt Keutzer, Jesse Dodge, Matthew Peters, Iz Beltagy

ICML 2022 — What Language Model Architecture and Pretraining Objective Works Best for Zero-Shot Generalization? Thomas Wang, Adam Roberts, Daniel Hesslow, Teven Le Scao, Hyung Won Chung, Iz Beltagy, Julien Launay, Colin Raffel

NeurIPS 2021 — FLEX: Unifying Evaluation for Few-Shot NLP. Jonathan Bragg, Arman Cohan, Kyle Lo, Iz Beltagy

NeurIPSW 2021 — Scientific Language Models for Biomedical Knowledge Base Completion: An Empirical Study. Rahul Nadkarni, David Wadden, Iz Beltagy, Noah A. Smith, Hannaneh Hajishirzi, Tom Hope