ML Anthology
Levine, Yoav
12 publications
ICML 2025
Position: Language Model Developers Should Report Train-Test Overlap
Andy K Zhang, Kevin Klyman, Yifan Mai, Yoav Levine, Yian Zhang, Rishi Bommasani, Percy Liang
ICLRW 2025
Tradeoffs Between Alignment and Helpfulness in Language Models with Steering Methods
Yotam Wolf, Noam Wies, Dorin Shteyman, Binyamin Rothberg, Yoav Levine, Amnon Shashua
ICML 2024
Fundamental Limitations of Alignment in Large Language Models
Yotam Wolf, Noam Wies, Oshri Avnery, Yoav Levine, Amnon Shashua
ICML 2024
STEER: Assessing the Economic Rationality of Large Language Models
Narun Krishnamurthi Raman, Taylor Lundy, Samuel Joseph Amouyal, Yoav Levine, Kevin Leyton-Brown, Moshe Tennenholtz
ICLR 2023
Sub-Task Decomposition Enables Learning in Sequence to Sequence Tasks
Noam Wies, Yoav Levine, Amnon Shashua
NeurIPS 2023
The Learnability of In-Context Learning
Noam Wies, Yoav Levine, Amnon Shashua
ICMLW 2022
Huge Frozen Language Models as Readers for Open-Domain Question Answering
Yoav Levine, Ori Ram, Daniel Jannai, Barak Lenz, Shai Shalev-Shwartz, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham
ICLR 2022
The Inductive Bias of In-Context Learning: Rethinking Pretraining Example Design
Yoav Levine, Noam Wies, Daniel Jannai, Dan Navon, Yedid Hoshen, Amnon Shashua
ICLR 2021
PMI-Masking: Principled Masking of Correlated Spans
Yoav Levine, Barak Lenz, Opher Lieber, Omri Abend, Kevin Leyton-Brown, Moshe Tennenholtz, Yoav Shoham
ICML 2021
Which Transformer Architecture Fits My Data? A Vocabulary Bottleneck in Self-Attention
Noam Wies, Yoav Levine, Daniel Jannai, Amnon Shashua
NeurIPS 2020
Limits to Depth Efficiencies of Self-Attention
Yoav Levine, Noam Wies, Or Sharir, Hofit Bata, Amnon Shashua
ICLR 2018
Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design
Yoav Levine, David Yakira, Nadav Cohen, Amnon Shashua