ML Anthology
Carbin, Michael
20 publications

NeurIPS 2025
FreshStack: Building Realistic Benchmarks for Evaluating Retrieval on Technical Documents
Nandan Thakur, Jimmy Lin, Sam Havens, Michael Carbin, Omar Khattab, Andrew Drozdov

ICML 2025
Learning to Keep a Promise: Scaling Language Model Decoding Parallelism with Learned Asynchronous Decoding
Tian Jin, Ellie Y. Cheng, Zachary Ankner, Nikunj Saunshi, Blake M. Elias, Amir Yazdanbakhsh, Jonathan Ragan-Kelley, Suvinay Subramanian, Michael Carbin

ICLRW 2024
Expressing and Exploiting Parallelism in Language Model Decoding
Tian Jin, Ellie Y. Cheng, Michael Carbin

ICML 2024
Learning to Compile Programs to Neural Networks
Logan Weber, Jesse Michel, Alex Renda, Michael Carbin

NeurIPSW 2024
Long Context RAG Performance of Large Language Models
Quinn Leng, Jacob Portes, Sam Havens, Matei Zaharia, Michael Carbin

ICLR 2024
The Cost of Scaling Down Large Language Models: Reducing Model Size Affects Memory Before In-Context Learning
Tian Jin, Nolan Clement, Xin Dong, Vaishnavh Nagarajan, Michael Carbin, Jonathan Ragan-Kelley, Gintare Karolina Dziugaite

ICMLW 2023
Can LLMs Generate Random Numbers? Evaluating LLM Sampling in Controlled Domains
Aspen K. Hopkins, Alex Renda, Michael Carbin

AAAI 2023
Computably Continuous Reinforcement-Learning Objectives Are PAC-Learnable
Cambridge Yang, Michael L. Littman, Michael Carbin

ICMLW 2023
Distributions for Compositionally Differentiating Parametric Discontinuities
Jesse Michel, Kevin Mu, Xuanda Yang, Sai Praveen Bangaru, Elias Rojas Collins, Gilbert Bernstein, Jonathan Ragan-Kelley, Michael Carbin, Tzu-Mao Li

IJCAI 2022
On the (In)Tractability of Reinforcement Learning for LTL Objectives
Cambridge Yang, Michael L. Littman, Michael Carbin

NeurIPS 2022
Pruning’s Effect on Generalization Through the Lens of Training and Regularization
Tian Jin, Michael Carbin, Daniel Roy, Jonathan Frankle, Gintare Karolina Dziugaite

ICML 2021
On the Predictability of Pruning Across Scales
Jonathan S. Rosenfeld, Jonathan Frankle, Michael Carbin, Nir Shavit

ICLR 2021
Pruning Neural Networks at Initialization: Why Are We Missing the Mark?
Jonathan Frankle, Gintare Karolina Dziugaite, Daniel Roy, Michael Carbin

CVPR 2021
The Lottery Tickets Hypothesis for Supervised and Self-Supervised Pre-Training in Computer Vision Models
Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Michael Carbin, Zhangyang Wang

ICLR 2020
Comparing Rewinding and Fine-Tuning in Neural Network Pruning
Alex Renda, Jonathan Frankle, Michael Carbin

ICML 2020
Linear Mode Connectivity and the Lottery Ticket Hypothesis
Jonathan Frankle, Gintare Karolina Dziugaite, Daniel Roy, Michael Carbin

NeurIPS 2020
The Lottery Ticket Hypothesis for Pre-Trained BERT Networks
Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin

NeurIPS 2019
Compiler Auto-Vectorization with Imitation Learning
Charith Mendis, Cambridge Yang, Yewen Pu, Saman Amarasinghe, Michael Carbin

ICML 2019
Ithemal: Accurate, Portable and Fast Basic Block Throughput Estimation Using Deep Neural Networks
Charith Mendis, Alex Renda, Saman Amarasinghe, Michael Carbin

ICLR 2019
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin