Zettlemoyer, Luke

92 publications

ICLR 2025 (Mis)Fitting Scaling Laws: A Survey of Scaling Law Fitting Techniques in Deep Learning Margaret Li, Sneha Kudugunta, Luke Zettlemoyer
NeurIPS 2025 CAT: Content-Adaptive Image Tokenization Junhong Shen, Kushal Tirumala, Michihiro Yasunaga, Ishan Misra, Luke Zettlemoyer, Lili Yu, Chunting Zhou
CoRL 2025 DreamGen: Unlocking Generalization in Robot Learning Through Video World Models Joel Jang, Seonghyeon Ye, Zongyu Lin, Jiannan Xiang, Johan Bjorck, Yu Fang, Fengyuan Hu, Spencer Huang, Kaushil Kundalia, Yen-Chen Lin, Loïc Magne, Ajay Mandlekar, Avnish Narayan, You Liang Tan, Guanzhi Wang, Jing Wang, Qi Wang, Yinzhen Xu, Xiaohui Zeng, Kaiyuan Zheng, Ruijie Zheng, Ming-Yu Liu, Luke Zettlemoyer, Dieter Fox, Jan Kautz, Scott Reed, Yuke Zhu, Linxi Fan
ICLR 2025 Fantastic Copyrighted Beasts and How (Not) to Generate Them Luxi He, Yangsibo Huang, Weijia Shi, Tinghao Xie, Haotian Liu, Yue Wang, Luke Zettlemoyer, Chiyuan Zhang, Danqi Chen, Peter Henderson
NeurIPS 2025 FlexOLMo: Open Language Models for Flexible Data Use Weijia Shi, Akshita Bhagia, Kevin Farhat, Niklas Muennighoff, Jacob Morrison, Evan Pete Walsh, Dustin Schwenk, Shayne Longpre, Jake Poznanski, Allyson Ettinger, Daogao Liu, Margaret Li, Mike Lewis, Wen-tau Yih, Dirk Groeneveld, Luca Soldaini, Kyle Lo, Noah A. Smith, Luke Zettlemoyer, Pang Wei Koh, Hannaneh Hajishirzi, Ali Farhadi, Sewon Min
ICLR 2025 Generative Adapter: Contextualizing Language Models in Parameters with a Single Forward Pass Tong Chen, Hao Fang, Patrick Xia, Xiaodong Liu, Benjamin Van Durme, Luke Zettlemoyer, Jianfeng Gao, Hao Cheng
NeurIPS 2025 Heterogeneous Swarms: Jointly Optimizing Model Roles and Weights for Multi-LLM Systems Shangbin Feng, Zifeng Wang, Palash Goyal, Yike Wang, Weijia Shi, Huang Xia, Hamid Palangi, Luke Zettlemoyer, Yulia Tsvetkov, Chen-Yu Lee, Tomas Pfister
NeurIPS 2025 LMFusion: Adapting Pretrained Language Models for Multimodal Generation Weijia Shi, Xiaochuang Han, Chunting Zhou, Weixin Liang, Xi Victoria Lin, Luke Zettlemoyer, Lili Yu
ICLR 2025 Latent Action Pretraining from Videos Seonghyeon Ye, Joel Jang, Byeongguk Jeon, Se June Joo, Jianwei Yang, Baolin Peng, Ajay Mandlekar, Reuben Tan, Yu-Wei Chao, Bill Yuchen Lin, Lars Liden, Kimin Lee, Jianfeng Gao, Luke Zettlemoyer, Dieter Fox, Minjoon Seo
ICLR 2025 MUSE: Machine Unlearning Six-Way Evaluation for Language Models Weijia Shi, Jaechan Lee, Yangsibo Huang, Sadhika Malladi, Jieyu Zhao, Ari Holtzman, Daogao Liu, Luke Zettlemoyer, Noah A. Smith, Chiyuan Zhang
ICML 2025 Memory Layers at Scale Vincent-Pierre Berges, Barlas Oguz, Daniel Haziza, Wen-Tau Yih, Luke Zettlemoyer, Gargi Ghosh
NeurIPS 2025 Meta CLIP 2: A Worldwide Scaling Recipe Yung-Sung Chuang, Yang Li, Dong Wang, Ching-Feng Yeh, Kehan Lyu, Ramya Raghavendra, James R. Glass, Lifei Huang, Jason E Weston, Luke Zettlemoyer, Xinlei Chen, Zhuang Liu, Saining Xie, Wen-tau Yih, Shang-Wen Li, Hu Xu
ICLRW 2025 Mixture-of-Mamba: Enhancing Multi-Modal State-Space Models with Modality-Aware Sparsity Weixin Liang, Junhong Shen, Genghan Zhang, Ning Dong, Luke Zettlemoyer, Lili Yu
TMLR 2025 Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models Weixin Liang, Lili Yu, Liang Luo, Srini Iyer, Ning Dong, Chunting Zhou, Gargi Ghosh, Mike Lewis, Wen-tau Yih, Luke Zettlemoyer, Xi Victoria Lin
ICLRW 2025 Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models Weixin Liang, Lili Yu, Liang Luo, Srini Iyer, Ning Dong, Chunting Zhou, Gargi Ghosh, Mike Lewis, Wen-tau Yih, Luke Zettlemoyer, Xi Victoria Lin
NeurIPS 2025 Precise Information Control in Long-Form Text Generation Jacqueline He, Howard Yen, Margaret Li, Shuyue Stella Li, Zhiyuan Zeng, Weijia Shi, Yulia Tsvetkov, Danqi Chen, Pang Wei Koh, Luke Zettlemoyer
ICLRW 2025 S1: Simple Test-Time Scaling Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candes, Tatsunori Hashimoto
DMLR 2025 Text Quality-Based Pruning for Efficient Training of Language Models Vasu Sharma, Karthik Padthe, Newsha Ardalani, Kushal Tirumala, Russell Howes, Hu Xu, Po-Yao Huang, Daniel Li Chen, Armen Aghajanyan, Gargi Ghosh, Luke Zettlemoyer
ICLR 2025 Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model Chunting Zhou, Lili Yu, Arun Babu, Kushal Tirumala, Michihiro Yasunaga, Leonid Shamis, Jacob Kahn, Xuezhe Ma, Luke Zettlemoyer, Omer Levy
NeurIPS 2025 When Worse Is Better: Navigating the Compression Generation Trade-Off in Visual Tokenization Vivek Ramanujan, Kushal Tirumala, Armen Aghajanyan, Luke Zettlemoyer, Ali Farhadi
NeurIPSW 2024 CopyBench: Measuring Literal and Non-Literal Reproduction of Copyright-Protected Text in Language Model Generation Tong Chen, Akari Asai, Niloofar Mireshghallah, Sewon Min, James Grimmelmann, Yejin Choi, Hannaneh Hajishirzi, Luke Zettlemoyer, Pang Wei Koh
NeurIPS 2024 DataComp-LM: In Search of the Next Generation of Training Sets for Language Models Jeffrey Li, Alex Fang, Georgios Smyrnis, Maor Ivgi, Matt Jordan, Samir Gadre, Hritik Bansal, Etash Guha, Sedrick Keh, Kushal Arora, Saurabh Garg, Rui Xin, Niklas Muennighoff, Reinhard Heckel, Jean Mercat, Mayee Chen, Suchin Gururangan, Mitchell Wortsman, Alon Albalak, Yonatan Bitton, Marianna Nezhurina, Amro Abbas, Cheng-Yu Hsieh, Dhruba Ghosh, Josh Gardner, Maciej Kilian, Hanlin Zhang, Rulin Shao, Sarah Pratt, Sunny Sanyal, Gabriel Ilharco, Giannis Daras, Kalyani Marathe, Aaron Gokaslan, Jieyu Zhang, Khyathi Chandu, Thao Nguyen, Igor Vasiljevic, Sham Kakade, Shuran Song, Sujay Sanghavi, Fartash Faghri, Sewoong Oh, Luke Zettlemoyer, Kyle Lo, Alaaeldin El-Nouby, Hadi Pouransari, Alexander Toshev, Stephanie Wang, Dirk Groeneveld, Luca Soldaini, Pang Wei Koh, Jenia Jitsev, Thomas Kollar, Alexandros G. Dimakis, Yair Carmon, Achal Dave, Ludwig Schmidt, Vaishaal Shankar
ICLR 2024 Demystifying CLIP Data Hu Xu, Saining Xie, Xiaoqing Tan, Po-Yao Huang, Russell Howes, Vasu Sharma, Shang-Wen Li, Gargi Ghosh, Luke Zettlemoyer, Christoph Feichtenhofer
ICLR 2024 Detecting Pretraining Data from Large Language Models Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, Luke Zettlemoyer
NeurIPS 2024 Evaluating Copyright Takedown Methods for Language Models Boyi Wei, Weijia Shi, Yangsibo Huang, Noah A. Smith, Chiyuan Zhang, Luke Zettlemoyer, Kai Li, Peter Henderson
NeurIPSW 2024 Generative Adapter: Contextualizing Language Models in Parameters with a Single Forward Pass Tong Chen, Hao Fang, Patrick Xia, Xiaodong Liu, Benjamin Van Durme, Luke Zettlemoyer, Jianfeng Gao, Hao Cheng
ICLR 2024 In-Context Pretraining: Language Modeling Beyond Document Boundaries Weijia Shi, Sewon Min, Maria Lomeli, Chunting Zhou, Margaret Li, Xi Victoria Lin, Noah A. Smith, Luke Zettlemoyer, Wen-tau Yih, Mike Lewis
NeurIPS 2024 Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length Xuezhe Ma, Xiaomeng Yang, Wenhan Xiong, Beidi Chen, Lili Yu, Hao Zhang, Jonathan May, Luke Zettlemoyer, Omer Levy, Chunting Zhou
CVPR 2024 MoDE: CLIP Data Experts via Clustering Jiawei Ma, Po-Yao Huang, Saining Xie, Shang-Wen Li, Luke Zettlemoyer, Shih-Fu Chang, Wen-Tau Yih, Hu Xu
NeurIPSW 2024 Personalized Soups: Personalized Large Language Model Alignment via Post-Hoc Parameter Merging Joel Jang, Seungone Kim, Bill Yuchen Lin, Yizhong Wang, Jack Hessel, Luke Zettlemoyer, Hannaneh Hajishirzi, Yejin Choi, Prithviraj Ammanabrolu
ICLR 2024 RA-DIT: Retrieval-Augmented Dual Instruction Tuning Xi Victoria Lin, Xilun Chen, Mingda Chen, Weijia Shi, Maria Lomeli, Richard James, Pedro Rodriguez, Jacob Kahn, Gergely Szilvasy, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih
ICLR 2024 Representation Deficiency in Masked Language Modeling Yu Meng, Jitin Krishnan, Sinong Wang, Qifan Wang, Yuning Mao, Han Fang, Marjan Ghazvininejad, Jiawei Han, Luke Zettlemoyer
ICLR 2024 SILO Language Models: Isolating Legal Risk in a Nonparametric Datastore Sewon Min, Suchin Gururangan, Eric Wallace, Weijia Shi, Hannaneh Hajishirzi, Noah A. Smith, Luke Zettlemoyer
NeurIPS 2024 Scaling Retrieval-Based Language Models with a Trillion-Token Datastore Rulin Shao, Jacqueline He, Akari Asai, Weijia Shi, Tim Dettmers, Sewon Min, Luke Zettlemoyer, Pang Wei Koh
ICLR 2024 Self-Alignment with Instruction Backtranslation Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Omer Levy, Luke Zettlemoyer, Jason E Weston, Mike Lewis
NeurIPS 2024 Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Ranjay Krishna
NeurIPSW 2024 Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Ranjay Krishna
ICLR 2023 AGRO: Adversarial Discovery of Error-Prone Groups for Robust Optimization Bhargavi Paranjape, Pradeep Dasigi, Vivek Srikumar, Luke Zettlemoyer, Hannaneh Hajishirzi
ICLR 2023 Binding Language Models in Symbolic Languages Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu
ICCV 2023 CiT: Curation in Training for Effective Vision-Language Data Hu Xu, Saining Xie, Po-Yao Huang, Licheng Yu, Russell Howes, Gargi Ghosh, Luke Zettlemoyer, Christoph Feichtenhofer
ICML 2023 DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation Yuhang Lai, Chengxi Li, Yiming Wang, Tianyi Zhang, Ruiqi Zhong, Luke Zettlemoyer, Wen-Tau Yih, Daniel Fried, Sida Wang, Tao Yu
NeurIPSW 2023 Detecting Pretraining Data from Large Language Models Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, Luke Zettlemoyer
NeurIPSW 2023 FActScore: Fine-Grained Atomic Evaluation of Factual Precision in Long Form Text Generation Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, Hannaneh Hajishirzi
ICLR 2023 InCoder: A Generative Model for Code Infilling and Synthesis Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Scott Yih, Luke Zettlemoyer, Mike Lewis
NeurIPS 2023 LIMA: Less Is More for Alignment Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, Omer Levy
AAAI 2023 Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI Suzanna Sia, Anton Belyy, Amjad Almahairi, Madian Khabsa, Luke Zettlemoyer, Lambert Mathias
NeurIPS 2023 MEGABYTE: Predicting Million-Byte Sequences with Multiscale Transformers Lili Yu, Daniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, Mike Lewis
ICLR 2023 Mega: Moving Average Equipped Gated Attention Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, Luke Zettlemoyer
NeurIPSW 2023 PATHFINDER: Guided Search over Multi-Step Reasoning Paths Olga Golovneva, Sean O'Brien, Ramakanth Pasunuru, Tianlu Wang, Luke Zettlemoyer, Maryam Fazel-Zarandi, Asli Celikyilmaz
NeurIPS 2023 QLoRA: Efficient Finetuning of Quantized LLMs Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, Luke Zettlemoyer
ICLR 2023 ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning Olga Golovneva, Moya Peng Chen, Spencer Poff, Martin Corredor, Luke Zettlemoyer, Maryam Fazel-Zarandi, Asli Celikyilmaz
ICML 2023 Retrieval-Augmented Multimodal Language Modeling Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Richard James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, Wen-Tau Yih
NeurIPSW 2023 Retrieval-Based Language Models Using a Multi-Domain Datastore Rulin Shao, Sewon Min, Luke Zettlemoyer, Pang Wei Koh
NeurIPSW 2023 SILO Language Models: Isolating Legal Risk in a Nonparametric Datastore Sewon Min, Suchin Gururangan, Eric Wallace, Weijia Shi, Hannaneh Hajishirzi, Noah A. Smith, Luke Zettlemoyer
ICML 2023 Scaling Laws for Generative Mixed-Modal Language Models Armen Aghajanyan, Lili Yu, Alexis Conneau, Wei-Ning Hsu, Karen Hambardzumyan, Susan Zhang, Stephen Roller, Naman Goyal, Omer Levy, Luke Zettlemoyer
ICLR 2023 Selective Annotation Makes Language Models Better Few-Shot Learners Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu
NeurIPS 2023 Stable and Low-Precision Training for Large-Scale Vision-Language Models Mitchell Wortsman, Tim Dettmers, Luke Zettlemoyer, Ari Morcos, Ali Farhadi, Ludwig Schmidt
ICML 2023 The Case for 4-Bit Precision: K-Bit Inference Scaling Laws Tim Dettmers, Luke Zettlemoyer
NeurIPS 2023 Toolformer: Language Models Can Teach Themselves to Use Tools Timo Schick, Jane Dwivedi-Yu, Roberto Dessi, Roberta Raileanu, Maria Lomeli, Eric Hambro, Luke Zettlemoyer, Nicola Cancedda, Thomas Scialom
ICLRW 2023 Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, Huan Sun
ICLRW 2023 Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations Xinxi Lyu, Sewon Min, Iz Beltagy, Luke Zettlemoyer, Hannaneh Hajishirzi
ICLR 2022 8-Bit Optimizers via Block-Wise Quantization Tim Dettmers, Mike Lewis, Sam Shleifer, Luke Zettlemoyer
NeurIPSW 2022 Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models Margaret Li, Suchin Gururangan, Tim Dettmers, Mike Lewis, Tim Althoff, Noah A. Smith, Luke Zettlemoyer
NeurIPS 2022 GPT3.int8(): 8-Bit Matrix Multiplication for Transformers at Scale Tim Dettmers, Mike Lewis, Younes Belkada, Luke Zettlemoyer
ICLR 2022 HTLM: Hyper-Text Pre-Training and Prompting of Language Models Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer
NeurIPS 2022 Improving Policy Learning via Language Dynamics Distillation Victor Zhong, Jesse Mu, Luke Zettlemoyer, Edward Grefenstette, Tim Rocktäschel
NeurIPS 2022 Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models Kushal Tirumala, Aram Markosyan, Luke Zettlemoyer, Armen Aghajanyan
ICML 2021 BASE Layers: Simplifying Training of Large, Sparse Models Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, Luke Zettlemoyer
ICLR 2021 Better Fine-Tuning by Reducing Representational Collapse Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, Sonal Gupta
ICLR 2021 DeLighT: Deep and Light-Weight Transformer Sachin Mehta, Marjan Ghazvininejad, Srinivasan Iyer, Luke Zettlemoyer, Hannaneh Hajishirzi
CoRL 2021 Language Grounding with 3D Objects Jesse Thomason, Mohit Shridhar, Yonatan Bisk, Chris Paxton, Luke Zettlemoyer
ICLR 2021 Learning Better Structured Representations Using Low-Rank Adaptive Label Smoothing Asish Ghoshal, Xilun Chen, Sonal Gupta, Luke Zettlemoyer, Yashar Mehdad
NeurIPS 2021 Luna: Linear Unified Nested Attention Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, Luke Zettlemoyer
ICLR 2021 Nearest Neighbor Machine Translation Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis
NeurIPS 2021 SILG: The Multi-Domain Symbolic Interactive Language Grounding Benchmark Victor Zhong, Austin W. Hanjie, Sida Wang, Karthik Narasimhan, Luke Zettlemoyer
ICML 2020 Aligned Cross Entropy for Non-Autoregressive Machine Translation Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, Omer Levy
ICLR 2020 Generalization Through Memorization: Nearest Neighbor Language Models Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis
NeurIPS 2020 Pre-Training via Paraphrasing Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida Wang, Luke Zettlemoyer
ICLR 2020 Sparse Networks from Scratch: Faster Training Without Losing Performance Tim Dettmers, Luke Zettlemoyer
CoRL 2019 Vision-and-Dialog Navigation Jesse Thomason, Michael Murray, Maya Cakmak, Luke Zettlemoyer
ICLR 2018 Deep Contextualized Word Representations Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer
CVPR 2017 Commonly Uncommon: Semantic Sparsity in Situation Recognition Mark Yatskar, Vicente Ordonez, Luke Zettlemoyer, Ali Farhadi
CVPR 2016 Situation Recognition: Visual Semantic Role Labeling for Image Understanding Mark Yatskar, Luke Zettlemoyer, Ali Farhadi
IJCAI 2015 Personalized Mathematical Word Problem Generation Oleksandr Polozov, Eleanor O'Rourke, Adam M. Smith, Luke Zettlemoyer, Sumit Gulwani, Zoran Popovic
MLJ 2014 Introduction to the Special Issue on Learning Semantics Antoine Bordes, Léon Bottou, Ronan Collobert, Dan Roth, Jason Weston, Luke Zettlemoyer
AAAI 2014 Learning from Unscripted Deictic Gesture and Language for Human-Robot Interactions Cynthia Matuszek, Liefeng Bo, Luke Zettlemoyer, Dieter Fox
ICML 2012 A Joint Model of Language and Perception for Grounded Attribute Learning Cynthia Matuszek, Nicholas FitzGerald, Luke Zettlemoyer, Liefeng Bo, Dieter Fox
UAI 2012 Learning STRIPS Operators from Noisy and Incomplete Observations Kira Mourão, Luke Zettlemoyer, Ronald P. A. Petrick, Mark Steedman
NeurIPS 2008 Multi-Agent Filtering with Infinitely Nested Beliefs Luke Zettlemoyer, Brian Milch, Leslie P. Kaelbling