Kakade, Sham

66 publications

NeurIPS 2024 CoLoR-Filter: Conditional Loss Reduction Filtering for Targeted Language Model Pre-Training. David Brandfonbrener, Hanlin Zhang, Andreas Kirsch, Jonathan Richard Schwarz, Sham Kakade
NeurIPS 2024 DataComp-LM: In Search of the Next Generation of Training Sets for Language Models. Jeffrey Li, Alex Fang, Georgios Smyrnis, Maor Ivgi, Matt Jordan, Samir Gadre, Hritik Bansal, Etash Guha, Sedrick Keh, Kushal Arora, Saurabh Garg, Rui Xin, Niklas Muennighoff, Reinhard Heckel, Jean Mercat, Mayee Chen, Suchin Gururangan, Mitchell Wortsman, Alon Albalak, Yonatan Bitton, Marianna Nezhurina, Amro Abbas, Cheng-Yu Hsieh, Dhruba Ghosh, Josh Gardner, Maciej Kilian, Hanlin Zhang, Rulin Shao, Sarah Pratt, Sunny Sanyal, Gabriel Ilharco, Giannis Daras, Kalyani Marathe, Aaron Gokaslan, Jieyu Zhang, Khyathi Chandu, Thao Nguyen, Igor Vasiljevic, Sham Kakade, Shuran Song, Sujay Sanghavi, Fartash Faghri, Sewoong Oh, Luke Zettlemoyer, Kyle Lo, Alaaeldin El-Nouby, Hadi Pouransari, Alexander Toshev, Stephanie Wang, Dirk Groeneveld, Luca Soldaini, Pang Wei Koh, Jenia Jitsev, Thomas Kollar, Alexandros G. Dimakis, Yair Carmon, Achal Dave, Ludwig Schmidt, Vaishaal Shankar
NeurIPS 2024 From an Image to a Scene: Learning to Imagine the World from a Million 360° Videos. Matthew Wallingford, Anand Bhattad, Aditya Kusupati, Vivek Ramanujan, Matt Deitke, Sham Kakade, Aniruddha Kembhavi, Roozbeh Mottaghi, Wei-Chiu Ma, Ali Farhadi
NeurIPS 2024 MatFormer: Nested Transformer for Elastic Inference. Devvrit, Sneha Kudugunta, Aditya Kusupati, Tim Dettmers, Kaifeng Chen, Inderjit Dhillon, Yulia Tsvetkov, Hannaneh Hajishirzi, Sham Kakade, Ali Farhadi, Prateek Jain
NeurIPS 2024 Superposed Decoding: Multiple Generations from a Single Autoregressive Inference Pass. Ethan Shen, Alan Fan, Sarah Pratt, Jae Sung Park, Matthew Wallingford, Sham Kakade, Ari Holtzman, Ranjay Krishna, Ali Farhadi, Aditya Kusupati
NeurIPS 2024 Transcendence: Generative Models Can Outperform the Experts That Train Them. Edwin Zhang, Vincent Zhu, Naomi Saphra, Anat Kleiman, Benjamin L. Edelman, Milind Tambe, Sham Kakade, Eran Malach
JMLR 2023 A Complete Characterization of Linear Estimators for Offline Policy Evaluation. Juan C. Perdomo, Akshay Krishnamurthy, Peter Bartlett, Sham Kakade
NeurIPS 2023 AdANNS: A Framework for Adaptive Semantic Search. Aniket Rege, Aditya Kusupati, Sharan Ranjit S, Alan Fan, Qingqing Cao, Sham Kakade, Prateek Jain, Ali Farhadi
COLT 2023 Learning Hidden Markov Models Using Conditional Samples. Gaurav Mahajan, Sham Kakade, Akshay Krishnamurthy, Cyril Zhang
NeurIPS 2023 Pareto Frontiers in Deep Feature Learning: Data, Compute, Width, and Luck. Benjamin Edelman, Surbhi Goel, Sham Kakade, Eran Malach, Cyril Zhang
NeurIPS 2022 Hidden Progress in Deep Learning: SGD Learns Parities near the Computational Limit. Boaz Barak, Benjamin Edelman, Surbhi Goel, Sham Kakade, Eran Malach, Cyril Zhang
ICML 2022 Inductive Biases and Variable Creation in Self-Attention Mechanisms. Benjamin L Edelman, Surbhi Goel, Sham Kakade, Cyril Zhang
ICML 2022 Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression. Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, Sham Kakade
NeurIPS 2022 Matryoshka Representation Learning. Aditya Kusupati, Gantavya Bhatt, Aniket Rege, Matthew Wallingford, Aditya Sinha, Vivek Ramanujan, William Howard-Snyder, Kaifeng Chen, Sham Kakade, Prateek Jain, Ali Farhadi
NeurIPS 2022 Recurrent Convolutional Neural Networks Learn Succinct Learning Algorithms. Surbhi Goel, Sham Kakade, Adam Kalai, Cyril Zhang
NeurIPS 2022 Risk Bounds of Multi-Pass SGD for Least Squares in the Interpolation Regime. Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham Kakade
ICML 2022 Sparsity in Partially Controllable Linear Systems. Yonathan Efroni, Sham Kakade, Akshay Krishnamurthy, Cyril Zhang
NeurIPS 2022 The Power and Limitation of Pretraining-Finetuning for Linear Regression Under Covariate Shift. Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, Sham Kakade
ICML 2022 Understanding Contrastive Learning Requires Incorporating Inductive Biases. Nikunj Saunshi, Jordan Ash, Surbhi Goel, Dipendra Misra, Cyril Zhang, Sanjeev Arora, Sham Kakade, Akshay Krishnamurthy
NeurIPS 2022 Unpacking Reward Shaping: Understanding the Benefits of Reward Engineering on Sample Complexity. Abhishek Gupta, Aldo Pacchiano, Yuexiang Zhai, Sham Kakade, Sergey Levine
NeurIPS 2021 An Exponential Lower Bound for Linearly Realizable MDP with Constant Suboptimality Gap. Yuanhao Wang, Ruosong Wang, Sham Kakade
COLT 2021 Benign Overfitting of Constant-Stepsize SGD for Linear Regression. Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham Kakade
ICML 2021 Bilinear Classes: A Structural Framework for Provable Generalization in RL. Simon Du, Sham Kakade, Jason Lee, Shachar Lovett, Gaurav Mahajan, Wen Sun, Ruosong Wang
NeurIPS 2021 Going Beyond Linear RL: Sample Efficient Neural Function Approximation. Baihe Huang, Kaixuan Huang, Sham Kakade, Jason Lee, Qi Lei, Runzhe Wang, Jiaqi Yang
NeurIPS 2021 Gone Fishing: Neural Active Learning with Fisher Embeddings. Jordan Ash, Surbhi Goel, Akshay Krishnamurthy, Sham Kakade
ICML 2021 How Important Is the Train-Validation Split in Meta-Learning? Yu Bai, Minshuo Chen, Pan Zhou, Tuo Zhao, Jason Lee, Sham Kakade, Huan Wang, Caiming Xiong
ICML 2021 Instabilities of Offline RL with Pre-Trained Neural Representation. Ruosong Wang, Yifan Wu, Ruslan Salakhutdinov, Sham Kakade
NeurIPS 2021 LLC: Accurate, Multi-Purpose Learnt Low-Dimensional Binary Codes. Aditya Kusupati, Matthew Wallingford, Vivek Ramanujan, Raghav Somani, Jae Sung Park, Krishna Pillutla, Prateek Jain, Sham Kakade, Ali Farhadi
NeurIPS 2021 Optimal Gradient-Based Algorithms for Non-Concave Bandit Optimization. Baihe Huang, Kaixuan Huang, Sham Kakade, Jason Lee, Qi Lei, Runzhe Wang, Jiaqi Yang
NeurIPS 2021 Robust and Differentially Private Mean Estimation. Xiyang Liu, Weihao Kong, Sham Kakade, Sewoong Oh
NeurIPS 2021 The Benefits of Implicit Regularization from SGD in Least Squares Problems. Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Dean P. Foster, Sham Kakade
ICML 2020 Calibration, Entropy Rates, and Memory in Language Models. Mark Braverman, Xinyi Chen, Sham Kakade, Karthik Narasimhan, Cyril Zhang, Yi Zhang
NeurIPS 2020 FLAMBE: Structural Complexity and Representation Learning of Low Rank MDPs. Alekh Agarwal, Sham Kakade, Akshay Krishnamurthy, Wen Sun
NeurIPS 2020 Information Theoretic Regret Bounds for Online Nonlinear Control. Sham Kakade, Akshay Krishnamurthy, Kendall Lowrey, Motoya Ohnishi, Wen Sun
NeurIPS 2020 Is Long Horizon RL More Difficult than Short Horizon RL? Ruosong Wang, Simon S Du, Lin Yang, Sham Kakade
ALT 2020 Leverage Score Sampling for Faster Accelerated Regression and ERM. Naman Agarwal, Sham Kakade, Rahul Kidambi, Yin-Tat Lee, Praneeth Netrapalli, Aaron Sidford
ICML 2020 Meta-Learning for Mixed Linear Regression. Weihao Kong, Raghav Somani, Zhao Song, Sham Kakade, Sewoong Oh
NeurIPS 2020 Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity. Kaiqing Zhang, Sham Kakade, Tamer Basar, Lin Yang
COLT 2020 Model-Based Reinforcement Learning with a Generative Model Is Minimax Optimal. Alekh Agarwal, Sham Kakade, Lin F. Yang
NeurIPS 2020 PC-PG: Policy Cover Directed Exploration for Provable Policy Gradient Learning. Alekh Agarwal, Mikael Henaff, Sham Kakade, Wen Sun
ICML 2020 Provable Representation Learning for Imitation Learning via Bi-Level Optimization. Sanjeev Arora, Simon Du, Sham Kakade, Yuping Luo, Nikunj Saunshi
NeurIPS 2020 Robust Meta-Learning for Mixed Linear Regression with Small Batches. Weihao Kong, Raghav Somani, Sham Kakade, Sewoong Oh
NeurIPS 2020 Sample-Efficient Reinforcement Learning of Undercomplete POMDPs. Chi Jin, Sham Kakade, Akshay Krishnamurthy, Qinghua Liu
ICML 2020 Soft Threshold Weight Reparameterization for Learnable Sparsity. Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham Kakade, Ali Farhadi
ICML 2020 The Implicit and Explicit Regularization Effects of Dropout. Colin Wei, Sham Kakade, Tengyu Ma
ALT 2020 The Nonstochastic Control Problem. Elad Hazan, Sham Kakade, Karan Singh
ICML 2019 Maximum Likelihood Estimation for Learning Populations of Parameters. Ramya Korlakai Vinayak, Weihao Kong, Gregory Valiant, Sham Kakade
ICML 2019 Online Control with Adversarial Disturbances. Naman Agarwal, Brian Bullins, Elad Hazan, Sham Kakade, Karan Singh
ICML 2019 Online Meta-Learning. Chelsea Finn, Aravind Rajeswaran, Sham Kakade, Sergey Levine
ICLRW 2019 Online Meta-Learning. Chelsea Finn, Aravind Rajeswaran, Sham Kakade, Sergey Levine
ICLR 2019 Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control. Kendall Lowrey, Aravind Rajeswaran, Sham Kakade, Emanuel Todorov, Igor Mordatch
ICML 2019 Provably Efficient Maximum Entropy Exploration. Elad Hazan, Sham Kakade, Karan Singh, Abby Van Soest
ICML 2018 Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator. Maryam Fazel, Rong Ge, Sham Kakade, Mehran Mesbahi
ICLR 2018 Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines. Cathy Wu, Aravind Rajeswaran, Yan Duan, Vikash Kumar, Alexandre M Bayen, Sham Kakade, Igor Mordatch, Pieter Abbeel
ICML 2015 A Linear Dynamical System Model for Text. David Belanger, Sham Kakade
ICML 2015 Un-Regularizing: Approximate Proximal Point and Faster Stochastic Algorithms for Empirical Risk Minimization. Roy Frostig, Rong Ge, Sham Kakade, Aaron Sidford
JMLR 2015 When Are Overcomplete Topic Models Identifiable? Uniqueness of Tensor Tucker Decompositions with Structured Sparsity. Animashree Anandkumar, Daniel Hsu, Majid Janzamin, Sham Kakade
ICML 2014 Least Squares Revisited: Scalable Approaches for Multi-Class Prediction. Alekh Agarwal, Sham Kakade, Nikos Karampatziakis, Le Song, Gregory Valiant
ICML 2013 Learning Linear Bayesian Networks with Latent Variables. Animashree Anandkumar, Daniel Hsu, Adel Javanmard, Sham Kakade
AISTATS 2012 Domain Adaptation: A Small Sample Statistical Approach. Ruslan Salakhutdinov, Sham Kakade, Dean Foster
AISTATS 2011 Domain Adaptation with Coupled Subspaces. John Blitzer, Sham Kakade, Dean Foster
AISTATS 2010 Learning Exponential Families in High-Dimensions: Strong Convexity and Sparsity. Sham Kakade, Ohad Shamir, Karthik Sridharan, Ambuj Tewari
NeurIPS 2005 From Batch to Transductive Online Learning. Sham Kakade, Adam Tauman Kalai
NeurIPS 2000 Dopamine Bonuses. Sham Kakade, Peter Dayan
NeurIPS 2000 Explaining Away in Weight Space. Peter Dayan, Sham Kakade
NeurIPS 1999 Acquisition in Autoshaping. Sham Kakade, Peter Dayan