Chandar, Sarath

45 publications

TMLR 2026 CADmium: Fine-Tuning Code Language Models for Text-Driven Sequential CAD Design Prashant Govindarajan, Davide Baldelli, Jay Pathak, Quentin Fournier, Sarath Chandar
ICLR 2025 A Generalist Hanabi Agent Arjun V Sudhakar, Hadi Nekoei, Mathieu Reymond, Miao Liu, Janarthanan Rajendran, Sarath Chandar
AAAI 2025 BindGPT: A Scalable Framework for 3D Molecular Design via Language Modeling and Reinforcement Learning Artem Zholus, Maksim Kuznetsov, Roman Schutski, Rim Shayakhmetov, Daniil Polykovskiy, Sarath Chandar, Alex Zhavoronkov
ICLRW 2025 CrystalGym: A New Benchmark for Materials Discovery Using Reinforcement Learning Prashant Govindarajan, Mathieu Reymond, Antoine Clavaud, Mariano Phielipp, Santiago Miret, Sarath Chandar
TMLR 2025 NeoBERT: A Next Generation BERT Lola Le Breton, Quentin Fournier, John Xavier Morris, Mariam El Mezouar, Sarath Chandar
ICCV 2025 TAPNext: Tracking Any Point (TAP) as Next Token Prediction Artem Zholus, Carl Doersch, Yi Yang, Skanda Koppula, Viorica Patraucean, Xu Owen He, Ignacio Rocco, Mehdi S. M. Sajjadi, Sarath Chandar, Ross Goroshin
NeurIPS 2024 Balancing Context Length and Mixing Times for Reinforcement Learning at Scale Matthew Riemer, Khimya Khetarpal, Janarthanan Rajendran, Sarath Chandar
NeurIPSW 2024 Combining Domain and Alignment Vectors to Achieve Better Knowledge-Safety Trade-Offs in LLMs Megh Thakkar, Yash More, Quentin Fournier, Matthew Riemer, Pin-Yu Chen, Amal Zouaq, Payel Das, Sarath Chandar
NeurIPSW 2024 Crystal Design Amidst Noisy DFT Signals: A Reinforcement Learning Approach Prashant Govindarajan, Mathieu Reymond, Santiago Miret, Mariano Phielipp, Sarath Chandar
AAAI 2024 Fairness-Aware Structured Pruning in Transformers Abdelrahman Zayed, Gonçalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar
ICML 2024 Faithfulness Measurable Masked Language Models Andreas Madsen, Siva Reddy, Sarath Chandar
ICLR 2024 Intelligent Switching for Reset-Free RL Darshan Patil, Janarthanan Rajendran, Glen Berseth, Sarath Chandar
ICMLW 2024 Language Model-in-the-Loop: Data Optimal Approach to Recommend Actions in Text Games Arjun V Sudhakar, Prasanna Parthasarathi, Janarthanan Rajendran, Sarath Chandar
ICML 2024 Lookbehind-SAM: K Steps Back, 1 Step Forward Gonçalo Mordido, Pranshu Malviya, Aristide Baratin, Sarath Chandar
ICLR 2024 Mastering Memory Tasks with World Models Mohammad Reza Samsami, Artem Zholus, Janarthanan Rajendran, Sarath Chandar
TMLR 2024 Promoting Exploration in Memory-Augmented Adam Using Critical Momenta Pranshu Malviya, Gonçalo Mordido, Aristide Baratin, Reza Babanezhad Harikandeh, Jerry Huang, Simon Lacoste-Julien, Razvan Pascanu, Sarath Chandar
CoLLAs 2024 Sub-Goal Distillation: A Method to Improve Small Language Agents Maryam Hashemzadeh, Elias Stengel-Eskin, Sarath Chandar, Marc-Alexandre Côté
JMLR 2023 An Empirical Investigation of the Role of Pre-Training in Lifelong Learning Sanket Vaibhav Mehta, Darshan Patil, Sarath Chandar, Emma Strubell
ICLRW 2023 Behavioral Cloning for Crystal Design Prashant Govindarajan, Santiago Miret, Jarrid Rector-Brooks, Mariano Phielipp, Janarthanan Rajendran, Sarath Chandar
UAI 2023 Conditionally Optimistic Exploration for Cooperative Deep Multi-Agent Reinforcement Learning Xutong Zhao, Yangchen Pan, Chenjun Xiao, Sarath Chandar, Janarthanan Rajendran
CoLLAs 2023 Dealing with Non-Stationarity in Decentralized Cooperative Multi-Agent Deep Reinforcement Learning via Multi-Timescale Learning Hadi Nekoei, Akilesh Badrinaaraayanan, Amit Sinha, Mohammad Amini, Janarthanan Rajendran, Aditya Mahajan, Sarath Chandar
AAAI 2023 Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness Abdelrahman Zayed, Prasanna Parthasarathi, Gonçalo Mordido, Hamid Palangi, Samira Shabanian, Sarath Chandar
NeurIPSW 2023 Learning Conditional Policies for Crystal Design Using Offline Reinforcement Learning Prashant Govindarajan, Santiago Miret, Jarrid Rector-Brooks, Mariano Phielipp, Janarthanan Rajendran, Sarath Chandar
NeurIPSW 2023 Mastering Memory Tasks with World Models Mohammad Reza Samsami, Artem Zholus, Janarthanan Rajendran, Sarath Chandar
CoLLAs 2023 Replay Buffer with Local Forgetting for Adapting to Local Environment Changes in Deep Model-Based Reinforcement Learning Ali Rahimi-Kalahroudi, Janarthanan Rajendran, Ida Momennejad, Harm van Seijen, Sarath Chandar
ICMLW 2023 Thompson Sampling for Improved Exploration in GFlowNets Jarrid Rector-Brooks, Kanika Madan, Moksh Jain, Maksym Korablyov, Cheng-Hao Liu, Sarath Chandar, Nikolay Malkin, Yoshua Bengio
CoLLAs 2023 Towards Few-Shot Coordination: Revisiting Ad-Hoc Teamplay Challenge in the Game of Hanabi Hadi Nekoei, Xutong Zhao, Janarthanan Rajendran, Miao Liu, Sarath Chandar
CoLLAs 2022 Improving Meta-Learning Generalization with Activation-Based Early-Stopping Simon Guiroy, Christopher Pal, Gonçalo Mordido, Sarath Chandar
ICLR 2022 Memory Augmented Optimizers for Deep Learning Paul-Aymeric Martin McRae, Prasanna Parthasarathi, Mido Assran, Sarath Chandar
AAAI 2022 PatchUp: A Feature-Space Block-Level Regularization Technique for Convolutional Neural Networks Mojtaba Faramarzi, Mohammad Amini, Akilesh Badrinaaraayanan, Vikas Verma, Sarath Chandar
NeurIPSW 2022 Replay Buffer with Local Forgetting for Adaptive Deep Model-Based Reinforcement Learning Ali Rahimi-Kalahroudi, Janarthanan Rajendran, Ida Momennejad, Harm van Seijen, Sarath Chandar
ICLRW 2022 Staged Independent Learning: Towards Decentralized Cooperative Multi-Agent Reinforcement Learning Hadi Nekoei, Akilesh Badrinaaraayanan, Amit Sinha, Mohammad Amini, Janarthanan Rajendran, Aditya Mahajan, Sarath Chandar
CoLLAs 2022 TAG: Task-Based Accumulated Gradients for Lifelong Learning Pranshu Malviya, Balaraman Ravindran, Sarath Chandar
ICML 2022 Towards Evaluating Adaptivity of Model-Based Reinforcement Learning Methods Yi Wan, Ali Rahimi-Kalahroudi, Janarthanan Rajendran, Ida Momennejad, Sarath Chandar, Harm H. van Seijen
ICLRW 2022 Towards Evaluating Adaptivity of Model-Based Reinforcement Learning Methods Yi Wan, Ali Rahimi-Kalahroudi, Janarthanan Rajendran, Ida Momennejad, Sarath Chandar, Harm van Seijen
ICML 2021 Continuous Coordination as a Realistic Scenario for Lifelong Learning Hadi Nekoei, Akilesh Badrinaaraayanan, Aaron Courville, Sarath Chandar
CVPR 2021 IIRC: Incremental Implicitly-Refined Classification Mohamed Abdelsalam, Mojtaba Faramarzi, Shagun Sodhani, Sarath Chandar
NeurIPSW 2021 IIRC: Incremental Implicitly-Refined Classification Mohamed Abdelsalam, Mojtaba Faramarzi, Shagun Sodhani, Sarath Chandar
AAAI 2021 Towered Actor Critic for Handling Multiple Action Types in Reinforcement Learning for Drug Discovery Sai Krishna Gottipati, Yashaswi Pathak, Boris Sattarov, Sahir, Rohan Nuttall, Mohammad Amini, Matthew E. Taylor, Sarath Chandar
ICMLW 2020 Chaotic Continual Learning Touraj Laleh, Mojtaba Faramarzi, Irina Rish, Sarath Chandar
ICML 2020 Learning to Navigate the Synthetically Accessible Chemical Space Using Reinforcement Learning Sai Krishna Gottipati, Boris Sattarov, Sufeng Niu, Yashaswi Pathak, Haoran Wei, Shengchao Liu, Simon Blackburn, Karam Thomas, Connor Coley, Jian Tang, Sarath Chandar, Yoshua Bengio
NeurIPS 2020 The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning Harm van Seijen, Hadi Nekoei, Evan Racah, Sarath Chandar
AAAI 2019 Towards Non-Saturating Recurrent Units for Modelling Long-Term Dependencies Sarath Chandar, Chinnadhurai Sankar, Eugene Vorontsov, Samira Ebrahimi Kahou, Yoshua Bengio
AAAI 2018 Complex Sequential Question Answering: Towards Learning to Converse over Linked Question Answer Pairs with a Knowledge Graph Amrita Saha, Vardaan Pahuja, Mitesh M. Khapra, Karthik Sankaranarayanan, Sarath Chandar
CVPR 2017 GuessWhat?! Visual Object Discovery Through Multi-Modal Dialogue Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, Aaron Courville