Chakraborty, Souradip

32 publications

NeurIPS 2025. A Technical Report on “Erasing the Invisible”: The 2024 NeurIPS Competition on Stress Testing Image Watermarks. Mucong Ding, Bang An, Tahseen Rabbani, Chenghao Deng, Anirudh Satheesh, Souradip Chakraborty, Mehrdad Saberi, Yuxin Wen, Kyle Rui Sang, Aakriti Agrawal, Xuandong Zhao, Mo Zhou, Mary-Anne Hartley, Lei Li, Yu-Xiang Wang, Vishal M. Patel, Soheil Feizi, Tom Goldstein, Furong Huang.
ECML-PKDD 2025. Active Preference Optimization for Sample Efficient RLHF. Nirjhar Das, Souradip Chakraborty, Aldo Pacchiano, Sayak Ray Chowdhury.
AAAI 2025. Align-Pro: A Principled Approach to Prompt Optimization for LLM Alignment. Prashant Trivedi, Souradip Chakraborty, Avinash Reddy, Vaneet Aggarwal, Amrit Singh Bedi, George K. Atia.
ICML 2025. Bounded Rationality for LLMs: Satisficing Alignment at Inference-Time. Mohamad Fares El Hajj Chehade, Soumya Suvra Ghosal, Souradip Chakraborty, Avinash Reddy, Dinesh Manocha, Hao Zhu, Amrit Singh Bedi.
AAAI 2025. Can Watermarking Large Language Models Prevent Copyrighted Text Generation and Hide Training Data? Michael-Andrei Panaitescu-Liess, Zora Che, Bang An, Yuancheng Xu, Pankayaraj Pathmanathan, Souradip Chakraborty, Sicheng Zhu, Tom Goldstein, Furong Huang.
ICLR 2025. Collab: Controlled Decoding Using Mixture of Agents for LLM Alignment. Souradip Chakraborty, Sujay Bhatt, Udari Madhushani Sehwag, Soumya Suvra Ghosal, Jiahao Qiu, Mengdi Wang, Dinesh Manocha, Furong Huang, Alec Koppel, Sumitra Ganesh.
NeurIPS 2025. Does Thinking More Always Help? Mirage of Test-Time Scaling in Reasoning Models. Soumya Suvra Ghosal, Souradip Chakraborty, Avinash Reddy, Yifu Lu, Mengdi Wang, Dinesh Manocha, Furong Huang, Mohammad Ghavamzadeh, Amrit Singh Bedi.
CVPR 2025. Immune: Improving Safety Against Jailbreaks in Multi-Modal LLMs via Inference-Time Alignment. Soumya Suvra Ghosal, Souradip Chakraborty, Vaibhav Singh, Tianrui Guan, Mengdi Wang, Ahmad Beirami, Furong Huang, Alvaro Velasquez, Dinesh Manocha, Amrit Singh Bedi.
AAAI 2025. Is Poisoning a Real Threat to DPO? Maybe More so than You Think. Pankayaraj Pathmanathan, Souradip Chakraborty, Xiangyu Liu, Yongyuan Liang, Furong Huang.
NeurIPS 2025. On the Global Optimality of Policy Gradient Methods in General Utility Reinforcement Learning. Anas Barakat, Souradip Chakraborty, Peihong Yu, Pratap Tokekar, Amrit Singh Bedi.
TMLR 2025. PROPS: Progressively Private Self-Alignment of Large Language Models. Noel Teku, Fengwei Tian, Payel Bhattacharjee, Souradip Chakraborty, Amrit Singh Bedi, Ravi Tandon.
ICMLW 2024. Active Preference Optimization for Sample Efficient RLHF. Nirjhar Das, Souradip Chakraborty, Aldo Pacchiano, Sayak Ray Chowdhury.
TMLR 2024. Beyond Text: Utilizing Vocal Cues to Improve Decision Making in LLMs for Robot Navigation Tasks. Xingpeng Sun, Haoming Meng, Souradip Chakraborty, Amrit Bedi, Aniket Bera.
ICMLW 2024. Can Watermarking Large Language Models Prevent Copyrighted Text Generation and Hide Training Data? Michael-Andrei Panaitescu-Liess, Zora Che, Bang An, Yuancheng Xu, Pankayaraj Pathmanathan, Souradip Chakraborty, Sicheng Zhu, Tom Goldstein, Furong Huang.
NeurIPSW 2024. Can Watermarking Large Language Models Prevent Copyrighted Text Generation and Hide Training Data? Michael-Andrei Panaitescu-Liess, Zora Che, Bang An, Yuancheng Xu, Pankayaraj Pathmanathan, Souradip Chakraborty, Sicheng Zhu, Tom Goldstein, Furong Huang.
ICMLW 2024. Is Poisoning a Real Threat to LLM Alignment? Maybe More so than You Think. Pankayaraj Pathmanathan, Souradip Chakraborty, Xiangyu Liu, Yongyuan Liang, Furong Huang.
ICML 2024. MaxMin-RLHF: Alignment with Diverse Human Preferences. Souradip Chakraborty, Jiahao Qiu, Hui Yuan, Alec Koppel, Dinesh Manocha, Furong Huang, Amrit Bedi, Mengdi Wang.
ICMLW 2024. MaxMin-RLHF: Towards Equitable Alignment of Large Language Models with Diverse Human Preferences. Souradip Chakraborty, Jiahao Qiu, Hui Yuan, Alec Koppel, Furong Huang, Dinesh Manocha, Amrit Bedi, Mengdi Wang.
ICLR 2024. PARL: A Unified Framework for Policy Alignment in Reinforcement Learning from Human Feedback. Souradip Chakraborty, Amrit Bedi, Alec Koppel, Huazheng Wang, Dinesh Manocha, Mengdi Wang, Furong Huang.
ICML 2024. Position: On the Possibilities of AI-Generated Text Detection. Souradip Chakraborty, Amrit Bedi, Sicheng Zhu, Bang An, Dinesh Manocha, Furong Huang.
ICLR 2024. Rethinking Adversarial Policies: A Generalized Attack Formulation and Provable Defense in RL. Xiangyu Liu, Souradip Chakraborty, Yanchao Sun, Furong Huang.
ICMLW 2024. SAIL: Self-Improving Efficient Online Alignment of Large Language Models. Mucong Ding, Souradip Chakraborty, Vibhu Agrawal, Zora Che, Alec Koppel, Mengdi Wang, Amrit Bedi, Furong Huang.
NeurIPS 2024. Transfer Q-Star: Principled Decoding for LLM Alignment. Souradip Chakraborty, Soumya Suvra Ghosal, Ming Yin, Dinesh Manocha, Mengdi Wang, Amrit Singh Bedi, Furong Huang.
TMLR 2023. A Survey on the Possibilities & Impossibilities of AI-Generated Text Detection. Soumya Suvra Ghosal, Souradip Chakraborty, Jonas Geiping, Furong Huang, Dinesh Manocha, Amrit Bedi.
AAAI 2023. Posterior Coreset Construction with Kernelized Stein Discrepancy for Model-Based Reinforcement Learning. Souradip Chakraborty, Amrit Singh Bedi, Pratap Tokekar, Alec Koppel, Brian M. Sadler, Furong Huang, Dinesh Manocha.
ICMLW 2023. Principal-Driven Reward Design and Agent Policy Alignment via Bilevel-RL. Souradip Chakraborty, Amrit Bedi, Alec Koppel, Furong Huang, Mengdi Wang.
ICML 2023. STEERING: Stein Information Directed Exploration for Model-Based Reinforcement Learning. Souradip Chakraborty, Amrit Bedi, Alec Koppel, Mengdi Wang, Furong Huang, Dinesh Manocha.
NeurIPSW 2022. Controllable Attack and Improved Adversarial Training in Multi-Agent Reinforcement Learning. Xiangyu Liu, Souradip Chakraborty, Furong Huang.
CoRL 2022. HTRON: Efficient Outdoor Navigation with Sparse Rewards via Heavy Tailed Adaptive Reinforce Algorithm. Kasun Weerakoon, Souradip Chakraborty, Nare Karapetyan, Adarsh Jagan Sathyamoorthy, Amrit Bedi, Dinesh Manocha.
ICML 2022. On the Hidden Biases of Policy Mirror Ascent in Continuous Action Spaces. Amrit Singh Bedi, Souradip Chakraborty, Anjaly Parayil, Brian M Sadler, Pratap Tokekar, Alec Koppel.
NeurIPSW 2022. Posterior Coreset Construction with Kernelized Stein Discrepancy for Model-Based Reinforcement Learning. Souradip Chakraborty, Amrit Bedi, Alec Koppel, Pratap Tokekar, Furong Huang, Dinesh Manocha.
NeurIPSW 2021. Uncertainty-Aware Labelled Augmentations for High Dimensional Latent Space Bayesian Optimization. Ekansh Verma, Souradip Chakraborty.