Bedi, Amrit Singh

20 publications

AAAI 2025. Align-Pro: A Principled Approach to Prompt Optimization for LLM Alignment. Prashant Trivedi, Souradip Chakraborty, Avinash Reddy, Vaneet Aggarwal, Amrit Singh Bedi, George K. Atia.
TMLR 2025. Beyond Joint Demonstrations: Personalized Expert Guidance for Efficient Multi-Agent Reinforcement Learning. Peihong Yu, Manav Mishra, Alec Koppel, Carl Busart, Priya Narayan, Dinesh Manocha, Amrit Singh Bedi, Pratap Tokekar.
ICML 2025. Bounded Rationality for LLMs: Satisficing Alignment at Inference-Time. Mohamad Fares El Hajj Chehade, Soumya Suvra Ghosal, Souradip Chakraborty, Avinash Reddy, Dinesh Manocha, Hao Zhu, Amrit Singh Bedi.
NeurIPS 2025. Does Thinking More Always Help? Mirage of Test-Time Scaling in Reasoning Models. Soumya Suvra Ghosal, Souradip Chakraborty, Avinash Reddy, Yifu Lu, Mengdi Wang, Dinesh Manocha, Furong Huang, Mohammad Ghavamzadeh, Amrit Singh Bedi.
CVPR 2025. Immune: Improving Safety Against Jailbreaks in Multi-Modal LLMs via Inference-Time Alignment. Soumya Suvra Ghosal, Souradip Chakraborty, Vaibhav Singh, Tianrui Guan, Mengdi Wang, Ahmad Beirami, Furong Huang, Alvaro Velasquez, Dinesh Manocha, Amrit Singh Bedi.
NeurIPS 2025. On the Global Optimality of Policy Gradient Methods in General Utility Reinforcement Learning. Anas Barakat, Souradip Chakraborty, Peihong Yu, Pratap Tokekar, Amrit Singh Bedi.
NeurIPS 2025. On the Sample Complexity Bounds of Bilevel Reinforcement Learning. Mudit Gaur, Utsav Singh, Amrit Singh Bedi, Raghu Pasupathy, Vaneet Aggarwal.
TMLR 2025. PROPS: Progressively Private Self-Alignment of Large Language Models. Noel Teku, Fengwei Tian, Payel Bhattacharjee, Souradip Chakraborty, Amrit Singh Bedi, Ravi Tandon.
NeurIPS 2024. FACT or Fiction: Can Truthful Mechanisms Eliminate Federated Free Riding? Marco Bornstein, Amrit Singh Bedi, Abdirisak Mohamed, Furong Huang.
JMLR 2024. On the Sample Complexity and Metastability of Heavy-Tailed Policy Search in Continuous Control. Amrit Singh Bedi, Anjaly Parayil, Junyu Zhang, Mengdi Wang, Alec Koppel.
ICMLW 2024. PIPER: Primitive-Informed Preference-Based Hierarchical Reinforcement Learning via Hindsight Relabeling. Utsav Singh, Wesley A. Suttle, Brian M. Sadler, Vinay P. Namboodiri, Amrit Singh Bedi.
NeurIPS 2024. Transfer Q-Star: Principled Decoding for LLM Alignment. Souradip Chakraborty, Soumya Suvra Ghosal, Ming Yin, Dinesh Manocha, Mengdi Wang, Amrit Singh Bedi, Furong Huang.
AAAI 2023. Achieving Zero Constraint Violation for Constrained Reinforcement Learning via Conservative Natural Policy Gradient Primal-Dual Algorithm. Qinbo Bai, Amrit Singh Bedi, Vaneet Aggarwal.
AAAI 2023. Posterior Coreset Construction with Kernelized Stein Discrepancy for Model-Based Reinforcement Learning. Souradip Chakraborty, Amrit Singh Bedi, Pratap Tokekar, Alec Koppel, Brian M. Sadler, Furong Huang, Dinesh Manocha.
AAAI 2022. Achieving Zero Constraint Violation for Constrained Reinforcement Learning via Primal-Dual Approach. Qinbo Bai, Amrit Singh Bedi, Mridul Agarwal, Alec Koppel, Vaneet Aggarwal.
ICML 2022. FedNew: A Communication-Efficient and Privacy-Preserving Newton-Type Method for Federated Learning. Anis Elgabli, Chaouki Ben Issaid, Amrit Singh Bedi, Ketan Rajawat, Mehdi Bennis, Vaneet Aggarwal.
AAAI 2022. Multi-Agent Reinforcement Learning with General Utilities via Decentralized Shadow Reward Actor-Critic. Junyu Zhang, Amrit Singh Bedi, Mengdi Wang, Alec Koppel.
ICML 2022. On the Hidden Biases of Policy Mirror Ascent in Continuous Action Spaces. Amrit Singh Bedi, Souradip Chakraborty, Anjaly Parayil, Brian M. Sadler, Pratap Tokekar, Alec Koppel.
L4DC 2020. Efficient Large-Scale Gaussian Process Bandits by Believing Only Informative Actions. Amrit Singh Bedi, Dheeraj Peddireddy, Vaneet Aggarwal, Alec Koppel.
NeurIPS 2020. Variational Policy Gradient Method for Reinforcement Learning with General Utilities. Junyu Zhang, Alec Koppel, Amrit Singh Bedi, Csaba Szepesvari, Mengdi Wang.