Balashankar, Ananth

7 publications

ICLRW 2025. Inducing Group Fairness in Prompt-Based Language Model Decisions. James Atwood, Nino Scherrer, Preethi Lahoti, Ananth Balashankar, Flavien Prost, Ahmad Beirami.
ICML 2025. InfAlign: Inference-Aware Language Model Alignment. Ananth Balashankar, Ziteng Sun, Jonathan Berant, Jacob Eisenstein, Michael Collins, Adrian Hutter, Jong Lee, Chirag Nagpal, Flavien Prost, Aradhana Sinha, Ananda Theertha Suresh, Ahmad Beirami.
TMLR 2024. Break It, Imitate It, Fix It: Robustness by Generating Human-like Attacks. Aradhana Sinha, Ananth Balashankar, Ahmad Beirami, Thi Avrahami, Jilin Chen, Alex Beutel.
ICLRW 2024. Break It, Imitate It, Fix It: Robustness by Generating Human-like Attacks. Aradhana Sinha, Ananth Balashankar, Ahmad Beirami, Thi Avrahami, Jilin Chen, Alex Beutel.
ICMLW 2024. Reuse Your Rewards: Reward Model Transfer for Zero-Shot Cross-Lingual Alignment. Zhaofeng Wu, Ananth Balashankar, Yoon Kim, Jacob Eisenstein, Ahmad Beirami.
NeurIPS 2023. Effective Robustness Against Natural Distribution Shifts for Models with Different Training Data. Zhouxing Shi, Nicholas Carlini, Ananth Balashankar, Ludwig Schmidt, Cho-Jui Hsieh, Alex Beutel, Yao Qin.
CLeaR 2023. Learning Conditional Granger Causal Temporal Networks. Ananth Balashankar, Srikanth Jagabathula, Lakshmi Subramanian.