Hassani, Hamed

78 publications

ICML 2025 Adversarial Reasoning at Jailbreaking Time Mahdi Sabbaghi, Paul Kassianik, George J. Pappas, Amin Karbasi, Hamed Hassani
ICLR 2025 Approaching Rate-Distortion Limits in Neural Compression with Lattice Transform Coding Eric Lei, Hamed Hassani, Shirin Saeedi Bidokhti
L4DC 2025 Asymptotics of Linear Regression with Linearly Dependent Data Behrad Moniri, Hamed Hassani
TMLR 2025 Automated Black-Box Prompt Engineering for Personalized Text-to-Image Generation Yutong He, Alexander Robey, Naoki Murata, Yiding Jiang, Joshua Nathaniel Williams, George J. Pappas, Hamed Hassani, Yuki Mitsufuji, Ruslan Salakhutdinov, J Zico Kolter
NeurIPS 2025 Conformal Inference Under High-Dimensional Covariate Shifts via Likelihood-Ratio Regularization Sunay Joshi, Shayan Kiyani, George J. Pappas, Edgar Dobriban, Hamed Hassani
NeurIPS 2025 Conformal Information Pursuit for Interactively Guiding Large Language Models Kwan Ho Ryan Chan, Yuyan Ge, Edgar Dobriban, Hamed Hassani, Rene Vidal
NeurIPS 2025 Conformal Prediction Beyond the Seen: A Missing Mass Perspective for Uncertainty Quantification in Generative Models Sima Noorani, Shayan Kiyani, George J. Pappas, Hamed Hassani
ICML 2025 Decision Theoretic Foundations for Conformal Prediction: Optimal Uncertainty Quantification for Risk-Averse Agents Shayan Kiyani, George J. Pappas, Aaron Roth, Hamed Hassani
ICML 2025 On the Concurrence of Layer-Wise Preconditioning Methods and Provable Feature Learning Thomas T.C.K. Zhang, Behrad Moniri, Ansh Nagwekar, Faraz Rahman, Anton Xue, Hamed Hassani, Nikolai Matni
NeurIPS 2025 On the Mechanisms of Weak-to-Strong Generalization: A Theoretical Perspective Behrad Moniri, Hamed Hassani
NeurIPS 2025 Optimal Neural Compressors for the Rate-Distortion-Perception Tradeoff Eric Lei, Hamed Hassani, Shirin Saeedi Bidokhti
TMLR 2025 SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks Alexander Robey, Eric Wong, Hamed Hassani, George J. Pappas
ICLRW 2025 Watermark Smoothing Attacks Against Language Models Hongyan Chang, Hamed Hassani, Reza Shokri
ICLRW 2025 Watermarking Language Models with Error Correcting Codes Patrick Chao, Yan Sun, Edgar Dobriban, Hamed Hassani
ICML 2024 A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks Behrad Moniri, Donghwan Lee, Hamed Hassani, Edgar Dobriban
ICLR 2024 Adversarial Training Should Be Cast as a Non-Zero-Sum Game Alexander Robey, Fabian Latorre, George J. Pappas, Hamed Hassani, Volkan Cevher
ICML 2024 Compression of Structured Data with Autoencoders: Provable Benefit of Nonlinearities and Depth Kevin Kögler, Aleksandr Shevchenko, Hamed Hassani, Marco Mondelli
ICML 2024 Conformal Prediction with Learned Features Shayan Kiyani, George J. Pappas, Hamed Hassani
TMLR 2024 Federated TD Learning with Linear Function Approximation Under Environmental Heterogeneity Han Wang, Aritra Mitra, Hamed Hassani, George J. Pappas, James Anderson
NeurIPS 2024 JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models Patrick Chao, Edoardo Debenedetti, Alexander Robey, Maksym Andriushchenko, Francesco Croce, Vikash Sehwag, Edgar Dobriban, Nicolas Flammarion, George J. Pappas, Florian Tramèr, Hamed Hassani, Eric Wong
ICMLW 2024 JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models Patrick Chao, Edoardo Debenedetti, Alexander Robey, Maksym Andriushchenko, Francesco Croce, Vikash Sehwag, Edgar Dobriban, Nicolas Flammarion, George J. Pappas, Florian Tramèr, Hamed Hassani, Eric Wong
NeurIPS 2024 Length Optimization in Conformal Prediction Shayan Kiyani, George J. Pappas, Hamed Hassani
NeurIPS 2024 One-Shot Safety Alignment for Large Language Models via Optimal Dualization Xinmeng Huang, Shuo Li, Edgar Dobriban, Osbert Bastani, Hamed Hassani, Dongsheng Ding
ICMLW 2024 One-Shot Safety Alignment for Large Language Models via Optimal Dualization Xinmeng Huang, Shuo Li, Edgar Dobriban, Osbert Bastani, Hamed Hassani, Dongsheng Ding
ICML 2024 Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks Liam Collins, Hamed Hassani, Mahdi Soltanolkotabi, Aryan Mokhtari, Sanjay Shakkottai
AISTATS 2024 Stochastic Approximation with Delayed Updates: Finite-Time Rates Under Markovian Sampling Arman Adibi, Nicolò Dal Fabbro, Luca Schenato, Sanjeev Kulkarni, H. Vincent Poor, George J. Pappas, Hamed Hassani, Aritra Mitra
TMLR 2024 Temporal Difference Learning with Compressed Updates: Error-Feedback Meets Reinforcement Learning Aritra Mitra, George J. Pappas, Hamed Hassani
NeurIPSW 2023 A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks Behrad Moniri, Donghwan Lee, Hamed Hassani, Edgar Dobriban
ICMLW 2023 Adversarial Training Should Be Cast as a Non-Zero-Sum Game Alexander Robey, Fabian Latorre, George J. Pappas, Hamed Hassani, Volkan Cevher
ICML 2023 Demystifying Disagreement-on-the-Line in High Dimensions Donghwan Lee, Behrad Moniri, Xinmeng Huang, Edgar Dobriban, Hamed Hassani
ICML 2023 Fundamental Limits of Two-Layer Autoencoders, and Achieving Them with Gradient Methods Aleksandr Shevchenko, Kevin Kögler, Hamed Hassani, Marco Mondelli
NeurIPSW 2023 Jailbreaking Black Box Large Language Models in Twenty Queries Patrick Chao, Alexander Robey, Edgar Dobriban, Hamed Hassani, George J. Pappas, Eric Wong
L4DC 2023 Linear Stochastic Bandits over a Bit-Constrained Channel Aritra Mitra, Hamed Hassani, George J. Pappas
ICLR 2023 Share Your Representation Only: Guaranteed Improvement of the Privacy-Utility Tradeoff in Federated Learning Zebang Shen, Jiayuan Ye, Anmin Kang, Hamed Hassani, Reza Shokri
NeurIPSW 2023 SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks Alexander Robey, Eric Wong, Hamed Hassani, George J. Pappas
TMLR 2023 Straggler-Resilient Personalized Federated Learning Isidoros Tziotis, Zebang Shen, Ramtin Pedarsani, Hamed Hassani, Aryan Mokhtari
JMLR 2023 T-Cal: An Optimal Test for the Calibration of Predictive Models Donghwan Lee, Xinmeng Huang, Hamed Hassani, Edgar Dobriban
ICMLW 2023 Text + Sketch: Image Compression at Ultra Low Rates Eric Lei, Yigit Berkay Uslu, Hamed Hassani, Shirin Saeedi Bidokhti
AISTATS 2022 Federated Functional Gradient Boosting Zebang Shen, Hamed Hassani, Satyen Kale, Amin Karbasi
AISTATS 2022 Minimax Optimization: The Case of Convex-Submodular Arman Adibi, Aryan Mokhtari, Hamed Hassani
ICLR 2022 An Agnostic Approach to Federated Learning with Class Imbalance Zebang Shen, Juan Cervino, Hamed Hassani, Alejandro Ribeiro
NeurIPS 2022 Collaborative Learning of Discrete Distributions Under Heterogeneity and Communication Constraints Xinmeng Huang, Donghwan Lee, Edgar Dobriban, Hamed Hassani
NeurIPS 2022 Collaborative Linear Bandits with Adversarial Agents: Near-Optimal Regret Bounds Aritra Mitra, Arman Adibi, George J. Pappas, Hamed Hassani
ICLR 2022 Do Deep Networks Transfer Invariances Across Classes? Allan Zhou, Fahim Tajwar, Alexander Robey, Tom Knowles, George J. Pappas, Hamed Hassani, Chelsea Finn
NeurIPS 2022 FedAvg with Fine Tuning: Local Updates Lead to Representation Learning Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai
ICML 2022 Probabilistically Robust Learning: Balancing Average and Worst-Case Performance Alexander Robey, Luiz Chamon, George J. Pappas, Hamed Hassani
NeurIPS 2022 Probable Domain Generalization via Quantile Risk Minimization Cian Eastwood, Alexander Robey, Shashank Singh, Julius von Kügelgen, Hamed Hassani, George J. Pappas, Bernhard Schölkopf
COLT 2022 Self-Consistency of the Fokker-Planck Equation Zebang Shen, Zhenfu Wang, Satyen Kale, Alejandro Ribeiro, Amin Karbasi, Hamed Hassani
NeurIPS 2021 Adversarial Robustness with Semi-Infinite Constrained Learning Alexander Robey, Luiz Chamon, George J. Pappas, Hamed Hassani, Alejandro Ribeiro
ICML 2021 Exploiting Shared Representations for Personalized Federated Learning Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai
NeurIPS 2021 Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients Aritra Mitra, Rayana Jaafar, George J. Pappas, Hamed Hassani
NeurIPS 2021 Model-Based Domain Generalization Alexander Robey, George J. Pappas, Hamed Hassani
L4DC 2021 Optimal Algorithms for Submodular Maximization with Distributed Constraints Alexander Robey, Arman Adibi, Brent Schlotfeldt, Hamed Hassani, George J. Pappas
AISTATS 2020 Black Box Submodular Maximization: Discrete and Continuous Settings Lin Chen, Mingrui Zhang, Hamed Hassani, Amin Karbasi
AISTATS 2020 FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ali Jadbabaie, Ramtin Pedarsani
AISTATS 2020 One Sample Stochastic Frank-Wolfe Mingrui Zhang, Zebang Shen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi
COLT 2020 Precise Tradeoffs in Adversarial Training for Linear Regression Adel Javanmard, Mahdi Soltanolkotabi, Hamed Hassani
ICML 2020 Quantized Decentralized Stochastic Learning over Directed Graphs Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani
AISTATS 2020 Quantized Frank-Wolfe: Faster Optimization, Lower Communication, and Projection Free Mingrui Zhang, Lin Chen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi
NeurIPS 2020 Sinkhorn Barycenter via Functional Gradient Descent Zebang Shen, Zhenfu Wang, Alejandro Ribeiro, Hamed Hassani
NeurIPS 2020 Sinkhorn Natural Gradient for Generative Models Zebang Shen, Zhenfu Wang, Alejandro Ribeiro, Hamed Hassani
JMLR 2020 Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization Aryan Mokhtari, Hamed Hassani, Amin Karbasi
NeurIPS 2020 Submodular Meta-Learning Arman Adibi, Aryan Mokhtari, Hamed Hassani
NeurIPS 2019 Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks Mahyar Fazlyab, Alexander Robey, Hamed Hassani, Manfred Morari, George J. Pappas
ICML 2019 Entropic GANs Meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs Yogesh Balaji, Hamed Hassani, Rama Chellappa, Soheil Feizi
ICML 2019 Hessian Aided Policy Gradient Zebang Shen, Alejandro Ribeiro, Hamed Hassani, Hui Qian, Chao Mi
NeurIPS 2019 Online Continuous Submodular Maximization: From Full-Information to Bandit Feedback Mingrui Zhang, Lin Chen, Hamed Hassani, Amin Karbasi
NeurIPS 2019 Robust and Communication-Efficient Collaborative Learning Amirhossein Reisizadeh, Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani
NeurIPS 2019 Stochastic Continuous Greedy++: When Upper and Lower Bounds Match Amin Karbasi, Hamed Hassani, Aryan Mokhtari, Zebang Shen
AISTATS 2018 Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap Aryan Mokhtari, Hamed Hassani, Amin Karbasi
ICML 2018 Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings Aryan Mokhtari, Hamed Hassani, Amin Karbasi
AISTATS 2018 Online Continuous Submodular Maximization Lin Chen, Hamed Hassani, Amin Karbasi
ICML 2018 Projection-Free Online Optimization with Stochastic Gradient: From Convexity to Submodularity Lin Chen, Christopher Harshaw, Hamed Hassani, Amin Karbasi
NeurIPS 2017 Gradient Methods for Submodular Maximization Hamed Hassani, Mahdi Soltanolkotabi, Amin Karbasi
NeurIPS 2017 Stochastic Submodular Maximization: The Case of Coverage Functions Mohammad Karimi, Mario Lucic, Hamed Hassani, Andreas Krause
NeurIPS 2016 Fast and Provably Good Seedings for k-Means Olivier Bachem, Mario Lucic, Hamed Hassani, Andreas Krause
NeurIPS 2015 Sampling from Probabilistic Submodular Models Alkis Gotovos, Hamed Hassani, Andreas Krause