Prabhu, Ameya

31 publications

TMLR 2026 Delta-Influence: Identifying Poisons via Influence Functions Wenjie Li, Jiawei Li, Pengcheng Zeng, Christian Schroeder de Witt, Ameya Prabhu, Amartya Sanyal
ICLRW 2025 Are We Done with Object-Centric Learning? Alexander Rubinstein, Ameya Prabhu, Matthias Bethge, Seong Joon Oh
ICLRW 2025 Can Language Models Falsify? The Need for Inverse Benchmarking Shiven Sinha, Shashwat Goel, Ponnurangam Kumaraguru, Jonas Geiping, Matthias Bethge, Ameya Prabhu
ICML 2025 Great Models Think Alike and This Undermines AI Oversight Shashwat Goel, Joschka Strüber, Ilze Amanda Auzina, Karuna K Chandra, Ponnurangam Kumaraguru, Douwe Kiela, Ameya Prabhu, Matthias Bethge, Jonas Geiping
ICLRW 2025 Great Models Think Alike and This Undermines AI Oversight Shashwat Goel, Joschka Strüber, Ilze Amanda Auzina, Karuna K Chandra, Ponnurangam Kumaraguru, Douwe Kiela, Ameya Prabhu, Matthias Bethge, Jonas Geiping
ICLRW 2025 How to Merge Multimodal Models over Time? Sebastian Dziadzio, Vishaal Udandarao, Karsten Roth, Ameya Prabhu, Zeynep Akata, Samuel Albanie, Matthias Bethge
CVPR 2025 How to Merge Your Multimodal Models over Time? Sebastian Dziadzio, Vishaal Udandarao, Karsten Roth, Ameya Prabhu, Zeynep Akata, Samuel Albanie, Matthias Bethge
ICCV 2025 VGGSounder: Audio-Visual Evaluations for Foundation Models Daniil Zverev, Thaddäus Wiedemer, Ameya Prabhu, Matthias Bethge, Wieland Brendel, A. Sophia Koepke
NeurIPSW 2024 A Practitioner's Guide to Continual Multimodal Pretraining Karsten Roth, Vishaal Udandarao, Sebastian Dziadzio, Ameya Prabhu, Mehdi Cherti, Oriol Vinyals, Olivier J Henaff, Samuel Albanie, Matthias Bethge, Zeynep Akata
NeurIPS 2024 A Practitioner's Guide to Real-World Continual Multimodal Pretraining Vishaal Udandarao, Karsten Roth, Sebastian Dziadzio, Ameya Prabhu, Mehdi Cherti, Oriol Vinyals, Olivier Hénaff, Samuel Albanie, Zeynep Akata, Matthias Bethge
NeurIPS 2024 CiteME: Can Language Models Accurately Cite Scientific Claims? Ori Press, Andreas Hochlehnert, Ameya Prabhu, Vishaal Udandarao, Ofir Press, Matthias Bethge
TMLR 2024 Corrective Machine Unlearning Shashwat Goel, Ameya Prabhu, Philip Torr, Ponnurangam Kumaraguru, Amartya Sanyal
NeurIPS 2024 Efficient Lifelong Model Evaluation in an Era of Rapid Progress Ameya Prabhu, Vishaal Udandarao, Philip H.S. Torr, Matthias Bethge, Adel Bibi, Samuel Albanie
CoLLAs 2024 From Categories to Classifiers: Name-Only Continual Learning by Exploring the Web Ameya Prabhu, Hasan Abed Al Kader Hammoud, Ser-Nam Lim, Bernard Ghanem, Philip Torr, Adel Bibi
NeurIPS 2024 No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance Vishaal Udandarao, Ameya Prabhu, Adhiraj Ghosh, Yash Sharma, Philip H.S. Torr, Adel Bibi, Samuel Albanie, Matthias Bethge
ICLRW 2024 Pre-Training Concept Frequency Is Predictive of CLIP Zero-Shot Performance Vishaal Udandarao, Ameya Prabhu, Philip Torr, Adel Bibi, Samuel Albanie, Matthias Bethge
NeurIPSW 2024 Pretraining Frequency Predicts Compositional Generalization of CLIP on Real-World Tasks Thaddäus Wiedemer, Yash Sharma, Ameya Prabhu, Matthias Bethge, Wieland Brendel
NeurIPS 2024 RanDumb: Random Representations Outperform Online Continually Learned Representations Ameya Prabhu, Shiven Sinha, Ponnurangam Kumaraguru, Philip H.S. Torr, Ozan Sener, Puneet K. Dokania
NeurIPSW 2024 Wu’s Method Boosts Symbolic AI to Rival Silver Medalists and AlphaGeometry to Outperform Gold Medalists at IMO Geometry Shiven Sinha, Ameya Prabhu, Ponnurangam Kumaraguru, Siddharth Bhat, Matthias Bethge
TMLR 2024 kNN-CLIP: Retrieval Enables Training-Free Segmentation on Continually Expanding Large Vocabularies Zhongrui Gui, Shuyang Sun, Runjia Li, Jianhao Yuan, Zhaochong An, Karsten Roth, Ameya Prabhu, Philip Torr
CVPR 2023 Computationally Budgeted Continual Learning: What Does Matter? Ameya Prabhu, Hasan Abed Al Kader Hammoud, Puneet K. Dokania, Philip H.S. Torr, Ser-Nam Lim, Bernard Ghanem, Adel Bibi
TMLR 2023 Inverse Scaling: When Bigger Isn't Better Ian R. McKenzie, Alexander Lyzhov, Michael Martin Pieler, Alicia Parrish, Aaron Mueller, Ameya Prabhu, Euan McLean, Xudong Shen, Joe Cavanagh, Andrew George Gritsevskiy, Derik Kauffman, Aaron T. Kirtland, Zhengping Zhou, Yuhui Zhang, Sicong Huang, Daniel Wurgaft, Max Weiss, Alexis Ross, Gabriel Recchia, Alisa Liu, Jiacheng Liu, Tom Tseng, Tomasz Korbak, Najoung Kim, Samuel R. Bowman, Ethan Perez
ICCV 2023 Rapid Adaptation in Online Continual Learning: Are We Evaluating It Right? Hasan Abed Al Kader Hammoud, Ameya Prabhu, Ser-Nam Lim, Philip H.S. Torr, Adel Bibi, Bernard Ghanem
CVPR 2023 Real-Time Evaluation in Online Continual Learning: A New Hope Yasir Ghunaim, Adel Bibi, Kumail Alhamoud, Motasem Alfarra, Hasan Abed Al Kader Hammoud, Ameya Prabhu, Philip H.S. Torr, Bernard Ghanem
CoLLAs 2022 CLActive: Episodic Memories for Rapid Active Learning Sri Aurobindo Munagala, Sidhant Subramanian, Shyamgopal Karthik, Ameya Prabhu, Anoop Namboodiri
ICLR 2021 No Cost Likelihood Manipulation at Test Time for Making Better Mistakes in Deep Networks Shyamgopal Karthik, Ameya Prabhu, Puneet K. Dokania, Vineet Gandhi
ECCV 2020 GDumb: A Simple Approach That Questions Our Progress in Continual Learning Ameya Prabhu, Philip H. S. Torr, Puneet K. Dokania
AAAI 2018 Adversary Is the Best Teacher: Towards Extremely Compact Neural Networks Ameya Prabhu, Harish Krishna, Soham Saha
ECCV 2018 Deep Expander Networks: Efficient Deep Networks from Graph Theory Ameya Prabhu, Girish Varma, Anoop Namboodiri
WACV 2018 Distribution-Aware Binarization of Neural Networks for Sketch Recognition Ameya Prabhu, Vishal Batchu, Sri Aurobindo Munagala, Rohit Gajawada, Anoop M. Namboodiri
WACV 2018 Hybrid Binary Networks: Optimizing for Accuracy, Efficiency and Memory Ameya Prabhu, Vishal Batchu, Rohit Gajawada, Sri Aurobindo Munagala, Anoop M. Namboodiri