CoLLAs 2023
56 papers
A Minimalist Approach for Domain Adaptation with Optimal Transport
Arip Asadulaev, Vitaly Shutov, Alexander Korotin, Alexander Panfilov, Vladislava Kontsevaya, Andrey Filchenkov

Active Class Selection for Few-Shot Class-Incremental Learning
Christopher McClurg, Ali Ayub, Harsh Tyagi, Sarah M. Rajtmajer, Alan R. Wagner

Augmenting Autotelic Agents with Large Language Models
Cédric Colas, Laetitia Teodorescu, Pierre-Yves Oudeyer, Xingdi Yuan, Marc-Alexandre Côté

Autotelic Reinforcement Learning in Multi-Agent Environments
Eleni Nisioti, Elias Masquil, Gautier Hamon, Clément Moulin-Frier

Auxiliary Task Discovery Through Generate-and-Test
Banafsheh Rafiee, Sina Ghiassian, Jun Jin, Richard Sutton, Jun Luo, Adam White

Challenging Common Assumptions About Catastrophic Forgetting and Knowledge Accumulation
Timothée Lesort, Oleksiy Ostapenko, Pau Rodríguez, Diganta Misra, Md Rifat Arefin, Laurent Charlin, Irina Rish

Class-Incremental Learning with Repetition
Hamed Hemati, Andrea Cossu, Antonio Carta, Julio Hurtado, Lorenzo Pellegrini, Davide Bacciu, Vincenzo Lomonaco, Damian Borth

Continual Learning Beyond a Single Model
Thang Doan, Seyed Iman Mirzadeh, Mehrdad Farajtabar

Continually Learning Representations at Scale
Alexandre Galashov, Jovana Mitrovic, Dhruva Tirumala, Yee Whye Teh, Timothy Nguyen, Arslan Chaudhry, Razvan Pascanu

I2I: Initializing Adapters with Improvised Knowledge
Tejas Srinivasan, Furong Jia, Mohammad Rostami, Jesse Thomason

Introspective Action Advising for Interpretable Transfer Learning
Joseph Campbell, Yue Guo, Fiona Xie, Simon Stepputtis, Katia Sycara

Loss of Plasticity in Continual Deep Reinforcement Learning
Zaheer Abbas, Rosie Zhao, Joseph Modayil, Adam White, Marlos C. Machado

Measuring and Mitigating Interference in Reinforcement Learning
Vincent Liu, Han Wang, Ruo Yu Tao, Khurram Javed, Adam White, Martha White

Model-Based Meta Automatic Curriculum Learning
Zifan Xu, Yulin Zhang, Shahaf S. Shperberg, Reuth Mirsky, Yuqian Jiang, Bo Liu, Peter Stone

Partial Hypernetworks for Continual Learning
Hamed Hemati, Vincenzo Lomonaco, Davide Bacciu, Damian Borth

PlaStIL: Plastic and Stable Exemplar-Free Class-Incremental Learning
Grégoire Petit, Adrian Popescu, Eden Belouadah, David Picard, Bertrand Delezoide

Prospective Learning: Principled Extrapolation to the Future
Ashwin De Silva, Rahul Ramesh, Lyle Ungar, Marshall Hussain Shuler, Noah J. Cowan, Michael Platt, Chen Li, Leyla Isik, Seung-Eon Roh, Adam Charles, Archana Venkataraman, Brian Caffo, Javier J. How, Justus M Kebschull, John W. Krakauer, Maxim Bichuch, Kaleab Alemayehu Kinfu, Eva Yezerets, Dinesh Jayaraman, Jong M. Shin, Soledad Villar, Ian Phillips, Carey E. Priebe, Thomas Hartung, Michael I. Miller, Jayanta Dey, Ningyuan Huang, Eric Eaton, Ralph Etienne-Cummings, Elizabeth L. Ogburn, Randal Burns, Onyema Osuagwu, Brett Mensh, Alysson R. Muotri, Julia Brown, Chris White, Weiwei Yang, Andrei A. Rusu, Timothy Verstynen, Konrad P. Kording, Pratik Chaudhari, Joshua T. Vogelstein

Sample-Efficient Learning of Novel Visual Concepts
Sarthak Bhagat, Simon Stepputtis, Joseph Campbell, Katia Sycara

Sharing Lifelong Reinforcement Learning Knowledge via Modulating Masks
Saptarshi Nath, Christos Peridis, Eseoghene Ben-Iwhiwhu, Xinran Liu, Shirin Dora, Cong Liu, Soheil Kolouri, Andrea Soltoggio

Stabilizing Unsupervised Environment Design with a Learned Adversary
Ishita Mediratta, Minqi Jiang, Jack Parker-Holder, Michael Dennis, Eugene Vinitsky, Tim Rocktäschel

The Effectiveness of World Models for Continual Reinforcement Learning
Samuel Kessler, Mateusz Ostaszewski, Michał Paweł Bortkiewicz, Mateusz Żarski, Maciej Wolczyk, Jack Parker-Holder, Stephen J. Roberts, Piotr Miłoś

Vision-Language Models as Success Detectors
Yuqing Du, Ksenia Konyushkova, Misha Denil, Akhil Raju, Jessica Landon, Felix Hill, Nando Freitas, Serkan Cabi