ML Anthology

Arumugam, Dilip (13 publications)

UAI 2025
Hindsight Merging: Diverse Data Generation with Language Models
Veniamin Veselovsky, Benedikt Stroebl, Gianluca Bencomo, Dilip Arumugam, Lisa Schut, Arvind Narayanan, Thomas L. Griffiths

NeurIPSW 2023
Social Contract AI: Aligning AI Assistants with Implicit Group Norms
Jan-Philipp Fränken, Samuel Kwok, Peixuan Ye, Kanishk Gandhi, Dilip Arumugam, Jared Moore, Alex Tamkin, Tobias Gerstenberg, Noah Goodman

NeurIPS 2022
Deciding What to Model: Value-Equivalent Sampling for Reinforcement Learning
Dilip Arumugam, Benjamin Van Roy

ICMLW 2022
Deciding What to Model: Value-Equivalent Sampling for Reinforcement Learning
Dilip Arumugam, Benjamin Van Roy

NeurIPSW 2022
In the ZONE: Measuring Difficulty and Progression in Curriculum Generation
Rose E Wang, Jesse Mu, Dilip Arumugam, Natasha Jaques, Noah Goodman

NeurIPSW 2022
On Rate-Distortion Theory in Capacity-Limited Cognition & Reinforcement Learning
Dilip Arumugam, Mark K Ho, Noah Goodman, Benjamin Van Roy

NeurIPS 2022
Planning to the Information Horizon of BAMDPs via Epistemic State Abstraction
Dilip Arumugam, Satinder P. Singh

ICML 2021
Deciding What to Learn: A Rate-Distortion Approach
Dilip Arumugam, Benjamin Van Roy

NeurIPS 2021
The Value of Information When Deciding What to Learn
Dilip Arumugam, Benjamin Van Roy

ICML 2020
Flexible and Efficient Long-Range Planning Through Curious Exploration
Aidan Curtis, Minjian Xin, Dilip Arumugam, Kevin Feigelis, Daniel Yamins

AISTATS 2020
Value Preserving State-Action Abstractions
David Abel, Nate Umbanhowar, Khimya Khetarpal, Dilip Arumugam, Doina Precup, Michael Littman

AAAI 2019
State Abstraction as Compression in Apprenticeship Learning
David Abel, Dilip Arumugam, Kavosh Asadi, Yuu Jinnai, Michael L. Littman, Lawson L. S. Wong

ICML 2018
State Abstractions for Lifelong Reinforcement Learning
David Abel, Dilip Arumugam, Lucas Lehnert, Michael Littman