Kim, Kee-Eung

66 publications

NeurIPS 2025 DPAIL: Training Diffusion Policy for Adversarial Imitation Learning Without Policy Optimization Yunseon Choi, Minchan Jeong, Soobin Um, Kee-Eung Kim
ICLR 2025 Monet: Mixture of Monosemantic Experts for Transformers Jungwoo Park, Young Jin Ahn, Kee-Eung Kim, Jaewoo Kang
AAAI 2024 A Submodular Optimization Approach to Accountable Loan Approval Kyungsik Lee, Hana Yoo, Sumin Shin, Wooyoung Kim, Yeonung Baek, Hyunjin Kang, Jaehyun Kim, Kee-Eung Kim
NeurIPS 2024 Data Augmentation with Diffusion for Open-Set Semi-Supervised Learning Seonghyun Ban, Heesan Kong, Kee-Eung Kim
IJCAI 2024 Diversification of Adaptive Policy for Effective Offline Reinforcement Learning Yunseon Choi, Li Zhao, Chuheng Zhang, Lei Song, Jiang Bian, Kee-Eung Kim
ICLR 2024 Kernel Metric Learning for In-Sample Off-Policy Evaluation of Deterministic RL Policies Haanvid Lee, Tri Wahyu Guntara, Jongmin Lee, Yung-Kyun Noh, Kee-Eung Kim
NeurIPS 2024 Mitigating Covariate Shift in Behavioral Cloning via Robust Stationary Distribution Correction Seokin Seo, Byung-Jun Lee, Jongmin Lee, HyeongJoo Hwang, Hongseok Yang, Kee-Eung Kim
AAAI 2024 Stitching Sub-Trajectories with Conditional Diffusion Model for Goal-Conditioned Offline RL Sungyoon Kim, Yunseon Choi, Daiki E. Matsunaga, Kee-Eung Kim
NeurIPS 2023 AlberDICE: Addressing Out-of-Distribution Joint Actions in Offline Multi-Agent RL via Alternating Stationary Distribution Correction Estimation Daiki E. Matsunaga, Jongmin Lee, Jaeseok Yoon, Stefanos Leonardos, Pieter Abbeel, Kee-Eung Kim
ICML 2023 Information-Theoretic State Space Model for Multi-View Reinforcement Learning HyeongJoo Hwang, Seokin Seo, Youngsoo Jang, Sungyoon Kim, Geon-Hyeong Kim, Seunghoon Hong, Kee-Eung Kim
NeurIPS 2023 Regularized Behavior Cloning for Blocking the Leakage of Past Action Information Seokin Seo, HyeongJoo Hwang, Hongseok Yang, Kee-Eung Kim
AAAI 2023 Trustworthy Residual Vehicle Value Prediction for Auto Finance Mihye Kim, Jimyung Choi, Jaehyun Kim, Wooyoung Kim, Yeonung Baek, Gisuk Bang, Kwangwoon Son, Yeonman Ryou, Kee-Eung Kim
ICLR 2022 COptiDICE: Offline Constrained Reinforcement Learning via Stationary Distribution Correction Estimation Jongmin Lee, Cosmin Paduraru, Daniel J. Mankowitz, Nicolas Heess, Doina Precup, Kee-Eung Kim, Arthur Guez
IJCAI 2022 Data Augmentation for Learning to Play in Text-Based Games Jinhyeon Kim, Kee-Eung Kim
ICLR 2022 DemoDICE: Offline Imitation Learning with Supplementary Imperfect Demonstrations Geon-Hyeong Kim, Seokin Seo, Jongmin Lee, Wonseok Jeon, HyeongJoo Hwang, Hongseok Yang, Kee-Eung Kim
ICLR 2022 GPT-Critic: Offline Reinforcement Learning for End-to-End Task-Oriented Dialogue Systems Youngsoo Jang, Jongmin Lee, Kee-Eung Kim
NeurIPS 2022 LobsDICE: Offline Learning from Observation via Stationary Distribution Correction Estimation Geon-Hyeong Kim, Jongmin Lee, Youngsoo Jang, Hongseok Yang, Kee-Eung Kim
NeurIPS 2022 Local Metric Learning for Off-Policy Evaluation in Contextual Bandits with Continuous Actions Haanvid Lee, Jongmin Lee, Yunseon Choi, Wonseok Jeon, Byung-Jun Lee, Yung-Kyun Noh, Kee-Eung Kim
ICML 2022 PAC-Net: A Model Pruning Approach to Inductive Transfer Learning Sanghoon Myung, In Huh, Wonik Jang, Jae Myung Choe, Jisu Ryu, Daesin Kim, Kee-Eung Kim, Changwook Jeong
ICLR 2022 Structure-Aware Transformer Policy for Inhomogeneous Multi-Task Reinforcement Learning Sunghoon Hong, Deunsol Yoon, Kee-Eung Kim
ICLR 2021 Monte-Carlo Planning and Learning with Language Action Value Estimates Youngsoo Jang, Seokin Seo, Jongmin Lee, Kee-Eung Kim
NeurIPS 2021 Multi-View Representation Learning via Total Correlation Objective HyeongJoo Hwang, Geon-Hyeong Kim, Seunghoon Hong, Kee-Eung Kim
ICML 2021 OptiDICE: Offline Policy Optimization via Stationary Distribution Correction Estimation Jongmin Lee, Wonseok Jeon, Byung-Jun Lee, Joelle Pineau, Kee-Eung Kim
ICLR 2021 Representation Balancing Offline Model-Based Reinforcement Learning Byung-Jun Lee, Jongmin Lee, Kee-Eung Kim
ICLR 2021 Winning the L2RPN Challenge: Power Grid Management via Semi-Markov Afterstate Actor-Critic Deunsol Yoon, Sunghoon Hong, Byung-Jun Lee, Kee-Eung Kim
ICML 2020 Batch Reinforcement Learning with Hyperparameter Gradients Byung-Jun Lee, Jongmin Lee, Peter Vrancx, Dongho Kim, Kee-Eung Kim
AAAI 2020 Bayes-Adaptive Monte-Carlo Planning and Learning for Goal-Oriented Dialogues Youngsoo Jang, Jongmin Lee, Kee-Eung Kim
MLJ 2020 Foreword: Special Issue for the Journal Track of the 11th Asian Conference on Machine Learning (ACML 2019) Kee-Eung Kim, Jun Zhu
MLJ 2020 Foreword: Special Issue for the Journal Track of the 12th Asian Conference on Machine Learning (ACML 2020) Kee-Eung Kim, Vineeth N. Balasubramanian
AAAI 2020 Monte-Carlo Tree Search in Continuous Action Spaces with Value Gradients Jongmin Lee, Wonseok Jeon, Geon-Hyeong Kim, Kee-Eung Kim
NeurIPS 2020 Reinforcement Learning for Control with Multiple Frequencies Jongmin Lee, Byung-Jun Lee, Kee-Eung Kim
AAAI 2020 Residual Neural Processes Byung-Jun Lee, Seunghoon Hong, Kee-Eung Kim
ICML 2020 Variational Inference for Sequential Data with Future Likelihood Estimates Geon-Hyeong Kim, Youngsoo Jang, Hongseok Yang, Kee-Eung Kim
NeurIPS 2020 Variational Interaction Information Maximization for Cross-Domain Disentanglement HyeongJoo Hwang, Geon-Hyeong Kim, Seunghoon Hong, Kee-Eung Kim
MLJ 2019 Bayesian Optimistic Kullback-Leibler Exploration Kanghoon Lee, Geon-Hyeong Kim, Pedro A. Ortega, Daniel D. Lee, Kee-Eung Kim
ACML 2019 Trust Region Sequential Variational Inference Geon-Hyeong Kim, Youngsoo Jang, Jongmin Lee, Wonseok Jeon, Hongseok Yang, Kee-Eung Kim
NeurIPS 2018 A Bayesian Approach to Generative Adversarial Imitation Learning Wonseok Jeon, Seokin Seo, Kee-Eung Kim
AAAI 2018 Imitation Learning via Kernel Mean Embedding Kee-Eung Kim, Hyun Soo Park
NeurIPS 2018 Monte-Carlo Tree Search for Constrained POMDPs Jongmin Lee, Geon-Hyeong Kim, Pascal Poupart, Kee-Eung Kim
IJCAI 2017 Constrained Bayesian Reinforcement Learning via Approximate Linear Programming Jongmin Lee, Youngsoo Jang, Pascal Poupart, Kee-Eung Kim
MLJ 2017 Foreword: Special Issue for the Journal Track of the 8th Asian Conference on Machine Learning (ACML 2016) Robert J. Durrant, Kee-Eung Kim, Geoffrey Holmes, Stephen Marsland, Masashi Sugiyama, Zhi-Hua Zhou
NeurIPS 2017 Generative Local Metric Learning for Kernel Regression Yung-Kyun Noh, Masashi Sugiyama, Kee-Eung Kim, Frank Park, Daniel D. Lee
AISTATS 2017 Hierarchically-Partitioned Gaussian Process Approximation Byung-Jun Lee, Jongmin Lee, Kee-Eung Kim
IJCAI 2016 Bayesian Reinforcement Learning with Behavioral Feedback Teakgyu Hong, Jongmin Lee, Kee-Eung Kim, Pedro A. Ortega, Daniel D. Lee
ACML 2016 Preface Robert J. Durrant, Kee-Eung Kim
AAAI 2015 Approximate Linear Programming for Constrained Partially Observable Markov Decision Processes Pascal Poupart, Aarti Malhotra, Pei Pei, Kee-Eung Kim, Bongseok Goh, Michael Bowling
AISTATS 2015 Reactive Bandits with Attitude Pedro A. Ortega, Kee-Eung Kim, Daniel D. Lee
AAAI 2015 Reward Shaping for Model-Based Bayesian Reinforcement Learning Hyeoneun Kim, Woosang Lim, Kanghoon Lee, Yung-Kyun Noh, Kee-Eung Kim
AAAI 2015 Tighter Value Function Bounds for Bayesian Reinforcement Learning Kanghoon Lee, Kee-Eung Kim
IJCAI 2013 Bayesian Nonparametric Feature Construction for Inverse Reinforcement Learning Jaedeug Choi, Kee-Eung Kim
NeurIPS 2012 Cost-Sensitive Exploration in Bayesian Reinforcement Learning Dongho Kim, Kee-Eung Kim, Pascal Poupart
NeurIPS 2012 Nonparametric Bayesian Inverse Reinforcement Learning for Multiple Reward Functions Jaedeug Choi, Kee-Eung Kim
UAI 2011 A Geometric Traversal Algorithm for Reward-Uncertain MDPs Eunsoo Oh, Kee-Eung Kim
AAAI 2011 A POMDP-Based Optimal Control of P300-Based Brain-Computer Interfaces Jaeyoung Park, Kee-Eung Kim, Yoon-Kyu Song
JMLR 2011 Inverse Reinforcement Learning in Partially Observable Environments Jaedeug Choi, Kee-Eung Kim
NeurIPS 2011 MAP Inference for Bayesian Inverse Reinforcement Learning Jaedeug Choi, Kee-Eung Kim
IJCAI 2011 Point-Based Value Iteration for Constrained POMDPs Dongho Kim, Jaesong Lee, Kee-Eung Kim, Pascal Poupart
IJCAI 2009 Inverse Reinforcement Learning in Partially Observable Environments Jaedeug Choi, Kee-Eung Kim
AAAI 2008 Exploiting Symmetries in POMDPs for Point-Based Algorithms Kee-Eung Kim
AAAI 2008 Symbolic Heuristic Search Value Iteration for Factored POMDPs Hyeong Seop Sim, Kee-Eung Kim, Jin Hyung Kim, Du-Seong Chang, Myoung-Wan Koo
AAAI 2006 Hand Grip Pattern Recognition for Mobile User Interfaces Kee-Eung Kim, Wook Chang, Sung-Jung Cho, Junghyun Shim, Hyunjeong Lee, Joonah Park, Youngbeom Lee, Sangryoung Kim
IJCAI 2001 Solving Factored MDPs via Non-Homogeneous Partitioning Kee-Eung Kim, Thomas L. Dean
UAI 2000 Learning to Cooperate via Policy Search Leonid Peshkin, Kee-Eung Kim, Nicolas Meuleau, Leslie Pack Kaelbling
UAI 1999 Learning Finite-State Controllers for Partially Observable Environments Nicolas Meuleau, Leonid Peshkin, Kee-Eung Kim, Leslie Pack Kaelbling
UAI 1999 Solving POMDPs by Searching the Space of Finite Policies Nicolas Meuleau, Kee-Eung Kim, Leslie Pack Kaelbling, Anthony R. Cassandra
AAAI 1998 Solving Very Large Weakly Coupled Markov Decision Processes Nicolas Meuleau, Milos Hauskrecht, Kee-Eung Kim, Leonid Peshkin, Leslie Pack Kaelbling, Thomas L. Dean, Craig Boutilier