Oh, Min-hwan

43 publications

ICLR 2025: ADAM Optimization with Adaptive Batch Selection. Gyu Yeol Kim, Min-hwan Oh.
ICLR 2025: Adversarial Policy Optimization for Offline Preference-Based Reinforcement Learning. Hyungkyu Kang, Min-hwan Oh.
ICML 2025: Combinatorial Reinforcement Learning with Preference Feedback. Joongkyu Lee, Min-hwan Oh.
ICLR 2025: Dynamic Assortment Selection and Pricing with Censored Preference Feedback. Jung-hun Kim, Min-hwan Oh.
NeurIPS 2025: EUGens: Efficient, Unified and General Dense Layers. Sang Min Kim, Byeongchan Kim, Arijit Sehanobish, Somnath Basu Roy Chowdhury, Rahul Kidambi, Dongseok Shim, Kumar Avinava Dubey, Snigdha Chaturvedi, Min-hwan Oh, Krzysztof Marcin Choromanski.
COLT 2025: Experimental Design for Semiparametric Bandits. Seok-Jin Kim, Gi-Soo Kim, Min-hwan Oh.
NeurIPS 2025: Exploration via Feature Perturbation in Contextual Bandits. Seouh-won Yi, Min-hwan Oh.
ICML 2025: Improved Online Confidence Bounds for Multinomial Logistic Bandits. Joongkyu Lee, Min-hwan Oh.
NeurIPS 2025: Infrequent Exploration in Linear Bandits. Harin Lee, Min-hwan Oh.
ICLR 2025: Lasso Bandit with Compatibility Condition on Optimal Arm. Harin Lee, Taehyun Hwang, Min-hwan Oh.
ICML 2025: Linear Bandits with Partially Observable Features. Wonyoung Kim, Sungwoo Park, Garud Iyengar, Assaf Zeevi, Min-hwan Oh.
ICLR 2025: Minimax Optimal Reinforcement Learning with Quasi-Optimism. Harin Lee, Min-hwan Oh.
ICML 2025: Optimal and Practical Batched Linear Bandit Algorithm. Sanghoon Yu, Min-hwan Oh.
NeurIPS 2025: Oracle-Efficient Combinatorial Semi-Bandits. Jung-hun Kim, Milan Vojnovic, Min-hwan Oh.
NeurIPS 2025: Position: AI Should Sense Better, Not Just Scale Bigger: Adaptive Sensing as a Paradigm Shift. Eunsu Baek, Keondo Park, Jeonggil Ko, Min-hwan Oh, Taesik Gong, Hyung-Sin Kim.
NeurIPS 2025: Preference-Based Reinforcement Learning Beyond Pairwise Comparisons: Benefits of Multiple Options. Joongkyu Lee, Seouh-won Yi, Min-hwan Oh.
NeurIPS 2025: Revisiting Follow-the-Perturbed-Leader with Unbounded Perturbations in Bandit Problems. Jongyeong Lee, Junya Honda, Shinji Ito, Min-hwan Oh.
ICML 2025: Symmetry-Aware GFlowNets. Hohyun Kim, Seunggeun Lee, Min-hwan Oh.
NeurIPS 2025: Thompson Sampling for Multi-Objective Linear Contextual Bandit. Somangchan Park, Heesang Ann, Min-hwan Oh.
NeurIPS 2025: Tractable Multinomial Logit Contextual Bandits with Non-Linear Utilities. Taehyun Hwang, Dahngoon Kim, Min-hwan Oh.
NeurIPS 2025: True Impact of Cascade Length in Contextual Cascading Bandits. Hyunjun Choi, Joongkyu Lee, Min-hwan Oh.
ICLR 2024: Demystifying Linear MDPs and Novel Dynamics Aggregation Framework. Joongkyu Lee, Min-hwan Oh.
AAAI 2024: Doubly Perturbed Task Free Continual Learning. Byung Hyun Lee, Min-hwan Oh, Se Young Chun.
COLT 2024: Follow-the-Perturbed-Leader with Fréchet-Type Tail Distributions: Optimality in Adversarial Bandits and Best-of-Both-Worlds. Jongyeong Lee, Junya Honda, Shinji Ito, Min-hwan Oh.
NeurIPS 2024: Improved Regret of Linear Ensemble Sampling. Harin Lee, Min-hwan Oh.
AAAI 2024: Learning Uncertainty-Aware Temporally-Extended Actions. Joongkyu Lee, Seung Joon Park, Yunhao Tang, Min-hwan Oh.
NeurIPS 2024: Local Anti-Concentration Class: Logarithmic Regret for Greedy Linear Contextual Bandit. Seok-Jin Kim, Min-hwan Oh.
AAAI 2024: Mixed-Effects Contextual Bandits. Kyungbok Lee, Myunghee Cho Paik, Min-hwan Oh, Gi-Soo Kim.
NeurIPS 2024: Nearly Minimax Optimal Regret for Multinomial Logistic Bandit. Joongkyu Lee, Min-hwan Oh.
NeurIPS 2024: Queueing Matching Bandits with Preference Feedback. Jung-hun Kim, Min-hwan Oh.
NeurIPS 2024: Randomized Exploration for Reinforcement Learning with Multinomial Logistic Function Approximation. Wooseong Cho, Taehyun Hwang, Joongkyu Lee, Min-hwan Oh.
NeurIPS 2023: Cascading Contextual Assortment Bandits. Hyun-jun Choi, Rajan Udwani, Min-hwan Oh.
ICML 2023: Combinatorial Neural Bandits. Taehyun Hwang, Kyuwook Chai, Min-hwan Oh.
ICML 2023: Model-Based Offline Reinforcement Learning with Count-Based Conservatism. Byeongchan Kim, Min-hwan Oh.
AAAI 2023: Model-Based Reinforcement Learning with Multinomial Logistic Function Approximation. Taehyun Hwang, Min-hwan Oh.
ICML 2023: Semi-Parametric Contextual Pricing Algorithm Using Cox Proportional Hazards Model. Young-Geun Choi, Gi-Soo Kim, Yunseo Choi, Wooseong Cho, Myunghee Cho Paik, Min-hwan Oh.
AISTATS 2023: Squeeze All: Novel Estimator and Self-Normalized Bound for Linear Contextual Bandits. Wonyoung Kim, Myunghee Cho Paik, Min-hwan Oh.
NeurIPS 2023 Workshop: Uncertainty-Aware Action Repeating Options. Joongkyu Lee, Seung Joon Park, Yunhao Tang, Min-hwan Oh.
AAAI 2021: Multinomial Logit Contextual Bandits: Provable Optimality and Practicality. Min-hwan Oh, Garud Iyengar.
ICML 2021: Sparsity-Agnostic Lasso Bandit. Min-hwan Oh, Garud Iyengar, Assaf Zeevi.
AAAI 2020: Crowd Counting with Decomposed Uncertainty. Min-hwan Oh, Peder A. Olsen, Karthikeyan Natesan Ramamurthy.
ICML 2019 Workshop: Multinomial Logit Contextual Bandits. Min-hwan Oh, Garud Iyengar.
NeurIPS 2019: Thompson Sampling for Multinomial Logit Contextual Bandits. Min-hwan Oh, Garud Iyengar.