Honda, Junya

45 publications

ICML 2025. Geometric Resampling in Nearly Linear Time for Follow-the-Perturbed-Leader with Best-of-Both-Worlds Guarantee in Bandit Problems. Botao Chen, Jongyeong Lee, Junya Honda.
AISTATS 2025. Multi-Player Approaches for Dueling Bandits. Or Raveh, Junya Honda, Masashi Sugiyama.
NeurIPS 2025. Optimal Estimation of the Best Mean in Multi-Armed Bandits. Takayuki Osogami, Junya Honda, Junpei Komiyama.
NeurIPS 2025. Optimal Regret of Bandits Under Differential Privacy. Achraf Azize, Yulian Wu, Junya Honda, Francesco Orabona, Shinji Ito, Debabrota Basu.
NeurIPS 2025. Revisiting Follow-the-Perturbed-Leader with Unbounded Perturbations in Bandit Problems. Jongyeong Lee, Junya Honda, Shinji Ito, Min-hwan Oh.

MLJ 2024. Active Model Selection: A Variance Minimization Approach. Satoshi Hara, Mitsuru Matsuura, Junya Honda, Shinji Ito.
COLT 2024. Adaptive Learning Rate for Follow-the-Regularized-Leader: Competitive Analysis and Best-of-Both-Worlds. Shinji Ito, Taira Tsuchiya, Junya Honda.
ICML 2024. Exploration by Optimization with Hybrid Regularizers: Logarithmic Regret with Adversarial Robustness in Partial Monitoring. Taira Tsuchiya, Shinji Ito, Junya Honda.
JMLR 2024. Finite-Time Analysis of Globally Nonstationary Multi-Armed Bandits. Junpei Komiyama, Edouard Fouché, Junya Honda.
COLT 2024. Follow-the-Perturbed-Leader with Fréchet-Type Tail Distributions: Optimality in Adversarial Bandits and Best-of-Both-Worlds. Jongyeong Lee, Junya Honda, Shinji Ito, Min-hwan Oh.
IJCAI 2024. Learning with Posterior Sampling for Revenue Management Under Time-Varying Demand. Kazuma Shimizu, Junya Honda, Shinji Ito, Shinji Nakadai.
TMLR 2024. The Survival Bandit Problem. Charles Riou, Junya Honda, Masashi Sugiyama.

ALT 2023. Best-of-Both-Worlds Algorithms for Partial Monitoring. Taira Tsuchiya, Shinji Ito, Junya Honda.
ALT 2023. Follow-the-Perturbed-Leader Achieves Best-of-Both-Worlds for Bandit Problems. Junya Honda, Shinji Ito, Taira Tsuchiya.
AISTATS 2023. Further Adaptive Best-of-Both-Worlds Algorithm for Combinatorial Semi-Bandits. Taira Tsuchiya, Shinji Ito, Junya Honda.
ICML 2023. Optimality of Thompson Sampling with Noninformative Priors for Pareto Bandits. Jongyeong Lee, Junya Honda, Chao-Kai Chiang, Masashi Sugiyama.
NeurIPS 2023. Stability-Penalty-Adaptive Follow-the-Regularized-Leader: Sparsity, Game-Dependency, and Best-of-Both-Worlds. Taira Tsuchiya, Shinji Ito, Junya Honda.
ACML 2023. Thompson Exploration with Best Challenger Rule in Best Arm Identification. Jongyeong Lee, Junya Honda, Masashi Sugiyama.

COLT 2022. Adversarially Robust Multi-Armed Bandit Algorithm with Variance-Dependent Regret Bounds. Shinji Ito, Taira Tsuchiya, Junya Honda.
MLJ 2022. Bayesian Optimization with Partially Specified Queries. Shogo Hayashi, Junya Honda, Hisashi Kashima.
NeurIPS 2022. Minimax Optimal Algorithms for Fixed-Budget Best Arm Identification. Junpei Komiyama, Taira Tsuchiya, Junya Honda.
NeurIPS 2022. Nearly Optimal Best-of-Both-Worlds Algorithms for Online Learning with Feedback Graphs. Shinji Ito, Taira Tsuchiya, Junya Honda.

ICML 2021. Mediated Uncoupled Learning: Learning Functions Without Direct Input-Output Correspondences. Ikko Yamane, Junya Honda, Florian Yger, Masashi Sugiyama.

MLJ 2020. A Bad Arm Existence Checking Problem: How to Utilize Asymmetric Problem Structure? Koji Tabata, Atsuyoshi Nakamura, Junya Honda, Tamiki Komatsuzaki.
NeurIPS 2020. Analysis and Design of Thompson Sampling for Stochastic Partial Monitoring. Taira Tsuchiya, Junya Honda, Masashi Sugiyama.
ALT 2020. Bandit Algorithms Based on Thompson Sampling for Bounded Reward Distributions. Charles Riou, Junya Honda.
ICML 2020. Online Dense Subgraph Discovery via Blurred-Graph Feedback. Yuko Kuroki, Atsushi Miyauchi, Junya Honda, Masashi Sugiyama.

AAAI 2019. Dueling Bandits with Qualitative Feedback. Liyuan Xu, Junya Honda, Masashi Sugiyama.
MLJ 2019. Good Arm Identification via Bandit Feedback. Hideaki Kano, Junya Honda, Kentaro Sakamaki, Kentaro Matsuura, Atsuyoshi Nakamura, Masashi Sugiyama.
ICLR 2019. Learning from Positive and Unlabeled Data with a Selection Bias. Masahiro Kato, Takeshi Teshima, Junya Honda.
NeurIPS 2019. On the Calibration of Multiclass Classification with Rejection. Chenri Ni, Nontawat Charoenphakdee, Junya Honda, Masashi Sugiyama.
NeurIPS 2019. Uncoupled Regression from Pairwise Comparison Data. Liyuan Xu, Junya Honda, Gang Niu, Masashi Sugiyama.
AAAI 2019. Unsupervised Domain Adaptation Based on Source-Guided Discrepancy. Seiichi Kuroki, Nontawat Charoenphakdee, Han Bao, Junya Honda, Issei Sato, Masashi Sugiyama.

AISTATS 2018. A Fully Adaptive Algorithm for Pure Exploration in Linear Bandits. Liyuan Xu, Junya Honda, Masashi Sugiyama.
ICML 2018. Nonconvex Optimization for Regression with Fairness Constraints. Junpei Komiyama, Akiko Takeda, Junya Honda, Hajime Shimao.

NeurIPS 2017. Position-Based Multiple-Play Bandit Problem with Unknown Position Bias. Junpei Komiyama, Junya Honda, Akiko Takeda.

ICML 2016. Copeland Dueling Bandit Problem: Regret Lower Bound, Optimal Algorithm, and Computationally Efficient Algorithm. Junpei Komiyama, Junya Honda, Hiroshi Nakagawa.

JMLR 2015. Non-Asymptotic Analysis of a New Bandit Algorithm for Semi-Bounded Rewards. Junya Honda, Akimichi Takemura.
ICML 2015. Optimal Regret Analysis of Thompson Sampling in Stochastic Multi-Armed Bandit Problem with Multiple Plays. Junpei Komiyama, Junya Honda, Hiroshi Nakagawa.
COLT 2015. Regret Lower Bound and Optimal Algorithm in Dueling Bandit Problem. Junpei Komiyama, Junya Honda, Hisashi Kashima, Hiroshi Nakagawa.
NeurIPS 2015. Regret Lower Bound and Optimal Algorithm in Finite Stochastic Partial Monitoring. Junpei Komiyama, Junya Honda, Hiroshi Nakagawa.

AISTATS 2014. Optimality of Thompson Sampling for Gaussian Bandits Depends on Priors. Junya Honda, Akimichi Takemura.

AISTATS 2012. Stochastic Bandit Based on Empirical Moments. Junya Honda, Akimichi Takemura.

MLJ 2011. An Asymptotically Optimal Policy for Finite Support Models in the Multiarmed Bandit Problem. Junya Honda, Akimichi Takemura.

COLT 2010. An Asymptotically Optimal Bandit Algorithm for Bounded Support Models. Junya Honda, Akimichi Takemura.