Ito, Shinji

57 publications

NeurIPS 2025 Adapting to Stochastic and Adversarial Losses in Episodic MDPs with Aggregate Bandit Feedback Shinji Ito, Kevin Jamieson, Haipeng Luo, Arnab Maiti, Taira Tsuchiya
ECML-PKDD 2025 Bandit Max-Min Fair Allocation Tsubasa Harada, Shinji Ito, Hanna Sumita
COLT 2025 Corrupted Learning Dynamics in Games Taira Tsuchiya, Shinji Ito, Haipeng Luo
COLT 2025 Data-Dependent Bounds with $T$-Optimal Best-of-Both-Worlds Guarantees in Multi-Armed Bandits Using Stability-Penalty Matching Quan Nguyen, Shinji Ito, Junpei Komiyama, Nishant Mehta
TMLR 2025 Influential Bandits: Pulling an Arm May Change the Environment Ryoma Sato, Shinji Ito
COLT 2025 Instance-Dependent Regret Bounds for Learning Two-Player Zero-Sum Games with Bandit Feedback Shinji Ito, Haipeng Luo, Taira Tsuchiya, Yue Wu
AISTATS 2025 LC-Tsallis-INF: Generalized Best-of-Both-Worlds Linear Contextual Bandits Masahiro Kato, Shinji Ito
NeurIPS 2025 Optimal Dynamic Regret by Transformers for Non-Stationary Reinforcement Learning Baiyuan Chen, Shinji Ito, Masaaki Imaizumi
NeurIPS 2025 Optimal Regret of Bandits Under Differential Privacy Achraf Azize, Yulian Wu, Junya Honda, Francesco Orabona, Shinji Ito, Debabrota Basu
NeurIPS 2025 Revisiting Follow-the-Perturbed-Leader with Unbounded Perturbations in Bandit Problems Jongyeong Lee, Junya Honda, Shinji Ito, Min-hwan Oh
NeurIPS 2024 A Simple and Adaptive Learning Rate for FTRL in Online Learning with Minimax Regret of $\Theta(T^{2/3})$ and Its Application to Best-of-Both-Worlds Taira Tsuchiya, Shinji Ito
ICMLW 2024 A Simple and Adaptive Learning Rate for FTRL in Online Learning with Minimax Regret of $\Theta(T^{2/3})$ and Its Application to Best-of-Both-Worlds Taira Tsuchiya, Shinji Ito
MLJ 2024 Active Model Selection: A Variance Minimization Approach Satoshi Hara, Mitsuru Matsuura, Junya Honda, Shinji Ito
COLT 2024 Adaptive Learning Rate for Follow-the-Regularized-Leader: Competitive Analysis and Best-of-Both-Worlds Shinji Ito, Taira Tsuchiya, Junya Honda
TMLR 2024 Best-of-Both-Worlds Linear Contextual Bandits Masahiro Kato, Shinji Ito
TMLR 2024 Contaminated Online Convex Optimization Tomoya Kamijima, Shinji Ito
ICML 2024 Exploration by Optimization with Hybrid Regularizers: Logarithmic Regret with Adversarial Robustness in Partial Monitoring Taira Tsuchiya, Shinji Ito, Junya Honda
NeurIPS 2024 Fast Rates in Stochastic Online Convex Optimization by Exploiting the Curvature of Feasible Sets Taira Tsuchiya, Shinji Ito
COLT 2024 Follow-the-Perturbed-Leader with Fréchet-Type Tail Distributions: Optimality in Adversarial Bandits and Best-of-Both-Worlds Jongyeong Lee, Junya Honda, Shinji Ito, Min-hwan Oh
IJCAI 2024 Learning with Posterior Sampling for Revenue Management Under Time-Varying Demand Kazuma Shimizu, Junya Honda, Shinji Ito, Shinji Nakadai
AAAI 2024 New Classes of the Greedy-Applicable Arm Feature Distributions in the Sparse Linear Bandit Problem Koji Ichikawa, Shinji Ito, Daisuke Hatano, Hanna Sumita, Takuro Fukunaga, Naonori Kakimura, Ken-ichi Kawarabayashi
NeurIPS 2024 On the Minimax Regret for Contextual Linear Bandits and Multi-Armed Bandits with Expert Advice Shinji Ito
ECML-PKDD 2024 Online $\mathrm{L}^{\natural}$-Convex Minimization Ken Yokoyama, Shinji Ito, Tatsuya Matsuoka, Kei Kimura, Makoto Yokoo
NeurIPS 2023 An Exploration-by-Optimization Approach to Best of Both Worlds in Linear Bandits Shinji Ito, Kei Takemura
NeurIPS 2023 Bandit Task Assignment with Unknown Processing Time Shinji Ito, Daisuke Hatano, Hanna Sumita, Kei Takemura, Takuro Fukunaga, Naonori Kakimura, Ken-ichi Kawarabayashi
ALT 2023 Best-of-Both-Worlds Algorithms for Partial Monitoring Taira Tsuchiya, Shinji Ito, Junya Honda
COLT 2023 Best-of-Three-Worlds Linear Bandit Algorithm with Variance-Adaptive Regret Bounds Shinji Ito, Kei Takemura
ALT 2023 Follow-the-Perturbed-Leader Achieves Best-of-Both-Worlds for Bandit Problems Junya Honda, Shinji Ito, Taira Tsuchiya
AISTATS 2023 Further Adaptive Best-of-Both-Worlds Algorithm for Combinatorial Semi-Bandits Taira Tsuchiya, Shinji Ito, Junya Honda
ACML 2023 Maximization of Minimum Weighted Hamming Distance Between Set Pairs Tatsuya Matsuoka, Shinji Ito
NeurIPS 2023 Stability-Penalty-Adaptive Follow-the-Regularized-Leader: Sparsity, Game-Dependency, and Best-of-Both-Worlds Taira Tsuchiya, Shinji Ito, Junya Honda
COLT 2022 Adversarially Robust Multi-Armed Bandit Algorithm with Variance-Dependent Regret Bounds Shinji Ito, Taira Tsuchiya, Junya Honda
NeurIPS 2022 Average Sensitivity of Euclidean k-Clustering Yuichi Yoshida, Shinji Ito
NeurIPS 2022 Nearly Optimal Best-of-Both-Worlds Algorithms for Online Learning with Feedback Graphs Shinji Ito, Taira Tsuchiya, Junya Honda
AAAI 2022 Online Task Assignment Problems with Reusable Resources Hanna Sumita, Shinji Ito, Kei Takemura, Daisuke Hatano, Takuro Fukunaga, Naonori Kakimura, Ken-ichi Kawarabayashi
ICML 2022 Revisiting Online Submodular Minimization: Gap-Dependent Regret Bounds, Best of Both Worlds and Adversarial Robustness Shinji Ito
NeurIPS 2022 Single Loop Gaussian Homotopy Method for Non-Convex Optimization Hidenori Iwakiri, Yuhang Wang, Shinji Ito, Akiko Takeda
AISTATS 2021 A Parameter-Free Algorithm for Misspecified Linear Contextual Bandits Kei Takemura, Shinji Ito, Daisuke Hatano, Hanna Sumita, Takuro Fukunaga, Naonori Kakimura, Ken-ichi Kawarabayashi
NeurIPS 2021 Hybrid Regret Bounds for Combinatorial Semi-Bandits and Adversarial Linear Bandits Shinji Ito
AAAI 2021 Near-Optimal Regret Bounds for Contextual Combinatorial Semi-Bandits with Linear Payoff Functions Kei Takemura, Shinji Ito, Daisuke Hatano, Hanna Sumita, Takuro Fukunaga, Naonori Kakimura, Ken-ichi Kawarabayashi
NeurIPS 2021 On Optimal Robustness to Adversarial Corruption in Online Decision Problems Shinji Ito
COLT 2021 Parameter-Free Multi-Armed Bandit Algorithms with Hybrid Data-Dependent Regret Bounds Shinji Ito
AISTATS 2021 Tracking Regret Bounds for Online Submodular Optimization Tatsuya Matsuoka, Shinji Ito, Naoto Ohsaka
NeurIPS 2020 A Tight Lower Bound and Efficient Reduction for Swap Regret Shinji Ito
AISTATS 2020 An Optimal Algorithm for Bandit Convex Optimization with Strongly-Convex and Smooth Loss Shinji Ito
NeurIPS 2020 Delay and Cooperation in Nonstochastic Linear Bandits Shinji Ito, Daisuke Hatano, Hanna Sumita, Kei Takemura, Takuro Fukunaga, Naonori Kakimura, Ken-ichi Kawarabayashi
NeurIPS 2020 Tight First- and Second-Order Regret Bounds for Adversarial Linear Bandits Shinji Ito, Shuichi Hirahara, Tasuku Soma, Yuichi Yoshida
NeurIPS 2019 Improved Regret Bounds for Bandit Combinatorial Optimization Shinji Ito, Daisuke Hatano, Hanna Sumita, Kei Takemura, Takuro Fukunaga, Naonori Kakimura, Ken-ichi Kawarabayashi
NeurIPS 2019 Oracle-Efficient Algorithms for Online Linear Optimization with Bandit Feedback Shinji Ito, Daisuke Hatano, Hanna Sumita, Kei Takemura, Takuro Fukunaga, Naonori Kakimura, Ken-ichi Kawarabayashi
NeurIPS 2019 Submodular Function Minimization with Noisy Evaluation Oracle Shinji Ito
ICML 2018 Causal Bandits with Propagating Inference Akihiro Yabe, Daisuke Hatano, Hanna Sumita, Shinji Ito, Naonori Kakimura, Takuro Fukunaga, Ken-ichi Kawarabayashi
AISTATS 2018 Online Regression with Partial Information: Generalization and Linear Projection Shinji Ito, Daisuke Hatano, Hanna Sumita, Akihiro Yabe, Takuro Fukunaga, Naonori Kakimura, Ken-ichi Kawarabayashi
NeurIPS 2018 Regret Bounds for Online Portfolio Selection with a Cardinality Constraint Shinji Ito, Daisuke Hatano, Hanna Sumita, Akihiro Yabe, Takuro Fukunaga, Naonori Kakimura, Ken-ichi Kawarabayashi
ICML 2018 Unbiased Objective Estimation in Predictive Optimization Shinji Ito, Akihiro Yabe, Ryohei Fujimaki
NeurIPS 2017 Efficient Sublinear-Regret Algorithms for Online Sparse Linear Regression with Limited Observation Shinji Ito, Daisuke Hatano, Hanna Sumita, Akihiro Yabe, Takuro Fukunaga, Naonori Kakimura, Ken-ichi Kawarabayashi
IJCAI 2017 Robust Quadratic Programming for Price Optimization Akihiro Yabe, Shinji Ito, Ryohei Fujimaki
NeurIPS 2016 Large-Scale Price Optimization via Network Flow Shinji Ito, Ryohei Fujimaki