Nitanda, Atsushi

38 publications

AISTATS 2025 Clustered Invariant Risk Minimization Tomoya Murata, Atsushi Nitanda, Taiji Suzuki
ICLR 2025 Direct Distributional Optimization for Provable Alignment of Diffusion Models Ryotaro Kawata, Kazusato Oko, Atsushi Nitanda, Taiji Suzuki
TMLR 2025 Mirror Descent Policy Optimisation for Robust Constrained Markov Decision Processes David Mark Bossens, Atsushi Nitanda
ICLRW 2025 Nonparametric Distributional Black-Box Optimization via Diffusion Process Yueming Lyu, Atsushi Nitanda, Ivor Tsang
ICML 2025 Propagation of Chaos for Mean-Field Langevin Dynamics and Its Application to Model Ensemble Atsushi Nitanda, Anzelle Lee, Damian Tan Xing Kai, Mizuki Sakaguchi, Taiji Suzuki
ICML 2025 Provable In-Context Vector Arithmetic via Retrieving Task Concepts Dake Bu, Wei Huang, Andi Han, Atsushi Nitanda, Qingfu Zhang, Hau-San Wong, Taiji Suzuki
NeurIPS 2025 Statistical Analysis of the Sinkhorn Iterations for Two-Sample Schrödinger Bridge Estimation Ibuki Maeda, Rentian Yao, Atsushi Nitanda
TMLR 2025 Unlearning Misalignment for Personalized LLM Adaptation via Instance-Response-Dependent Discrepancies Cheng Chen, Atsushi Nitanda, Ivor Tsang
NeurIPS 2024 Improved Particle Approximation Error for Mean Field Neural Networks Atsushi Nitanda
ICLR 2024 Improved Statistical and Computational Complexity of the Mean-Field Langevin Dynamics Under Structured Data Atsushi Nitanda, Kazusato Oko, Taiji Suzuki, Denny Wu
ICLR 2024 Koopman-Based Generalization Bound: New Aspect for Full-Rank Weights Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Atsushi Nitanda, Taiji Suzuki
NeurIPS 2024 Provably Transformers Harness Multi-Concept Word Semantics for Efficient In-Context Learning Dake Bu, Wei Huang, Andi Han, Atsushi Nitanda, Taiji Suzuki, Qingfu Zhang, Hau-San Wong
AISTATS 2024 Why Is Parameter Averaging Beneficial in SGD? An Objective Smoothing Perspective Atsushi Nitanda, Ryuhei Kikuchi, Shugo Maeda, Denny Wu
NeurIPS 2023 Convergence of Mean-Field Langevin Dynamics: Time-Space Discretization, Stochastic Gradient, and Variance Reduction Taiji Suzuki, Denny Wu, Atsushi Nitanda
NeurIPS 2023 Feature Learning via Mean-Field Langevin Dynamics: Classifying Sparse Parities and Beyond Taiji Suzuki, Denny Wu, Kazusato Oko, Atsushi Nitanda
NeurIPSW 2023 How Structured Data Guides Feature Learning: A Case Study of the Parity Problem Atsushi Nitanda, Kazusato Oko, Taiji Suzuki, Denny Wu
ICML 2023 Primal and Dual Analysis of Entropic Fictitious Play for Finite-Sum Problems Atsushi Nitanda, Kazusato Oko, Denny Wu, Nobuhito Takenouchi, Taiji Suzuki
ICML 2023 Tight and Fast Generalization Error Bound of Graph Embedding in Metric Space Atsushi Suzuki, Atsushi Nitanda, Taiji Suzuki, Jing Wang, Feng Tian, Kenji Yamanishi
ICLR 2023 Uniform-in-Time Propagation of Chaos for the Mean-Field Gradient Langevin Dynamics Taiji Suzuki, Atsushi Nitanda, Denny Wu
AISTATS 2022 Convex Analysis of the Mean Field Langevin Dynamics Atsushi Nitanda, Denny Wu, Taiji Suzuki
ICLR 2022 Particle Stochastic Dual Coordinate Ascent: Exponential Convergent Algorithm for Mean Field Neural Network Optimization Kazusato Oko, Taiji Suzuki, Atsushi Nitanda, Denny Wu
NeurIPS 2022 Two-Layer Neural Network on Infinite Dimensional Data: Global Optimization Guarantee in the Mean-Field Regime Naoki Nishikawa, Taiji Suzuki, Atsushi Nitanda, Denny Wu
NeurIPS 2021 Deep Learning Is Adaptive to Intrinsic Dimensionality of Model Smoothness in Anisotropic Besov Space Taiji Suzuki, Atsushi Nitanda
AISTATS 2021 Exponential Convergence Rates of Classification Errors on Learning with SGD and Random Features Shingo Yashima, Atsushi Nitanda, Taiji Suzuki
NeurIPS 2021 Generalization Bounds for Graph Embedding Using Negative Sampling: Linear vs Hyperbolic Atsushi Suzuki, Atsushi Nitanda, Jing Wang, Linchuan Xu, Kenji Yamanishi, Marc Cavazza
ICML 2021 Generalization Error Bound for Hyperbolic Ordinal Embedding Atsushi Suzuki, Atsushi Nitanda, Jing Wang, Linchuan Xu, Kenji Yamanishi, Marc Cavazza
ICLR 2021 Optimal Rates for Averaged Stochastic Gradient Descent Under Neural Tangent Kernel Regime Atsushi Nitanda, Taiji Suzuki
NeurIPS 2021 Particle Dual Averaging: Optimization of Mean Field Neural Network with Global Convergence Rate Analysis Atsushi Nitanda, Denny Wu, Taiji Suzuki
ICLR 2021 When Does Preconditioning Help or Hurt Generalization? Shun-ichi Amari, Jimmy Ba, Roger Baker Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu
AISTATS 2020 Functional Gradient Boosting for Learning Residual-like Networks with Statistical Guarantees Atsushi Nitanda, Taiji Suzuki
NeurIPS 2019 Data Cleansing for Models Trained with SGD Satoshi Hara, Atsushi Nitanda, Takanori Maehara
ACML 2019 Hyperbolic Ordinal Embedding Atsushi Suzuki, Jing Wang, Feng Tian, Atsushi Nitanda, Kenji Yamanishi
AISTATS 2019 Stochastic Gradient Descent with Exponential Convergence Rates of Expected Classification Errors Atsushi Nitanda, Taiji Suzuki
ICML 2018 Functional Gradient Boosting Based on Residual Network Perception Atsushi Nitanda, Taiji Suzuki
AISTATS 2018 Gradient Layer: Enhancing the Convergence of Adversarial Training for Generative Models Atsushi Nitanda, Taiji Suzuki
AISTATS 2017 Stochastic Difference of Convex Algorithm and Its Application to Training Deep Boltzmann Machines Atsushi Nitanda, Taiji Suzuki
AISTATS 2016 Accelerated Stochastic Gradient Descent for Minimizing Finite Sums Atsushi Nitanda
NeurIPS 2014 Stochastic Proximal Gradient Descent with Acceleration Techniques Atsushi Nitanda