Sugiyama, Masashi

299 publications

TMLR 2026 Causal Graph Learning via Distributional Invariance of Cause-Effect Relationship Nang Hung Nguyen, Phi Le Nguyen, Thao Nguyen Truong, Trong Nghia Hoang, Masashi Sugiyama
TMLR 2026 Estimating Expected Calibration Error for Positive-Unlabeled Learning Ryuichi Kiryo, Futoshi Futami, Masashi Sugiyama
TMLR 2026 Understanding Guidance Scale in Diffusion Models from a Geometric Perspective Zhiyuan Zhan, Liuzhuozheng Li, Masashi Sugiyama
MLJ 2026 Weakly Supervised Classification with Pre-Trained Models: A Robust Fine-Tuning Approach Ming Li, Wei Wang, Masashi Sugiyama
AAAI 2025 Action-Agnostic Point-Level Supervision for Temporal Action Detection Shuhei M. Yoshida, Takashi Shibata, Makoto Terao, Takayuki Okatani, Masashi Sugiyama
ICML 2025 Adaptive Localization of Knowledge Negation for Continual LLM Unlearning Abudukelimu Wuerkaixi, Qizhou Wang, Sen Cui, Wutong Xu, Bo Han, Gang Niu, Masashi Sugiyama, Changshui Zhang
AISTATS 2025 Domain Adaptation and Entanglement: An Optimal Transport Perspective Okan Koc, Alexander Soen, Chao-Kai Chiang, Masashi Sugiyama
NeurIPS 2025 Generalized Linear Bandits: Almost Optimal Regret with One-Pass Update Yu-Jie Zhang, Sheng-An Xu, Peng Zhao, Masashi Sugiyama
TMLR 2025 Importance Weighting for Aligning Language Models Under Deployment Distribution Shift Thanawat Lodkaew, Tongtong Fang, Takashi Ishida, Masashi Sugiyama
IJCAI 2025 Label Distribution Learning with Biased Annotations Assisted by Multi-Label Learning Zhiqiang Kou, Si Qin, Hailin Wang, Jing Wang, Ming-Kun Xie, Shuo Chen, Yuheng Jia, Tongliang Liu, Masashi Sugiyama, Xin Geng
ICLR 2025 Learning View-Invariant World Models for Visual Robotic Manipulation Jing-Cheng Pang, Nan Tang, Kaiyuan Li, Yuting Tang, Xin-Qiang Cai, Zhen-Yu Zhang, Gang Niu, Masashi Sugiyama, Yang Yu
AISTATS 2025 Multi-Player Approaches for Dueling Bandits Or Raveh, Junya Honda, Masashi Sugiyama
ICML 2025 Non-Stationary Online Learning for Curved Losses: Improved Dynamic Regret via Mixability Yu-Jie Zhang, Peng Zhao, Masashi Sugiyama
ICML 2025 Parallel Simulation for Log-Concave Sampling and Score-Based Diffusion Models Huanjian Zhou, Masashi Sugiyama
ICLR 2025 Realistic Evaluation of Deep Partial-Label Learning Algorithms Wei Wang, Dong-Dong Wu, Jindong Wang, Gang Niu, Min-Ling Zhang, Masashi Sugiyama
TMLR 2025 Reinforcement Learning from Bagged Reward Yuting Tang, Xin-Qiang Cai, Yao-Xiang Ding, Qiyu Wu, Guoqing Liu, Masashi Sugiyama
ICCV 2025 Robust Multi-View Learning via Representation Fusion of Sample-Level Attention and Alignment of Simulated Perturbation Jie Xu, Na Zhao, Gang Niu, Masashi Sugiyama, Xiaofeng Zhu
ICLR 2025 Sharpness-Aware Black-Box Optimization Feiyang Ye, Yueming Lyu, Xuehao Wang, Masashi Sugiyama, Yu Zhang, Ivor Tsang
NeurIPS 2025 The Adaptive Complexity of Minimizing Relative Fisher Information Huanjian Zhou, Masashi Sugiyama
ICLR 2025 Towards Effective Evaluations and Comparisons for LLM Unlearning Methods Qizhou Wang, Bo Han, Puning Yang, Jianing Zhu, Tongliang Liu, Masashi Sugiyama
ICLR 2025 Towards Out-of-Modal Generalization Without Instance-Level Modal Correspondence Zhuo Huang, Gang Niu, Bo Han, Masashi Sugiyama, Tongliang Liu
TMLR 2025 Unified Risk Analysis for Weakly Supervised Learning Chao-Kai Chiang, Masashi Sugiyama
ICLRW 2025 Weak-to-Strong Diffusion with Reflection Lichen Bai, Masashi Sugiyama, Zeke Xie
ICML 2024 A General Framework for Learning from Weak Supervision Hao Chen, Jindong Wang, Lei Feng, Xiang Li, Yidong Wang, Xing Xie, Masashi Sugiyama, Rita Singh, Bhiksha Raj
ICLR 2024 Accurate Forgetting for Heterogeneous Federated Continual Learning Abudukelimu Wuerkaixi, Sen Cui, Jingfeng Zhang, Kunda Yan, Bo Han, Gang Niu, Lei Fang, Changshui Zhang, Masashi Sugiyama
WACV 2024 Appearance-Based Curriculum for Semi-Supervised Learning with Multi-Angle Unlabeled Data Yuki Tanaka, Shuhei M. Yoshida, Takashi Shibata, Makoto Terao, Takayuki Okatani, Masashi Sugiyama
ICML 2024 Balancing Similarity and Complementarity for Federated Learning Kunda Yan, Sen Cui, Abudukelimu Wuerkaixi, Jingfeng Zhang, Bo Han, Gang Niu, Masashi Sugiyama, Changshui Zhang
ICML 2024 Counterfactual Reasoning for Multi-Label Image Classification via Patching-Based Training Ming-Kun Xie, Jia-Hao Xiao, Pei Peng, Gang Niu, Masashi Sugiyama, Sheng-Jun Huang
ECCV 2024 Direct Distillation Between Different Domains Jialiang Tang, Shuo Chen, Gang Niu, Hongyuan Zhu, Joey Tianyi Zhou, Chen Gong, Masashi Sugiyama
ECCV 2024 Dual-Decoupling Learning and Metric-Adaptive Thresholding for Semi-Supervised Multi-Label Learning Jia-Hao Xiao, Ming-Kun Xie, Heng-Bo Fan, Gang Niu, Masashi Sugiyama, Sheng-Jun Huang
ICML 2024 Efficient Non-Stationary Online Learning by Wavelets with Applications to Online Distribution Shift Adaptation Yu-Yang Qian, Peng Zhao, Yu-Jie Zhang, Masashi Sugiyama, Zhi-Hua Zhou
NeurIPS 2024 Enriching Disentanglement: From Logical Definitions to Quantitative Metrics Yivan Zhang, Masashi Sugiyama
AISTATS 2024 Fixed-Budget Real-Valued Combinatorial Pure Exploration of Multi-Armed Bandit Shintaro Nakamura, Masashi Sugiyama
ICML 2024 Generating Chain-of-Thoughts with a Pairwise-Comparison Approach to Searching for the Most Promising Intermediate Thought Zhen-Yu Zhang, Siwei Han, Huaxiu Yao, Gang Niu, Masashi Sugiyama
NeurIPS 2024 Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations Hao Chen, Ankit Shah, Jindong Wang, Ran Tao, Yidong Wang, Xiang Li, Xing Xie, Masashi Sugiyama, Rita Singh, Bhiksha Raj
ICML 2024 Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical Wei Wang, Takashi Ishida, Yu-Jie Zhang, Gang Niu, Masashi Sugiyama
ICML 2024 Locally Estimated Global Perturbations Are Better than Local Perturbations for Federated Sharpness-Aware Minimization Ziqing Fan, Shengchao Hu, Jiangchao Yao, Gang Niu, Ya Zhang, Masashi Sugiyama, Yanfeng Wang
ICMLW 2024 Reinforcement Learning from Bagged Reward Yuting Tang, Xin-Qiang Cai, Yao-Xiang Ding, Qiyu Wu, Guoqing Liu, Masashi Sugiyama
ICLR 2024 Robust Similarity Learning with Difference Alignment Regularization Shuo Chen, Gang Niu, Chen Gong, Okan Koc, Jian Yang, Masashi Sugiyama
NeurIPS 2024 Slight Corruption in Pre-Training Data Makes Better Diffusion Models Hao Chen, Yujin Han, Diganta Misra, Xiang Li, Kai Hu, Difan Zou, Masashi Sugiyama, Jindong Wang, Bhiksha Raj
NeurIPS 2024 Test-Time Adaptation in Non-Stationary Environments via Adaptive Representation Alignment Zhen-Yu Zhang, Zhiyu Xie, Huaxiu Yao, Masashi Sugiyama
AAAI 2024 The Choice of Noninformative Priors for Thompson Sampling in Multiparameter Bandit Models Jongyeong Lee, Chao-Kai Chiang, Masashi Sugiyama
TMLR 2024 The Survival Bandit Problem Charles Riou, Junya Honda, Masashi Sugiyama
AAAI 2024 Thompson Sampling for Real-Valued Combinatorial Pure Exploration of Multi-Armed Bandit Shintaro Nakamura, Masashi Sugiyama
ICLR 2024 Understanding and Mitigating the Label Noise in Pre-Training on Downstream Tasks Hao Chen, Jindong Wang, Ankit Shah, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, Bhiksha Raj
AISTATS 2024 VEC-SBM: Optimal Community Detection with Vectorial Edges Covariates Guillaume Braun, Masashi Sugiyama
NeurIPS 2024 What Makes Partial-Label Learning Algorithms Effective? Jiaqi Lv, Yangfan Liu, Shiyu Xia, Ning Xu, Miao Xu, Gang Niu, Min-Ling Zhang, Masashi Sugiyama, Xin Geng
ICML 2023 A Category-Theoretical Meta-Analysis of Definitions of Disentanglement Yivan Zhang, Masashi Sugiyama
NeurIPS 2023 Adapting to Continuous Covariate Shift via Online Density Ratio Estimation Yu-Jie Zhang, Zhen-Yu Zhang, Peng Zhao, Masashi Sugiyama
NeurIPS 2023 Binary Classification with Confidence Difference Wei Wang, Lei Feng, Yuchen Jiang, Gang Niu, Min-Ling Zhang, Masashi Sugiyama
MLJ 2023 Boundary-Restricted Metric Learning Shuo Chen, Chen Gong, Xiang Li, Jian Yang, Gang Niu, Masashi Sugiyama
NeurIPS 2023 Class-Distribution-Aware Pseudo-Labeling for Semi-Supervised Multi-Label Learning Ming-Kun Xie, Jiahao Xiao, Hao-Zhe Liu, Gang Niu, Masashi Sugiyama, Sheng-Jun Huang
ICCV 2023 Distribution Shift Matters for Knowledge Distillation with Webly Collected Images Jialiang Tang, Shuo Chen, Gang Niu, Masashi Sugiyama, Chen Gong
NeurIPS 2023 Distributional Pareto-Optimal Multi-Objective Reinforcement Learning Xin-Qiang Cai, Pushi Zhang, Li Zhao, Jiang Bian, Masashi Sugiyama, Ashley Llorens
NeurIPS 2023 Diversified Outlier Exposure for Out-of-Distribution Detection via Informative Extrapolation Jianing Zhu, Yu Geng, Jiangchao Yao, Tongliang Liu, Gang Niu, Masashi Sugiyama, Bo Han
ICML 2023 Diversity-Enhancing Generative Network for Few-Shot Hypothesis Adaptation Ruijiang Dong, Feng Liu, Haoang Chi, Tongliang Liu, Mingming Gong, Gang Niu, Masashi Sugiyama, Bo Han
NeurIPS 2023 Efficient Adversarial Contrastive Learning via Robustness-Aware Coreset Selection Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan S Kankanhalli
NeurIPS 2023 Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan S Kankanhalli
ICMLW 2023 Enriching Disentanglement: Definitions to Metrics Yivan Zhang, Masashi Sugiyama
ICML 2023 GAT: Guided Adversarial Training with Pareto-Optimal Auxiliary Tasks Salah Ghamizi, Jingfeng Zhang, Maxime Cordy, Mike Papadakis, Masashi Sugiyama, Yves Le Traon
NeurIPS 2023 Generalizing Importance Weighting to a Universal Solver for Distribution Shift Problems Tongtong Fang, Nan Lu, Gang Niu, Masashi Sugiyama
NeurIPS 2023 Imitation Learning from Vague Feedback Xin-Qiang Cai, Yu-Jie Zhang, Chao-Kai Chiang, Masashi Sugiyama
ICLR 2023 Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification Takashi Ishida, Ikko Yamane, Nontawat Charoenphakdee, Gang Niu, Masashi Sugiyama
ICCV 2023 Multi-Label Knowledge Distillation Penghui Yang, Ming-Kun Xie, Chen-Chen Zong, Lei Feng, Gang Niu, Masashi Sugiyama, Sheng-Jun Huang
NeurIPS 2023 On the Overlooked Pitfalls of Weight Decay and How to Mitigate Them: A Gradient-Norm Perspective Zeke Xie, Zhiqiang Xu, Jingzhao Zhang, Issei Sato, Masashi Sugiyama
NeurIPS 2023 Online (Multinomial) Logistic Bandit: Improved Regret and Constant Computation Cost Yu-Jie Zhang, Masashi Sugiyama
ICML 2023 Optimality of Thompson Sampling with Noninformative Priors for Pareto Bandits Jongyeong Lee, Junya Honda, Chao-Kai Chiang, Masashi Sugiyama
MLJ 2023 Positive-Unlabeled Classification Under Class-Prior Shift: A Prior-Invariant Approach Based on Density Ratio Estimation Shota Nakajima, Masashi Sugiyama
ICLR 2023 Seeing Differently, Acting Similarly: Heterogeneously Observable Imitation Learning Xin-Qiang Cai, Yao-Xiang Ding, Zixuan Chen, Yuan Jiang, Masashi Sugiyama, Zhi-Hua Zhou
ACML 2023 Thompson Exploration with Best Challenger Rule in Best Arm Identification Jongyeong Lee, Junya Honda, Masashi Sugiyama
JMLR 2023 Universal Approximation Property of Invertible Neural Networks Isao Ishikawa, Takeshi Teshima, Koichi Tojo, Kenta Oono, Masahiro Ikeda, Masashi Sugiyama
AISTATS 2022 Pairwise Supervision Can Provably Elicit a Decision Boundary Han Bao, Takuya Shimada, Liyuan Xu, Issei Sato, Masashi Sugiyama
AISTATS 2022 Predictive Variational Bayesian Inference as Risk-Seeking Optimization Futoshi Futami, Tomoharu Iwata, Naonori Ueda, Issei Sato, Masashi Sugiyama
NeurIPS 2022 Adapting to Online Label Shift with Provable Guarantees Yong Bai, Yu-Jie Zhang, Peng Zhao, Masashi Sugiyama, Zhi-Hua Zhou
ICML 2022 Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum Zeke Xie, Xinrui Wang, Huishuai Zhang, Issei Sato, Masashi Sugiyama
ICML 2022 Adversarial Attack and Defense for Non-Parametric Two-Sample Tests Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan Kankanhalli
NeurIPS 2022 Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks Jianan Zhou, Jianing Zhu, Jingfeng Zhang, Tongliang Liu, Gang Niu, Bo Han, Masashi Sugiyama
ICLR 2022 Exploiting Class Activation Value for Partial-Label Learning Fei Zhang, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Tao Qin, Masashi Sugiyama
JMLR 2022 Fast and Robust Rank Aggregation Against Model Misspecification Yuangang Pan, Ivor W. Tsang, Weijie Chen, Gang Niu, Masashi Sugiyama
ICLR 2022 Federated Learning from Only Unlabeled Data with Class-Conditional-Sharing Clients Nan Lu, Zhao Wang, Xiaoxiao Li, Gang Niu, Qi Dou, Masashi Sugiyama
NeurIPS 2022 Generalizing Consistent Multi-Class Classification with Rejection to Be Compatible with Arbitrary Losses Yuzhou Cao, Tianchi Cai, Lei Feng, Lihong Gu, Jinjie Gu, Bo An, Gang Niu, Masashi Sugiyama
CVPR 2022 Instance-Dependent Label-Noise Learning with Manifold-Regularized Transition Matrix Estimation De Cheng, Tongliang Liu, Yixiong Ning, Nannan Wang, Bo Han, Gang Niu, Xinbo Gao, Masashi Sugiyama
NeurIPS 2022 Learning Contrastive Embedding in Low-Dimensional Space Shuo Chen, Chen Gong, Jun Li, Jian Yang, Gang Niu, Masashi Sugiyama
JMLR 2022 Learning from Noisy Pairwise Similarity and Unlabeled Data Songhua Wu, Tongliang Liu, Bo Han, Jun Yu, Gang Niu, Masashi Sugiyama
ICLR 2022 Meta Discovery: Learning to Discover Novel Classes Given Very Limited Data Haoang Chi, Feng Liu, Wenjing Yang, Long Lan, Tongliang Liu, Bo Han, Gang Niu, Mingyuan Zhou, Masashi Sugiyama
ACML 2022 Multi-Class Classification from Multiple Unlabeled Datasets with Partial Risk Regularization Yuting Tang, Nan Lu, Tianyi Zhang, Masashi Sugiyama
TMLR 2022 NoiLin: Improving Adversarial Training and Correcting Stereotype of Noisy Labels Jingfeng Zhang, Xilie Xu, Bo Han, Tongliang Liu, Lizhen Cui, Gang Niu, Masashi Sugiyama
ICLR 2022 Rethinking Class-Prior Estimation for Positive-Unlabeled Learning Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Gang Niu, Masashi Sugiyama, Dacheng Tao
ACML 2022 Robust Computation of Optimal Transport by β-Potential Regularization Shintaro Nakamura, Han Bao, Masashi Sugiyama
ICLR 2022 Sample Selection with Uncertainty of Losses for Learning with Noisy Labels Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama
NeurIPS 2022 Synergy-of-Experts: Collaborate to Improve Adversarial Robustness Sen Cui, Jingfeng Zhang, Jian Liang, Bo Han, Masashi Sugiyama, Changshui Zhang
ICML 2022 To Smooth or Not? When Label Smoothing Meets Noisy Labels Jiaheng Wei, Hangyu Liu, Tongliang Liu, Gang Niu, Masashi Sugiyama, Yang Liu
IJCAI 2022 Towards Adversarially Robust Deep Image Denoising Hanshu Yan, Jingfeng Zhang, Jiashi Feng, Masashi Sugiyama, Vincent Y. F. Tan
AISTATS 2021 A Unified View of Likelihood Ratio and Reparameterization Gradients Paavo Parmas, Masashi Sugiyama
AISTATS 2021 Fenchel-Young Losses with Skewed Entropies for Class-Posterior Probability Estimation Han Bao, Masashi Sugiyama
AISTATS 2021 Robust Imitation Learning from Noisy Demonstrations Voot Tangkaratt, Nontawat Charoenphakdee, Masashi Sugiyama
AISTATS 2021 γ-ABC: Outlier-Robust Approximate Bayesian Computation Based on a Robust Divergence Estimator Masahiro Fujisawa, Takeshi Teshima, Issei Sato, Masashi Sugiyama
ICLR 2021 A Diffusion Theory for Deep Learning Dynamics: Stochastic Gradient Descent Exponentially Favors Flat Minima Zeke Xie, Issei Sato, Masashi Sugiyama
ICML 2021 Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification Nan Lu, Shida Lei, Gang Niu, Issei Sato, Masashi Sugiyama
ICML 2021 CIFS: Improving Adversarial Robustness of CNNs via Channel-Wise Importance-Based Feature Selection Hanshu Yan, Jingfeng Zhang, Gang Niu, Jiashi Feng, Vincent Tan, Masashi Sugiyama
ICML 2021 Classification with Rejection Based on Cost-Sensitive Classification Nontawat Charoenphakdee, Zhenghang Cui, Yivan Zhang, Masashi Sugiyama
ICML 2021 Confidence Scores Make Instance-Dependent Label-Noise Learning Possible Antonin Berthon, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama
ICLR 2021 Geometry-Aware Instance-Reweighted Adversarial Training Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, Mohan Kankanhalli
UAI 2021 Incorporating Causal Graphical Prior Knowledge into Predictive Modeling via Simple Data Augmentation Takeshi Teshima, Masashi Sugiyama
ICML 2021 Large-Margin Contrastive Learning with Distance Polarization Regularizer Shuo Chen, Gang Niu, Chen Gong, Jun Li, Jian Yang, Masashi Sugiyama
ICML 2021 Learning Diverse-Structured Networks for Adversarial Robustness Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu, Junzhou Huang, Masashi Sugiyama
ICML 2021 Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization Yivan Zhang, Gang Niu, Masashi Sugiyama
ECML-PKDD 2021 Learning from Noisy Similar and Dissimilar Data Soham Dan, Han Bao, Masashi Sugiyama
ICML 2021 Learning from Similarity-Confidence Data Yuzhou Cao, Lei Feng, Yitian Xu, Bo An, Gang Niu, Masashi Sugiyama
NeurIPS 2021 Loss Function Based Second-Order Jensen Inequality and Its Application to Particle Variational Inference Futoshi Futami, Tomoharu Iwata, Naonori Ueda, Issei Sato, Masashi Sugiyama
ICML 2021 Lower-Bounded Proper Losses for Weakly Supervised Classification Shuhei M Yoshida, Takashi Takenouchi, Masashi Sugiyama
ICML 2021 Maximum Mean Discrepancy Test Is Aware of Adversarial Attacks Ruize Gao, Feng Liu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Masashi Sugiyama
ICML 2021 Mediated Uncoupled Learning: Learning Functions Without Direct Input-Output Correspondences Ikko Yamane, Junya Honda, Florian Yger, Masashi Sugiyama
CVPR 2021 On Focal Loss for Class-Posterior Probability Estimation: A Theoretical Perspective Nontawat Charoenphakdee, Jayakorn Vongkulbhisal, Nuttapong Chairatanakul, Masashi Sugiyama
NeurIPSW 2021 On the Role of Pre-Training for Meta Few-Shot Learning Chia-You Chen, Hsuan-Tien Lin, Masashi Sugiyama, Gang Niu
ICML 2021 Pointwise Binary Classification with Pairwise Confidence Comparisons Lei Feng, Senlin Shu, Nan Lu, Bo Han, Miao Xu, Gang Niu, Bo An, Masashi Sugiyama
ICML 2021 Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization Zeke Xie, Li Yuan, Zhanxing Zhu, Masashi Sugiyama
NeurIPS 2021 Probabilistic Margins for Instance Reweighting in Adversarial Training Qizhou Wang, Feng Liu, Bo Han, Tongliang Liu, Chen Gong, Gang Niu, Mingyuan Zhou, Masashi Sugiyama
ICML 2021 Provably End-to-End Label-Noise Learning Without Anchor Points Xuefeng Li, Tongliang Liu, Bo Han, Gang Niu, Masashi Sugiyama
ACML 2020 A One-Step Approach to Covariate Shift Adaptation Tianyi Zhang, Ikko Yamane, Nan Lu, Masashi Sugiyama
ICML 2020 Accelerating the Diffusion-Based Ensemble Sampling by Non-Reversible Dynamics Futoshi Futami, Issei Sato, Masashi Sugiyama
MLJ 2020 Active Deep Q-Learning with Demonstration Si-An Chen, Voot Tangkaratt, Hsuan-Tien Lin, Masashi Sugiyama
NeurIPS 2020 Analysis and Design of Thompson Sampling for Stochastic Partial Monitoring Taira Tsuchiya, Junya Honda, Masashi Sugiyama
ICML 2020 Attacks Which Do Not Kill Training Make Adversarial Learning Stronger Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, Mohan Kankanhalli
IJCAI 2020 Binary Classification from Positive Data with Skewed Confidence Kazuhiko Shinoda, Hirotaka Kaji, Masashi Sugiyama
MLJ 2020 Binary Classification with Ambiguous Training Data Naoya Otani, Yosuke Otsubo, Tetsuya Koike, Masashi Sugiyama
COLT 2020 Calibrated Surrogate Losses for Adversarially Robust Classification Han Bao, Clay Scott, Masashi Sugiyama
AISTATS 2020 Calibrated Surrogate Maximization of Linear-Fractional Utility in Binary Classification Han Bao, Masashi Sugiyama
NeurIPS 2020 Coupling-Based Invertible Neural Networks Are Universal Diffeomorphism Approximators Takeshi Teshima, Isao Ishikawa, Koichi Tojo, Kenta Oono, Masahiro Ikeda, Masashi Sugiyama
ICML 2020 Do We Need Zero Training Loss After Achieving Zero Training Error? Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, Masashi Sugiyama
NeurIPS 2020 Dual T: Reducing Estimation Error for Transition Matrix in Label-Noise Learning Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Jiankang Deng, Gang Niu, Masashi Sugiyama
ICML 2020 Few-Shot Domain Adaptation by Causal Mechanism Transfer Takeshi Teshima, Issei Sato, Masashi Sugiyama
ICMLW 2020 Generalisation Guarantees for Continual Learning with Orthogonal Gradient Descent Mehdi Abbana Bennani, Masashi Sugiyama
NeurIPS 2020 Learning from Aggregate Observations Yivan Zhang, Nontawat Charoenphakdee, Zhenguo Wu, Masashi Sugiyama
ICML 2020 Learning with Multiple Complementary Labels Lei Feng, Takuo Kaneko, Bo Han, Gang Niu, Bo An, Masashi Sugiyama
AISTATS 2020 Mitigating Overfitting in Supervised Classification from Two Unlabeled Datasets: A Consistent Risk Correction Approach Nan Lu, Tianyi Zhang, Gang Niu, Masashi Sugiyama
ICML 2020 Normalized Flat Minima: Exploring Scale Invariant Definition of Flat Minima for Neural Networks Using PAC-Bayesian Analysis Yusuke Tsuzuku, Issei Sato, Masashi Sugiyama
ICML 2020 Online Dense Subgraph Discovery via Blurred-Graph Feedback Yuko Kuroki, Atsushi Miyauchi, Junya Honda, Masashi Sugiyama
NeurIPS 2020 Part-Dependent Label Noise: Towards Instance-Dependent Label Noise Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Mingming Gong, Haifeng Liu, Gang Niu, Dacheng Tao, Masashi Sugiyama
WACV 2020 Partially Zero-Shot Domain Adaptation from Incomplete Target Data with Missing Classes Masato Ishii, Takashi Takenouchi, Masashi Sugiyama
MLJ 2020 Principled Analytic Classifier for Positive-Unlabeled Learning via Weighted Integral Probability Metric Yongchan Kwon, Wonyoung Kim, Masashi Sugiyama, Myunghee Cho Paik
ICML 2020 Progressive Identification of True Labels for Partial-Label Learning Jiaqi Lv, Miao Xu, Lei Feng, Gang Niu, Xin Geng, Masashi Sugiyama
NeurIPS 2020 Provably Consistent Partial-Label Learning Lei Feng, Jiaqi Lv, Bo Han, Miao Xu, Gang Niu, Xin Geng, Bo An, Masashi Sugiyama
NeurIPS 2020 Rethinking Importance Weighting for Deep Learning Under Distribution Shift Tongtong Fang, Nan Lu, Gang Niu, Masashi Sugiyama
ICML 2020 SIGUA: Forgetting May Make Learning with Noisy Labels More Robust Bo Han, Gang Niu, Xingrui Yu, Quanming Yao, Miao Xu, Ivor Tsang, Masashi Sugiyama
ICML 2020 Unbiased Risk Estimators Can Mislead: A Case Study of Learning with Complementary Labels Yu-Ting Chou, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama
ICML 2020 Variational Imitation Learning with Diverse-Quality Demonstrations Voot Tangkaratt, Bo Han, Mohammad Emtiyaz Khan, Masashi Sugiyama
ICLRW 2019 A Pseudo-Label Method for Coarse-to-Fine Multi-Label Learning with Limited Supervision Cheng-Yu Hsieh, Miao Xu, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama
NeurIPS 2019 Are Anchor Points Really Indispensable in Label-Noise Learning? Xiaobo Xia, Tongliang Liu, Nannan Wang, Bo Han, Chen Gong, Gang Niu, Masashi Sugiyama
AAAI 2019 Bayesian Posterior Approximation via Greedy Particle Optimization Futoshi Futami, Zhenghang Cui, Issei Sato, Masashi Sugiyama
AAAI 2019 Bézier Simplex Fitting: Describing Pareto Fronts of Simplicial Problems with Small Samples in Multi-Objective Optimization Ken Kobayashi, Naoki Hamada, Akiyoshi Sannai, Akinori Tanaka, Kenichi Bannai, Masashi Sugiyama
ICML 2019 Classification from Positive, Unlabeled and Biased Negative Data Yu-Guan Hsieh, Gang Niu, Masashi Sugiyama
AAAI 2019 Clipped Matrix Completion: A Remedy for Ceiling Effects Takeshi Teshima, Miao Xu, Issei Sato, Masashi Sugiyama
ICML 2019 Complementary-Label Learning for Arbitrary Losses and Models Takashi Ishida, Gang Niu, Aditya Menon, Masashi Sugiyama
AAAI 2019 Dueling Bandits with Qualitative Feedback Liyuan Xu, Junya Honda, Masashi Sugiyama
MLJ 2019 Foreword: Special Issue for the Journal Track of the 10th Asian Conference on Machine Learning (ACML 2018) Masashi Sugiyama, Yung-Kyun Noh
MLJ 2019 Good Arm Identification via Bandit Feedback Hideaki Kano, Junya Honda, Kentaro Sakamaki, Kentaro Matsuura, Atsuyoshi Nakamura, Masashi Sugiyama
ICLR 2019 Hierarchical Reinforcement Learning via Advantage-Weighted Information Maximization Takayuki Osa, Voot Tangkaratt, Masashi Sugiyama
ICML 2019 How Does Disagreement Help Generalization Against Label Corruption? Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor Tsang, Masashi Sugiyama
ICML 2019 Imitation Learning from Imperfect Demonstration Yueh-Hua Wu, Nontawat Charoenphakdee, Han Bao, Voot Tangkaratt, Masashi Sugiyama
MLJ 2019 Millionaire: A Hint-Guided Approach for Crowdsourcing Bo Han, Quanming Yao, Yuangang Pan, Ivor W. Tsang, Xiaokui Xiao, Qiang Yang, Masashi Sugiyama
ICML 2019 On Symmetric Losses for Learning from Corrupted Labels Nontawat Charoenphakdee, Jongyeong Lee, Masashi Sugiyama
NeurIPS 2019 On the Calibration of Multiclass Classification with Rejection Chenri Ni, Nontawat Charoenphakdee, Junya Honda, Masashi Sugiyama
ICLR 2019 On the Minimal Supervision for Training Any Binary Classifier from Only Unlabeled Data Nan Lu, Gang Niu, Aditya Krishna Menon, Masashi Sugiyama
NeurIPS 2019 Uncoupled Regression from Pairwise Comparison Data Liyuan Xu, Junya Honda, Gang Niu, Masashi Sugiyama
AAAI 2019 Unsupervised Domain Adaptation Based on Source-Guided Discrepancy Seiichi Kuroki, Nontawat Charoenphakdee, Han Bao, Junya Honda, Issei Sato, Masashi Sugiyama
ACML 2019 Zero-Shot Domain Adaptation Based on Attribute Information Masato Ishii, Takashi Takenouchi, Masashi Sugiyama
AISTATS 2018 A Fully Adaptive Algorithm for Pure Exploration in Linear Bandits Liyuan Xu, Junya Honda, Masashi Sugiyama
ICML 2018 Analysis of Minimax Error Rate for Crowdsourcing and Its Application to Worker Clustering Model Hideaki Imamura, Issei Sato, Masashi Sugiyama
AISTATS 2018 Bayesian Nonparametric Poisson-Process Allocation for Time-Sequence Modeling Hongyi Ding, Mohammad Emtiyaz Khan, Issei Sato, Masashi Sugiyama
NeurIPS 2018 Binary Classification from Positive-Confidence Data Takashi Ishida, Gang Niu, Masashi Sugiyama
ICML 2018 Classification from Pairwise Similarity and Unlabeled Data Han Bao, Gang Niu, Masashi Sugiyama
NeurIPS 2018 Co-Teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, Masashi Sugiyama
NeurIPS 2018 Continuous-Time Value Function Approximation in Reproducing Kernel Hilbert Spaces Motoya Ohnishi, Masahiro Yukawa, Mikael Johansson, Masashi Sugiyama
MLJ 2018 Correction to: Semi-Supervised AUC Optimization Based on Positive-Unlabeled Learning Tomoya Sakai, Gang Niu, Masashi Sugiyama
ICML 2018 Does Distributionally Robust Supervised Learning Give Robust Classifiers? Weihua Hu, Gang Niu, Issei Sato, Masashi Sugiyama
ICLR 2018 Guide Actor-Critic for Continuous Control Voot Tangkaratt, Abbas Abdolmaleki, Masashi Sugiyama
AAAI 2018 Hierarchical Policy Search via Return-Weighted Density Estimation Takayuki Osa, Masashi Sugiyama
NeurIPS 2018 Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks Yusuke Tsuzuku, Issei Sato, Masashi Sugiyama
NeurIPS 2018 Masking: A New Perspective of Noisy Supervision Bo Han, Jiangchao Yao, Gang Niu, Mingyuan Zhou, Ivor Tsang, Ya Zhang, Masashi Sugiyama
MLJ 2018 Semi-Supervised AUC Optimization Based on Positive-Unlabeled Learning Tomoya Sakai, Gang Niu, Masashi Sugiyama
NeurIPS 2018 Uplift Modeling from Separate Labels Ikko Yamane, Florian Yger, Jamal Atif, Masashi Sugiyama
AISTATS 2018 Variational Inference Based on Robust Divergences Futoshi Futami, Issei Sato, Masashi Sugiyama
UAI 2018 Variational Inference for Gaussian Processes with Panel Count Data Hongyi Ding, Young Lee, Issei Sato, Masashi Sugiyama
MLJ 2017 Class-Prior Estimation for Learning from Positive and Unlabeled Data Marthinus Christoffel du Plessis, Gang Niu, Masashi Sugiyama
AISTATS 2017 Estimating Density Ridges by Direct Estimation of Density-Derivative-Ratios Hiroaki Sasaki, Takafumi Kanamori, Masashi Sugiyama
NeurIPS 2017 Expectation Propagation for t-Exponential Family Using q-Algebra Futoshi Futami, Issei Sato, Masashi Sugiyama
MLJ 2017 Foreword: Special Issue for the Journal Track of the 8th Asian Conference on Machine Learning (ACML 2016) Robert J. Durrant, Kee-Eung Kim, Geoffrey Holmes, Stephen Marsland, Masashi Sugiyama, Zhi-Hua Zhou
NeurIPS 2017 Generative Local Metric Learning for Kernel Regression Yung-Kyun Noh, Masashi Sugiyama, Kee-Eung Kim, Frank Park, Daniel D Lee
MLJ 2017 Geometry-Aware Principal Component Analysis for Symmetric Positive Definite Matrices Inbal Horev, Florian Yger, Masashi Sugiyama
MLJ 2017 Homotopy Continuation Approaches for Robust SV Classification and Regression Shinya Suzumura, Kohei Ogawa, Masashi Sugiyama, Masayuki Karasuyama, Ichiro Takeuchi
MLJ 2017 Introduction: Special Issue of Selected Papers from ACML 2015 Geoffrey Holmes, Tie-Yan Liu, Hang Li, Irwin King, Masashi Sugiyama, Zhi-Hua Zhou
ICML 2017 Learning Discrete Representations via Information Maximizing Self-Augmented Training Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama
NeurIPS 2017 Learning from Complementary Labels Takashi Ishida, Gang Niu, Weihua Hu, Masashi Sugiyama
AISTATS 2017 Least-Squares Log-Density Gradient Clustering for Riemannian Manifolds Mina Ashizawa, Hiroaki Sasaki, Tomoya Sakai, Masashi Sugiyama
AAAI 2017 Policy Search with High-Dimensional Context Variables Voot Tangkaratt, Herke van Hoof, Simone Parisi, Gerhard Neumann, Jan Peters, Masashi Sugiyama
NeurIPS 2017 Positive-Unlabeled Learning with Non-Negative Risk Estimator Ryuichi Kiryo, Gang Niu, Marthinus C du Plessis, Masashi Sugiyama
ICML 2017 Semi-Supervised Classification Based on Classification from Positive and Unlabeled Data Tomoya Sakai, Marthinus Christoffel du Plessis, Gang Niu, Masashi Sugiyama
FnTML 2017 Tensor Networks for Dimensionality Reduction and Large-Scale Optimization: Part 2 Applications and Future Perspectives Andrzej Cichocki, Anh Huy Phan, Qibin Zhao, Namgil Lee, Ivan V. Oseledets, Masashi Sugiyama, Danilo P. Mandic
ACML 2017 Whitening-Free Least-Squares Non-Gaussian Component Analysis Hiroaki Shiino, Hiroaki Sasaki, Gang Niu, Masashi Sugiyama
UAI 2016 Faster Stochastic Variational Inference Using Proximal-Gradient Methods with General Divergence Functions Mohammad Emtiyaz Khan, Reza Babanezhad, Wu Lin, Mark Schmidt, Masashi Sugiyama
ACML 2016 Geometry-Aware Stationary Subspace Analysis Inbal Horev, Florian Yger, Masashi Sugiyama
ACML 2016 Multitask Principal Component Analysis Ikko Yamane, Florian Yger, Maxime Berar, Masashi Sugiyama
AISTATS 2016 Non-Gaussian Component Analysis with Log-Density Gradient Estimation Hiroaki Sasaki, Gang Niu, Masashi Sugiyama
ICML 2016 Structure Learning of Partitioned Markov Networks Song Liu, Taiji Suzuki, Masashi Sugiyama, Kenji Fukumizu
NeurIPS 2016 Theoretical Comparisons of Positive-Unlabeled Learning Against Positive-Negative Learning Gang Niu, Marthinus Christoffel du Plessis, Tomoya Sakai, Yao Ma, Masashi Sugiyama
ACML 2015 Class-Prior Estimation for Learning from Positive and Unlabeled Data Marthinus Christoffel du Plessis, Gang Niu, Masashi Sugiyama
JMLR 2015 Condition for Perfect Dimensionality Recovery by Variational Bayesian PCA Shinichi Nakajima, Ryota Tomioka, Masashi Sugiyama, S. Derin Babacan
ACML 2015 Continuous Target Shift Adaptation in Supervised Learning Tuan Duong Nguyen, Marthinus Christoffel du Plessis, Masashi Sugiyama
ICML 2015 Convex Formulation for Learning from Positive and Unlabeled Data Marthinus Du Plessis, Gang Niu, Masashi Sugiyama
MLJ 2015 Direct Conditional Probability Density Estimation with Sparse Feature Selection Motoki Shiga, Voot Tangkaratt, Masashi Sugiyama
AISTATS 2015 Direct Density-Derivative Estimation and Its Application in KL-Divergence Approximation Hiroaki Sasaki, Yung-Kyun Noh, Masashi Sugiyama
ACML 2015 Geometry-Aware Principal Component Analysis for Symmetric Positive Definite Matrices Inbal Horev, Florian Yger, Masashi Sugiyama
MLJ 2015 Introduction: Special Issue of Selected Papers of ACML 2013 Cheng Soon Ong, Wray L. Buntine, Tu Bao Ho, Masashi Sugiyama, Geoffrey I. Webb
ACML 2015 Regularized Policy Gradients: Direct Variance Reduction in Policy Gradient Estimation Tingting Zhao, Gang Niu, Ning Xie, Jucheng Yang, Masashi Sugiyama
IJCAI 2015 Stroke-Based Stylization Learning and Rendering with Inverse Reinforcement Learning Ning Xie, Tingting Zhao, Feng Tian, Xiaohua Zhang, Masashi Sugiyama
ACML 2015 Sufficient Dimension Reduction via Direct Estimation of the Gradients of Logarithmic Conditional Densities Hiroaki Sasaki, Voot Tangkaratt, Masashi Sugiyama
AAAI 2015 Support Consistency of Direct Sparse-Change Learning in Markov Networks Song Liu, Taiji Suzuki, Masashi Sugiyama
ECML-PKDD 2014 An Online Policy Gradient Algorithm for Markov Decision Processes with Continuous States and Actions Yao Ma, Tingting Zhao, Kohei Hatano, Masashi Sugiyama
AISTATS 2014 Analysis of Empirical MAP and Empirical Partially Bayes: Can They Be Alternatives to Variational Bayes? Shinichi Nakajima, Masashi Sugiyama
NeurIPS 2014 Analysis of Learning from Positive and Unlabeled Data Marthinus C du Plessis, Gang Niu, Masashi Sugiyama
NeurIPS 2014 Analysis of Variational Bayesian Latent Dirichlet Allocation: Weaker Sparsity than MAP Shinichi Nakajima, Issei Sato, Masashi Sugiyama, Kazuho Watanabe, Hiroko Kobayashi
AISTATS 2014 Bias Reduction and Metric Learning for Nearest-Neighbor Estimation of Kullback-Leibler Divergence Yung-Kyun Noh, Masashi Sugiyama, Song Liu, Marthinus Christoffel du Plessis, Frank Chongwoo Park, Daniel D. Lee
ECML-PKDD 2014 Clustering via Mode Seeking by Direct Estimation of the Gradient of a Log-Density Hiroaki Sasaki, Aapo Hyvärinen, Masashi Sugiyama
MLJ 2014 Least-Squares Independence Regression for Non-Linear Causal Inference Under Non-Gaussian Noise Makoto Yamada, Masashi Sugiyama, Jun Sese
NeurIPS 2014 Multitask Learning Meets Tensor Factorization: Task Imputation via Convex Optimization Kishan Wimalawarne, Masashi Sugiyama, Ryota Tomioka
ICML 2014 Outlier Path: A Homotopy Algorithm for Robust SVM Shinya Suzumura, Kohei Ogawa, Masashi Sugiyama, Ichiro Takeuchi
ICML 2014 Transductive Learning with Multi-Class Volume Approximation Gang Niu, Bo Dai, Marthinus Christoffel du Plessis, Masashi Sugiyama
MLJ 2013 Computational Complexity of Kernel-Based Density-Ratio Estimation: A Condition Number Analysis Takafumi Kanamori, Taiji Suzuki, Masashi Sugiyama
ECML-PKDD 2013 Direct Learning of Sparse Changes in Markov Networks by Density Ratio Estimation Song Liu, John A. Quinn, Michael U. Gutmann, Masashi Sugiyama
JMLR 2013 Global Analytic Solution of Fully-Observed Variational Bayesian Matrix Factorization Shinichi Nakajima, Masashi Sugiyama, S. Derin Babacan, Ryota Tomioka
NeurIPS 2013 Global Solver and Its Efficient Approximation for Variational Bayesian Low-Rank Subspace Clustering Shinichi Nakajima, Akiko Takeda, S. Derin Babacan, Masashi Sugiyama, Ichiro Takeuchi
ICML 2013 Infinitesimal Annealing for Training Semi-Supervised Support Vector Machines Kohei Ogawa, Motoki Imamura, Ichiro Takeuchi, Masashi Sugiyama
JMLR 2013 Maximum Volume Clustering: A New Discriminative Clustering Approach Gang Niu, Bo Dai, Lin Shang, Masashi Sugiyama
NeurIPS 2013 Parametric Task Learning Ichiro Takeuchi, Tatsuya Hongo, Masashi Sugiyama, Shinichi Nakajima
ICML 2013 Squared-Loss Mutual Information Regularization: A Novel Information-Theoretic Approach to Semi-Supervised Learning Gang Niu, Wittawat Jitkrittum, Bo Dai, Hirotaka Hachiya, Masashi Sugiyama
MLJ 2013 Variational Bayesian Sparse Additive Matrix Factorization Shinichi Nakajima, Masashi Sugiyama, S. Derin Babacan
ICML 2012 Artist Agent: A Reinforcement Learning Approach to Automatic Stroke Generation in Oriental Ink Painting Ning Xie, Hirotaka Hachiya, Masashi Sugiyama
NeurIPS 2012 Density-Difference Estimation Masashi Sugiyama, Takafumi Kanamori, Taiji Suzuki, Marthinus Christoffel du Plessis, Song Liu, Ichiro Takeuchi
AISTATS 2012 Fast Learning Rate of Multiple Kernel Learning: Trade-Off Between Sparsity and Smoothness Taiji Suzuki, Masashi Sugiyama
ICML 2012 Information-Theoretic Semi-Supervised Metric Learning via Entropy Regularization Gang Niu, Bo Dai, Makoto Yamada, Masashi Sugiyama
MLJ 2012 Multi-Parametric Solution-Path Algorithm for Instance-Weighted Support Vector Machines Masayuki Karasuyama, Naoyuki Harada, Masashi Sugiyama, Ichiro Takeuchi
NeurIPS 2012 Perfect Dimensionality Recovery by Variational Bayesian PCA Shinichi Nakajima, Ryota Tomioka, Masashi Sugiyama, S. D. Babacan
ICML 2012 Semi-Supervised Learning of Class Balance Under Class-Prior Change by Distribution Matching Marthinus Christoffel du Plessis, Masashi Sugiyama
ACML 2012 Sparse Additive Matrix Factorization for Robust PCA and Its Generalization Shinichi Nakajima, Masashi Sugiyama, S. Derin Babacan
MLJ 2012 Statistical Analysis of Kernel-Based Least-Squares Density-Ratio Estimation Takafumi Kanamori, Taiji Suzuki, Masashi Sugiyama
JMLR 2011 A Refined Margin Analysis for Boosting Algorithms via Equilibrium Margin Liwei Wang, Masashi Sugiyama, Zhaoxiang Jing, Cheng Yang, Zhi-Hua Zhou, Jufu Feng
NeurIPS 2011 Analysis and Improvement of Policy Gradient Estimation Tingting Zhao, Hirotaka Hachiya, Gang Niu, Masashi Sugiyama
ACML 2011 Computationally Efficient Sufficient Dimension Reduction via Squared-Loss Mutual Information Makoto Yamada, Gang Niu, Jun Takagi, Masashi Sugiyama
AISTATS 2011 Cross-Domain Object Matching with Model Selection Makoto Yamada, Masashi Sugiyama
AAAI 2011 Direct Density-Ratio Estimation with Dimensionality Reduction via Hetero-Distributional Subspace Analysis Makoto Yamada, Masashi Sugiyama
NeurIPS 2011 Global Solution of Fully-Observed Variational Bayesian Matrix Factorization Is Column-Wise Independent Shinichi Nakajima, Masashi Sugiyama, S. D. Babacan
AISTATS 2011 Maximum Volume Clustering Gang Niu, Bo Dai, Lin Shang, Masashi Sugiyama
ICML 2011 On Bayesian PCA: Automatic Dimensionality Selection and Analytic Solution Shinichi Nakajima, Masashi Sugiyama, S. Derin Babacan
ICML 2011 On Information-Maximization Clustering: Tuning Parameter Selection and Analytic Solution Masashi Sugiyama, Makoto Yamada, Manabu Kimura, Hirotaka Hachiya
NeurIPS 2011 Relative Density-Ratio Estimation for Robust Distribution Comparison Makoto Yamada, Taiji Suzuki, Takafumi Kanamori, Hirotaka Hachiya, Masashi Sugiyama
JMLR 2011 Super-Linear Convergence of Dual Augmented Lagrangian Algorithm for Sparsity Regularized Estimation Ryota Tomioka, Taiji Suzuki, Masashi Sugiyama
NeurIPS 2011 Target Neighbor Consistent Feature Weighting for Nearest Neighbor Classification Ichiro Takeuchi, Masashi Sugiyama
JMLR 2011 Theoretical Analysis of Bayesian Matrix Factorization Shinichi Nakajima, Masashi Sugiyama
AAAI 2011 Trajectory Regression on Road Networks Tsuyoshi Idé, Masashi Sugiyama
ICML 2010 A Fast Augmented Lagrangian Algorithm for Learning Low-Rank Matrices Ryota Tomioka, Taiji Suzuki, Masashi Sugiyama, Hisashi Kashima
AISTATS 2010 Conditional Density Estimation via Least-Squares Density Ratio Estimation Masashi Sugiyama, Ichiro Takeuchi, Taiji Suzuki, Takafumi Kanamori, Hirotaka Hachiya, Daisuke Okanohara
AAAI 2010 Dependence Minimizing Regression with Model Selection for Non-Linear Causal Inference Under Non-Gaussian Noise Makoto Yamada, Masashi Sugiyama
ECML-PKDD 2010 Feature Selection for Reinforcement Learning: Evaluating Implicit State-Reward Dependency via Conditional Mutual Information Hirotaka Hachiya, Masashi Sugiyama
NeurIPS 2010 Global Analytic Solution for Variational Bayesian Matrix Factorization Shinichi Nakajima, Masashi Sugiyama, Ryota Tomioka
ICML 2010 Implicit Regularization in Variational Bayesian Matrix Factorization Shinichi Nakajima, Masashi Sugiyama
ICML 2010 Nonparametric Return Distribution Approximation for Reinforcement Learning Tetsuro Morimura, Masashi Sugiyama, Hisashi Kashima, Hirotaka Hachiya, Toshiyuki Tanaka
UAI 2010 Parametric Return Density Estimation for Reinforcement Learning Tetsuro Morimura, Masashi Sugiyama, Hisashi Kashima, Hirotaka Hachiya, Toshiyuki Tanaka
ACML 2010 Preface Masashi Sugiyama, Qiang Yang
MLJ 2010 Semi-Supervised Local Fisher Discriminant Analysis for Dimensionality Reduction Masashi Sugiyama, Tsuyoshi Idé, Shinichi Nakajima, Jun Sese
ACML 2010 Single Versus Multiple Sorting in All Pairs Similarity Search Yasuo Tabei, Takeaki Uno, Masashi Sugiyama, Koji Tsuda
AISTATS 2010 Sufficient Dimension Reduction via Squared-Loss Mutual Information Estimation Taiji Suzuki, Masashi Sugiyama
JMLR 2009 A Least-Squares Approach to Direct Importance Estimation Takafumi Kanamori, Shohei Hido, Masashi Sugiyama
IJCAI 2009 Active Policy Iteration: Efficient Exploration Through Active Learning for Value Function Approximation in Reinforcement Learning Takayuki Akiyama, Hirotaka Hachiya, Masashi Sugiyama
ECML-PKDD 2009 Efficient Sample Reuse in EM-Based Policy Search Hirotaka Hachiya, Jan Peters, Masashi Sugiyama
AISTATS 2009 Lanczos Approximations for the Speedup of Kernel Partial Least Squares Regression Nicole Krämer, Masashi Sugiyama, Mikio Braun
MLJ 2009 Pool-Based Active Learning in Approximate Linear Regression Masashi Sugiyama, Shinichi Nakajima
AAAI 2008 Adaptive Importance Sampling with Automatic Model Selection in Value Function Approximation Hirotaka Hachiya, Takayuki Akiyama, Masashi Sugiyama, Jan Peters
NeurIPS 2008 Efficient Direct Density Ratio Estimation for Non-Stationarity Adaptation and Outlier Detection Takafumi Kanamori, Shohei Hido, Masashi Sugiyama
ICML 2008 ν-Support Vector Machine as Conditional Value-at-Risk Minimization Akiko Takeda, Masashi Sugiyama
COLT 2008 On the Margin Explanation of Boosting Algorithms Liwei Wang, Masashi Sugiyama, Cheng Yang, Zhi-Hua Zhou, Jufu Feng
ECML-PKDD 2008 Pool-Based Agnostic Experiment Design in Linear Regression Masashi Sugiyama, Shinichi Nakajima
ICML 2007 Asymptotic Bayesian Generalization Error When Training and Test Distributions Are Different Keisuke Yamazaki, Motoaki Kawanabe, Sumio Watanabe, Masashi Sugiyama, Klaus-Robert Müller
JMLR 2007 Covariate Shift Adaptation by Importance Weighted Cross Validation Masashi Sugiyama, Matthias Krauledat, Klaus-Robert Müller
JMLR 2007 Dimensionality Reduction of Multimodal Labeled Data by Local Fisher Discriminant Analysis Masashi Sugiyama
NeurIPS 2007 Direct Importance Estimation with Model Selection and Its Application to Covariate Shift Adaptation Masashi Sugiyama, Shinichi Nakajima, Hisashi Kashima, Paul V. Buenau, Motoaki Kawanabe
NeurIPS 2007 Multi-Task Learning via Conic Programming Tsuyoshi Kato, Hisashi Kashima, Masashi Sugiyama, Kiyoshi Asai
JMLR 2006 Active Learning in Approximately Linear Regression Based on Conditional Expectation of Generalization Error Masashi Sugiyama
JMLR 2006 In Search of Non-Gaussian Components of a High-Dimensional Distribution Gilles Blanchard, Motoaki Kawanabe, Masashi Sugiyama, Vladimir Spokoiny, Klaus-Robert Müller
ICML 2006 Local Fisher Discriminant Analysis for Supervised Dimensionality Reduction Masashi Sugiyama
NeurIPS 2006 Mixture Regression for Covariate Shift Masashi Sugiyama, Amos J. Storkey
NeurIPS 2005 Active Learning for Misspecified Models Masashi Sugiyama
NeurIPS 2005 Non-Gaussian Component Analysis: A Semi-Parametric Framework for Linear Dimension Reduction Gilles Blanchard, Masashi Sugiyama, Motoaki Kawanabe, Vladimir Spokoiny, Klaus-Robert Müller
NeCo 2004 Trading Variance Reduction with Unbiasedness: The Regularized Subspace Information Criterion for Robust Model Selection in Kernel Regression Masashi Sugiyama, Motoaki Kawanabe, Klaus-Robert Müller
JMLR 2002 The Subspace Information Criterion for Infinite Dimensional Hypothesis Spaces (Kernel Machines Section) Masashi Sugiyama, Klaus-Robert Müller
MLJ 2002 Theoretical and Experimental Evaluation of the Subspace Information Criterion Masashi Sugiyama, Hidemitsu Ogawa
NeCo 2001 Incremental Active Learning for Optimal Generalization Masashi Sugiyama, Hidemitsu Ogawa
NeCo 2001 Subspace Information Criterion for Model Selection Masashi Sugiyama, Hidemitsu Ogawa
NeurIPS 1999 Training Data Selection for Optimal Generalization in Trigonometric Polynomial Networks Masashi Sugiyama, Hidemitsu Ogawa