Towards Improved Risk Bounds for Transductive Learning
Abstract
Transductive learning is a popular setting in statistical learning theory that reasons from observed, specific training cases to specific test cases; it has been widely used in many fields such as graph neural networks and semi-supervised learning. Existing results provide fast rates of convergence based on traditional localization techniques, which require the surrogate function that upper bounds the uniform error within a localized region to be ``sub-root''. We derive a new concentration inequality for empirical processes in transductive learning and apply the generic chaining technique to relax these assumptions and obtain tighter results for empirical risk minimization. Furthermore, we study the generalization of moment penalization algorithms. We design a novel estimator based on second-moment (variance) penalization and derive its learning rates, which is the first theoretical generalization analysis of variance-based algorithms.
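For context, a surrogate function psi is conventionally called ``sub-root'' if it is nonnegative, nondecreasing, and psi(r)/sqrt(r) is nonincreasing in r; this is the standard condition from local Rademacher complexity analysis that the paper relaxes. The variance-penalized estimator itself is not spelled out in the abstract; the following is a minimal LaTeX sketch assuming it follows the familiar sample-variance penalization template, where the loss \ell, hypothesis class \mathcal{F}, labeled sample size m, and trade-off parameter \lambda are illustrative and the paper's exact objective may differ:

% Hedged sketch of a sample-variance-penalized ERM objective; the paper's
% actual estimator may differ in form. Here \ell is a bounded loss,
% z_1, ..., z_m are the labeled (training) points, and \lambda > 0
% trades off empirical risk against the estimated variance of the loss.
\hat{f} = \operatorname*{arg\,min}_{f \in \mathcal{F}}
  \left[ \frac{1}{m} \sum_{i=1}^{m} \ell(f, z_i)
  + \lambda \sqrt{\frac{\widehat{\mathrm{Var}}_m[\ell(f, \cdot)]}{m}} \right],
\qquad
\widehat{\mathrm{Var}}_m[\ell(f, \cdot)]
  = \frac{1}{m-1} \sum_{i=1}^{m}
    \Bigl( \ell(f, z_i) - \tfrac{1}{m} \sum_{j=1}^{m} \ell(f, z_j) \Bigr)^{2}.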
Cite
Zhu et al. "Towards Improved Risk Bounds for Transductive Learning." International Joint Conference on Artificial Intelligence, 2025. doi:10.24963/IJCAI.2025/806
BibTeX
@inproceedings{zhu2025ijcai-improved,
title = {{Towards Improved Risk Bounds for Transductive Learning}},
author = {Zhu, Bowei and Li, Shaojie and Liu, Yong},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2025},
pages = {7245--7253},
doi = {10.24963/IJCAI.2025/806},
url = {https://mlanthology.org/ijcai/2025/zhu2025ijcai-improved/}
}