Transductive Active Learning: Theory and Applications

Abstract

We study a generalization of classical active learning to real-world settings with concrete prediction targets where sampling is restricted to an accessible region of the domain, while prediction targets may lie outside this region. We analyze a family of decision rules that sample adaptively to minimize uncertainty about prediction targets. We are the first to show, under general regularity assumptions, that such decision rules converge uniformly to the smallest possible uncertainty obtainable from the accessible data. We demonstrate their strong sample efficiency in two key applications: active fine-tuning of large neural networks and safe Bayesian optimization, where they achieve state-of-the-art performance.
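The decision rule described in the abstract can be illustrated with a small Gaussian-process sketch: greedily query the accessible point whose observation most reduces the total posterior variance at the prediction targets, which may lie outside the accessible region. This is a minimal illustration under assumed modeling choices (RBF kernel, variance-based acquisition), not the paper's exact estimators; the names `rbf`, `posterior_target_variance`, `select_queries`, `accessible`, and `targets` are hypothetical and used only for this sketch.

```python
# Minimal sketch of a transductive active-learning rule with a Gaussian process.
# Assumptions: RBF kernel, noisy observations, greedy variance-reduction acquisition.
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """RBF kernel matrix between the rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def posterior_target_variance(X, targets, noise=1e-2):
    """Total GP posterior variance at the target points after observing X."""
    if len(X) == 0:
        return np.trace(rbf(targets, targets))
    K = rbf(X, X) + noise * np.eye(len(X))
    k_tx = rbf(targets, X)
    cov = rbf(targets, targets) - k_tx @ np.linalg.solve(K, k_tx.T)
    return np.trace(cov)

def select_queries(accessible, targets, budget):
    """Greedily pick accessible points that most shrink uncertainty about the targets."""
    chosen, remaining = [], list(range(len(accessible)))
    for _ in range(budget):
        scores = [
            posterior_target_variance(accessible[chosen + [i]], targets)
            for i in remaining
        ]
        chosen.append(remaining.pop(int(np.argmin(scores))))
    return accessible[chosen]

# Example: sampling is restricted to [0, 1], but the prediction targets
# lie outside that interval at x = 1.2 and x = 1.5.
rng = np.random.default_rng(0)
accessible = rng.uniform(0.0, 1.0, size=(50, 1))
targets = np.array([[1.2], [1.5]])
print(select_queries(accessible, targets, budget=5))
```

In this sketch, points near the right end of the accessible interval tend to be selected first, since they are most informative about targets outside the sampled region; the paper's analysis concerns when such rules drive the remaining uncertainty down to the minimum achievable from accessible data.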

Cite

Text

Hübotter et al. "Transductive Active Learning: Theory and Applications." Neural Information Processing Systems, 2024. doi:10.52202/079017-3961

Markdown

[Hübotter et al. "Transductive Active Learning: Theory and Applications." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/hubotter2024neurips-transductive/) doi:10.52202/079017-3961

BibTeX

@inproceedings{hubotter2024neurips-transductive,
  title     = {{Transductive Active Learning: Theory and Applications}},
  author    = {Hübotter, Jonas and Sukhija, Bhavya and Treven, Lenart and As, Yarden and Krause, Andreas},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-3961},
  url       = {https://mlanthology.org/neurips/2024/hubotter2024neurips-transductive/}
}