Testing Using Privileged Information by Adapting Features with Statistical Dependence

Abstract

Given an imperfect predictor, we exploit additional features at test time to improve its predictions, without retraining and without knowledge of the prediction function. This scenario arises if training labels or data are proprietary, restricted, or no longer available, or if training itself is prohibitively expensive. We assume that the additional features are useful if they exhibit strong statistical dependence on the underlying perfect predictor. Then, we empirically estimate and strengthen the statistical dependence between the initial noisy predictor and the additional features via manifold denoising. As an example, we show that this approach improves real-world visual attribute ranking.
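The abstract names two ingredients: empirically estimating the statistical dependence between the noisy predictions and the additional (privileged) test-time features, and strengthening it via manifold denoising. The sketch below illustrates both ingredients under stated assumptions; it uses the standard biased HSIC estimator (Gretton et al.) as the dependence measure, an RBF kernel, and a plain graph-Laplacian smoother over the privileged features. It is not the authors' exact algorithm, only a minimal illustration of measuring dependence before and after a denoising step. All function names (`rbf_kernel`, `hsic`, `graph_denoise`) and parameter choices are hypothetical.

```python
import numpy as np

def rbf_kernel(X, gamma=None):
    """Pairwise RBF kernel matrix; gamma defaults to 1 / median squared distance."""
    sq = np.sum(X ** 2, axis=1, keepdims=True)
    d2 = np.maximum(sq + sq.T - 2.0 * X @ X.T, 0.0)
    if gamma is None:
        gamma = 1.0 / (np.median(d2[d2 > 0]) + 1e-12)
    return np.exp(-gamma * d2)

def hsic(K, L):
    """Biased empirical HSIC between two kernel matrices."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def graph_denoise(f, Z, lam=1.0):
    """Smooth noisy predictions f on a graph built from privileged features Z.

    Solves (I + lam * Laplacian) f_new = f, a standard manifold-denoising step.
    """
    W = rbf_kernel(Z)
    np.fill_diagonal(W, 0.0)
    Lap = np.diag(W.sum(axis=1)) - W          # unnormalized graph Laplacian
    return np.linalg.solve(np.eye(len(f)) + lam * Lap, f)

# Toy usage: noisy scores f from a fixed black-box predictor, privileged features Z.
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 5))                 # privileged test-time features
f_true = Z @ rng.normal(size=5)               # underlying "perfect" scores (toy)
f_noisy = f_true + rng.normal(scale=1.0, size=200)

K_Z = rbf_kernel(Z)
f_denoised = graph_denoise(f_noisy, Z, lam=0.5)

print("HSIC(f, Z) before:", hsic(rbf_kernel(f_noisy[:, None]), K_Z))
print("HSIC(f, Z) after: ", hsic(rbf_kernel(f_denoised[:, None]), K_Z))
```

In this toy setup, the denoised scores typically show higher dependence on the privileged features than the noisy input, which is the intuition the abstract relies on: if the privileged features depend strongly on the perfect predictor, pulling the noisy predictions toward the manifold they define should move them closer to the perfect predictor.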

Cite

Text

Kim and Tompkin. "Testing Using Privileged Information by Adapting Features with Statistical Dependence." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00927

Markdown

[Kim and Tompkin. "Testing Using Privileged Information by Adapting Features with Statistical Dependence." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/kim2021iccv-testing/) doi:10.1109/ICCV48922.2021.00927

BibTeX

@inproceedings{kim2021iccv-testing,
  title     = {{Testing Using Privileged Information by Adapting Features with Statistical Dependence}},
  author    = {Kim, Kwang In and Tompkin, James},
  booktitle = {International Conference on Computer Vision},
  year      = {2021},
  pages     = {9405-9413},
  doi       = {10.1109/ICCV48922.2021.00927},
  url       = {https://mlanthology.org/iccv/2021/kim2021iccv-testing/}
}