Learning Adaptive Kernels for Statistical Independence Tests

Abstract

We propose a novel framework for kernel-based statistical independence tests that enables adaptively learning parameterized kernels to maximize test power. Our framework effectively addresses a pitfall inherent in the existing signal-to-noise-ratio criterion by modeling how the null distribution changes during learning. Based on the proposed framework, we design a new class of kernels that can adaptively focus on the significant dimensions of variables when judging independence, which makes the tests more flexible than simple kernels that are adaptive only in length-scale, and especially suitable for high-dimensional complex data. Theoretically, we prove the consistency of our independence tests and show that the non-convex learning objective satisfies the L-smoothness condition, which benefits optimization. Experimental results on both synthetic and real data show the superiority of our method. The source code and datasets are available at \url{https://github.com/renyixin666/HSIC-LK.git}.
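To make the setting concrete, the sketch below shows a plain HSIC permutation test with an ARD-style RBF kernel carrying per-dimension weights. This is not the paper's method: the weighted kernel and all function names are hypothetical illustrations of the general idea (a kernel that can up- or down-weight individual dimensions), and the null is approximated by naive permutation rather than the learned-kernel procedure the paper proposes.

```python
import numpy as np

def ard_rbf_kernel(X, w):
    """RBF kernel with per-dimension weights w (hypothetical ARD-style
    parameterization; the paper's learnable kernels are more general).
    Larger w[d] makes dimension d more influential in the kernel."""
    Z = X * w  # scale each dimension by its weight
    sq = np.sum(Z ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
    return np.exp(-0.5 * np.maximum(d2, 0.0))

def hsic_statistic(X, Y, wx, wy):
    """Biased empirical HSIC estimate: tr(K H L H) / (n-1)^2."""
    n = X.shape[0]
    K = ard_rbf_kernel(X, wx)
    L = ard_rbf_kernel(Y, wy)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def permutation_test(X, Y, wx, wy, n_perms=200, seed=0):
    """Approximate the null by shuffling Y to break dependence."""
    rng = np.random.default_rng(seed)
    stat = hsic_statistic(X, Y, wx, wy)
    null = np.array([
        hsic_statistic(X, Y[rng.permutation(len(Y))], wx, wy)
        for _ in range(n_perms)
    ])
    p_value = (1 + np.sum(null >= stat)) / (1 + n_perms)
    return stat, p_value

# Toy example: Y depends on X only through its first dimension.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Y = X[:, :1] + 0.1 * rng.normal(size=(100, 1))
wx = np.array([1.0, 1.0, 1.0])  # fixed weights here; learned in the paper
wy = np.array([1.0])
stat, p = permutation_test(X, Y, wx, wy)
print(f"HSIC = {stat:.4f}, p = {p:.4f}")
```

In this sketch the weights `wx`, `wy` are fixed; the paper's contribution is, roughly, to learn such kernel parameters to maximize test power while accounting for the resulting change in the null distribution.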

Cite

Text

Ren et al. "Learning Adaptive Kernels for Statistical Independence Tests." Artificial Intelligence and Statistics, 2024.

Markdown

[Ren et al. "Learning Adaptive Kernels for Statistical Independence Tests." Artificial Intelligence and Statistics, 2024.](https://mlanthology.org/aistats/2024/ren2024aistats-learning/)

BibTeX

@inproceedings{ren2024aistats-learning,
  title     = {{Learning Adaptive Kernels for Statistical Independence Tests}},
  author    = {Ren, Yixin and Xia, Yewei and Zhang, Hao and Guan, Jihong and Zhou, Shuigeng},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2024},
  pages     = {2494--2502},
  volume    = {238},
  url       = {https://mlanthology.org/aistats/2024/ren2024aistats-learning/}
}