Block Sparse Bayesian Learning: A Diversified Scheme

Abstract

This paper introduces a novel prior, the Diversified Block Sparse Prior, to characterize the block sparsity widely observed in real-world data. By allowing diversification of the intra-block variances and inter-block correlation matrices, it addresses the sensitivity of existing block sparse learning methods to pre-defined block partitions, enabling adaptive block estimation while mitigating the risk of overfitting. Building on this prior, a diversified block sparse Bayesian learning method (DivSBL) is proposed, which uses the EM algorithm together with a dual ascent method for hyperparameter estimation. Moreover, we establish global and local optimality results for our model. Experiments validate the advantages of DivSBL over existing algorithms.
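To make the prior structure concrete, the following is a minimal sketch (not the paper's implementation) of a block sparse Gaussian prior with diversified per-block variances and intra-block correlations. The block sizes, variance values, and the AR(1) form of the correlation matrices are illustrative assumptions; the key point is that each block gets its own variance scale and its own correlation matrix, and a zero variance prunes the whole block.

```python
import numpy as np
from scipy.linalg import toeplitz, block_diag

# Toy configuration (hypothetical; not taken from the paper).
rng = np.random.default_rng(0)
block_sizes = [4, 4, 4]
gammas = [1.5, 0.0, 0.7]    # diversified per-block variances; gamma = 0 prunes a block
rhos   = [0.9, 0.5, -0.3]   # diversified per-block correlation coefficients

def ar1_corr(d, rho):
    """AR(1) Toeplitz correlation matrix: B[i, j] = rho ** |i - j| (an illustrative choice)."""
    return toeplitz(rho ** np.arange(d))

# Block-diagonal prior covariance: Sigma0 = diag(gamma_1 * B_1, ..., gamma_g * B_g).
blocks = [g * ar1_corr(d, r) for d, g, r in zip(block_sizes, gammas, rhos)]
Sigma0 = block_diag(*blocks)

# One draw of a block-sparse coefficient vector from this prior:
# the gamma = 0 block comes out identically zero.
w = rng.multivariate_normal(np.zeros(Sigma0.shape[0]), Sigma0)
```

In a full Bayesian learning loop, the per-block variances and correlation matrices above would be treated as hyperparameters and re-estimated from data (the paper does this via EM and dual ascent), rather than fixed in advance.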

Cite

Text

Zhang et al. "Block Sparse Bayesian Learning: A Diversified Scheme." Neural Information Processing Systems, 2024. doi:10.52202/079017-4130

Markdown

[Zhang et al. "Block Sparse Bayesian Learning: A Diversified Scheme." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/zhang2024neurips-block/) doi:10.52202/079017-4130

BibTeX

@inproceedings{zhang2024neurips-block,
  title     = {{Block Sparse Bayesian Learning: A Diversified Scheme}},
  author    = {Zhang, Yanhao and Zhu, Zhihan and Xia, Yong},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-4130},
  url       = {https://mlanthology.org/neurips/2024/zhang2024neurips-block/}
}