MMIM: An Interpretable Regularization Method for Neural Networks (Student Abstract)

Abstract

In deep learning, most network architectures are designed manually and empirically. Although adding new structures, such as convolution kernels in CNNs, is common practice, there are few methods for designing new structures and few mathematical tools for evaluating their feature-representation capabilities. Inspired by ensemble learning, we propose an interpretable regularization method named the Minimize Mutual Information Method (MMIM), which reduces the generalization error by minimizing the mutual information of hidden neurons. Experimental results verify the effectiveness of the proposed MMIM.
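The abstract does not specify how the mutual information of hidden neurons is estimated in practice, so the following is only a minimal sketch of the general idea: penalizing statistical dependence between hidden units during training. It assumes a Gaussian pairwise proxy, where I(h_i; h_j) = -0.5 log(1 - rho_ij^2) for correlation rho_ij; the function name and the usage pattern are illustrative, not the authors' implementation.

```python
import numpy as np

def pairwise_gaussian_mi(h):
    """Sum of pairwise Gaussian MI proxies over hidden units.

    Under a joint-Gaussian assumption, I(h_i; h_j) = -0.5 * log(1 - rho_ij^2),
    where rho_ij is the Pearson correlation between units i and j.
    h: array of shape (batch, units) holding hidden activations.
    """
    rho = np.corrcoef(h, rowvar=False)            # (units, units) correlation matrix
    iu = np.triu_indices_from(rho, k=1)           # each pair i < j once
    r2 = np.clip(rho[iu] ** 2, 0.0, 1.0 - 1e-8)   # clip for numerical safety
    return float(-0.5 * np.log1p(-r2).sum())      # each term is >= 0

# Hypothetical usage: total_loss = task_loss + lam * pairwise_gaussian_mi(h)
rng = np.random.default_rng(0)
indep = rng.normal(size=(1000, 8))                # near-independent units: small penalty
redundant = np.hstack([indep[:, :1]] * 8) + 0.01 * rng.normal(size=(1000, 8))
print(pairwise_gaussian_mi(indep) < pairwise_gaussian_mi(redundant))  # prints True
```

Redundant units (near-duplicate activations) incur a large penalty, while decorrelated units incur almost none, which matches the ensemble-learning intuition of encouraging diverse hidden representations.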

Cite

Text

Xie and Hou. "MMIM: An Interpretable Regularization Method for Neural Networks (Student Abstract)." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I18.17963

Markdown

[Xie and Hou. "MMIM: An Interpretable Regularization Method for Neural Networks (Student Abstract)." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/xie2021aaai-mmim/) doi:10.1609/AAAI.V35I18.17963

BibTeX

@inproceedings{xie2021aaai-mmim,
  title     = {{MMIM: An Interpretable Regularization Method for Neural Networks (Student Abstract)}},
  author    = {Xie, Nan and Hou, Yuexian},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {15933--15934},
  doi       = {10.1609/AAAI.V35I18.17963},
  url       = {https://mlanthology.org/aaai/2021/xie2021aaai-mmim/}
}