On Parameter Tying by Quantization

Abstract

The maximum likelihood estimator (MLE) is generally asymptotically consistent but is susceptible to over-fitting. To combat this problem, regularization methods that reduce the variance at the cost of (slightly) increasing the bias are often employed in practice. In this paper, we present an alternative variance reduction (regularization) technique that quantizes the MLE estimates as a post-processing step, yielding a smoother model having several tied parameters. We prove error bounds for our new technique and demonstrate experimentally that it often yields models having higher test-set log-likelihood than those learned using the MLE. We also propose a new importance sampling algorithm for fast approximate inference in models having several tied parameters. Our experiments show that our new inference algorithm is superior to existing approaches such as Gibbs sampling and MC-SAT on models with tied parameters learned using our quantization-based approach.
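The core idea described in the abstract, quantizing the learned MLE estimates so that many parameters share a small number of distinct values, can be illustrated with a minimal sketch. The sketch below assumes the learned model is represented as a flat vector of parameter estimates (e.g., log-potentials) and uses 1-D k-means (Lloyd's algorithm) as the quantizer; the function and variable names are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch of post-hoc parameter tying by quantization.
# Assumption: the model's MLE estimates are available as a flat vector;
# 1-D k-means is used here as one possible quantizer.
import numpy as np


def quantize_parameters(theta_mle, num_levels, num_iters=100):
    """Tie parameters by snapping each MLE estimate to one of
    `num_levels` shared values found by 1-D k-means."""
    theta = np.asarray(theta_mle, dtype=float)
    # Initialize the shared values on a uniform grid over the parameter range.
    levels = np.linspace(theta.min(), theta.max(), num_levels)
    for _ in range(num_iters):
        # Assign each parameter to its nearest shared value.
        assignment = np.argmin(np.abs(theta[:, None] - levels[None, :]), axis=1)
        # Move each shared value to the mean of the parameters tied to it.
        for k in range(num_levels):
            members = theta[assignment == k]
            if members.size:
                levels[k] = members.mean()
    # The quantized (tied) model uses only `num_levels` distinct values.
    return levels[assignment], assignment


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Noisy MLE estimates scattered around three underlying tied values.
    theta_mle = np.concatenate(
        [rng.normal(mu, 0.05, 50) for mu in (-1.0, 0.0, 2.0)]
    )
    theta_tied, groups = quantize_parameters(theta_mle, num_levels=3)
    print("distinct values after tying:", np.unique(theta_tied))
```

Replacing each noisy estimate with its cluster's shared value reduces the number of free parameters, which is the variance-reduction (smoothing) effect the abstract refers to; the number of levels controls the bias/variance trade-off.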

Cite

Text

Chou et al. "On Parameter Tying by Quantization." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/AAAI.V30I1.10429

Markdown

[Chou et al. "On Parameter Tying by Quantization." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/chou2016aaai-parameter/) doi:10.1609/AAAI.V30I1.10429

BibTeX

@inproceedings{chou2016aaai-parameter,
  title     = {{On Parameter Tying by Quantization}},
  author    = {Chou, Li and Sarkhel, Somdeb and Ruozzi, Nicholas and Gogate, Vibhav},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2016},
  pages     = {3241--3247},
  doi       = {10.1609/AAAI.V30I1.10429},
  url       = {https://mlanthology.org/aaai/2016/chou2016aaai-parameter/}
}