Mutual Information Computation and Maximization Using GPU

Abstract

We present a GPU implementation for computing both mutual information and its derivatives. Mutual information computation is highly demanding due to the enormous number of exponential evaluations it requires, and is therefore the bottleneck in many image registration applications. However, we show that these computations are fully parallelizable and can be efficiently ported to the GPU architecture. Compared with the same implementation running on a workstation-level CPU, we achieve a speedup factor of 170 in computing mutual information and a factor of 400 in computing its derivatives.
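For context, the quantity being accelerated is the standard histogram-based mutual information between two images. The sketch below is an illustrative CPU reference in NumPy, not the paper's GPU code; the function name and the bin count are our own choices:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images (in nats)."""
    # Joint intensity histogram, normalized into a joint probability table.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    # Marginal distributions of each image.
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    # Sum p(x,y) * log(p(x,y) / (p(x) p(y))) over nonzero joint bins.
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

The paper's contribution is evaluating this objective (and its derivatives, needed for gradient-based registration) on the GPU, where the per-bin and per-pixel work parallelizes well.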

Cite

Text

Lin and Medioni. "Mutual Information Computation and Maximization Using GPU." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2008. doi:10.1109/CVPRW.2008.4563101

Markdown

[Lin and Medioni. "Mutual Information Computation and Maximization Using GPU." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2008.](https://mlanthology.org/cvprw/2008/lin2008cvprw-mutual/) doi:10.1109/CVPRW.2008.4563101

BibTeX

@inproceedings{lin2008cvprw-mutual,
  title     = {{Mutual Information Computation and Maximization Using GPU}},
  author    = {Lin, Yuping and Medioni, Gérard G.},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2008},
  pages     = {1--6},
  doi       = {10.1109/CVPRW.2008.4563101},
  url       = {https://mlanthology.org/cvprw/2008/lin2008cvprw-mutual/}
}