Hierarchical 3D Diffusion Wavelet Shape Priors

Abstract

In this paper, we propose a novel representation of prior knowledge for image segmentation, using diffusion wavelets that can reflect arbitrary continuous interdependencies in shape data. The application of diffusion wavelets has so far largely been confined to signal processing. In our approach, and in contrast to state-of-the-art methods, we optimize the coefficients, the number and position of landmarks, and the object topology (the domain on which the wavelets are defined) during the model learning phase, in a coarse-to-fine manner. The resulting paradigm supports hierarchies both in the model and in the search space, can encode complex geometric and photometric dependencies of the structure of interest, and can deal with arbitrary topologies. We report results on two challenging medical data sets that illustrate the impact of the soft parameterization and the potential of the diffusion operator.
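To make the coarse-to-fine construction concrete, here is a minimal sketch of building a diffusion operator over shape landmarks and extracting a hierarchy of scaling-function bases in the spirit of Coifman-Maggioni diffusion wavelets. This is not the paper's implementation: the Gaussian affinity, the `sigma` and `eps` parameters, and the function names `diffusion_operator` and `diffusion_scaling_functions` are illustrative choices; the paper additionally learns the landmark set and the topology on which the operator is defined.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform


def diffusion_operator(points, sigma=1.0):
    """Row-normalized Gaussian affinity over landmark positions.
    (One common choice of diffusion operator; the paper's operator on
    the learned object topology may differ.)"""
    W = np.exp(-squareform(pdist(points)) ** 2 / (2.0 * sigma ** 2))
    return W / W.sum(axis=1, keepdims=True)


def diffusion_scaling_functions(T, levels=4, eps=1e-6):
    """Coarse-to-fine scaling-function bases: repeatedly square T and
    keep a numerically rank-revealing orthonormal basis of its range.
    Returns one (n_landmarks x rank_j) matrix per level, expressed in
    the original landmark coordinates."""
    n = T.shape[0]
    bases = []
    extend = np.eye(n)            # maps level-j coordinates back to landmarks
    Tj = T.copy()
    for _ in range(levels):
        U, s, _ = np.linalg.svd(Tj)
        r = max(int((s > eps).sum()), 1)
        Q = U[:, :r]              # orthonormal basis of range(Tj)
        extend = extend @ Q
        bases.append(extend)      # scaling functions at this level
        Tj = Q.T @ Tj @ Tj @ Q    # compressed representation of T^(2^(j+1))
    return bases


# Usage sketch: project a (hypothetical) training shape onto each scale.
points = np.random.rand(60, 3)    # stand-in landmark positions
T = diffusion_operator(points, sigma=0.2)
for j, Phi in enumerate(diffusion_scaling_functions(T)):
    coeffs = Phi.T @ points       # shape coefficients at level j
    print(f"level {j}: {Phi.shape[1]} basis functions")
```

The ranks shrink as the operator is squared, so the deeper levels capture increasingly coarse, smooth variation of the shape, which is the hierarchy the prior exploits.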

Cite

Text

Essafi et al. "Hierarchical 3D Diffusion Wavelet Shape Priors." IEEE/CVF International Conference on Computer Vision, 2009. doi:10.1109/ICCV.2009.5459385

Markdown

[Essafi et al. "Hierarchical 3D Diffusion Wavelet Shape Priors." IEEE/CVF International Conference on Computer Vision, 2009.](https://mlanthology.org/iccv/2009/essafi2009iccv-hierarchical/) doi:10.1109/ICCV.2009.5459385

BibTeX

@inproceedings{essafi2009iccv-hierarchical,
  title     = {{Hierarchical 3D Diffusion Wavelet Shape Priors}},
  author    = {Essafi, Salma and Langs, Georg and Paragios, Nikos},
  booktitle = {IEEE/CVF International Conference on Computer Vision},
  year      = {2009},
  pages     = {1717--1724},
  doi       = {10.1109/ICCV.2009.5459385},
  url       = {https://mlanthology.org/iccv/2009/essafi2009iccv-hierarchical/}
}