Bayesian Tactile Face

Abstract

Computer users with visual impairment cannot access the rich graphical content in print or digital media without visual-to-tactile conversion, which today is performed primarily by human specialists. Automating this conversion is an emerging research field, which currently handles only simple graphics such as diagrams. This paper proposes a systematic method for automatically converting a human portrait image into its tactile form. We model the face with a deformable active shape model (ASM) (Cootes et al., 1995), enriched by local appearance models in terms of gradient profiles along the shape. The generic face model, including the appearance components, is learnt from a set of training face images. Given a new portrait image, the prior model is updated through Bayesian inference. To facilitate the incorporation of a pose-dependent appearance model, we propose a statistical sampling scheme for the inference task. Furthermore, to compensate for the simplicity of the face model, edge segments of the given image are used to enrich the basic face model when generating the final tactile printout. Experiments are designed to evaluate the performance of the proposed method.
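The core machinery the abstract describes, a PCA-based statistical shape model (as in ASM) whose parameters are updated by Bayesian inference via statistical sampling, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic training shapes, dimensions, and the simple Gaussian likelihood with importance sampling are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Learn a generic shape model from aligned training shapes ---
# Each row is a flattened set of 2-D landmark coordinates (x1, y1, ..., xK, yK).
# The training data here is synthetic; the paper trains on real face images.
n_train, n_landmarks = 200, 10
mean_true = rng.normal(size=2 * n_landmarks)
train = mean_true + 0.3 * rng.normal(size=(n_train, 2 * n_landmarks))

mean_shape = train.mean(axis=0)
centered = train - mean_shape
# Principal modes of shape variation (eigenvectors of the sample covariance).
_, s, vt = np.linalg.svd(centered, full_matrices=False)
n_modes = 5
modes = vt[:n_modes]                   # (n_modes, 2K) shape basis
prior_var = (s[:n_modes] ** 2) / n_train  # prior variance of each mode

def shape_from_params(b):
    """Reconstruct landmark coordinates from mode coefficients b."""
    return mean_shape + b @ modes

# --- Bayesian update of shape parameters via sampling ---
# Observed landmarks = an unknown true shape plus measurement noise
# (standing in for the image evidence from the gradient-profile models).
b_true = rng.normal(scale=np.sqrt(prior_var))
observed = shape_from_params(b_true) + 0.05 * rng.normal(size=2 * n_landmarks)

n_samples, noise_var = 5000, 0.05 ** 2
# Draw candidate shape parameters from the Gaussian prior...
b_samples = rng.normal(scale=np.sqrt(prior_var), size=(n_samples, n_modes))
# ...and weight each one by a Gaussian observation likelihood.
resid = shape_from_params(b_samples) - observed
log_w = -0.5 * np.sum(resid ** 2, axis=1) / noise_var
w = np.exp(log_w - log_w.max())
w /= w.sum()

b_posterior = w @ b_samples            # posterior-mean shape parameters
fitted = shape_from_params(b_posterior)
```

The sampling-based update matters because the paper's pose-dependent appearance likelihood need not be Gaussian or conjugate to the shape prior; weighting prior samples by the likelihood sidesteps any closed-form posterior.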

Cite

Text

Wang et al. "Bayesian Tactile Face." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2008. doi:10.1109/CVPR.2008.4587374

Markdown

[Wang et al. "Bayesian Tactile Face." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2008.](https://mlanthology.org/cvpr/2008/wang2008cvpr-bayesian/) doi:10.1109/CVPR.2008.4587374

BibTeX

@inproceedings{wang2008cvpr-bayesian,
  title     = {{Bayesian Tactile Face}},
  author    = {Wang, Zheshen and Xu, Xinyu and Li, Baoxin},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2008},
  doi       = {10.1109/CVPR.2008.4587374},
  url       = {https://mlanthology.org/cvpr/2008/wang2008cvpr-bayesian/}
}