Integrating Shape from Shading and Range Data Using Neural Networks

Abstract

This paper presents a framework for integrating multiple sensory data, namely sparse range data and dense depth maps obtained from shape from shading, in order to improve the 3D reconstruction of the visible surfaces of 3D objects. The integration process propagates the error difference between the two data sets by fitting a surface to that difference and using it to correct the visible surface obtained from shape from shading. A feedforward neural network is used to fit a surface to the sparse data. We also study the use of the extended Kalman filter for supervised learning and compare it with the backpropagation algorithm. A performance analysis is carried out to determine the best neural network architecture and learning algorithm. It is found that integrating the sparse depth measurements greatly enhances, in terms of metric measurements, the 3D visible surface obtained from shape from shading.
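The core integration idea in the abstract can be illustrated with a small sketch: take the residual between sparse range measurements and the dense shape-from-shading (SFS) depth map at the sampled pixels, fit a smooth surface to that residual with a feedforward network trained by backpropagation, and add the fitted surface back to the SFS depths. Everything below (the synthetic surface, the bias model, the tiny MLP) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense SFS depth map over a 32x32 grid (synthetic stand-in for a real SFS result).
n = 32
ys, xs = np.mgrid[0:n, 0:n] / (n - 1)
true_depth = np.sin(np.pi * xs) * np.sin(np.pi * ys)  # "ground truth" surface
sfs_depth = true_depth + 0.3 * xs                     # SFS result with a smooth bias

# Sparse range data: accurate depths at a few scattered pixels.
m = 60
idx = rng.choice(n * n, size=m, replace=False)
px, py = idx % n, idx // n
range_depth = true_depth[py, px]

# Error difference at the sparse points: the target surface for the network.
residual = range_depth - sfs_depth[py, px]

# Tiny feedforward network (one tanh hidden layer) trained by plain
# backpropagation / gradient descent to fit residual(x, y).
X = np.stack([px / (n - 1), py / (n - 1)], axis=1)  # inputs: normalized (x, y)
h = 16
W1 = rng.normal(0, 0.5, (2, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, (h, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    a1 = np.tanh(X @ W1 + b1)          # hidden activations
    pred = (a1 @ W2 + b2).ravel()      # predicted residual
    err = pred - residual
    g2 = err[:, None] / m              # gradient of MSE w.r.t. predictions
    dW2 = a1.T @ g2; db2 = g2.sum(0)
    g1 = (g2 @ W2.T) * (1 - a1 ** 2)   # backpropagate through tanh
    dW1 = X.T @ g1; db1 = g1.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Evaluate the fitted correction surface on the whole grid and apply it.
Xg = np.stack([xs.ravel(), ys.ravel()], axis=1)
corr = (np.tanh(Xg @ W1 + b1) @ W2 + b2).reshape(n, n)
corrected = sfs_depth + corr

rmse_before = np.sqrt(np.mean((sfs_depth - true_depth) ** 2))
rmse_after = np.sqrt(np.mean((corrected - true_depth) ** 2))
print(rmse_before, rmse_after)
```

Because the network fits only the low-frequency error surface rather than the full depth map, the dense detail recovered by SFS is preserved while its metric bias is corrected by the sparse range data; the paper's EKF-based training would replace the gradient-descent loop above.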

Cite

Text

Mostafa et al. "Integrating Shape from Shading and Range Data Using Neural Networks." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1999. doi:10.1109/CVPR.1999.784602

Markdown

[Mostafa et al. "Integrating Shape from Shading and Range Data Using Neural Networks." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1999.](https://mlanthology.org/cvpr/1999/mostafa1999cvpr-integrating/) doi:10.1109/CVPR.1999.784602

BibTeX

@inproceedings{mostafa1999cvpr-integrating,
  title     = {{Integrating Shape from Shading and Range Data Using Neural Networks}},
  author    = {Mostafa, Mostafa Gadal-Haqq M. and Yamany, Sameh M. and Farag, Aly A.},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {1999},
  pages     = {2015--2020},
  doi       = {10.1109/CVPR.1999.784602},
  url       = {https://mlanthology.org/cvpr/1999/mostafa1999cvpr-integrating/}
}