Visual Routines for Autonomous Driving
Abstract
The paper describes visual routines based on models of color and shape, as well as crucial issues involving the scheduling of such routines. The visual routines are developed on a unique platform: the view from a car driving in a simulated world is fed into a Datacube pipeline video processor. This simulation provides a flexible environment in which to set crucial image-processing parameters for the individual routines. In addition to the simulations, the routines are tested on similar images generated by driving in the real world, to assure the generalizability of the simulation.
Cite
Text
Salgian and Ballard. "Visual Routines for Autonomous Driving." IEEE/CVF International Conference on Computer Vision, 1998. doi:10.1109/ICCV.1998.710820
Markdown
[Salgian and Ballard. "Visual Routines for Autonomous Driving." IEEE/CVF International Conference on Computer Vision, 1998.](https://mlanthology.org/iccv/1998/salgian1998iccv-visual/) doi:10.1109/ICCV.1998.710820
BibTeX
@inproceedings{salgian1998iccv-visual,
title = {{Visual Routines for Autonomous Driving}},
author = {Salgian, Garbis and Ballard, Dana H.},
booktitle = {IEEE/CVF International Conference on Computer Vision},
year = {1998},
pages = {876--882},
doi = {10.1109/ICCV.1998.710820},
url = {https://mlanthology.org/iccv/1998/salgian1998iccv-visual/}
}