Leveraging Context to Support Automated Food Recognition in Restaurants
Abstract
The pervasiveness of mobile cameras has resulted in a dramatic increase in food photos, pictures reflecting what people eat. In this paper, we study how taking pictures of what we eat in restaurants can be used for the purpose of automating food journaling. We propose to leverage the context of where the picture was taken, together with additional information about the restaurant available online, coupled with state-of-the-art computer vision techniques to recognize the food being consumed. To this end, we demonstrate image-based recognition of foods eaten in restaurants by training a classifier with images from restaurants' online menu databases. We evaluate the performance of our system in unconstrained, real-world settings with food images taken in 10 restaurants across 5 different types of food (American, Indian, Italian, Mexican and Thai).
Cite
Text
Bettadapura et al. "Leveraging Context to Support Automated Food Recognition in Restaurants." IEEE/CVF Winter Conference on Applications of Computer Vision, 2015. doi:10.1109/WACV.2015.83
Markdown
[Bettadapura et al. "Leveraging Context to Support Automated Food Recognition in Restaurants." IEEE/CVF Winter Conference on Applications of Computer Vision, 2015.](https://mlanthology.org/wacv/2015/bettadapura2015wacv-leveraging/) doi:10.1109/WACV.2015.83
BibTeX
@inproceedings{bettadapura2015wacv-leveraging,
title = {{Leveraging Context to Support Automated Food Recognition in Restaurants}},
author = {Bettadapura, Vinay and Thomaz, Edison and Parnami, Aman and Abowd, Gregory D. and Essa, Irfan A.},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
year = {2015},
pages = {580-587},
doi = {10.1109/WACV.2015.83},
url = {https://mlanthology.org/wacv/2015/bettadapura2015wacv-leveraging/}
}