Variational Autoencoders for Generating Hyperspectral Imaging Honey Adulteration Data

Abstract

Honey fraud and adulteration are a growing global concern. Hyperspectral imaging combined with machine learning can detect adulterated honey within a known set of honey types for which data have been captured at different sugar concentrations. Previous work in this area has used a small number of honey types, as sample preparation and data capture are time-consuming. This paper develops a new approach that uses variational autoencoders (VAEs) to generate adulterated honey data for unseen honey types. The results show that adding the generated data to the existing training data lets a binary adulteration detector achieve 81.3% accuracy on average on unseen honey types; trained without the generated data, the classifier achieves only 44% accuracy.
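The core mechanism the abstract describes, using a VAE to synthesize spectra for data augmentation, can be sketched as follows. This is a minimal illustrative forward pass, not the paper's implementation: the layer sizes, spectral dimensionality, and single-hidden-layer architecture are assumptions chosen for brevity, and the weights are untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): a hyperspectral pixel
# spectrum with 128 bands, compressed to an 8-dimensional latent space.
N_BANDS, N_HIDDEN, N_LATENT = 128, 32, 8

def init_layer(n_in, n_out):
    # Small random weights; a real implementation would train these
    # by maximizing the evidence lower bound (ELBO).
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

# Encoder: spectrum -> hidden -> (mu, log_var) of q(z|x)
W_enc, b_enc = init_layer(N_BANDS, N_HIDDEN)
W_mu, b_mu = init_layer(N_HIDDEN, N_LATENT)
W_lv, b_lv = init_layer(N_HIDDEN, N_LATENT)

# Decoder: latent z -> hidden -> reconstructed spectrum
W_dec, b_dec = init_layer(N_LATENT, N_HIDDEN)
W_out, b_out = init_layer(N_HIDDEN, N_BANDS)

def encode(x):
    h = np.tanh(x @ W_enc + b_enc)
    return h @ W_mu + b_mu, h @ W_lv + b_lv

def reparameterize(mu, log_var):
    # Reparameterization trick: z = mu + sigma * eps keeps the
    # sampling step differentiable with respect to mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z):
    h = np.tanh(z @ W_dec + b_dec)
    return h @ W_out + b_out

def generate(n_samples):
    # Once trained, new synthetic spectra are produced by decoding
    # latent vectors drawn from the prior p(z) = N(0, I).
    z = rng.standard_normal((n_samples, N_LATENT))
    return decode(z)

# Forward pass on a batch of placeholder spectra, then draw
# synthetic samples that could augment a classifier's training set.
x = rng.random((4, N_BANDS))
mu, log_var = encode(x)
x_rec = decode(reparameterize(mu, log_var))
synthetic = generate(10)
print(x_rec.shape, synthetic.shape)  # (4, 128) (10, 128)
```

In the paper's setting, the generated spectra for unseen honey types are appended to the real training data before fitting the binary adulteration detector; the sketch above only shows the generative half of that pipeline.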

Cite

Text

Phillips and Abdulla. "Variational Autoencoders for Generating Hyperspectral Imaging Honey Adulteration Data." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022. doi:10.1109/CVPRW56347.2022.00035

Markdown

[Phillips and Abdulla. "Variational Autoencoders for Generating Hyperspectral Imaging Honey Adulteration Data." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022.](https://mlanthology.org/cvprw/2022/phillips2022cvprw-variational/) doi:10.1109/CVPRW56347.2022.00035

BibTeX

@inproceedings{phillips2022cvprw-variational,
  title     = {{Variational Autoencoders for Generating Hyperspectral Imaging Honey Adulteration Data}},
  author    = {Phillips, Tessa and Abdulla, Waleed H.},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2022},
  pages     = {213--220},
  doi       = {10.1109/CVPRW56347.2022.00035},
  url       = {https://mlanthology.org/cvprw/2022/phillips2022cvprw-variational/}
}