Dynamically Identifying Deep Multimodal Features for Image Privacy Prediction
Abstract
With millions of images shared online, privacy concerns are on the rise. In this paper, we propose an approach to image privacy prediction by dynamically identifying powerful features corresponding to objects, scene context, and image tags derived from Convolutional Neural Networks for each test image. Specifically, our approach identifies the set of most “competent” features on the fly, according to each test image whose privacy has to be predicted. Experimental results on thousands of Flickr images show that our approach predicts the sensitive (or private) content more accurately than the models trained on each individual feature set (object, scene, and tags alone) or their combination.
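The paper's idea of picking the most "competent" feature set per test image resembles dynamic classifier selection. The sketch below is not the authors' algorithm, only a minimal illustration of the general pattern under assumptions: synthetic random vectors stand in for the CNN-derived object, scene, and tag features, nearest-centroid models stand in for the real classifiers, and competence is estimated as local accuracy on a test image's nearest validation neighbors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the three deep feature sets named in the
# abstract (object, scene, tag); real features would come from CNNs.
dims = {"object": 16, "scene": 12, "tag": 8}
n_train, n_val, n_test = 200, 100, 5

def sample(n):
    X = {k: rng.normal(size=(n, d)) for k, d in dims.items()}
    y = rng.integers(0, 2, size=n)  # 0 = public, 1 = private (synthetic)
    return X, y

Xtr, ytr = sample(n_train)
Xva, yva = sample(n_val)
Xte, _ = sample(n_test)

def centroid_fit(X, y):
    # One class centroid per label; a toy proxy for a trained classifier.
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def centroid_predict(model, X):
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in (0, 1)])
    return d.argmin(axis=0)

# One simple classifier per feature set.
models = {k: centroid_fit(Xtr[k], ytr) for k in dims}

# Per-feature-set correctness on the validation split, used below to
# score each feature set's local "competence".
correct = {k: centroid_predict(models[k], Xva[k]) == yva for k in dims}

K = 15  # neighborhood size for the local-accuracy estimate
preds = []
for i in range(n_test):
    best, best_acc = None, -1.0
    for k in dims:
        # K nearest validation images in this feature space.
        nn = np.argsort(np.linalg.norm(Xva[k] - Xte[k][i], axis=1))[:K]
        acc = correct[k][nn].mean()
        if acc > best_acc:
            best, best_acc = k, acc
    # Predict with whichever feature set looked most competent here.
    preds.append(int(centroid_predict(models[best], Xte[best][i:i + 1])[0]))
```

On real data, the selection step would favor, say, object features for images whose privacy hinges on depicted people or items, and tag features when the user-supplied metadata is more informative.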
Cite
Text
Tonge and Caragea. "Dynamically Identifying Deep Multimodal Features for Image Privacy Prediction." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.330110057
Markdown
[Tonge and Caragea. "Dynamically Identifying Deep Multimodal Features for Image Privacy Prediction." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/tonge2019aaai-dynamically/) doi:10.1609/AAAI.V33I01.330110057
BibTeX
@inproceedings{tonge2019aaai-dynamically,
title = {{Dynamically Identifying Deep Multimodal Features for Image Privacy Prediction}},
author = {Tonge, Ashwini and Caragea, Cornelia},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2019},
pages = {10057--10058},
doi = {10.1609/AAAI.V33I01.330110057},
url = {https://mlanthology.org/aaai/2019/tonge2019aaai-dynamically/}
}