Representing Face Images for Emotion Classification
Abstract
We compare the generalization performance of three distinct representation schemes for facial emotions using a single classification strategy (neural network). The face images presented to the classifiers are represented as: full face projections of the dataset onto their eigenvectors (eigenfaces); a similar projection constrained to eye and mouth areas (eigenfeatures); and finally a projection of the eye and mouth areas onto the eigenvectors obtained from 32x32 random image patches from the dataset. The latter system achieves 86% generalization on novel face images (individuals the networks were not trained on) drawn from a database in which human subjects consistently identify a single emotion for the face.
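The paper itself includes no code; the sketch below illustrates the third, best-performing scheme under stated assumptions: build a PCA basis from random 32x32 patches of the face database, then represent eye and mouth regions by their projection coefficients onto that basis. The region coordinates, patch count, and number of retained components are hypothetical placeholders, not values from the paper.

```python
import numpy as np

PATCH = 32  # patch side length, matching the paper's 32x32 random patches

def random_patches(images, n_patches, rng):
    """Sample n_patches random 32x32 patches from a stack of images."""
    patches = []
    for _ in range(n_patches):
        img = images[rng.integers(len(images))]
        r = rng.integers(img.shape[0] - PATCH + 1)
        c = rng.integers(img.shape[1] - PATCH + 1)
        patches.append(img[r:r + PATCH, c:c + PATCH].ravel())
    return np.stack(patches)

def pca_basis(X, k):
    """Return the patch mean and the top-k principal components of X."""
    mean = X.mean(axis=0)
    # SVD of the centered data yields the eigenvectors of the covariance.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project_region(img, top_left, mean, basis):
    """Encode a 32x32 region (e.g. an eye or the mouth) by its PCA coefficients."""
    r, c = top_left
    patch = img[r:r + PATCH, c:c + PATCH].ravel()
    return basis @ (patch - mean)

# Illustrative usage: fit the basis on random patches, then encode three
# face regions; the concatenated coefficients would be the input vector
# fed to the classifier network.
rng = np.random.default_rng(0)
faces = rng.random((10, 64, 64))        # stand-in for the face database
mean, basis = pca_basis(random_patches(faces, 500, rng), k=15)
regions = [(8, 8), (8, 24), (30, 16)]   # hypothetical left eye, right eye, mouth
code = np.concatenate([project_region(faces[0], tl, mean, basis) for tl in regions])
print(code.shape)  # (45,): 3 regions x 15 components each
```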
Cite
Text
Padgett and Cottrell. "Representing Face Images for Emotion Classification." Neural Information Processing Systems, 1996.
Markdown
[Padgett and Cottrell. "Representing Face Images for Emotion Classification." Neural Information Processing Systems, 1996.](https://mlanthology.org/neurips/1996/padgett1996neurips-representing/)
BibTeX
@inproceedings{padgett1996neurips-representing,
title = {{Representing Face Images for Emotion Classification}},
author = {Padgett, Curtis and Cottrell, Garrison W.},
booktitle = {Neural Information Processing Systems},
year = {1996},
pages = {894--900},
url = {https://mlanthology.org/neurips/1996/padgett1996neurips-representing/}
}