Feature super-resolution based Facial Expression Recognition for multi-scale low-resolution images

Fang Nan, Wei Jing, Feng Tian, Jizhong Zhang, Zhenxin Hong, Qinghua Zheng

Research output: Contribution to journal › Article › peer-review

Abstract

Facial Expression Recognition (FER) for low-resolution images of varying scales is an important task in applications that analyze crowd scenes (stations, classrooms, etc.). Because reduced resolution discards discriminative features, correctly classifying low-resolution facial images remains challenging. In this work, we propose FSR-FER, a novel generative adversarial network-based feature-level super-resolution method for robust facial expression recognition, which reduces the risk of privacy leakage because it never reconstructs high-resolution facial images. Specifically, a pre-trained FER model serves as a feature extractor, and a generator network G and a discriminator network D are trained on features extracted from low-resolution images and their high-resolution counterparts. The generator G transforms the features of low-resolution images into more discriminative ones by pushing them closer to the features of the corresponding high-resolution images. To further improve classification performance, we also propose an effective classification-aware loss-reweighting strategy that uses the classification probabilities produced by a fixed FER model to make the network focus on samples that are prone to misclassification. Experimental results on the Real-World Affective Faces (RAF) database and the Static Facial Expressions in the Wild (SFEW) 2.0 dataset demonstrate that a single model achieves satisfactory results across various down-sampling factors and outperforms pipelines that apply image super-resolution and expression recognition as separate stages on low-resolution images.
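To make the described training procedure concrete, the minimal PyTorch sketch below illustrates the feature-level idea: a generator refines low-resolution features toward their high-resolution counterparts under an adversarial loss, while a fixed FER classifier supplies per-sample probabilities for classification-aware loss reweighting. Everything here is an illustrative assumption rather than the authors' published implementation: the feature dimension, the residual generator, the loss weight of 10.0, and the (1 − p_true) weighting are one plausible reading of the strategy summarized in the abstract.

```python
# Illustrative sketch only (assumed architecture and hyperparameters),
# not the published FSR-FER implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, NUM_CLASSES = 512, 7  # assumed feature size and expression classes

class Generator(nn.Module):
    """Maps features of a low-resolution face toward HR-like features."""
    def __init__(self, dim=FEAT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(inplace=True),
            nn.Linear(dim, dim),
        )
    def forward(self, f_lr):
        return f_lr + self.net(f_lr)  # residual refinement (an assumption)

class Discriminator(nn.Module):
    """Scores whether a feature vector came from a real HR image."""
    def __init__(self, dim=FEAT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim // 2), nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(dim // 2, 1),
        )
    def forward(self, f):
        return self.net(f)

def generator_step(G, D, classifier, f_lr, f_hr, labels, opt_g):
    """One G update: adversarial + feature-matching + reweighted CE loss."""
    f_sr = G(f_lr)
    d_out = D(f_sr)
    # Adversarial term: fool D into scoring SR features as "real HR".
    adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    # Pull SR features toward the corresponding HR features.
    match = F.mse_loss(f_sr, f_hr)
    # Classification-aware reweighting: the fixed FER head's probability for
    # the true class decides the weight, so misclassification-prone samples
    # (low p_true) contribute more. The (1 - p_true) form is an assumption.
    logits = classifier(f_sr)
    p_true = logits.softmax(dim=1).gather(1, labels[:, None]).squeeze(1)
    weights = (1.0 - p_true).detach()
    ce = (weights * F.cross_entropy(logits, labels, reduction="none")).mean()
    loss = adv + 10.0 * match + ce  # 10.0 is an assumed balancing weight
    opt_g.zero_grad(); loss.backward(); opt_g.step()
    return loss.item()

# Toy usage with random tensors standing in for extracted features:
G, D = Generator(), Discriminator()
classifier = nn.Linear(FEAT_DIM, NUM_CLASSES)  # placeholder fixed FER head
for p in classifier.parameters():
    p.requires_grad_(False)                    # the FER model stays fixed
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
f_lr, f_hr = torch.randn(8, FEAT_DIM), torch.randn(8, FEAT_DIM)
labels = torch.randint(0, NUM_CLASSES, (8,))
print(generator_step(G, D, classifier, f_lr, f_hr, labels, opt_g))
```

Detaching the weights keeps the reweighting term from feeding gradients back through the probability estimate itself; the discriminator's own update step is omitted for brevity.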
Original language: English
Article number: 107678
Journal: Knowledge-Based Systems
Volume: 236
Publication status: Published - 25 Jan 2022
