Date of Award

2020

Document Type

Dissertation

Degree Name

Ph.D.

Organizational Unit

Daniel Felix Ritchie School of Engineering and Computer Science, Electrical and Computer Engineering

First Advisor

Mohammad H. Mahoor

Second Advisor

Mark Siemens

Third Advisor

Haluk Ogmen

Fourth Advisor

Kimon P. Valavanis

Keywords

Artificial intelligence, Computer vision, Deep neural networks, Facial expression recognition, Machine learning

Abstract

Automated Facial Expression Recognition (FER) has been a topic of study in the fields of computer vision and machine learning for decades. In spite of efforts made to improve the accuracy of FER systems, existing methods are still not generalizable and accurate enough for use in real-world applications. Many traditional methods use hand-crafted (a.k.a. engineered) features to represent facial images. However, these methods often require rigorous hyper-parameter tuning to achieve favorable results.

Recently, Deep Neural Networks (DNNs) have been shown to outperform traditional methods in visual object recognition. DNNs require large amounts of data as well as powerful computing units to train generalizable and robust classification models. Automated FER, especially with images captured in the wild, is even more challenging, since there are only subtle differences between various facial emotions. This dissertation presents my recent efforts in 1) creating a large annotated database of facial expressions; 2) developing novel DNN-based methods for automated recognition of facial expressions described by the two main models of affect, the categorical model and the dimensional model; and 3) developing a robust face detection and emotion recognition system based on our state-of-the-art DNN, trained on our proposed database of facial expressions.

Existing annotated databases of facial expressions in the wild are small and mostly cover discrete emotions (i.e., the categorical model). Annotated facial databases for affective computing in the continuous dimensional model (e.g., valence and arousal) are even scarcer. To address these needs, we developed the largest database of human affect, called AffectNet. For AffectNet, we collected, annotated, and prepared for public distribution a new database of facial emotions in the wild. AffectNet contains more than 1,000,000 facial images gathered from the Internet by querying three major search engines with 1,250 emotion-related keywords in six different languages. About half of the retrieved images were manually annotated for the presence of seven discrete facial expressions and for the intensity of valence and arousal. AffectNet is by far the largest in-the-wild database of facial expression, valence, and arousal, enabling research on automated facial expression recognition in two different emotion models.
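To make the two annotation models concrete, the sketch below shows a hypothetical Python record for a single AffectNet-style sample. The field names and value ranges are illustrative assumptions, not the actual AffectNet file format.

```python
from dataclasses import dataclass

# Hypothetical annotation record illustrating the two emotion models
# AffectNet is labeled with. Field names and ranges are illustrative
# assumptions, not the actual AffectNet distribution format.
@dataclass
class AffectAnnotation:
    image_path: str
    expression: str   # categorical model: one of seven discrete labels
    valence: float    # dimensional model: pleasantness, assumed in [-1.0, 1.0]
    arousal: float    # dimensional model: intensity, assumed in [-1.0, 1.0]

sample = AffectAnnotation("images/000001.jpg", "happy", 0.72, 0.35)
```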

This dissertation also presents three major and novel DNN-based methods for automated facial affect estimation: 1) the 3D Inception-ResNet (3DIR), 2) BReGNet, and 3) BReG-NeXt architectures. These methods modify the residual unit proposed in the original ResNets with different operations. Comprehensive experiments are conducted to evaluate the performance and efficiency of each of the proposed methods on AffectNet and a few other facial expression databases. Our final proposed method, BReG-NeXt, achieves state-of-the-art results in predicting both the dimensional and categorical models of affect with significantly fewer training parameters and fewer FLOPs. Additionally, a robust face detection network is developed based on the BReG-NeXt architecture, which leverages AffectNet's diverse training data and BReG-NeXt's efficient feature extraction capabilities.
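As background for these modifications, the following is a minimal PyTorch sketch of the standard residual unit from the original ResNets (y = F(x) + x) that the three architectures build on. The specific replacement operations used in 3DIR, BReGNet, and BReG-NeXt are detailed in the dissertation itself and are not reproduced here, so this sketch keeps the plain identity shortcut.

```python
import torch.nn as nn

class ResidualUnit(nn.Module):
    """Standard ResNet residual unit: y = ReLU(F(x) + x).

    3DIR, BReGNet, and BReG-NeXt replace parts of this formulation
    with different operations; this sketch shows only the original
    identity-shortcut baseline they modify.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = self.relu(self.bn1(self.conv1(x)))  # F(x), first stage
        residual = self.bn2(self.conv2(residual))      # F(x), second stage
        return self.relu(residual + x)                 # identity shortcut
```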

Publication Statement

Copyright is held by the author. User is responsible for all copyright compliance.

Rights Holder

Behzad Hasani

Provenance

Received from ProQuest

File Format

application/pdf

Language

en

File Size

158 p.

Discipline

Computer science, Artificial intelligence


