Date of Award
1-1-2018
Document Type
Master's Thesis
Degree Name
M.S.
Organizational Unit
Daniel Felix Ritchie School of Engineering and Computer Science, Computer Science
First Advisor
Mohammad Mahoor, Ph.D.
Second Advisor
Timothy Sweeny
Third Advisor
Matthew Rutherford
Keywords
Adaptive human robot interaction, Autism spectrum disorder, Facial expression recognition, Social assistant robots
Abstract
Children with Autism Spectrum Disorder (ASD) have limited ability to recognize non-verbal elements of social interactions, such as facial expressions [1]. They also show deficits in imitating facial expressions in social situations. In this Master's thesis, we study the ability of children with ASD to recognize facial expressions and to imitate those expressions using a rear-projected expressive humanoid robot called Ryan. Recent studies show that social robots such as Ryan have great potential for autism therapy. We designed and developed three studies: the first two evaluate the ability of children with ASD to recognize facial expressions presented to them through different methods (i.e., robot versus video) and in different contexts, and the third uses Reinforcement Learning (RL) to determine the effect of various methods on the facial expression imitation performance of children with ASD.
In the first study, we compared the facial expression recognition ability of children with ASD with that of Typically Developing (TD) children using Ryan. Overall, the results did not show a significant difference between the performance of the ASD and TD groups in expression recognition. The study revealed a significant effect of increasing the expression intensity level on expression recognition accuracy. It also revealed that both groups performed significantly worse in recognizing the fear and disgust expressions.
The second study focused on the effect of context on the facial expression recognition ability of children with ASD compared to their TD peers. The results showed higher overall performance by TD children compared to the ASD group, although the difference between groups was not significant. Fear (in the TD group) and sadness (in the ASD group) were recognized with the lowest accuracy relative to the average accuracy of the other expressions. We did, however, find a significant effect of background category in both groups: recognition accuracy was significantly higher for negative backgrounds than for positive backgrounds at the 20% intensity level for the fear and sadness expressions.
In the third study, we designed an active learning method using an RL algorithm to identify and adapt to individual differences in expression imitation under different conditions. We implemented the RL algorithm first to identify the most effective imitation method based on each individual's performance and preference, and second to make online adaptations and adjustments based on that method for each individual. The results showed that the active learning method could successfully identify each participant's strengths and preferences and adjust the session accordingly. The results also showed that participants responded differently to each method, both overall and for each expression.
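The adaptive selection described above can be viewed as a multi-armed bandit problem: each imitation method is an arm, and the observed imitation performance serves as the reward. The sketch below is a minimal, hypothetical illustration of this idea using an epsilon-greedy strategy; the method names, reward scale, and epsilon-greedy choice are illustrative assumptions, not the thesis's actual implementation.

```python
import random

# Assumed set of imitation-method "arms"; names are illustrative only.
METHODS = ["robot_demo", "video_demo", "verbal_prompt"]

class MethodSelector:
    """Epsilon-greedy bandit that picks an imitation method per trial
    and updates a running mean of each method's observed reward."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {m: 0 for m in METHODS}
        self.values = {m: 0.0 for m in METHODS}  # running mean reward

    def choose(self):
        # Explore a random method with probability epsilon,
        # otherwise exploit the method with the best estimate so far.
        if random.random() < self.epsilon:
            return random.choice(METHODS)
        return max(METHODS, key=lambda m: self.values[m])

    def update(self, method, reward):
        # Incremental mean update after each imitation trial,
        # e.g. reward in [0, 1] from a scored imitation attempt.
        self.counts[method] += 1
        n = self.counts[method]
        self.values[method] += (reward - self.values[method]) / n
```

In a session loop, `choose()` would select the condition for the next trial and `update()` would fold in the participant's scored imitation, so the selection gradually concentrates on whichever method works best for that individual.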
Publication Statement
Copyright is held by the author. User is responsible for all copyright compliance.
Rights Holder
Farzaneh Askari
Provenance
Received from ProQuest
File Format
application/pdf
Language
en
File Size
74 p.
Recommended Citation
Askari, Farzaneh, "Studying Facial Expression Recognition and Imitation Ability of Children with Autism Spectrum Disorder in Interaction with a Social Robot" (2018). Electronic Theses and Dissertations. 1521.
https://digitalcommons.du.edu/etd/1521
Copyright date
2018
Discipline
Computer engineering
Included in
Behavior and Behavior Mechanisms Commons, Early Childhood Education Commons, Educational Technology Commons, Robotics Commons