FACIAL EMOTION DETECTION USING PRE-TRAINED ADVANCED CONVOLUTIONAL NEURAL NETWORKS AND THE IMPROVED FER-2013 DATASET
Abstract
Emotion recognition from human facial images is an important research direction for human-computer interaction, security systems, mental health monitoring, and intelligent systems. In human-computer interaction, and especially in the development of humanoid robots, a key capability of such a robot is the ability to communicate with humans while sensing their emotions. Developing emotion recognition systems remains a significant challenge in computer vision and deep learning. In this study, we propose an effective approach to recognizing emotions from facial images using well-known pre-trained convolutional neural networks trained on an improved version of the FER-2013 dataset. The effectiveness of advanced convolutional neural networks for the task of emotional state recognition is examined; in particular, the performance of the popular pre-trained architectures ResNet-50, VGGNet-16, DenseNet-121, and EfficientNet-B0 on facial expression recognition is analyzed. The study used an improved and augmented version of the FER-2013 dataset: during data processing, class imbalance across facial expression categories, low-quality images, and incorrectly labeled images were detected and corrected. In addition, data augmentation techniques were applied to reduce overfitting. The models were evaluated using criteria such as accuracy and the loss function.
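To make the transfer-learning setup described above concrete, the following is a minimal sketch (not the authors' exact pipeline) of fine-tuning one of the listed backbones, ResNet-50, on FER-2013-style facial expression images with data augmentation. It assumes a TensorFlow/Keras environment, seven emotion classes, 224x224 RGB face crops, and illustrative hyperparameters; none of these specifics are reported in the study itself.

```python
# Illustrative sketch: pre-trained ResNet-50 fine-tuned for facial expression
# recognition with data augmentation to reduce overfitting.
# Framework choice (TensorFlow/Keras), image size, and hyperparameters are
# assumptions for demonstration, not values from the study.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

NUM_CLASSES = 7          # FER-2013 defines 7 emotion categories
IMG_SIZE = (224, 224)    # ResNet-50's standard input resolution

# Data augmentation: random flips, rotations, and zooms applied during training.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# Pre-trained ResNet-50 backbone without its ImageNet classification head.
backbone = ResNet50(include_top=False, weights="imagenet",
                    input_shape=IMG_SIZE + (3,), pooling="avg")
backbone.trainable = False  # freeze for the initial transfer-learning stage

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = tf.keras.applications.resnet50.preprocess_input(x)
x = backbone(x, training=False)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)

# Accuracy and the cross-entropy loss are tracked, matching the evaluation
# criteria mentioned in the abstract.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds are assumed tf.data.Dataset objects of (image, one-hot label)
# pairs, e.g. built with tf.keras.utils.image_dataset_from_directory(...):
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```

The same pattern applies to the other backbones (VGGNet-16, DenseNet-121, EfficientNet-B0) by swapping the backbone constructor and its matching `preprocess_input` function.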