Emotion recognition using facial expression and electroencephalography features with support vector machine classifier
- Publication Type: Thesis
- Issue Date: 2017
This item is open access.
Recognizing emotions from facial expressions and from electroencephalography (EEG) signals is a complicated task, and substantial issues must be solved to achieve higher classification performance: facial expression recognition has to deal with feature design, feature dimensionality, and classification processing time, while EEG emotion recognition is concerned with feature design, the number of channels and sub-band frequencies, and the non-stationary behaviour of EEG signals. This thesis addresses these challenges.
First, a feature for facial expression recognition is proposed that combines the Viola-Jones algorithm with an improved Histogram of Oriented Gradients (HOG) descriptor, termed Edge-HOG or E–HOG, which has the advantage of being insensitive to lighting conditions. The issues of dimensionality and classification processing time were resolved using a combination of Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which successfully reduced both the feature dimension and the classification processing time, resulting in a new low-dimensional feature called Reduced E–HOG (RED E–HOG).
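The abstract does not give implementation details, so the following is only a minimal sketch of a comparable detection-description-reduction pipeline, assuming OpenCV's Viola-Jones cascade, scikit-image's standard HOG descriptor (not the improved E–HOG), and scikit-learn's PCA and LDA; all parameter values are illustrative.

```python
# Sketch: Viola-Jones face detection + standard HOG + PCA/LDA reduction.
# This is not the thesis's E-HOG; parameters below are assumptions.
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_hog_feature(gray_image, face_size=(64, 64)):
    """Detect the largest face (Viola-Jones) and describe it with HOG."""
    faces = face_detector.detectMultiScale(gray_image, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    face = cv2.resize(gray_image[y:y + h, x:x + w], face_size)
    return hog(face, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def reduce_features(X, y, pca_components=50):
    """PCA removes redundancy, then LDA projects to a low-dimensional,
    class-discriminative space (at most n_classes - 1 dimensions)."""
    pca = PCA(n_components=pca_components).fit(X)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X), y)
    return lda.transform(pca.transform(X)), pca, lda
```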
In the case of EEG emotion recognition, a method was developed to recognize 4 discrete emotions on the arousal-valence dimensional plane using wavelet energy and wavelet entropy features. The effects of EEG channel and sub-band selection were also addressed, reducing the number of channels from 32 to 18 and the number of sub-bands from 5 to 3.
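A minimal sketch of wavelet energy and wavelet entropy extraction for one EEG trial is given below, assuming PyWavelets with a 'db4' mother wavelet and a 4-level decomposition; the wavelet family, decomposition level, and channel layout are assumptions, not the thesis's exact settings.

```python
# Sketch: wavelet energy and (Shannon) wavelet entropy per EEG channel.
import numpy as np
import pywt

def wavelet_energy_entropy(signal, wavelet="db4", level=4):
    # One detail-coefficient vector per level plus the final approximation.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()          # relative energy per sub-band
    entropy = -np.sum(p * np.log(p + 1e-12))
    return energies, entropy

def trial_features(trial):
    """Features for one trial with shape (channels x samples),
    e.g. the 18 selected channels."""
    feats = []
    for channel in trial:
        energies, entropy = wavelet_energy_entropy(channel)
        feats.extend(energies)
        feats.append(entropy)
    return np.array(feats)
```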
To deal with the non-stationary behaviour of EEG signals, an Optimal Window Selection (OWS) method was proposed as feature-agnostic pre-processing. The main objective of OWS is to segment the signal with windows of varying length; it was applied to 7 different features to improve the classification of 4 dimensional-plane emotions, namely arousal, valence, dominance, and liking, each distinguished as a high or low state. The resulting improvement in accuracy makes OWS a potential solution for handling the non-stationary behaviour of EEG signals in emotion recognition. The OWS results also indicate that emotional content in EEG may be appropriately localized within time segments of 4–12 seconds.
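The abstract does not state the exact selection criterion, so the sketch below only illustrates the general idea, under the assumption that OWS keeps the window length that maximizes cross-validated accuracy over a set of candidate lengths (here 2–12 s at an assumed 128 Hz sampling rate); `feature_fn` stands for any of the window-level features.

```python
# Sketch of the idea behind Optimal Window Selection (OWS): segment each
# trial with several candidate window lengths and keep the length giving
# the best cross-validated accuracy. Candidates, criterion, and feature_fn
# are illustrative assumptions, not the thesis's exact procedure.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def segment(trial, window_samples):
    """Split a (channels x samples) trial into non-overlapping windows."""
    n = trial.shape[1] // window_samples
    return [trial[:, i * window_samples:(i + 1) * window_samples]
            for i in range(n)]

def optimal_window(trials, labels, feature_fn, fs=128,
                   candidate_seconds=(2, 4, 6, 8, 10, 12)):
    best_len, best_acc = None, -np.inf
    for seconds in candidate_seconds:
        X, y = [], []
        for trial, label in zip(trials, labels):
            for win in segment(trial, seconds * fs):
                X.append(feature_fn(win))
                y.append(label)
        acc = cross_val_score(SVC(kernel="rbf"), np.array(X),
                              np.array(y), cv=5).mean()
        if acc > best_acc:
            best_len, best_acc = seconds, acc
    return best_len, best_acc
```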
In addition, a feature formed by concatenating Wavelet Entropy with the average of the Wavelet Approximation Coefficients was developed for EEG emotion recognition. An SVM classifier trained with this feature consistently provides higher classification results than classifiers trained with other features such as the simple average, the Fast Fourier Transform (FFT), and Wavelet Energy.
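A minimal sketch of this concatenated feature follows, assuming a per-channel wavelet entropy joined with the mean of the final approximation coefficients from a PyWavelets decomposition; the wavelet family and level remain illustrative assumptions.

```python
# Sketch: per-channel wavelet entropy concatenated with the mean
# approximation coefficient, for a (channels x samples) trial.
import numpy as np
import pywt

def entropy_plus_avg_approx(trial, wavelet="db4", level=4):
    feats = []
    for channel in trial:
        coeffs = pywt.wavedec(channel, wavelet, level=level)
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        p = energies / energies.sum()
        entropy = -np.sum(p * np.log(p + 1e-12))
        feats.extend([entropy, coeffs[0].mean()])  # coeffs[0]: approximation
    return np.array(feats)
```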
In all the experiments, classification was performed using an optimized SVM with a Radial Basis Function (RBF) kernel. The RBF kernel parameters were optimized using a particle swarm ensemble clustering algorithm called Ensemble Rapid Centroid Estimation (ERCE), which estimates the number of clusters directly from the data using swarm intelligence and ensemble aggregation. The SVM was then trained with the optimized RBF kernel parameters using the Sequential Minimal Optimization (SMO) algorithm.
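A minimal sketch of this final classification step, assuming the RBF parameters C and gamma have already been obtained (in the thesis they are derived via ERCE, which is not reproduced here); scikit-learn's SVC is backed by libsvm, an SMO-type solver.

```python
# Sketch: train and cross-validate an RBF-kernel SVM with pre-optimized
# parameters; the C and gamma values below are placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_rbf_svm(X, y, C=10.0, gamma=0.01):
    model = make_pipeline(StandardScaler(),
                          SVC(kernel="rbf", C=C, gamma=gamma))
    scores = cross_val_score(model, X, y, cv=5)  # held-out accuracy estimate
    model.fit(X, y)                              # final fit on all data
    return model, scores.mean()
```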