Catalogue Search | MBRL
Explore the vast range of titles available.
650 result(s) for "EEG emotion analysis"
Emotion dysregulation as a marker in adolescent mental health with EEG-based prediction model
2025
This study comprehensively tackles the critical challenge of understanding and mitigating adolescent violent crime by integrating advanced insights from psychological and environmental research with cutting-edge digital public health tools. Current methods for examining adolescent aggression often fail to provide a holistic framework that effectively accounts for the intricate interplay of emotional dysregulation, environmental influences, and relational dynamics, thereby limiting the scope and efficacy of intervention strategies. In response to these limitations, we propose a comprehensive approach that leverages EEG-based emotion analysis in combination with a novel Psycho-Social Risk Interaction Model (PRIM), designed to uncover latent variables and dynamic interactions underlying violent behavior in adolescents. PRIM is a robust framework that encapsulates psychological vulnerabilities such as impulsivity and aggression, environmental stressors like socioeconomic pressures, and relational influences within peer and family networks, offering a nuanced understanding of the multifaceted factors contributing to violent tendencies. Building upon the PRIM framework, we introduce the Targeted Intervention and Risk Reduction Strategy (TIRRS), an innovative system that translates theoretical insights into actionable, personalized, and adaptive interventions. TIRRS dynamically modulates the interaction of psychological, environmental, and relational factors by employing real-time monitoring tools and resource optimization frameworks, ensuring that interventions are both responsive and impactful. Experimental results demonstrate that our approach improves the prediction accuracy of violent tendencies to 87.5%, representing a 21.3% increase compared to traditional statistical models (which averaged 66.2% accuracy). Moreover, the intervention success rate improved by 18.7% relative to standard counseling-based approaches. 
These outcomes enable the development of cost-effective, scalable, and sustainable prevention strategies.
Journal Article
Comprehensive Analysis of Feature Extraction Methods for Emotion Recognition from Multichannel EEG Recordings
2023
Advances in signal processing and machine learning have expedited electroencephalogram (EEG)-based emotion recognition research, and numerous EEG signal features have been investigated to detect or characterize human emotions. However, most studies in this area have used relatively small monocentric data and focused on a limited range of EEG features, making it difficult to compare the utility of different sets of EEG features for emotion recognition. This study addressed this gap by comparing the classification accuracy (performance) of a comprehensive range of EEG feature sets for identifying emotional states, in terms of valence and arousal. The classification accuracies of five EEG feature sets were investigated: statistical features, fractal dimension (FD), Hjorth parameters, higher order spectra (HOS), and features derived using wavelet analysis. Performance was evaluated using two classifier methods, support vector machine (SVM) and classification and regression tree (CART), across five independent and publicly available datasets linking EEG to emotional states: MAHNOB-HCI, DEAP, SEED, AMIGOS, and DREAMER. The FD-CART feature-classification method attained the best mean classification accuracy for valence (85.06%) and arousal (84.55%) across the five datasets. The stability of these findings across the five datasets also indicates that FD features derived from EEG data are reliable for emotion recognition. The results may enable an online feature extraction framework and, in turn, a real-time EEG-based emotion recognition system.
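The abstract does not spell out the fractal-dimension features that performed best, but a common formulation (Higuchi's algorithm) can be sketched in a few lines of NumPy as a rough illustration. The function name and the scale range `k_max` are choices made for this sketch, not taken from the paper:

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi fractal dimension: for each scale k, average the
    normalised curve length over the k decimated sub-series, then take
    the slope of log L(k) against log(1/k)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, k_max + 1)
    lengths = []
    for k in ks:
        lk = []
        for m in range(k):
            sub = x[m::k]
            if len(sub) < 2:
                continue
            inc = np.abs(np.diff(sub)).sum()
            # Higuchi's normalisation for the decimated curve length
            lk.append(inc * (n - 1) / ((len(sub) - 1) * k * k))
        lengths.append(np.mean(lk))
    slope = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)[0]
    return slope

fd_line = higuchi_fd(np.arange(500.0))            # straight line: FD = 1
rng = np.random.default_rng(0)
fd_noise = higuchi_fd(rng.standard_normal(2000))  # white noise: FD near 2
```

A straight line has fractal dimension 1 and white noise approaches 2, which gives a quick sanity check on the estimator.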
Journal Article
Analysis of frequency domain features for the classification of evoked emotions using EEG signals
by Phadikar, Souvik; Choudhury, Nitin; Adhikari, Samannaya
in Activities of daily living; Adult; algorithms
2025
Emotion is a natural, instinctive state of mind that greatly influences human physiological activities and daily-life decisions. Electroencephalogram (EEG) signals generated by the central nervous system are very useful for emotion recognition and classification. In this study, EEG signals are analyzed with variational mode decomposition (VMD) to extract frequency-domain features and recognize visual stimuli-based evoked emotions (happy, sad, fear). After cleaning the EEG signals of artifacts, VMD is employed to decompose each signal into its intrinsic mode functions (IMFs). A sliding-window approach is adopted to calculate the power distributions in each of the predefined frequency bands. The results reveal that extracting frequency-domain features using a sliding window of 3 s significantly enhances the efficiency of analyzing induced emotions in subjects. The random forest model shows promising results in classifying the various emotions, achieving an accuracy of 99.57% for validation and 99.36% for testing. Moreover, the fifth IMF is observed to have a strong relationship with emotion elicited by visual stimuli. In addition, the features of the trained model are analyzed with Shapley additive explanations.
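The VMD step itself needs a dedicated solver, but the sliding-window band-power computation described above can be sketched directly. The band edges and the 1 s step below are conventional assumptions, not taken from the paper; only the 3 s window length comes from the abstract:

```python
import numpy as np

# canonical EEG band edges in Hz (an assumption; the paper's exact edges are not given)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def sliding_band_power(signal, fs, win_s=3.0, step_s=1.0):
    """Per-band power inside each 3-s sliding window, computed from the
    periodogram (squared FFT magnitude) of the windowed segment."""
    win, step = int(win_s * fs), int(step_s * fs)
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    out = []
    for start in range(0, len(signal) - win + 1, step):
        psd = np.abs(np.fft.rfft(signal[start:start + win])) ** 2 / win
        out.append({name: psd[(freqs >= lo) & (freqs < hi)].sum()
                    for name, (lo, hi) in BANDS.items()})
    return out

# a 10 Hz sinusoid (alpha band) sampled at 128 Hz for 10 s
fs = 128
t = np.arange(0, 10, 1 / fs)
powers = sliding_band_power(np.sin(2 * np.pi * 10 * t), fs)
```

For a pure 10 Hz input, every window's alpha-band power dominates the other bands, as expected.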
Journal Article
Subject-independent EEG emotion recognition with hybrid spatio-temporal GRU-Conv architecture
by Wang, Yanjiang; Xu, Guixun; Guo, Wenhui
in Arousal; Artificial neural networks; Deep learning
2023
Recently, various deep learning frameworks have shown excellent performance in decoding electroencephalogram (EEG) signals, especially in human emotion recognition. However, most of them focus only on temporal features and ignore features along the spatial dimension. The traditional gated recurrent unit (GRU) model performs well in processing time-series data, while a convolutional neural network (CNN) can obtain spatial characteristics from input data. This paper therefore introduces a hybrid GRU-CNN deep learning framework named GRU-Conv to fully leverage the advantages of both. Unlike most previous GRU architectures, however, we retain the output information of all GRU units, so the GRU-Conv model can extract crucial spatio-temporal features from EEG data. More specifically, the proposed model acquires the multi-dimensional features of multiple units after temporal processing in the GRU and then uses the CNN to extract spatial information from these temporal features. In this way, EEG signals with different characteristics can be classified more accurately. Finally, subject-independent experiments show that our model performs well on the SEED and DEAP databases: the average accuracy on the former is 87.04%, and the mean accuracy on the latter is 70.07% for arousal and 67.36% for valence.
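As a toy illustration of the core idea (keeping every GRU timestep's output to form a spatio-temporal feature map for a downstream CNN, rather than only the final state), a minimal NumPy GRU might look like the following. All names, weights, and dimensions are invented for the sketch; the CNN stage and training are omitted:

```python
import numpy as np

def gru_all_outputs(x, Wz, Uz, Wr, Ur, Wh, Uh):
    """Run a GRU over a (T, D) sequence and keep EVERY timestep's
    hidden state, yielding a (T, H) spatio-temporal feature map for a
    downstream CNN, instead of only the final state."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    T = x.shape[0]
    H = Wz.shape[0]
    h = np.zeros(H)
    outs = np.empty((T, H))
    for t in range(T):
        z = sigmoid(Wz @ x[t] + Uz @ h)            # update gate
        r = sigmoid(Wr @ x[t] + Ur @ h)            # reset gate
        cand = np.tanh(Wh @ x[t] + Uh @ (r * h))   # candidate state
        h = (1 - z) * h + z * cand                 # gated interpolation
        outs[t] = h
    return outs

rng = np.random.default_rng(0)
T, D, H = 20, 8, 16              # toy sequence length / input / hidden sizes
x = rng.standard_normal((T, D))
Wz, Wr, Wh = (0.1 * rng.standard_normal((H, D)) for _ in range(3))
Uz, Ur, Uh = (0.1 * rng.standard_normal((H, H)) for _ in range(3))
feature_map = gru_all_outputs(x, Wz, Uz, Wr, Ur, Wh, Uh)
```

The resulting (T, H) map is what a convolutional stage could then scan for spatial structure.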
Journal Article
Emotion Recognition from EEG Signals Using Multidimensional Information in EMD Domain
2017
This paper introduces a method for feature extraction and emotion recognition based on empirical mode decomposition (EMD). Using EMD, EEG signals are automatically decomposed into intrinsic mode functions (IMFs). Multidimensional information from the IMFs is utilized as features: the first difference of the time series, the first difference of the phase, and the normalized energy. The performance of the proposed method is verified on a publicly available emotional database. The results show that the three features are effective for emotion recognition. The role of each IMF is examined, and we find that the high-frequency component IMF1 has a significant effect on detecting different emotional states. The informative electrodes based on the EMD strategy are also analyzed. In addition, the classification accuracy of the proposed method is compared with several classical techniques, including fractal dimension (FD), sample entropy, differential entropy, and discrete wavelet transform (DWT). Experimental results on the DEAP dataset demonstrate that our method can improve emotion recognition performance.
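The three per-IMF features the abstract names can be sketched as follows. The EMD decomposition itself is omitted (a clean sine stands in for one IMF), the instantaneous phase is taken from an FFT-based analytic signal, and the exact normalisations used in the paper may differ:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (equivalent to a Hilbert transform)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[1:n // 2] = 2.0
        h[n // 2] = 1.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def imf_features(imf):
    """The three per-IMF features: mean absolute first difference of
    the series, mean absolute first difference of the instantaneous
    phase, and length-normalised energy."""
    d_series = np.mean(np.abs(np.diff(imf)))
    phase = np.unwrap(np.angle(analytic_signal(imf)))
    d_phase = np.mean(np.abs(np.diff(phase)))
    energy = np.sum(imf ** 2) / len(imf)
    return d_series, d_phase, energy

# a clean 5 Hz component sampled at 100 Hz stands in for one IMF
imf = np.sin(2 * np.pi * 5 * np.arange(0, 2, 0.01))
d_series, d_phase, energy = imf_features(imf)
```

For a unit sine over whole cycles the normalised energy is 0.5 and the mean phase increment equals 2π times the frequency over the sampling rate, which checks the feature definitions.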
Journal Article
Development of a Measurement Procedure for Emotional States Detection Based on Single-Channel Ear-EEG: A Proof-of-Concept Study
by Arnesano, Marco; Cosoli, Gloria; Pollastro, Andrea
in Adult; Affect (Psychology); Arousal - physiology
2026
Real-time emotion monitoring is increasingly relevant in healthcare, automotive, and workplace applications, where adaptive systems can enhance user experience and well-being. This study investigates the feasibility of classifying emotions along the valence–arousal dimensions of the Circumplex Model of Affect using EEG signals acquired from a single mastoid channel positioned near the ear. Twenty-four participants viewed emotion-eliciting videos and self-reported their affective states using the Self-Assessment Manikin. EEG data were recorded with an OpenBCI Cyton board and both spectral and temporal features (including power in multiple frequency bands and entropy-based complexity measures) were extracted from the single ear-channel. A dual analytical framework was adopted: classical statistical analyses (ANOVA, Mann–Whitney U) and artificial neural networks combined with explainable AI methods (Gradient × Input, Integrated Gradients) were used to identify features associated with valence and arousal. Results confirmed the physiological validity of single-channel ear-EEG, and showed that absolute β- and γ-band power, spectral ratios, and entropy-based metrics consistently contributed to emotion classification. Overall, the findings demonstrate that reliable and interpretable affective information can be extracted from minimal EEG configurations, supporting their potential for wearable, real-world emotion monitoring. Nonetheless, practical considerations—such as long-term comfort, stability, and wearability of ear-EEG devices—remain important challenges and motivate future research on sustained use in naturalistic environments.
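Of the features mentioned, band power and an entropy-based complexity measure are easy to illustrate from a single channel. The specific normalisation and the beta/alpha ratio below are assumptions made for this sketch, not the study's exact pipeline:

```python
import numpy as np

def spectral_features(x, fs):
    """Single-channel features: normalised spectral entropy (an
    entropy-based complexity measure) and a beta/alpha band-power
    ratio, both from the signal's periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    psd, freqs = psd[freqs > 0], freqs[freqs > 0]   # drop the DC bin
    p = psd / psd.sum()
    entropy = -np.sum(p * np.log2(p + 1e-12)) / np.log2(len(p))  # ~[0, 1]
    band = lambda lo, hi: p[(freqs >= lo) & (freqs < hi)].sum()
    ratio = band(13, 30) / (band(8, 13) + 1e-12)
    return entropy, ratio

fs = 128
t = np.arange(0, 4, 1 / fs)
e_sine, r_sine = spectral_features(np.sin(2 * np.pi * 10 * t), fs)  # narrowband
rng = np.random.default_rng(0)
e_noise, _ = spectral_features(rng.standard_normal(len(t)), fs)     # broadband
```

A narrowband signal yields low spectral entropy and broadband noise yields high entropy, which is what makes the measure useful as a complexity feature.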
Journal Article
Multimodal fusion framework: A multiresolution approach for emotion classification and recognition from physiological signals
2014
The purpose of this paper is twofold: (i) to investigate emotion representation models and determine whether a model with a minimum number of continuous dimensions is possible, and (ii) to recognize and predict emotion from measured physiological signals using a multiresolution approach. The multimodal physiological signals are electroencephalogram (EEG) (32 channels) and peripheral signals (8 channels: galvanic skin response (GSR), blood volume pressure, respiration pattern, skin temperature, electromyogram (EMG) and electrooculogram (EOG)), as given in the DEAP database. We discuss theories of emotion modeling based on (i) basic emotions, (ii) the cognitive appraisal and physiological response approach and (iii) the dimensional approach, and propose a three-continuous-dimensional representation model for emotions. A clustering experiment on the given valence, arousal and dominance values of various emotions was done to validate the proposed model. A novel approach for multimodal fusion of information from a large number of channels to classify and predict emotions is also proposed. The Discrete Wavelet Transform, a classical transform for multiresolution signal analysis, has been used in this study. Experiments were performed to classify different emotions using four classifiers. The average accuracies are 81.45%, 74.37%, 57.74% and 75.94% for the SVM, MLP, KNN and MMC classifiers, respectively. The best accuracy is for 'Depressing', with 85.46% using SVM. The 32 EEG channels are considered as independent modes, and features from each channel are considered with equal importance; some of the channel data may be correlated, but they may also contain supplementary information. In comparison with results reported by others, the high accuracy of 85% with 13 emotions and 32 subjects clearly demonstrates the potential of our multimodal fusion approach.
•A system for emotion recognition from physiological signals has been proposed.
•A novel multiresolution approach is being used for multimodal fusion.
•Promising results have been achieved in this category of emotion recognition system.
•Reviewed multimodal fusion approaches/strategies along with our proposed approach.
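The multiresolution analysis at the heart of the fusion approach can be illustrated with the simplest wavelet, a multilevel Haar DWT. The abstract does not state which wavelet family was used, and the full multimodal fusion across 40 channels is omitted; this is a one-channel sketch only:

```python
import numpy as np

def haar_dwt(x, levels=3):
    """Multilevel Haar DWT: at each level, split the current signal
    into an orthonormal coarse approximation and detail coefficients,
    so each level isolates one resolution (frequency sub-band)."""
    x = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        if len(x) % 2:                         # pad odd lengths
            x = np.append(x, x[-1])
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        details.append(detail)
        x = approx
    return x, details

sig = np.sin(2 * np.pi * 5 * np.arange(0, 1, 1 / 256))   # one channel, 1 s
approx, details = haar_dwt(sig, levels=3)
```

Because the Haar transform is orthonormal, the total energy of the signal is preserved exactly across the approximation and all detail sub-bands, which is a convenient property when band-wise features are fused later.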
Journal Article
Neuroimaging cognitive reappraisal in clinical populations to define neural targets for enhancing emotion regulation. A systematic review
by Parvaz, Muhammad A.; Goldstein, Rita Z.; Zilverstand, Anna
in Addictions; Affective Symptoms - complications; Affective Symptoms - diagnostic imaging
2017
Reduced capacity to cognitively regulate emotional responses is a common impairment across major neuropsychiatric disorders. Brain systems supporting one such strategy, cognitive reappraisal of emotion, have been investigated extensively in the healthy population, a research focus that has led to influential meta-analyses and literature reviews. However, the emerging literature on neural substrates underlying cognitive reappraisal in clinical populations is yet to be systematically reviewed. Therefore, the goal of the current review was to summarize the literature on cognitive reappraisal and highlight common and distinct neural correlates of impaired emotion regulation in clinical populations. We performed a two-stage systematic literature search, selecting 32 studies on cognitive reappraisal in individuals with mood disorders (n=12), anxiety disorders (n=14), addiction (n=2), schizophrenia (n=2), and personality disorders (n=5). Comparing findings across these disorders allowed us to determine underlying mechanisms that were either disorder-specific or common across disorders. Results showed that across clinical populations, individuals consistently demonstrated reduced recruitment of the ventrolateral prefrontal cortex (vlPFC) and dorsolateral prefrontal cortex (dlPFC) during downregulation of negative emotion, indicating that there may be a core deficit in selection, manipulation and inhibition during reappraisal. Further, in individuals with mood disorders, amygdala responses were enhanced during downregulation of emotion, suggesting hyperactive bottom-up responses or reduced modulatory capacity. In individuals with anxiety disorders, however, emotion regulation revealed reduced activity in the dorsal anterior cingulate cortex (dACC) and inferior/superior parietal cortex, possibly indicating a deficit in allocation of attention. The reviewed studies thus provide evidence for both disorder-specific and common deficits across clinical populations. 
These findings highlight the role of distinct neural substrates as targets for developing/assessing novel therapeutic approaches that are geared towards cognitive regulation of emotion, as well as the importance of transdiagnostic research to identify both disorder specific and core mechanisms.
•A systematic review of 32 neuroimaging studies on cognitive reappraisal in patients.
•Lower vlPFC/dlPFC activation is a core deficit in downregulation across patients.
•Amygdala hyperactivity is a specific deficit in downregulation in mood disorders.
•dACC/parietal hypoactivity is a specific deficit in downregulation in anxiety disorders.
•Implications: neural targets for therapeutic interventions need to be tailored.
Journal Article
Similar brains blend emotion in similar ways: Neural representations of individual difference in emotion profiles
2022
Our daily emotional experience is a complex construct that usually involves multiple emotions blended in a context-dependent manner. However, the co-occurring and context-dependent nature of human emotions was understated in previous studies when addressing the individual difference in emotional experiences. The present study proposed a situated and blended ‘profile’ perspective to characterize individualized emotional experiences. Eighty participants watched a series of emotional videos with their EEG recorded, and the individual differences in their emotion profiles were measured as the vector distances between their multidimensional emotion ratings for these video stimuli. This measure was found to be a reliable descriptor of individualized emotional experiences and could efficiently predict classical emotional complexity indices. More importantly, inter-subject representational analyses revealed that similar emotion profiles were associated with similar delta-band activities over the prefrontal and temporo-parietal regions and similar theta-band activities over the frontal regions. Furthermore, left- and right-lateralized temporo-parietal representations were observed for positive and negative emotion profiles, respectively. Our findings demonstrate the potential of taking a ‘profile’ perspective for understanding individual differences in human emotions.
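The profile-distance measure described (vector distances between subjects' multidimensional emotion ratings) can be sketched directly. The array shapes and rating scale below are invented for illustration; the study's full inter-subject representational analysis goes beyond this:

```python
import numpy as np

def profile_distances(ratings):
    """Pairwise Euclidean distances between subjects' stacked
    multidimensional emotion ratings (subjects x videos x dimensions),
    giving one scalar dissimilarity per subject pair."""
    flat = ratings.reshape(ratings.shape[0], -1)     # subjects x (videos*dims)
    diff = flat[:, None, :] - flat[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

rng = np.random.default_rng(1)
ratings = rng.uniform(1, 9, size=(5, 10, 4))  # 5 subjects, 10 videos, 4 rating dims
dist = profile_distances(ratings)
```

The resulting subject-by-subject distance matrix is the kind of object that can then be compared against neural similarity matrices in an inter-subject representational analysis.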
Journal Article
Exploring EEG Features in Cross-Subject Emotion Recognition
by Zhang, Yazhou; Zhang, Peng; Li, Xiang
in Biomedical engineering; Cognition & reasoning; Correlation analysis
2018
Recognizing cross-subject emotions based on brain imaging data, e.g., EEG, has always been difficult due to the poor generalizability of features across subjects. Thus, systematically exploring the ability of different EEG features to identify emotional information across subjects is crucial. Prior related work has explored this question based on only one or two kinds of features, and different findings and conclusions have been presented. In this work, we aim at a more comprehensive investigation of this question with a wider range of feature types, including 18 kinds of linear and non-linear EEG features. The effectiveness of these features was examined on two publicly accessible datasets, namely, the dataset for emotion analysis using physiological signals (DEAP) and the SJTU emotion EEG dataset (SEED). We adopted the support vector machine (SVM) approach and the "leave-one-subject-out" verification strategy to evaluate recognition performance. Using automatic feature selection methods, the highest mean recognition accuracies of 59.06% (AUC = 0.605) on the DEAP dataset and 83.33% (AUC = 0.904) on the SEED dataset were reached. Furthermore, using manually operated feature selection on the SEED dataset, we explored the importance of different EEG features in cross-subject emotion recognition from multiple perspectives, including different channels, brain regions, rhythms, and feature types. For example, we found that the Hjorth parameter of mobility in the beta rhythm achieved the best mean recognition accuracy compared to the other features. Through a pilot correlation analysis, we further examined the highly correlated features, for a better understanding of the implications hidden in those features that allow for differentiating cross-subject emotions. The results of this paper validate the possibility of exploring robust EEG features in cross-subject emotion recognition.
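The "leave-one-subject-out" protocol is easy to sketch: hold out one subject per fold, train on the rest, and average the held-out accuracies. A nearest-centroid classifier stands in here for the SVM used in the study, and all data shapes are invented:

```python
import numpy as np

def loso_accuracy(features, labels, subjects):
    """Leave-one-subject-out evaluation: train on all but one subject,
    test on the held-out subject, and average accuracy over folds.
    A nearest-centroid rule stands in for the study's SVM."""
    accs = []
    for s in np.unique(subjects):
        train, test = subjects != s, subjects == s
        classes = np.unique(labels[train])
        centroids = np.stack([features[train][labels[train] == c].mean(axis=0)
                              for c in classes])
        dists = np.linalg.norm(features[test][:, None, :] - centroids[None],
                               axis=-1)
        preds = classes[dists.argmin(axis=1)]
        accs.append(np.mean(preds == labels[test]))
    return float(np.mean(accs))

# well-separated synthetic data: 4 subjects, 2 classes, 3 features
rng = np.random.default_rng(0)
subjects = np.repeat(np.arange(4), 20)
labels = np.tile([0, 1], 40)
features = labels[:, None] * 5.0 + rng.normal(0.0, 0.1, size=(80, 3))
acc = loso_accuracy(features, labels, subjects)
```

Because no fold ever sees the held-out subject's data, this protocol measures exactly the cross-subject generalizability the abstract is concerned with.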
Journal Article