Catalogue Search | MBRL
Explore the vast range of titles available.
4,751 result(s) for "emotion classification"
Deep-Learning-Based Multimodal Emotion Classification for Music Videos
by Pandeya, Yagya Raj; Bhattarai, Bhuwan; Lee, Joonwhoan
in channel and filter separable convolution; Datasets; Emotions
2021
Music videos contain a great deal of visual and acoustic information. Each information source within a music video influences the emotions conveyed through the audio and video, suggesting that only a multimodal approach is capable of achieving efficient affective computing. This paper presents an affective computing system that relies on music, video, and facial expression cues, making it useful for emotional analysis. We applied the audio–video information exchange and boosting methods to regularize the training process and reduced the computational costs by using a separable convolution strategy. In sum, our empirical findings are as follows: (1) Multimodal representations efficiently capture all acoustic and visual emotional clues included in each music video, (2) the computational cost of each neural network is significantly reduced by factorizing the standard 2D/3D convolution into separate channels and spatiotemporal interactions, and (3) information-sharing methods incorporated into multimodal representations are helpful in guiding individual information flow and boosting overall performance. We tested our findings across several unimodal and multimodal networks against various evaluation metrics and visual analyzers. Our best classifier attained 74% accuracy, an f1-score of 0.73, and an area under the curve score of 0.926.
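The cost saving from the separable convolution strategy mentioned above can be illustrated by counting parameters. This is a minimal sketch, not the authors' implementation; the channel counts and kernel sizes are hypothetical examples.

```python
# Illustrative parameter-count comparison: a standard 3D convolution
# versus a factorized (separable) one. Shapes here are made-up examples.

def params_standard_3d(c_in, c_out, kt, kh, kw):
    # Every output channel mixes all input channels across the full
    # spatiotemporal kernel.
    return c_in * c_out * kt * kh * kw

def params_separable_3d(c_in, c_out, kt, kh, kw):
    # Depthwise: one spatiotemporal kernel per input channel,
    # then a pointwise (1x1x1) convolution to mix channels.
    depthwise = c_in * kt * kh * kw
    pointwise = c_in * c_out
    return depthwise + pointwise

std = params_standard_3d(64, 128, 3, 3, 3)   # 221,184 parameters
sep = params_separable_3d(64, 128, 3, 3, 3)  # 1,728 + 8,192 = 9,920
print(std, sep, round(std / sep, 1))
```

For these example shapes the factorized form uses roughly 22x fewer parameters, which is the kind of reduction the abstract attributes to separating channel and spatiotemporal interactions.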
Journal Article
Real-Time Emotion Classification Using EEG Data Stream in E-Learning Contexts
by Nandi, Arijit; Fort, Santi; Xhafa, Fatos
in Algorithms; Computer-Assisted Instruction; e-learning
2021
In both face-to-face and online learning, emotions and emotional intelligence have an influence and play an essential role. Learners’ emotions are crucial for e-learning systems because they can promote or restrain learning. Many researchers have investigated the impact of emotions on enhancing and maximizing e-learning outcomes, and several machine learning and deep learning approaches have been proposed to achieve this goal. All such approaches are suited to an offline mode, where the data for emotion classification are stored and can be accessed an unlimited number of times. However, these offline approaches are inappropriate for real-time emotion classification, where the data arrive in a continuous stream, each sample can be seen by the model only once, and real-time responses to the learner’s emotional state are needed. For this, we propose a real-time emotion classification system (RECS) based on Logistic Regression (LR) trained in an online fashion using the Stochastic Gradient Descent (SGD) algorithm. The proposed RECS can classify emotions in real time by training the model online on an EEG signal stream. To validate the performance of RECS, we used the DEAP dataset, the most widely used benchmark dataset for emotion classification. The results show that the proposed approach can effectively classify emotions in real time from the EEG data stream, achieving better accuracy and F1-score than other offline and online approaches. The developed real-time emotion classification system is analyzed in an e-learning context scenario.
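The online-training idea behind RECS — updating a logistic regression one streamed sample at a time with SGD — can be sketched as follows. This is a generic illustration, not the authors' system; the two-dimensional features and labels are synthetic stand-ins for EEG-derived data.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class OnlineLogisticRegression:
    """Logistic regression trained one sample at a time (streaming SGD)."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        z = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return sigmoid(z)

    def partial_fit(self, x, y):
        # One SGD step on the log-loss gradient for a single sample,
        # so the model learns from a stream seen only once.
        err = self.predict_proba(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

random.seed(0)
model = OnlineLogisticRegression(n_features=2)
for _ in range(2000):
    x = [random.gauss(0, 1), random.gauss(0, 1)]
    y = 1 if x[0] + x[1] > 0 else 0   # synthetic, linearly separable labels
    model.partial_fit(x, y)

print(model.predict_proba([2.0, 2.0]))   # high probability for class 1
```

Because each `partial_fit` call touches one sample and then discards it, the model fits the streaming constraint the abstract describes.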
Journal Article
Academic Emotion Classification and Recognition Method for Large-scale Online Learning Environment—Based on A-CNN and LSTM-ATT Deep Learning Pipeline Method
by Qiu, Longhui; Wei, Yaojia; Ma, Yongmei
in Artificial intelligence; Distance learning; Emotions
2020
Subjective well-being is a comprehensive psychological indicator for measuring quality of life. Studies have found that emotional measurement methods and measurement accuracy are important for well-being-related research. Academic emotion is an emotion description in the field of education. The subjective well-being of learners in an online learning environment can be studied by analyzing academic emotions. However, in a large-scale online learning environment, it is extremely challenging to classify learners’ academic emotions quickly and accurately for specific comment aspects. This study used literature analysis and data pre-analysis to build a dimensional classification system of academic emotion aspects for students’ comments in an online learning environment, as well as to develop an aspect-oriented academic emotion automatic recognition method, comprising an aspect-oriented convolutional neural network (A-CNN) and an academic emotion classification algorithm based on long short-term memory with an attention mechanism (LSTM-ATT). The experiments showed that this model can provide quick and effective identification. The A-CNN model accuracy on the test set was 89%, and the LSTM-ATT model accuracy on the test set was 71%. This research provides a new method for the measurement of large-scale online academic emotions, as well as support for research related to students’ well-being in online learning environments.
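The attention pooling used in models of the LSTM-ATT family can be sketched in a few lines: each time step's hidden state gets a score, scores are softmax-normalized into weights, and the weighted sum of hidden states becomes the representation fed to the classifier. The toy hidden states and query vector below are illustrative, not learned values.

```python
import math

def softmax(scores):
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(hidden_states, query):
    # Score each hidden state by a dot product with a query vector,
    # then form a weighted sum of the states.
    scores = [sum(h * q for h, q in zip(state, query)) for state in hidden_states]
    weights = softmax(scores)
    dim = len(hidden_states[0])
    pooled = [sum(w * state[d] for w, state in zip(weights, hidden_states))
              for d in range(dim)]
    return pooled, weights

hidden = [[0.1, 0.0], [0.9, 0.2], [0.2, 0.1]]   # toy LSTM outputs per step
pooled, weights = attention_pool(hidden, query=[1.0, 0.0])
print(weights)   # the second time step receives the largest weight
```

The weights sum to one, so the pooled vector stays on the same scale as the hidden states while emphasizing emotion-bearing time steps.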
Journal Article
A Systematic Review for Human EEG Brain Signals Based Emotion Classification, Feature Extraction, Brain Condition, Group Comparison
2018
The study of electroencephalography (EEG) signals is not a new topic, but the analysis of human emotions upon exposure to music is considered an important research direction. Although distributed across various academic databases, research on this concept is limited. To extend research in this area, we explored and analysed the academic articles published within this scope. This paper therefore presents a systematic review that maps the research landscape of EEG-based human emotion into a taxonomy. We systematically searched for all articles on EEG-based human emotion and music in three main databases: ScienceDirect, Web of Science, and IEEE Xplore, from 1999 to 2016. These databases feature academic studies that used EEG to measure brain signals, with a focus on the effects of music on human emotions. The screening and filtering of articles were performed in three iterations. In the first iteration, duplicate articles were excluded. In the second, articles were filtered by title and abstract, and articles outside the scope of our domain were excluded. In the third, articles were filtered by reading the full text, excluding those outside our domain or not meeting our criteria. Based on the inclusion and exclusion criteria, 100 articles were selected and separated into five classes. The first class, comprising 39 articles (39%), concerns emotion, wherein various emotions are classified using artificial intelligence (AI). The second class, 21 articles (21%), is composed of studies that use EEG techniques and is named ‘brain condition’. The third class, eight articles (8%), relates to feature extraction, a step that precedes emotion classification. It should be noted that this process makes use of classifiers; however, these articles are not listed under the first class because they focus on feature extraction rather than classifier accuracy.
The fourth class, 26 articles (26%), comprises studies that compare two or more groups to identify and discover human emotion from EEG. The final class, six articles (6%), represents articles that study music as a stimulus and its impact on brain signals. We then discuss five main categories: action types, age of the participants, sample size, duration of recording and listening to music, and lastly the countries or nationalities of the authors who published these previous studies. The paper afterward identifies the main characteristics of this promising area of science: the motivation for using EEG to measure human brain signals, the open challenges obstructing its employment, and recommendations to improve the utilization of the EEG process.
Journal Article
Multi-Kernel Temporal and Spatial Convolution for EEG-Based Emotion Classification
2022
Deep learning using an end-to-end convolutional neural network (ConvNet) has been applied to several electroencephalography (EEG)-based brain–computer interface tasks to extract feature maps and classify the target output. However, EEG analysis remains challenging because it requires consideration of various architectural design components that influence the representational ability of the extracted features. This study proposes an EEG-based emotion classification model called the multi-kernel temporal and spatial convolution network (MultiT-S ConvNet). The model uses multi-scale kernels to learn various time resolutions, and separable convolutions to find related spatial patterns. In addition, we enhanced both the temporal and spatial filters with a lightweight gating mechanism. To validate the performance and classification accuracy of MultiT-S ConvNet, we conducted subject-dependent and subject-independent experiments on the EEG-based emotion datasets DEAP and SEED. Compared with existing methods, MultiT-S ConvNet achieves higher accuracy with fewer trainable parameters. Moreover, the proposed multi-scale module in temporal filtering enables extraction of a wide range of EEG representations, covering short- to long-wavelength components. This module could be implemented in any EEG-based convolution network, potentially improving the model’s learning capacity.
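The multi-kernel idea — convolving the same signal with kernels of several widths and keeping all the resulting feature maps — can be sketched without any deep-learning library. The averaging kernels and the short synthetic signal below are toy stand-ins for learned filters and real EEG.

```python
# Multi-scale temporal filtering sketch: apply kernels of several sizes
# to one signal so both short and long temporal patterns are captured.

def conv1d_valid(signal, kernel):
    # 'Valid' 1D convolution: output length = len(signal) - len(kernel) + 1.
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def multi_kernel_features(signal, kernel_sizes=(3, 5, 7)):
    features = []
    for k in kernel_sizes:
        kernel = [1.0 / k] * k          # toy averaging kernel of width k
        features.append(conv1d_valid(signal, kernel))
    return features

eeg = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0, 0.0, 1.0]
maps = multi_kernel_features(eeg)
print([len(m) for m in maps])   # [8, 6, 4]
```

Wider kernels respond to slower (longer-wavelength) components, which is why combining several widths covers a broader range of EEG rhythms.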
Journal Article
Feedback through emotion extraction using logistic regression and CNN
by Panda, Mohit Ranjan; Panda, Susmita; Bisoy, Sukant Kishoro
in Algorithms; Artificial Intelligence; Artificial neural networks
2022
Feedback today is typically collected on a timed basis from the individual concerned. This hectic procedure often devolves into peer-driven responses that jeopardize the primary objective of the process. To prevent this vulnerability, this work proposes a dynamic method of generating feedback automatically, based on emotion classification by a nonlinear logistic regression model and a convolutional neural network (CNN). For a given test sample, our system detects multiple faces, crops the detected faces, and stores the cropped faces in a destination folder. Iterating through the contents of this folder one by one, first, the binary logistic regression classifier gives a probabilistic output, as a percentage, of the level of interest found in the cropped facial image. Second, each image is passed to the CNN model, which can detect and extract specific emotion features from an image. The CNN gives a detailed analysis report of the individual concerned by classifying them into emotions such as Anger, Disgust, Contempt, Happiness, Neutral, Surprise or Fear. The outputs of these two models, i.e. the machine-generated feedback, can effectively encourage the organizational, structural or end-user policy changes necessary to develop and evolve in the current competitive world.
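The two-stage readout described above can be sketched as a small function: the logistic-regression stage supplies an interest probability, and the CNN stage supplies per-emotion scores from which the top label is taken. Neither model is implemented here; the scores and label order are invented for illustration.

```python
# Hypothetical final step of the feedback pipeline: combine an interest
# probability (stage 1) with per-emotion CNN scores (stage 2).

EMOTIONS = ["Anger", "Disgust", "Contempt", "Happiness",
            "Neutral", "Surprise", "Fear"]

def feedback_for_face(interest_probability, cnn_scores):
    # Stage 1: logistic regression output, reported as a percentage.
    interest_pct = round(100 * interest_probability, 1)
    # Stage 2: pick the emotion whose CNN score is highest.
    best = max(range(len(cnn_scores)), key=cnn_scores.__getitem__)
    return {"interest": interest_pct, "emotion": EMOTIONS[best]}

report = feedback_for_face(0.87, [0.2, -1.0, -0.5, 2.1, 0.8, -0.3, -1.2])
print(report)   # {'interest': 87.0, 'emotion': 'Happiness'}
```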
Journal Article
A review of channel selection algorithms for EEG signal processing
by El-Samie, Fathi E Abd; Ahmad, Ishtiaq; Alshebeili, Saleh A
in Algorithms; Channels; Classification
2015
Digital processing of electroencephalography (EEG) signals is now popularly used in a wide variety of applications such as seizure detection/prediction, motor imagery classification, mental task classification, emotion classification, sleep state classification, and drug effects diagnosis. With the large number of EEG channels acquired, it has become apparent that efficient channel selection algorithms are needed, with varying importance from one application to another. The main purpose of the channel selection process is threefold: (i) to reduce the computational complexity of any processing task performed on EEG signals by selecting the relevant channels and hence extracting the features of major importance, (ii) to reduce the amount of overfitting that may arise from the use of unnecessary channels, thereby improving performance, and (iii) to reduce the setup time in some applications. Signal processing tools such as time-domain analysis, power spectral estimation, and wavelet transform have been used for feature extraction, and hence for channel selection, in most channel selection algorithms. In addition, different evaluation approaches such as filtering, wrapper, embedded, hybrid, and human-based techniques have been widely used for the evaluation of the selected subset of channels. In this paper, we survey the recent developments in the field of EEG channel selection methods along with their applications and classify these methods according to the evaluation approach.
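A filter-type channel selection step of the kind surveyed above can be sketched as ranking channels by a simple per-channel score and keeping the top k. Real methods use richer criteria (band power, mutual information, wrappers); the variance score and the three tiny synthetic channels here are only illustrative.

```python
# Filter-based channel selection sketch: score each EEG channel by the
# variance of its samples and keep the k highest-scoring channels.

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

def select_channels(channels, k):
    # channels: dict mapping channel name -> list of samples
    ranked = sorted(channels, key=lambda name: variance(channels[name]),
                    reverse=True)
    return ranked[:k]

eeg = {
    "Fp1": [0.0, 0.1, -0.1, 0.0],      # nearly flat
    "Cz":  [1.0, -1.0, 1.0, -1.0],     # high variance
    "O2":  [0.5, -0.5, 0.4, -0.4],     # moderate variance
}
print(select_channels(eeg, 2))   # ['Cz', 'O2']
```

Because the score is computed independently of any classifier, this is a "filtering" evaluation approach in the paper's terminology, as opposed to wrapper or embedded methods.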
Journal Article
Functional connectivity profiles of the default mode and visual networks reflect temporal accumulative effects of sustained naturalistic emotional experience
2023
• Happiness and sadness have discrete neural representations in terms of FC profiles.
• Emotions are represented in distributed networks rather than a single network.
• VN and DMN contribute to distinct representation of sustained emotional experience.
• Temporal accumulative emotional experiences are reflected in neural representations.
Determining and decoding emotional brain processes under ecologically valid conditions remains a key challenge in affective neuroscience. Current functional Magnetic Resonance Imaging (fMRI) based emotion decoding studies rely mainly on brief and isolated episodes of emotion induction, while studies of sustained emotional experience in naturalistic environments that mirror daily life are scarce. Here we used 12 different 10-minute movie clips as ecologically valid emotion-evoking procedures in n = 52 individuals to explore emotion-specific fMRI functional connectivity (FC) profiles at the whole-brain level and at high spatial resolution (432 parcellations including cortical and subcortical structures). Employing machine-learning based decoding and cross-validation procedures allowed us to investigate the FC profiles contributing to classification that accurately distinguish sustained happiness from sadness and that generalize across subjects, movie clips, and parcellations. Both functional brain network-based and subnetwork-based emotion classification results suggested that emotion manifests as a distributed representation across multiple networks rather than in a single functional network or subnetwork. Further, the results showed that functional networks associated with the Visual Network (VN) and Default Mode Network (DMN), especially VN-DMN, contributed strongly to emotion classification. To further estimate the temporal accumulative effect of naturalistic long-term movie-evoked emotions, we divided the 10-minute episode into three stages: early stimulation (1–200 s), middle stimulation (201–400 s), and late stimulation (401–600 s), and examined the emotion classification performance at each stage.
We found that the late stimulation stage contributed most to the classification (accuracy = 85.32%, F1-score = 85.62%) compared with the early and middle stages, implying that continuous exposure to emotional stimulation can lead to more intense emotions and further enhance emotion-specific distinguishable representations. The present work demonstrates that sustained happiness and sadness under naturalistic conditions are represented in emotion-specific network profiles, and that these representations may play different roles in the generation and modulation of emotions. These findings elucidate the importance of network-level adaptations for sustained emotional experiences in naturalistic contexts and open new avenues for imaging network-level contributions under naturalistic conditions.
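The FC profiles this study classifies are, at their core, matrices of pairwise correlations between parcel time series. A minimal sketch, with three short synthetic "parcel" series standing in for real fMRI data:

```python
import math

# Functional-connectivity sketch: Pearson correlation between every
# pair of parcel time series.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def fc_matrix(series):
    n = len(series)
    return [[pearson(series[i], series[j]) for j in range(n)]
            for i in range(n)]

parcels = [
    [1.0, 2.0, 3.0, 4.0],
    [2.0, 4.0, 6.0, 8.0],    # perfectly correlated with the first
    [4.0, 3.0, 2.0, 1.0],    # perfectly anti-correlated with the first
]
fc = fc_matrix(parcels)
print(round(fc[0][1], 3), round(fc[0][2], 3))   # 1.0 -1.0
```

Flattening the upper triangle of such a matrix yields the feature vector that a decoder can classify, e.g. happiness versus sadness.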
Journal Article
Learning Multi-level Deep Representations for Image Emotion Classification
2020
In this paper, we propose a new deep network that learns multi-level deep representations for image emotion classification (MldrNet). Image emotion can be recognized through image semantics, image aesthetics and low-level visual features, from both global and local views. Existing image emotion classification works using hand-crafted features or deep features mainly focus on either low-level visual features or semantic-level image representations, without taking all factors into consideration. The proposed MldrNet combines deep representations of different levels, i.e. image semantics, image aesthetics and low-level visual features, to effectively classify the emotion types of different kinds of images, such as abstract paintings and web images. Extensive experiments on both Internet images and abstract paintings demonstrate that the proposed method outperforms the state-of-the-art methods using deep features or hand-crafted features. The proposed approach also outperforms the state-of-the-art methods with at least 6% improvement in overall classification accuracy.
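The multi-level fusion idea can be sketched as concatenating feature vectors extracted at different levels into one representation before classification. The vector contents and dimensions below are purely illustrative, not MldrNet's actual features.

```python
# Multi-level feature fusion sketch: concatenate per-level feature
# vectors into a single representation for a downstream classifier.

def fuse(*feature_vectors):
    fused = []
    for vec in feature_vectors:
        fused.extend(vec)
    return fused

low_level = [0.12, 0.80]          # e.g. color / texture statistics
aesthetic = [0.35]                # e.g. a composition score
semantic  = [0.05, 0.60, 0.10]    # e.g. object-class activations

representation = fuse(low_level, aesthetic, semantic)
print(len(representation))   # 6
```

Concatenation keeps every level's information available to the classifier, so images whose emotion is carried by aesthetics (abstract paintings) and those carried by semantics (web photos) can both be handled.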
Journal Article
Emotion Recognition from Multiband EEG Signals Using CapsNet
by Dong, Liang; Chao, Hao; Lu, Baoyun
in Artificial intelligence; CapsNet; Cerebral Cortex - physiology
2019
Emotion recognition based on multi-channel electroencephalograph (EEG) signals is becoming increasingly attractive. However, the conventional methods ignore the spatial characteristics of EEG signals, which also contain salient information related to emotion states. In this paper, a deep learning framework based on a multiband feature matrix (MFM) and a capsule network (CapsNet) is proposed. In the framework, the frequency domain, spatial characteristics, and frequency band characteristics of the multi-channel EEG signals are combined to construct the MFM. Then, the CapsNet model is introduced to recognize emotion states according to the input MFM. Experiments conducted on the dataset for emotion analysis using EEG, physiological, and video signals (DEAP) indicate that the proposed method outperforms most of the common models. The experimental results demonstrate that the three characteristics contained in the MFM were complementary and the capsule network was more suitable for mining and utilizing the three correlation characteristics.
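The multiband feature matrix (MFM) can be sketched as one row per EEG channel and one column per frequency band, each cell holding a band-power value. The band powers below are invented numbers, and this sketch omits the electrode spatial-layout encoding that the real MFM includes.

```python
# Assembling a simple channels-by-bands feature matrix from per-channel
# band powers (values here are illustrative only).

BANDS = ["theta", "alpha", "beta", "gamma"]

def build_mfm(band_powers):
    # band_powers: dict of channel -> dict of band -> power
    channels = sorted(band_powers)      # fixed row order
    return [[band_powers[ch][band] for band in BANDS] for ch in channels]

powers = {
    "Fp1": {"theta": 0.4, "alpha": 0.9, "beta": 0.3, "gamma": 0.1},
    "Cz":  {"theta": 0.2, "alpha": 0.5, "beta": 0.6, "gamma": 0.2},
}
mfm = build_mfm(powers)
print(len(mfm), len(mfm[0]))   # 2 4
```

A matrix like this is what the CapsNet then consumes, letting it exploit frequency, band, and (in the full method) spatial structure jointly.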
Journal Article