70 results for "DEAP"
An Overview of Novel Actuators for Soft Robotics
In this systematic survey, an overview of non-conventional actuators, particularly those used in soft robotics, is presented. The review is performed using well-defined performance criteria, with the aim of identifying exemplary and potential applications. In addition, initial guidelines for comparing the performance and applicability of these novel actuators are provided. The meta-analysis is restricted to five main types of actuators: shape memory alloys (SMAs), fluidic elastomer actuators (FEAs), shape morphing polymers (SMPs), dielectric electro-activated polymers (DEAPs), and magnetic/electro-magnetic actuators (E/MAs). In exploring and comparing the capabilities of these actuators, the focus was on eight aspects: compliance, topology-geometry, scalability-complexity, energy efficiency, operation range, modality, controllability, and technology readiness level (TRL). The overview presented here provides a state-of-the-art summary of the advancements and can help researchers select the most suitable soft actuators using the comprehensive comparison of the suggested quantitative and qualitative criteria.
A Multi-Column CNN Model for Emotion Recognition from EEG Signals
We present a multi-column CNN-based model for emotion recognition from EEG signals. Recently, deep neural networks have been widely employed for extracting features and recognizing emotions from various biosignals, including EEG signals. A decision from a single CNN-based emotion recognition module shows improved accuracy over conventional handcrafted-feature-based modules. To further improve the accuracy of CNN-based modules, we devise a multi-column structured model whose decision is produced by a weighted sum of the decisions from the individual recognizing modules. We apply the model to EEG signals from the DEAP dataset for comparison and demonstrate the improved accuracy of our model.
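The weighted-sum decision fusion this abstract describes can be sketched in a few lines; the column count, class probabilities, and weights below are invented for illustration, not the paper's learned values.

```python
import numpy as np

def fuse_decisions(column_probs, weights):
    """Combine per-column class probabilities by a weighted sum.

    column_probs: (n_columns, n_classes) softmax outputs, one row per CNN column
    weights: (n_columns,) non-negative column weights (illustrative only)
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize so the fusion stays a distribution
    fused = w @ np.asarray(column_probs)  # weighted sum over columns -> (n_classes,)
    return int(np.argmax(fused)), fused

# Three hypothetical columns voting on two emotion classes (e.g. low/high valence)
probs = [[0.6, 0.4], [0.2, 0.8], [0.7, 0.3]]
label, fused = fuse_decisions(probs, weights=[1.0, 2.0, 1.0])
```

In practice each row would come from a separately trained CNN column; the fusion itself is just this normalized weighted average over their softmax outputs.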
Exploring EEG Features in Cross-Subject Emotion Recognition
Recognizing cross-subject emotions based on brain imaging data, e.g., EEG, has always been difficult due to the poor generalizability of features across subjects. Thus, systematically exploring the ability of different EEG features to identify emotional information across subjects is crucial. Prior related work has explored this question based only on one or two kinds of features, and different findings and conclusions have been presented. In this work, we aim at a more comprehensive investigation of this question with a wider range of feature types, including 18 kinds of linear and non-linear EEG features. The effectiveness of these features was examined on two publicly accessible datasets, namely, the dataset for emotion analysis using physiological signals (DEAP) and the SJTU emotion EEG dataset (SEED). We adopted the support vector machine (SVM) approach and the "leave-one-subject-out" verification strategy to evaluate recognition performance. Using automatic feature selection methods, the highest mean recognition accuracies of 59.06% (AUC = 0.605) on the DEAP dataset and 83.33% (AUC = 0.904) on the SEED dataset were reached. Furthermore, using manually operated feature selection on the SEED dataset, we explored the importance of different EEG features in cross-subject emotion recognition from multiple perspectives, including different channels, brain regions, rhythms, and feature types. For example, we found that the Hjorth parameter of mobility in the beta rhythm achieved the best mean recognition accuracy compared to the other features. Through a pilot correlation analysis, we further examined the highly correlated features, for a better understanding of the implications hidden in those features that allow for differentiating cross-subject emotions. Various remarkable observations have been made. The results of this paper validate the possibility of exploring robust EEG features in cross-subject emotion recognition.
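The "leave-one-subject-out" protocol mentioned here generalizes only when each fold holds out all trials of one subject. A minimal NumPy-only sketch of the fold loop follows; the toy data and the nearest-centroid stand-in for the paper's SVM are assumptions for illustration.

```python
import numpy as np

def loso_accuracy(X, y, subjects, classify):
    """Leave-one-subject-out: train on all subjects but one, test on the
    held-out subject, and average accuracy across folds."""
    accs = []
    for s in np.unique(subjects):
        test = subjects == s                       # every trial of subject s is held out
        preds = classify(X[~test], y[~test], X[test])
        accs.append(np.mean(preds == y[test]))
    return float(np.mean(accs))

def nearest_centroid(X_tr, y_tr, X_te):
    """Toy classifier used in place of the SVM from the paper."""
    classes = np.unique(y_tr)
    centroids = np.stack([X_tr[y_tr == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(X_te[:, None, :] - centroids[None], axis=2)
    return classes[np.argmin(dists, axis=1)]

# Synthetic data: 3 subjects x 20 trials, class = sign of the first feature
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = (X[:, 0] > 0).astype(int)
subjects = np.repeat([0, 1, 2], 20)
acc = loso_accuracy(X, y, subjects, nearest_centroid)
```

Swapping `nearest_centroid` for a real SVM does not change the fold logic, which is what makes the reported accuracies cross-subject rather than within-subject figures.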
Review on Emotion Recognition Based on Electroencephalography
Emotions are closely related to human behavior, family, and society. Changes in emotions can cause differences in electroencephalography (EEG) signals, which show different emotional states and are not easy to disguise. EEG-based emotion recognition has been widely used in human-computer interaction, medical diagnosis, military, and other fields. In this paper, we describe the common steps of an emotion recognition algorithm based on EEG from data acquisition, preprocessing, feature extraction, feature selection to classifier. Then, we review the existing EEG-based emotional recognition methods, as well as assess their classification effect. This paper will help researchers quickly understand the basic theory of emotion recognition and provide references for the future development of EEG. Moreover, emotion is an important representation of safety psychology.
A comprehensive survey on emotion recognition based on electroencephalograph (EEG) signals
Emotion recognition using electroencephalography (EEG) is becoming an interesting topic among researchers. It has made a remarkable entry in the domain of biomedical, smart environment, brain-computer interface (BCI), communication, security, and safe driving. In the past decade, several studies have been published that viewed emotion recognition tasks in a variety of manners. Multiple algorithms have been developed to accurately capture the EEG signal and identify the emotions from such EEG signals. The advent of artificial intelligence (AI) has changed the landscape of every application including emotion recognition. Two categories of AI-based algorithms such as machine learning and deep learning algorithms are becoming popular in the emotion recognition domain. This narrative review is an attempt to provide deep insight into the AI-based techniques, their role in EEG-based emotion recognition, and their potential future possibilities in accurate emotion identification. Furthermore, this review also provides an overview of the several important topics in emotion recognition such as emotion paradigms, EEG and its processing, and the public databases.
Spatio-Temporal Representation of an Electroencephalogram for Emotion Recognition Using a Three-Dimensional Convolutional Neural Network
Emotion recognition plays an important role in the field of human–computer interaction (HCI). An electroencephalogram (EEG) is widely used to estimate human emotion owing to its convenience and mobility. Deep neural network (DNN) approaches using an EEG for emotion recognition have recently shown remarkable improvement in terms of their recognition accuracy. However, most studies in this field still require a separate process for extracting handcrafted features despite the ability of a DNN to extract meaningful features by itself. In this paper, we propose a novel method for recognizing an emotion based on the use of three-dimensional convolutional neural networks (3D CNNs), with an efficient spatio-temporal representation of EEG signals. First, we spatially reconstruct raw EEG signals, represented as stacks of one-dimensional (1D) time series data, into two-dimensional (2D) EEG frames according to the original electrode positions. We then represent a 3D EEG stream by concatenating the 2D EEG frames along the time axis. These 3D reconstructions of the raw EEG signals can be efficiently combined with 3D CNNs, which have shown a remarkable feature representation from spatio-temporal data. Herein, we demonstrate the accuracy of the emotional classification of the proposed method through extensive experiments on the DEAP (a Dataset for Emotion Analysis using EEG, Physiological, and video signals) dataset. Experimental results show that the proposed method achieves a classification accuracy of 99.11%, 99.74%, and 99.73% in the binary classification of valence and arousal, and in the four-class classification, respectively. We investigate the spatio-temporal effectiveness of the proposed method by comparing it to several types of input methods with 2D/3D CNNs. We then verify the best-performing shape of both the kernel and input data experimentally. We verify that an efficient representation of an EEG and a network that fully takes advantage of the data characteristics can outperform methods that apply handcrafted features.
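The reconstruction step this abstract describes (1-D channel series scattered into 2-D frames by electrode position, then stacked along time) can be sketched as follows; the 3x3 grid and channel-to-position map are illustrative, not the actual DEAP electrode montage.

```python
import numpy as np

# Hypothetical channel -> (row, col) positions on a small scalp grid
electrode_pos = {0: (0, 1), 1: (1, 0), 2: (1, 2), 3: (2, 1)}

def to_3d_volume(eeg, pos, grid=(3, 3)):
    """Scatter (n_channels, n_samples) EEG into a (time, H, W) volume.

    Each time sample becomes a sparse 2-D frame with channel values placed
    at their electrode grid positions; unused grid cells stay zero.
    """
    n_ch, n_t = eeg.shape
    frames = np.zeros((n_t, *grid), dtype=eeg.dtype)
    for ch, (r, c) in pos.items():
        frames[:, r, c] = eeg[ch]       # place this channel's series at its position
    return frames

eeg = np.arange(8.0).reshape(4, 2)      # 4 channels, 2 time steps of toy data
vol = to_3d_volume(eeg, electrode_pos)  # shape (2, 3, 3), ready for a 3D CNN
```

The resulting volume preserves spatial adjacency between electrodes, which is what lets 3-D convolutions learn joint spatio-temporal filters directly from the raw signal.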
Bi-hemisphere asymmetric attention network: recognizing emotion from EEG signals based on the transformer
EEG-based emotion recognition is not only an important branch in the field of affective computing, but is also an indispensable task for harmonious human–computer interaction. Recently, many deep learning emotion recognition algorithms have achieved good results, but most of them have been based on convolutional and recurrent neural networks, resulting in complex model design, poor modeling of long-distance dependency, and the inability to parallelize computations. Here, we proposed a novel bi-hemispheric asymmetric attention network (Bi-AAN) combining a transformer structure with the asymmetric property of the brain's emotional response. In this way, we modeled the difference of bi-hemispheric attention, and mined the long-term dependency between EEG sequences, which extracts more discriminative emotional representations. First, the differential entropy (DE) features of each frequency band were calculated using the DE-embedding block, and the spatial information between the electrode positions was extracted using positional encoding. Then, a bi-headed attention mechanism was employed to capture the intra-attention of frequency bands in each hemisphere and the attentional differences between the bi-hemispheric frequency bands. After carrying out experiments on both the DEAP and DREAMER datasets, we found that the proposed Bi-AAN achieved superior recognition performance as compared to state-of-the-art EEG emotion recognition methods.
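The differential entropy (DE) feature mentioned here is, under the usual Gaussian assumption for a band-filtered EEG segment, just 0.5 · ln(2πe·σ²). A minimal sketch on synthetic data (the "beta-band segment" below is simulated noise, not real EEG):

```python
import numpy as np

def differential_entropy(x):
    """DE of a band-filtered EEG segment under a Gaussian assumption:
    DE = 0.5 * ln(2 * pi * e * var(x))."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

rng = np.random.default_rng(42)
segment = rng.normal(loc=0.0, scale=2.0, size=10_000)  # simulated beta-band segment
de = differential_entropy(segment)
# Theoretical value for sigma = 2 is 0.5 * ln(2*pi*e*4), about 2.112
```

In DE-embedding pipelines this scalar is computed per channel and per frequency band, and the resulting feature vectors are what the attention layers operate on.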
EEG-Based Emotion Estimation Model Integrating Structural and Time-Series Information Based on Deep Learning Architecture Optimization
Emotion recognition is increasingly important for applications in mental health and personalized marketing. Traditional methods based on facial and vocal cues lack robustness due to voluntary control, motivating the use of EEG signals that capture neural dynamics with high temporal resolution. Existing EEG-based approaches using CNNs and LSTMs have improved spatial and temporal feature extraction; however, they still face critical limitations. These models struggle to represent electrode connectivity and adapt to inter-individual variability, and their architectures are typically handcrafted, requiring extensive manual tuning of hyperparameters and structural design. Such constraints hinder scalability and personalization, highlighting the need for automated architecture optimization. To address these challenges, we propose a dual-pipeline architecture that integrates frequency-domain and time-domain EEG features. The frequency-domain branch employs a Graph Convolutional Network (GCN) to model spatial relationships among electrodes, while the time-domain branch uses LSTM enhanced with Channel Attention to emphasize subject-specific informative channels. Furthermore, we introduce Differentiable Architecture Search (DARTS) to automatically discover optimal architectures tailored to individual EEG patterns, significantly reducing search cost compared to manual tuning. Experimental results demonstrate that our framework achieves competitive accuracy and high adaptability compared to state-of-the-art baselines, marking the first integration of GCN, LSTM, channel attention, and architecture search for EEG-based emotion recognition.
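The GCN branch described above models electrodes as graph nodes. One standard GCN layer, H' = ReLU(Â H W) with the symmetrically normalized adjacency Â = D^(-1/2)(A + I)D^(-1/2), can be sketched as follows; the 4-node adjacency and weights are toy values, not an actual EEG montage or the paper's learned parameters.

```python
import numpy as np

# Illustrative electrode adjacency: a 4-node chain graph
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

def gcn_layer(H, A, W):
    """One graph convolution: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

H = np.ones((4, 3))        # 4 electrodes, 3 frequency-band features each
W = np.full((3, 2), 0.5)   # toy layer weights
out = gcn_layer(H, A, W)   # (4, 2): mixed features per electrode
```

Each output row mixes a node's features with its neighbors', which is how spatial relationships among electrodes enter the frequency-domain branch.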
Accelerating 3D Convolutional Neural Network with Channel Bottleneck Module for EEG-Based Emotion Recognition
Deep learning-based emotion recognition using EEG has received increasing attention in recent years. The existing studies on emotion recognition show great variability in their employed methods, including the choice of deep learning approach and the type of input features. Although deep learning models for EEG-based emotion recognition can deliver superior accuracy, this comes at the cost of high computational complexity. Here, we propose a novel 3D convolutional neural network with a channel bottleneck module (CNN-BN) model for EEG-based emotion recognition, with the aim of accelerating the CNN computation without a significant loss in classification accuracy. To this end, we constructed a 3D spatiotemporal representation of EEG signals as the input of our proposed model. Our CNN-BN model extracts spatiotemporal EEG features, which effectively utilize the spatial and temporal information in EEG. We evaluated the performance of the CNN-BN model in the valence and arousal classification tasks. Our proposed CNN-BN model achieved an average accuracy of 99.1% and 99.5% for valence and arousal, respectively, on the DEAP dataset, while significantly reducing the number of parameters by 93.08% and FLOPs by 94.94%. The CNN-BN model with fewer parameters based on 3D EEG spatiotemporal representation outperforms the state-of-the-art models. Our proposed CNN-BN model with its better parameter efficiency has excellent potential for accelerating CNN-based emotion recognition without losing classification performance.
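A back-of-the-envelope calculation shows why a channel bottleneck shrinks a 3-D CNN: a wide k×k×k convolution is replaced by a 1×1×1 reduction to C/r channels, a k×k×k convolution in the narrow space, and a 1×1×1 expansion. The channel sizes and reduction ratio below are illustrative, not the paper's architecture.

```python
def conv3d_params(c_in, c_out, k):
    """Weight count of a k*k*k 3-D convolution (biases ignored)."""
    return c_in * c_out * k ** 3

def bottleneck_params(c, k, r):
    """Reduce to c//r channels, convolve there, expand back to c channels."""
    mid = c // r
    return (conv3d_params(c, mid, 1)      # 1x1x1 reduction
            + conv3d_params(mid, mid, k)  # spatial conv in the narrow space
            + conv3d_params(mid, c, 1))   # 1x1x1 expansion

c, k, r = 64, 3, 4
plain = conv3d_params(c, c, k)        # direct 3x3x3 conv over 64 channels
reduced = bottleneck_params(c, k, r)
saving = 1 - reduced / plain          # fraction of weights eliminated
```

Even in this toy configuration the bottleneck removes over 90% of the weights, the same mechanism behind the parameter and FLOP reductions the abstract reports.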