74 result(s) for "Cross-subjects"
Error-related potentials in EEG signals: feature-based detection for human-robot interaction
This study explores how to improve the detection of error-related potentials (ErrPs), brain signals generated when a person perceives an unexpected action performed by an interacting agent. ErrPs are promising for human-robot interaction because they offer robots a way to understand the user's needs and expectations without explicit input. The proposed method characterizes ErrP signals using a wide set of features extracted from electroencephalography (EEG) data collected from subjects performing different tasks. This feature-based method proves more accurate and efficient than traditional approaches, especially when applied to multiple users or across different experimental setups. This work paves the way for feature-based ErrP detection to enhance human-robot interaction in dynamic environments.
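The abstract does not specify the feature set, so as a hedged illustration only, here is a minimal sketch of one common kind of EEG feature such a method might include, per-channel band power in standard frequency bands (the function name, sampling rate, and band choices are my own assumptions, not from the paper):

```python
import numpy as np
from scipy.signal import welch

def bandpower_features(trial, fs=250, bands=((4, 8), (8, 13), (13, 30))):
    """Per-channel mean power in theta/alpha/beta bands.

    trial: array of shape (channels, samples).
    Returns a flat feature vector of length channels * len(bands).
    """
    # Welch periodogram along the sample axis of every channel
    freqs, psd = welch(trial, fs=fs, nperseg=min(trial.shape[-1], fs))
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))  # mean power per channel
    return np.concatenate(feats)
```

A real feature-based ErrP detector would combine many such descriptors (temporal, spectral, spatial) before classification.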
A new common spatial pattern-based unified channels algorithm for driver’s fatigue EEG signals classification
The common spatial pattern (CSP) algorithm is efficient and accurate for channel selection and feature extraction in electroencephalogram (EEG) signal classification. The CSP algorithm is usually applied on a subject-by-subject basis, measuring only intra-subject variations to select the most significant channels; we refer to this algorithm as CSP-based customized channels selection (CSP-CC). In practice, deploying the CSP-CC algorithm requires setting up a customized EEG device for each subject separately, which can be very costly. In this paper, we propose a new algorithm, called CSP-based unified channels (CSP-UC), to overcome these difficulties. The proposed algorithm extracts unified channels that are valid for any subject, so a single EEG device can be deployed for all subjects. Moreover, a methodology for developing both binary-class and ternary-class EEG signal classification models using either customized or unified channels is introduced; it is applicable on both a subject-by-subject and a cross-subjects basis. In ternary-class classification models, the traditional "Max_Vote" method for voting on predicted class labels has been modified into a more accurate method called "Max_Vote_then_Max_Probability." On a subject-by-subject basis, experimental results on an EEG-based driver's fatigue dataset show that the accuracy of classification models based on the proposed CSP-UC algorithm is slightly lower than that of models based on the CSP-CC algorithm; nevertheless, the former is more practical and cost-effective than the latter. In cross-subjects settings, however, the classification models based on the CSP-UC algorithm outperform those based on the CSP-CC algorithm in both accuracy and the number of channels used.
A temporal-spectral graph convolutional neural network model for EEG emotion recognition within and across subjects
EEG-based emotion recognition uses high-level information from neural activities to predict emotional responses in subjects. However, this information is sparsely distributed across the frequency, time, and spatial domains and varies across subjects. To address these challenges, we propose a novel neural network model named the Temporal-Spectral Graph Convolutional Network (TSGCN). To capture high-level information distributed across the time, spatial, and frequency domains, TSGCN considers both neural oscillation changes in different time windows and topological structures between different brain regions. Specifically, a Minimum Category Confusion (MCC) loss is used in TSGCN to reduce the inconsistencies between subjective ratings and predefined labels. In addition, to improve the generalization of TSGCN over cross-subject variation, we propose Deep and Shallow feature Dynamic Adversarial Learning (DSDAL) to calculate the distance between the source domain and the target domain. Extensive experiments on public datasets demonstrate that TSGCN outperforms state-of-the-art methods in EEG-based emotion recognition. Ablation studies show that the mixed neural networks and our proposed methods in TSGCN contribute significantly to its high performance and robustness. Detailed investigations further demonstrate the effectiveness of TSGCN in addressing the challenges of emotion recognition.
An App that Changes Mentalities about Mobile Learning—The EduPARK Augmented Reality Activity
The public usually associates mobile devices with distraction and learning disruption, and they are not frequently used in formal education. Likewise, games and parks are both associated with play and leisure, not with learning. This study shows that the combination of mobile devices, games, and parks can promote authentic learning and help change conventional mentalities. The study is framed by the EduPARK project, which created an innovative app for authentic learning, supported by mobile and augmented reality (AR) technologies, for game-based approaches in a green park. A case study of the EduPARK strategy's educational value, according to 86 Basic Education undergraduate students, was conducted. The participants experienced the app in the park and gave their opinions on: (i) mobile learning; (ii) the app's usability; and (iii) the impact of the educational strategy in terms of factors such as intrinsic motivation and authentic learning. Data collection included a survey and the collection of student reflections. Data were subjected to descriptive statistics, System Usability Scale score computation, and content analysis. Students considered that the EduPARK strategy has educational value, particularly regarding content learning and motivation. From this study emerged seven supporting pillars that constitute a set of guidelines for the future development of mobile game-based learning.
The impact of traditional neuroimaging methods on the spatial localization of cortical areas
Localizing human brain functions is a long-standing goal in systems neuroscience. Toward this goal, neuroimaging studies have traditionally used volume-based smoothing, registered data to volume-based standard spaces, and reported results relative to volume-based parcellations. A novel 360-area surface-based cortical parcellation was recently generated using multimodal data from the Human Connectome Project, and a volume-based version of this parcellation has frequently been requested for use with traditional volume-based analyses. However, given the major methodological differences between traditional volumetric and Human Connectome Project-style processing, the utility and interpretability of such an altered parcellation must first be established. By starting from automatically generated individual-subject parcellations and processing them with different methodological approaches, we show that traditional processing steps, especially volume-based smoothing and registration, substantially degrade cortical area localization compared with surface-based approaches. We also show that surface-based registration using features closely tied to cortical areas, rather than to folding patterns alone, improves the alignment of areas, and that the benefits of high-resolution acquisitions are largely unexploited by traditional volume-based methods. Quantitatively, we show that the most common version of the traditional approach has spatial localization that is only 35% as good as the best surface-based method as assessed using two objective measures (peak areal probabilities and “captured area fraction” for maximum probability maps). Finally, we demonstrate that substantial challenges exist when attempting to accurately represent volume-based group analysis results on the surface, which has important implications for the interpretability of studies, both past and future, that use these volume-based methods.
Cross-Dataset Variability Problem in EEG Decoding With Deep Learning
Cross-subject variability hinders the practical use of brain-computer interfaces (BCIs). Recently, deep learning has been introduced into the BCI community for its better generalization and feature representation abilities. However, most studies have so far validated deep learning models only on single datasets, and their generalization to other datasets still needs to be verified. In this paper, we validated deep learning models on eight motor imagery (MI) datasets and demonstrated that the cross-dataset variability problem weakens the generalization ability of models. To alleviate the impact of cross-dataset variability, we proposed an online pre-alignment strategy that aligns the EEG distributions of different subjects before training and inference. The results of this study show that deep learning models with online pre-alignment can significantly improve generalization across datasets without any additional calibration data.
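The abstract does not spell out the pre-alignment computation. One widely used pre-alignment of this kind is Euclidean alignment, which whitens each subject's trials by the inverse square root of their mean spatial covariance so the average covariance becomes the identity; a minimal sketch, assuming this is the flavor of alignment intended:

```python
import numpy as np
from scipy.linalg import sqrtm

def euclidean_align(trials):
    """Align one subject's EEG trials so their mean covariance is identity.

    trials: array of shape (n_trials, channels, samples).
    Returns the aligned trials, same shape.
    """
    # per-trial spatial covariance, then the subject's reference matrix
    covs = np.array([t @ t.T / t.shape[1] for t in trials])
    R = covs.mean(axis=0)
    # whiten every trial by R^{-1/2}
    R_inv_sqrt = np.real(np.linalg.inv(sqrtm(R)))
    return np.einsum('cd,tds->tcs', R_inv_sqrt, trials)
```

Because each subject (or dataset) is aligned to the same identity reference, a model trained on the aligned data transfers with less distribution mismatch, and the step can be applied online as new trials arrive.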
Cross-subject EEG emotion recognition combined with connectivity features and meta-transfer learning
In recent years, with the rapid development of machine learning, automatic emotion recognition based on electroencephalogram (EEG) signals has received increasing attention. However, owing to the great variance of EEG signals sampled from different subjects, EEG-based emotion recognition suffers from the individual-difference problem across subjects, which significantly hinders recognition performance. In this study, we present a method for EEG-based emotion recognition that combines a multi-scale residual network (MSRN) with a meta-transfer learning (MTL) strategy. The MSRN represents the connectivity features of EEG signals in a multi-scale manner, using the different receptive fields of convolutional neural networks to capture the interactions between different brain regions. The MTL strategy makes full use of the merits of meta-learning and transfer learning to significantly reduce the gap in individual differences between subjects. The proposed method not only further explores the relationship between connectivity features and emotional states but also alleviates the problem of individual differences across subjects. The average cross-subject accuracies of the proposed method were 71.29% and 71.92% for the valence and arousal tasks on the DEAP dataset, respectively, and it achieved an accuracy of 87.05% for the binary classification task on the SEED dataset. The results show that the framework has a positive effect on the cross-subject EEG emotion recognition task.
The highlights are as follows:
  • The work proposes a novel method for cross-subject EEG emotion recognition.
  • The multi-scale residual network, meta-transfer learning, and connectivity features are combined for superior performance.
  • The proposed method performs well in cross-subject EEG emotion recognition tasks on the DEAP and SEED datasets.
  • MSRN captures interactions between different brain regions in a multi-scale manner.
  • MTL makes full use of the merits of meta-learning and transfer learning to narrow the gap of individual differences.
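The MSRN consumes connectivity features, but the abstract does not name the connectivity measure used. As a hedged illustration only, the simplest such feature is a channel-by-channel Pearson correlation matrix:

```python
import numpy as np

def connectivity_matrix(trial):
    """Pearson correlation between every pair of EEG channels.

    trial: array of shape (channels, samples).
    Returns a symmetric (channels, channels) matrix with unit diagonal.
    This is one common connectivity feature; the paper's actual
    measure may differ (e.g. coherence or phase-locking value).
    """
    return np.corrcoef(trial)
```

A network like MSRN would then treat this matrix (or a stack of them, one per frequency band) as an image-like input whose off-diagonal entries encode inter-region interactions.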
An EEG-Based Transfer Learning Method for Cross-Subject Fatigue Mental State Prediction
Fatigued driving is one of the main causes of traffic accidents. Electroencephalogram (EEG)-based mental state analysis is an effective and objective way of detecting fatigue. However, as EEG differs significantly across subjects, effectively "transferring" an EEG analysis model fitted to existing subjects to the EEG signals of other subjects remains a challenge. The Domain-Adversarial Neural Network (DANN) performs excellently in transfer learning, especially in document analysis and image recognition, but has not been applied directly to EEG-based cross-subject fatigue detection. In this paper, we present a DANN-based model, Generative-DANN (GDANN), which incorporates Generative Adversarial Networks (GANs) to address the differing distributions of EEG across subjects. Comparative results on cross-subject tasks show that GDANN achieves a higher average fatigue detection accuracy (91.63%) than traditional classification models, which suggests much broader application prospects in practical brain-computer interaction (BCI).
Cross-Subject EEG-Based Emotion Recognition Through Neural Networks With Stratified Normalization
Owing to a large number of potential applications, considerable effort has recently gone into creating machine learning models that can recognize evoked emotions from one's physiological recordings. In particular, researchers are investigating the use of EEG as a low-cost, non-invasive method. However, the poor homogeneity of EEG activity across participants hinders the implementation of such systems by imposing a time-consuming calibration stage. In this study, we introduce a new participant-based feature normalization method, named stratified normalization, for training deep neural networks on cross-subject emotion classification from EEG signals. The new method removes inter-participant variability while preserving the emotion information in the data. We carried out our analysis on the SEED dataset, which contains 62-channel EEG recordings collected from 15 participants watching film clips. Results demonstrate that networks trained with stratified normalization significantly outperformed standard training with batch normalization. In addition, the highest model performance was achieved when extracting EEG features with the multitaper method, reaching a classification accuracy of 91.6% for two emotion categories (positive and negative) and 79.6% for three (also neutral). This analysis provides great insight into the potential benefits of stratified normalization when developing any cross-subject model based on EEG.
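Described at this level, stratified normalization amounts to normalizing features within each participant's stratum rather than across the whole dataset. A minimal sketch of per-participant z-scoring (my reading of the abstract, not the authors' exact implementation):

```python
import numpy as np

def stratified_normalize(features, subject_ids):
    """Z-score each feature column within each participant separately.

    features: (n_samples, n_features); subject_ids: (n_samples,).
    Removes per-participant offsets and scales while keeping the
    within-participant (emotion-related) variation intact.
    """
    out = np.empty_like(features, dtype=float)
    for s in np.unique(subject_ids):
        m = subject_ids == s
        mu = features[m].mean(axis=0)
        sd = features[m].std(axis=0) + 1e-8  # guard against constant features
        out[m] = (features[m] - mu) / sd
    return out
```

Because every participant's features land on a common zero-mean, unit-variance scale, a network trained on some participants generalizes more readily to unseen ones.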
Soft Bioelectronic Interfaces for Continuous Peripheral Neural Signal Recording and Robust Cross‐Subject Decoding
Accurate decoding of peripheral nerve signals is essential for advancing neuroscience research, developing therapeutics for neurological disorders, and creating reliable human–machine interfaces. However, the poor mechanical compliance of conventional metal electrodes and limited generalization of existing decoding models have significantly hindered progress in understanding peripheral nerve function. This study introduces low‐impedance, soft‐conducting polymer electrodes capable of forming stable, intimate contacts with peripheral nerve tissues, allowing for continuous and reliable recording of neural activity in awake animals. Using this unique dataset of neurophysiological signals, a neural network model that integrates both handcrafted and deep learning‐derived features, while incorporating parameter‐sharing and adaptation training strategies, is developed. This approach significantly improves the generalizability of the decoding model across subjects, reducing the reliance on extensive training data. The findings not only deepen the understanding of peripheral nerve function but also open avenues for developing advanced interventions to augment and restore neurological functions. A soft poly (3,4‐ethylenedioxythiophene):poly (styrenesulfonate)‐based electrode enables continuous, high‐quality recording of peripheral nerve activity. A neural network model integrating handcrafted and convolutional neural network‐based features decodes whisker movements with strong generalization, offering insights into peripheral nerve function and potential applications for therapeutic intervention.