Catalogue Search | MBRL
Explore the vast range of titles available.
45 result(s) for "electrooculogram (EOG)"
ISCEV guide to visual electrodiagnostic procedures
by Fulton, Anne B; Nilsson, Josefin; Li, Shiying
in Cortex, Electrophysiology, Electroretinograms
2018
Clinical electrophysiological testing of the visual system incorporates a range of noninvasive tests and provides an objective indication of function relating to different locations and cell types within the visual system. This document, developed by the International Society for Clinical Electrophysiology of Vision (ISCEV), provides an introduction to the standard visual electrodiagnostic procedures in widespread use, including the full-field electroretinogram (ERG), the pattern electroretinogram (pattern ERG or PERG), the multifocal electroretinogram (multifocal ERG or mfERG), the electrooculogram (EOG), and the cortically derived visual evoked potential (VEP). The guideline outlines the basic principles of testing. Common clinical presentations and symptoms are described with illustrative examples and suggested investigation strategies.
Journal Article
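For the EOG in particular, the clinical result is conventionally summarized as the light-peak to dark-trough amplitude ratio (the Arden ratio). A minimal sketch; the example amplitudes and the commonly cited normal limit of about 1.8 are illustrative:

```python
# Hypothetical sketch of the standard clinical EOG summary measure, the
# Arden ratio (light-peak amplitude over dark-trough amplitude).
def arden_ratio(light_peak_uv, dark_trough_uv):
    """Ratio of EOG amplitude at the light peak to the dark trough."""
    return light_peak_uv / dark_trough_uv

# Illustrative amplitudes in microvolts, not patient data.
ratio = arden_ratio(light_peak_uv=900.0, dark_trough_uv=450.0)
print(f"Arden ratio: {ratio:.2f} ({'normal' if ratio >= 1.8 else 'reduced'})")
```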
An End-to-End Multi-Channel Convolutional Bi-LSTM Network for Automatic Sleep Stage Detection
2023
Sleep stage detection from polysomnography (PSG) recordings is a widely used method of monitoring sleep quality. Despite significant progress in the development of machine-learning (ML)- and deep-learning (DL)-based automatic sleep stage detection schemes focusing on single-channel PSG data, such as a single-channel electroencephalogram (EEG), electrooculogram (EOG), or electromyogram (EMG), developing a standard model is still an active subject of research. The use of a single source of information often suffers from data inefficiency and data skew. A multi-channel classifier can mitigate these challenges and achieve better performance, but it requires extensive computational resources to train, so the tradeoff between performance and computational cost cannot be ignored. In this article, we introduce a multi-channel, more specifically four-channel, convolutional bidirectional long short-term memory (Bi-LSTM) network that effectively exploits spatiotemporal features of data collected from multiple channels of the PSG recording (e.g., EEG Fpz-Cz, EEG Pz-Oz, EOG, and EMG) for automatic sleep stage detection. First, a dual-channel convolutional Bi-LSTM network module is designed and pre-trained using data from every pair of distinct channels of the PSG recording. Subsequently, we leverage transfer learning indirectly and fuse two dual-channel convolutional Bi-LSTM network modules to detect sleep stages. In the dual-channel convolutional Bi-LSTM module, a two-layer convolutional neural network extracts spatial features from two channels of the PSG recordings. These extracted spatial features are then coupled and given as input at every level of the Bi-LSTM network to extract and learn rich, temporally correlated features. Both the Sleep EDF-20 and Sleep EDF-78 (an expanded version of Sleep EDF-20) datasets are used in this study for evaluation. The model that includes an EEG Fpz-Cz + EOG module and an EEG Fpz-Cz + EMG module classifies sleep stages with the highest accuracy (ACC), Kappa (Kp), and F1 score (91.44%, 0.89, and 88.69%, respectively) on the Sleep EDF-20 dataset. The model consisting of an EEG Fpz-Cz + EMG module and an EEG Pz-Oz + EOG module shows the best performance on the Sleep EDF-78 dataset (ACC, Kp, and F1 score of 90.21%, 0.86, and 87.02%, respectively). In addition, a comparative study with respect to the existing literature is provided to demonstrate the efficacy of the proposed model.
Journal Article
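A minimal PyTorch sketch of the dual-channel convolutional Bi-LSTM idea described above: per-channel CNNs extract spatial features, which are fused and passed to a bidirectional LSTM. All layer sizes, kernel widths, and the 30-s/100-Hz epoch length (3000 samples) are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class DualChannelConvBiLSTM(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        # One small CNN per input channel to extract spatial features.
        self.branch = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(1, 32, kernel_size=50, stride=6), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=8, stride=2), nn.ReLU(),
            ) for _ in range(2)
        ])
        # Bi-LSTM learns temporal correlations over the fused features.
        self.bilstm = nn.LSTM(input_size=128, hidden_size=64, num_layers=2,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x1, x2):          # each: (batch, 1, 3000)
        f = torch.cat([self.branch[0](x1), self.branch[1](x2)], dim=1)
        f = f.permute(0, 2, 1)          # (batch, time, features)
        out, _ = self.bilstm(f)
        return self.head(out[:, -1])    # classify from the last time step

logits = DualChannelConvBiLSTM()(torch.randn(4, 1, 3000),
                                 torch.randn(4, 1, 3000))
```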
Electrooculograms for Human–Computer Interaction: A Review
2019
Eye movements generate electric signals, which a user can employ to control his/her environment and communicate with others. This paper presents a review of previous studies on such electric signals, that is, electrooculograms (EOGs), from the perspective of human–computer interaction (HCI). EOGs represent one of the easiest means of estimating eye movements using a low-cost device, and they have often been considered and utilized for HCI applications, such as typing on a virtual keyboard, moving a mouse, or controlling a wheelchair. The objective of this study is to summarize the experimental procedures of previous studies and provide a guide for researchers interested in this field. The basic characteristics of EOGs, associated measurements, and signal processing and pattern recognition algorithms are briefly reviewed, and various applications reported in the existing literature are listed. It is expected that EOGs will be a useful source of communication in virtual reality environments and can act as a valuable communication tool for people with amyotrophic lateral sclerosis.
Journal Article
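A toy sketch of the kind of low-cost eye-movement estimation such HCI systems build on: threshold the derivative of a low-pass-filtered horizontal EOG channel to detect left/right saccades. The sampling rate, cutoff, threshold, and refractory period are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_saccades(heog, fs=250.0, cutoff=10.0, thresh=50.0):
    """Return (sample index, 'left'/'right') pairs for detected saccades."""
    b, a = butter(4, cutoff / (fs / 2), btype="low")   # suppress fast noise
    smooth = filtfilt(b, a, heog)
    dv = np.diff(smooth) * fs                          # derivative, uV/s
    events = []
    i = 0
    while i < len(dv):
        if abs(dv[i]) > thresh:
            events.append((i, "right" if dv[i] > 0 else "left"))
            i += int(0.2 * fs)                         # 200 ms refractory gap
        else:
            i += 1
    return events
```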
SSA with CWT and k-Means for Eye-Blink Artifact Removal from Single-Channel EEG Signals
2022
Recently, the use of portable electroencephalogram (EEG) devices to record brain signals, both in health care monitoring and in other applications such as driver fatigue detection, has increased due to their low cost and ease of use. However, the measured EEG signals are always mixed with the electrooculogram (EOG), which results from eyelid blinking or eye movements. Eye blinking/movement is an uncontrollable activity that produces a high-amplitude, slowly varying component mixed into the measured EEG signal. The presence of these artifacts misleads our understanding of the underlying brain state. Because portable EEG devices comprise only a few EEG channels, or sometimes a single channel, classical artifact removal techniques such as blind source separation cannot be used to remove these artifacts from a single-channel EEG signal. Hence, there is a demand for new single-channel artifact removal techniques. Singular spectrum analysis (SSA) has been widely used as a single-channel eye-blink artifact removal technique. However, while removing the artifact, SSA also removes the low-frequency components from the non-artifact region of the EEG signal. To preserve these low-frequency components, this paper proposes a new methodology integrating SSA with the continuous wavelet transform (CWT) and the k-means clustering algorithm, which removes the eye-blink artifact from single-channel EEG signals without altering the low frequencies of the EEG signal. The proposed method is evaluated on both synthetic and real EEG signals. The results show the superiority of the proposed method over existing methods.
Journal Article
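A compact numpy sketch of plain SSA, the building block the paper extends: embed the signal in a trajectory matrix, decompose it by SVD, and reconstruct selected components by diagonal averaging. The window length and the choice to keep the two leading components are illustrative; the paper's full method additionally uses CWT and k-means to decide which components to remove.

```python
import numpy as np

def ssa_reconstruct(x, L=125, keep=range(2)):
    """Reconstruct the SSA components in `keep` from 1-D signal x."""
    N = len(x)
    K = N - L + 1
    # Trajectory (Hankel) matrix: column k is the window x[k:k+L].
    X = np.column_stack([x[k:k + L] for k in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Sum of the chosen rank-1 components, then diagonal averaging
    # (Hankelization) to return to a 1-D series.
    Xr = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in keep)
    rec = np.zeros(N)
    counts = np.zeros(N)
    for i in range(L):
        for j in range(K):
            rec[i + j] += Xr[i, j]
            counts[i + j] += 1
    return rec / counts

# The leading components typically capture the slow, high-amplitude
# eye-blink trend, which can then be subtracted from the contaminated EEG.
eeg = np.random.randn(1000)
artifact_estimate = ssa_reconstruct(eeg, L=125, keep=range(2))
cleaned = eeg - artifact_estimate
```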
Circulant Singular Spectrum Analysis and Discrete Wavelet Transform for Automated Removal of EOG Artifacts from EEG Signals
2023
Background: Portable electroencephalogram (EEG) systems are often used in health care applications to record brain signals because of their ease of use. The electrooculogram (EOG) is a common, low-frequency, high-amplitude eye-blink artifact that can confound disease diagnosis. As a result, artifact removal approaches for single-channel portable EEG devices are in high demand. Materials: Dataset 2a from BCI Competition IV was employed. It contains EEG data from nine subjects. To determine the EOG effect, each session starts with 5 min of EEG data: two minutes with the eyes open, one minute with the eyes closed, and one minute with eye movements. Methodology: This article presents the automated removal of EOG artifacts from EEG signals. Circulant singular spectrum analysis (CiSSA) was used to decompose the EOG-contaminated EEG signals into intrinsic mode functions (IMFs). Next, we identified the artifact components using kurtosis and energy values and removed them using a 4-level discrete wavelet transform (DWT). Results: The proposed approach was evaluated on synthetic and real EEG data and found to be effective in eliminating EOG artifacts while preserving low-frequency EEG information. CiSSA-DWT achieved the best signal-to-artifact ratio (SAR), mean absolute error (MAE), relative root mean square error (RRMSE), and correlation coefficient (CC) of 1.4525, 0.0801, 18.274, and 0.9883, respectively. Comparison: The developed technique outperforms existing artifact suppression techniques according to the performance measures. Conclusions: This advancement is important for brain science and can serve as an initial pre-processing step for EEG-related research.
Journal Article
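A hypothetical sketch of the identify-and-filter step, assuming components have already been obtained from a CiSSA-style decomposition: components with high kurtosis are treated as EOG-dominated and their slow trend is suppressed with a 4-level DWT. The kurtosis threshold and the 'db4' wavelet are assumptions, not the paper's exact settings.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

def remove_eog(components, kurt_thresh=3.0, wavelet="db4", level=4):
    """Denoise high-kurtosis components, then recombine into one signal."""
    cleaned = []
    for c in components:
        if kurtosis(c) > kurt_thresh:            # likely artifact component
            coeffs = pywt.wavedec(c, wavelet, level=level)
            # Zero the approximation coefficients, which carry the slow,
            # high-amplitude blink trend; keep the detail coefficients.
            coeffs[0] = np.zeros_like(coeffs[0])
            c = pywt.waverec(coeffs, wavelet)[: len(c)]
        cleaned.append(c)
    return np.sum(cleaned, axis=0)
```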
TMU-Net: A Transformer-Based Multimodal Framework with Uncertainty Quantification for Driver Fatigue Detection
2025
Fatigued driving is a prevalent issue that frequently contributes to traffic accidents, prompting the development of automated fatigue detection methods based on various data sources, particularly reliable physiological signals. However, challenges in accuracy, robustness, and practicality persist, especially for cross-subject detection. Multimodal data fusion can enhance the effective estimation of driver fatigue. In this work, we leverage the advantages of multimodal signals to propose a novel multimodal attention network (TMU-Net) for driver fatigue detection, achieving precise fatigue assessment by integrating electroencephalogram (EEG) and electrooculogram (EOG) signals. The core innovation of TMU-Net lies in its unimodal feature extraction module, which combines causal convolution, ConvSparseAttention, and Transformer encoders to effectively capture spatiotemporal features, and its multimodal fusion module, which employs cross-modal attention and uncertainty-weighted gating to dynamically integrate complementary information. By incorporating uncertainty quantification, TMU-Net significantly enhances robustness to noise and individual variability. Experimental validation on the SEED-VIG dataset demonstrates TMU-Net's superior performance stability across 23 subjects in cross-subject testing, effectively leveraging the complementary strengths of EEG (2 Hz full-band and five-band features) and EOG signals for high-precision fatigue detection. Furthermore, attention heatmap visualization reveals the dynamic interaction mechanisms between EEG and EOG signals, confirming the physiological rationality of TMU-Net's feature fusion strategy. Practical challenges and future research directions for fatigue detection methods are also discussed.
Journal Article
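A rough PyTorch sketch in the spirit of the fusion module described above: EEG queries attend to EOG features via cross-modal attention, and per-modality uncertainty estimates gate the fusion. The dimensions and the learned log-variance used as the uncertainty estimate are assumptions, not the published design.

```python
import torch
import torch.nn as nn

class UncertaintyGatedFusion(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        self.cross = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.logvar = nn.ModuleDict({m: nn.Linear(d, 1)
                                     for m in ("eeg", "eog")})

    def forward(self, eeg, eog):                 # each: (batch, time, d)
        # EEG queries attend to EOG keys/values (cross-modal attention).
        attended, _ = self.cross(eeg, eog, eog)
        # Per-modality uncertainty -> precision weights (lower variance,
        # higher weight), normalized across the two streams.
        w_eeg = torch.exp(-self.logvar["eeg"](eeg.mean(1)))
        w_eog = torch.exp(-self.logvar["eog"](eog.mean(1)))
        total = w_eeg + w_eog
        fused = (w_eeg / total).unsqueeze(1) * eeg + \
                (w_eog / total).unsqueeze(1) * attended
        return fused                             # (batch, time, d)

fused = UncertaintyGatedFusion()(torch.randn(8, 20, 64),
                                 torch.randn(8, 20, 64))
```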
An EEG/EMG/EOG-Based Multimodal Human-Machine Interface to Real-Time Control of a Soft Robot Hand
2019
Brain-computer interface (BCI) technology shows potential for application to motor rehabilitation therapies that use neural plasticity to restore motor function and improve the quality of life of stroke survivors. However, it is often difficult for BCI systems to provide the variety of control commands necessary for natural multi-task, real-time control of a soft robot. In this study, a novel multimodal human-machine interface system (mHMI) is developed using combinations of electrooculography (EOG), electroencephalography (EEG), and electromyography (EMG) to generate numerous control instructions. Moreover, we explore subject acceptance of an affordable wearable soft robot for performing basic hand actions during robot-assisted movement. Six healthy subjects separately performed left- and right-hand motor imagery, looking-left and looking-right eye movements, and different hand gestures in different modes to control a soft robot in a variety of actions. The results indicate that the number of mHMI control instructions is significantly greater than achievable with any individual mode. Furthermore, the mHMI achieves an average classification accuracy of 93.83% with an average information transfer rate of 47.41 bits/min, equivalent to a control speed of 17 actions per minute. The study is expected to lead to a more user-friendly mHMI for real-time control of soft robots that helps healthy or disabled persons perform basic hand movements in a friendly and convenient way.
Journal Article
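The reported information transfer rate can be related to the classification accuracy through Wolpaw's standard ITR formula. A small sketch; the command-set size and seconds-per-action below are assumed for illustration and will not reproduce the paper's 47.41 bits/min exactly.

```python
import math

def itr_bits_per_min(n_classes, accuracy, trial_seconds):
    """Wolpaw ITR: bits per selection scaled to selections per minute."""
    p = accuracy
    bits = math.log2(n_classes) + p * math.log2(p) + \
           (1 - p) * math.log2((1 - p) / (n_classes - 1))
    return bits * (60.0 / trial_seconds)

# 17 actions/min -> about 3.5 s per action at 93.83% accuracy.
print(itr_bits_per_min(n_classes=8, accuracy=0.9383,
                       trial_seconds=60 / 17))
```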
An EEG-/EOG-Based Hybrid Brain-Computer Interface: Application on Controlling an Integrated Wheelchair Robotic Arm System
2019
Most existing brain-computer interfaces (BCIs) are designed to control a single assistive device, such as a wheelchair, a robotic arm, or a prosthetic limb. However, many daily tasks require combined functions that can only be realized by integrating multiple robotic devices. Such integration raises the requirement for control accuracy and makes reliable control more challenging than in the single-device case. In this study, we propose a novel hybrid BCI (hBCI) with high accuracy, based on the electroencephalogram (EEG) and electrooculogram (EOG), to control an integrated wheelchair robotic arm system. The user turns the wheelchair left/right by performing left/right hand motor imagery (MI) and generates other commands for the wheelchair and the robotic arm by performing eye blinks and eyebrow-raising movements. Twenty-two subjects participated in an MI training session, and five of them completed a mobile self-drinking experiment, which was purposely designed with high accuracy requirements. The results demonstrate that the proposed hBCI can provide satisfactory control accuracy for a system consisting of multiple robotic devices, and they show the potential of BCI-controlled systems for application in complex daily tasks.
Journal Article
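A toy dispatch table illustrating how decoded EEG/EOG events of the kind described above might map onto commands for an integrated wheelchair and robotic arm; the event names and command set are hypothetical, not the paper's protocol.

```python
# Hypothetical mapping from decoded events to (device, action) commands.
COMMANDS = {
    "mi_left":       ("wheelchair", "turn_left"),    # left-hand MI
    "mi_right":      ("wheelchair", "turn_right"),   # right-hand MI
    "single_blink":  ("wheelchair", "start_stop"),
    "double_blink":  ("arm", "grasp"),
    "eyebrow_raise": ("arm", "next_mode"),
}

def dispatch(event):
    """Translate one decoded event into a device command; default to idle."""
    return COMMANDS.get(event, (None, "idle"))

assert dispatch("mi_left") == ("wheelchair", "turn_left")
```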
Sleep quality assessment by parameter optimization
by Qureshi, M S; Koser, A A; Gupta, A
in Cerebral blood flow (CBF), Electrocardiogram (ECG), Electroencephalography (EEG)
2021
Sleep quality measurement is a complex process that requires a large number of parameters to monitor sleep and sleep cycles. Gold-standard polysomnography (PSG) parameters are considered the reference for sleep quality measurement. The PSG process monitors many parameters and therefore requires a large number of sensors, which makes it complex, expensive, and obtrusive. There is a need to find an optimized set of parameters that directly provide accurate information about sleep and reduce process complexity. Our parameter optimization method reduces the parameter set by finding key parameters and their interdependent parameters. Sleep monitoring with these optimized parameters differs both from the complex clinical PSG used in hospitals and from commercially available devices that rely on dependent and dynamic parameter sensing. The optimized parameters obtained from the PSG parameters are the electrocardiogram (ECG), electrooculogram (EOG), electroencephalography (EEG), and cerebral blood flow (CBF). These key parameters show close correlation with sleep and hence reduce the complexity of sleep monitoring by providing simultaneous measurement of appropriate signals for sleep analysis.
Journal Article
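A rough sketch of correlation-driven parameter reduction of the kind the abstract describes: rank candidate PSG parameters by their correlation with a sleep target and keep the top few. The data shapes, synthetic example, and greedy selection rule are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def select_key_parameters(X, y, names, k=4):
    """Rank parameters by |correlation| with the target; keep the top k."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    order = np.argsort(scores)[::-1][:k]
    return [names[j] for j in order]

# Synthetic epochs-by-parameters data for illustration only.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))        # 500 epochs, 8 candidate parameters
y = X[:, 1] * 0.8 + rng.standard_normal(500) * 0.3   # target tracks column 1
print(select_key_parameters(X, y, ["EEG", "EOG", "EMG", "ECG",
                                   "SpO2", "CBF", "RESP", "TEMP"], k=4))
```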
PMMCT: A Parallel Multimodal CNN-Transformer Model to Detect Slow Eye Movement for Recognizing Driver Sleepiness
2025
Sleepiness at the wheel is an important contributor to road traffic accidents. Slow eye movement (SEM) serves as a reliable physiological indicator of the sleep onset period (SOP). To detect SEM for recognizing drivers' SOP, a Parallel Multimodal CNN-Transformer (PMMCT) model is proposed. The model employs two parallel feature extraction modules to process bimodal signals, each comprising convolutional layers and Transformer encoder layers. The extracted features are fused and then classified using fully connected layers. The model is evaluated on two bimodal signal combinations, HEOG + O2 and HEOG + HSUM, where HSUM is the sum of two single-channel horizontal electrooculogram (HEOG) signals and captures electroencephalogram (EEG) features similar to those in the conventional O2 channel. Experimental results indicate that, using the PMMCT model, the HEOG + HSUM combination performs comparably to the HEOG + O2 combination and outperforms unimodal HEOG by 2.73% in F1-score, with an average classification accuracy and F1-score of 99.89% and 99.35%, outperforming CNN, CNN-LSTM, and CNN-LSTM-Attention models. The model exhibits minimal false positives and false negatives, with average values of 5.2 and 0.8, respectively. By combining CNNs' local feature extraction with Transformers' global temporal modeling, and by using only two HEOG electrodes, the system offers superior performance while enhancing wearable-device comfort for real-world applications.
Journal Article
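A minimal PyTorch sketch of the parallel-branch idea: each modality gets a CNN front-end plus a Transformer encoder, and the pooled branch features are fused for classification. All sizes are illustrative assumptions, not the published PMMCT configuration.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """CNN front-end followed by a Transformer encoder for one modality."""
    def __init__(self, d=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, d, kernel_size=16, stride=4), nn.ReLU(),
            nn.Conv1d(d, d, kernel_size=8, stride=2), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4,
                                           batch_first=True)
        self.enc = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):                        # (batch, 1, samples)
        f = self.cnn(x).permute(0, 2, 1)         # (batch, time, d)
        return self.enc(f).mean(dim=1)           # pooled branch feature

class PMMCTLike(nn.Module):
    def __init__(self, d=64, n_classes=2):
        super().__init__()
        self.heog, self.hsum = Branch(d), Branch(d)
        self.head = nn.Linear(2 * d, n_classes)  # fuse, then classify

    def forward(self, heog, hsum):
        return self.head(torch.cat([self.heog(heog), self.hsum(hsum)],
                                   dim=1))

logits = PMMCTLike()(torch.randn(2, 1, 1000), torch.randn(2, 1, 1000))
```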