764 results for "pre-processing"
Current Status and Issues Regarding Pre-processing of fNIRS Neuroimaging Data: An Investigation of Diverse Signal Filtering Methods Within a General Linear Model Framework
Functional near-infrared spectroscopy (fNIRS) research articles show large heterogeneity in their analysis approaches and pre-processing procedures. Additionally, they often lack a complete description of the methods applied, which is necessary for study replication or comparison of results. The aims of this paper were (i) to review and investigate what information is generally included in published fNIRS papers, and (ii) to define a signal pre-processing procedure to set a common ground for standardization guidelines. To this end, we reviewed 110 fNIRS articles published in 2016 in the field of cognitive neuroscience, and performed a simulation analysis with synthetic fNIRS data to optimize the signal filtering step before applying the general linear model (GLM) for statistical inference. Our results highlight that many papers lack important information and that there is large variability in the filtering methods used. Our simulations demonstrated that the optimal approach to remove noise and recover the hemodynamic response from fNIRS data in a GLM framework is a 1000th-order band-pass finite impulse response (FIR) filter. Based on these results, we give preliminary recommendations as a first step toward improving the analysis of fNIRS data and the dissemination of results.
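The recommended filtering step can be sketched with SciPy. The 1000th filter order follows the abstract above, but the 0.01–0.5 Hz band edges and the 10 Hz sampling rate are illustrative assumptions of this sketch, not values taken from the paper:

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def bandpass_fir(signal, fs, low=0.01, high=0.5, order=1000):
    """Zero-phase band-pass FIR filtering of one fNIRS channel."""
    # firwin returns order + 1 filter coefficients (taps)
    taps = firwin(order + 1, [low, high], pass_zero=False, fs=fs)
    # filtfilt runs the filter forward and backward, so no phase lag
    return filtfilt(taps, [1.0], signal)

fs = 10.0                      # assumed 10 Hz sampling rate
t = np.arange(0, 600, 1 / fs)  # 10 minutes of data
clean = np.sin(2 * np.pi * 0.05 * t)               # slow hemodynamic-like component
noisy = clean + 0.5 * np.sin(2 * np.pi * 1.2 * t)  # fast physiological "noise"
filtered = bandpass_fir(noisy, fs)
```

Zero-phase filtering (`filtfilt`) matters here because a one-pass FIR filter of this order would delay the recovered hemodynamic response by hundreds of samples.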
Improved Handwritten Digit Recognition Using Convolutional Neural Networks (CNN)
Traditional handwriting recognition systems have relied on handcrafted features and a large amount of prior knowledge, and training an optical character recognition (OCR) system on these prerequisites is a challenging task. Research in handwriting recognition now centres on deep learning techniques, which have achieved breakthrough performance in the last few years. Still, the rapid growth in the amount of handwritten data and the availability of massive processing power demand improved recognition accuracy and deserve further investigation. Convolutional neural networks (CNNs) are very effective in perceiving the structure of handwritten characters/words in ways that help the automatic extraction of distinct features, making CNNs the most suitable approach for handwriting recognition problems. Our aim in the proposed work is to explore the various design options, such as number of layers, stride size, receptive field, kernel size, padding, and dilation, for CNN-based handwritten digit recognition. In addition, we aim to evaluate various SGD optimization algorithms for improving the performance of handwritten digit recognition. A network's recognition accuracy can be increased by incorporating an ensemble architecture, but ensembles introduce increased computational cost and high testing complexity. Our objective is therefore to achieve comparable accuracy using a pure CNN architecture without ensembling. Thus, a CNN architecture is proposed that achieves accuracy even better than that of ensemble architectures, along with reduced operational complexity and cost. Moreover, we present an appropriate combination of learning parameters for designing a CNN that leads us to a new record in classifying MNIST handwritten digits. We carried out extensive experiments and achieved a recognition accuracy of 99.87% on the MNIST dataset.
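The design options listed above (stride, kernel size, padding, dilation) can be illustrated with a minimal single-channel convolution in NumPy. This is a teaching sketch, not the paper's architecture, and like CNN libraries it computes cross-correlation (no kernel flip):

```python
import numpy as np

def conv2d(image, kernel, stride=1, padding=0, dilation=1):
    """Single-channel 2-D convolution (cross-correlation, as in CNN
    libraries) showing the stride/padding/dilation design options."""
    k = kernel.shape[0]
    k_eff = dilation * (k - 1) + 1  # dilated (effective) kernel size
    img = np.pad(image, padding)
    h = (img.shape[0] - k_eff) // stride + 1
    w = (img.shape[1] - k_eff) // stride + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # receptive field: sample every `dilation`-th pixel
            patch = img[i * stride:i * stride + k_eff:dilation,
                        j * stride:j * stride + k_eff:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

x = np.arange(36, dtype=float).reshape(6, 6)
k3 = np.ones((3, 3))
out_valid = conv2d(x, k3)                # shape (4, 4): "valid" 3x3 convolution
out_stride = conv2d(x, k3, stride=2)     # shape (2, 2): stride shrinks the output
out_same = conv2d(x, k3, padding=1)      # shape (6, 6): "same" padding
out_dilated = conv2d(x, k3, dilation=2)  # shape (2, 2): effective 5x5 receptive field
```

The output-size arithmetic, `(input + 2*padding - effective_kernel) // stride + 1`, is exactly the trade-off space the abstract describes exploring.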
AdapterRemoval v2: rapid adapter trimming, identification, and read merging
Background As high-throughput sequencing platforms produce longer and longer reads, sequences generated from short inserts, such as those obtained from fossil and degraded material, are increasingly expected to contain adapter sequences. Efficient adapter trimming algorithms are also needed to process the growing amount of data generated per sequencing run. Findings We introduce AdapterRemoval v2, a major revision of AdapterRemoval v1, which introduces (i) striking improvements in throughput, through the use of single instruction, multiple data (SIMD; SSE1 and SSE2) instructions and multi-threading support, (ii) the ability to handle datasets containing reads or read-pairs with different adapters or adapter pairs, (iii) simultaneous demultiplexing and adapter trimming, (iv) the ability to reconstruct adapter sequences from paired-end reads for poorly documented data sets, and (v) native gzip and bzip2 support. Conclusions We show that AdapterRemoval v2 compares favorably with existing tools, while offering superior throughput to most alternatives examined here, both for single and multi-threaded operations.
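The core trimming problem can be illustrated with a toy Python function that removes a 3' adapter, including the partial-overlap case that matters for short inserts. Matching here is exact, whereas AdapterRemoval tolerates mismatches and uses SIMD-accelerated alignment; the sequences are invented for the example:

```python
def trim_adapter(read, adapter, min_overlap=3):
    """Remove a 3' adapter from a read.

    Handles both a full adapter inside the read and a partial adapter
    running off the read's 3' end; matching is exact, unlike
    AdapterRemoval's mismatch-tolerant alignment.
    """
    pos = read.find(adapter)  # full adapter occurrence
    if pos != -1:
        return read[:pos]
    # partial adapter overlapping the 3' end, longest overlap first
    for olen in range(min(len(adapter), len(read)) - 1, min_overlap - 1, -1):
        if read.endswith(adapter[:olen]):
            return read[:-olen]
    return read

# read = insert "ACGTACGT" followed by (part of) an Illumina-style adapter
print(trim_adapter("ACGTACGTAGATCGGAAG", "AGATCGGAAGAGC"))  # ACGTACGT
print(trim_adapter("ACGTACGTAGATC", "AGATCGGAAGAGC"))       # ACGTACGT
```

A `min_overlap` floor is needed because very short suffix matches (1–2 bases) occur by chance and would over-trim reads.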
BrainAGE: Revisited and reframed machine learning workflow
Since the introduction of the BrainAGE method, novel machine learning methods for brain age prediction have continued to emerge. The idea of estimating the chronological age from magnetic resonance images proved to be an interesting field of research due to the relative simplicity of its interpretation and its potential use as a biomarker of brain health. We revised our previous BrainAGE approach, originally utilising relevance vector regression (RVR), and substituted it with Gaussian process regression (GPR), which enables more stable processing of larger datasets, such as the UK Biobank (UKB). In addition, we extended the global BrainAGE approach to regional BrainAGE, providing spatially specific scores for five brain lobes per hemisphere. We tested the performance of the new algorithms under several different conditions and investigated their validity on the ADNI and schizophrenia samples, as well as on a synthetic dataset of neocortical thinning. The results show an improved performance of the reframed global model on the UKB sample with a mean absolute error (MAE) of less than 2 years and a significant difference in BrainAGE between healthy participants and patients with Alzheimer's disease and schizophrenia. Moreover, the workings of the algorithm show meaningful effects for a simulated neocortical atrophy dataset. The regional BrainAGE model performed well on two clinical samples, showing disease‐specific patterns for different levels of impairment. The results demonstrate that the new improved algorithms provide reliable and valid brain age estimations. We revised our BrainAGE approach using a Gaussian process regression, which enables more stable processing of larger datasets and results in improved performance.
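The substitution of GPR for RVR can be sketched with scikit-learn. The synthetic features below are stand-ins for pre-processed grey-matter data and are an assumption of this example, not the paper's pipeline or the UK Biobank data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# 200 synthetic "scans": each feature is a noisy proxy for age
# (stand-ins for pre-processed grey-matter features, NOT real data)
n_subjects, n_features = 200, 10
age = rng.uniform(45, 80, n_subjects)
features = age[:, None] + rng.normal(scale=2.0, size=(n_subjects, n_features))

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(),
                               normalize_y=True)
gpr.fit(features[:150], age[:150])
predicted = gpr.predict(features[150:])
brainage = predicted - age[150:]  # BrainAGE score: predicted minus chronological age
mae = np.mean(np.abs(predicted - age[150:]))
```

The `WhiteKernel` term gives GPR the noise model that makes it stable on large datasets; a positive mean `brainage` in a clinical group is what the paper interprets as accelerated brain ageing.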
RELAX‐Jr: An Automated Pre‐Processing Pipeline for Developmental EEG Recordings
Automated EEG pre‐processing pipelines provide several key advantages over traditional manual data-cleaning approaches: primarily, they are less time-intensive and remove potential experimenter error/bias. Automated pipelines also require less technical expertise, as they remove the need for manual artefact identification. We recently developed the fully automated Reduction of Electroencephalographic Artefacts (RELAX) pipeline and demonstrated its performance in cleaning EEG data recorded from adult populations. Here, we introduce the RELAX‐Jr pipeline, adapted from RELAX and designed specifically for pre‐processing of data collected from children. RELAX‐Jr implements multi‐channel Wiener filtering (MWF) and/or wavelet‐enhanced independent component analysis (wICA) combined with the adjusted‐ADJUST automated independent component classification algorithm, using algorithms adapted to optimally identify artefacts in EEG recordings taken from children. Using a dataset of resting‐state EEG recordings (N = 136) from children spanning early‐to‐middle childhood (4–12 years), we assessed the cleaning performance of RELAX‐Jr with a range of metrics, including signal‐to‐error ratio, artefact‐to‐residue ratio, ability to reduce blink and muscle contamination, and differences in estimates of alpha power between eyes‐open and eyes‐closed recordings. We also compared the performance of RELAX‐Jr against four publicly available automated cleaning pipelines. We demonstrate that RELAX‐Jr provides strong cleaning performance across a range of metrics, supporting its use as an effective and fully automated cleaning pipeline for neurodevelopmental EEG data. RELAX‐Jr is a fully automated toolbox for cleaning EEG data recorded from children. It is freely available as a plugin for EEGLAB and includes a graphical user interface. It is expected to facilitate effective and unbiased artefact removal in neurodevelopmental datasets.
Deep learning techniques for skin lesion analysis and melanoma cancer detection: a survey of state-of-the-art
Analysis of skin lesion images via visual inspection and manual examination to diagnose skin cancer has always been cumbersome, and manual examination of skin lesions to detect melanoma can be time-consuming and tedious. With the advancement of technology and the rapid increase in computational resources, various machine learning techniques and deep learning models have emerged for the analysis of medical images, most especially skin lesion images. The results of these models have been impressive; however, analysis of skin lesion images with these techniques still faces challenges due to the unique and complex features of the images. This work presents a comprehensive survey of techniques that have been used for detecting skin cancer from skin lesion images. The paper aims to provide an up-to-date survey that will assist investigators in developing efficient models that automatically and accurately detect melanoma from skin lesion images. The paper is presented in five parts: first, we identify the challenges in detecting melanoma from skin lesions. Second, we discuss the pre-processing and segmentation techniques for skin lesion images. Third, we make a comparative analysis of the state of the art. Fourth, we discuss classification techniques for classifying skin lesions into different classes of skin cancer. Finally, we explore and analyse the performance of state-of-the-art methods employed in the popular skin lesion image analysis competitions of ISIC 2018 and 2019. Application of ensemble deep learning models to well pre-processed and segmented images results in better classification performance on skin lesion images.
Improving the accuracy of single-trial fMRI response estimates using GLMsingle
Advances in artificial intelligence have inspired a paradigm shift in human neuroscience, yielding large-scale functional magnetic resonance imaging (fMRI) datasets that provide high-resolution brain responses to thousands of naturalistic visual stimuli. Because such experiments necessarily involve brief stimulus durations and few repetitions of each stimulus, achieving sufficient signal-to-noise ratio can be a major challenge. We address this challenge by introducing GLMsingle, a scalable, user-friendly toolbox available in MATLAB and Python that enables accurate estimation of single-trial fMRI responses (glmsingle.org). Requiring only fMRI time-series data and a design matrix as inputs, GLMsingle integrates three techniques for improving the accuracy of trial-wise general linear model (GLM) beta estimates. First, for each voxel, a custom hemodynamic response function (HRF) is identified from a library of candidate functions. Second, cross-validation is used to derive a set of noise regressors from voxels unrelated to the experiment. Third, to improve the stability of beta estimates for closely spaced trials, betas are regularized on a voxel-wise basis using ridge regression. Applying GLMsingle to the Natural Scenes Dataset and BOLD5000, we find that GLMsingle substantially improves the reliability of beta estimates across visually responsive cortex in all subjects. Comparable improvements in reliability are also observed in a smaller-scale auditory dataset from the StudyForrest experiment. These improvements translate into tangible benefits for higher-level analyses relevant to systems and cognitive neuroscience. We demonstrate that GLMsingle: (i) helps decorrelate response estimates between trials nearby in time; (ii) enhances representational similarity between subjects within and across datasets; and (iii) boosts one-versus-many decoding of visual stimuli.
GLMsingle is a publicly available tool that can significantly improve the quality of past, present, and future neuroimaging datasets sampling brain activity across many experimental conditions.
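Of the three techniques above, the ridge-regularization step has a simple closed form that can be sketched in NumPy. Per-voxel HRF selection and cross-validated noise regressors are omitted here, and the random design matrix is illustrative, not GLMsingle's actual code:

```python
import numpy as np

def ridge_betas(design, timeseries, alpha=1.0):
    """Ridge-regularized GLM betas for one voxel.

    Closed form: (X'X + alpha*I)^-1 X'y. Only the third GLMsingle
    ingredient (ridge regression) is shown; HRF fitting and noise
    regressors are omitted.
    """
    X = design
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]),
                           X.T @ timeseries)

rng = np.random.default_rng(1)
design = rng.normal(size=(200, 20))  # 200 volumes, 20 single-trial regressors
true_betas = rng.normal(size=20)
voxel = design @ true_betas + rng.normal(scale=0.5, size=200)  # noisy voxel
betas = ridge_betas(design, voxel, alpha=10.0)
```

The `alpha` term shrinks betas toward zero, which is what stabilizes estimates when closely spaced trials make the regressors nearly collinear; GLMsingle chooses this strength per voxel via cross-validation.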
Comparing Sentinel-1 Surface Water Mapping Algorithms and Radiometric Terrain Correction Processing in Southeast Asia Utilizing Google Earth Engine
Satellite remote sensing plays an important role in the monitoring of surface water for historical analysis and near real-time applications. Due to its cloud-penetrating capability, many studies have focused on providing efficient, high-quality methods for surface water mapping using Synthetic Aperture Radar (SAR). However, few studies have explored the effects of the SAR pre-processing steps used and of the resulting data as inputs into surface water mapping algorithms. This study leverages Google Earth Engine to compare two unsupervised histogram-based thresholding surface water mapping algorithms on two distinctly pre-processed Sentinel-1 SAR datasets, one with and one without terrain correction. The resulting surface water maps from the four different collections were validated with user-interpreted samples from high-resolution PlanetScope data. The overall accuracy of the four collections ranged from 92% to 95%, with Cohen's Kappa coefficients ranging from 0.7999 to 0.8427. The thresholding algorithm that samples a histogram based on water-edge information performed best, with a maximum accuracy of 95%. While the accuracies varied between methods, no statistically significant difference was found between the errors of the different collections. Furthermore, the surface water maps generated from the terrain-corrected data achieved intersection-over-union metrics of 95.8%–96.4%, showing greater spatial agreement, compared with 92.3%–93.1% for the non-terrain-corrected data. Overall, algorithms using terrain correction yielded higher overall accuracy and greater spatial agreement between methods. However, the differences between the approaches presented in this paper were not found to be significant, suggesting both methods are valid for generating accurate surface water maps.
High accuracy surface water maps are critical to disaster planning and response efforts, thus results from this study can help inform SAR data users on the pre-processing steps needed and its effects as inputs on algorithms for surface water mapping applications.
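The unsupervised histogram-thresholding idea can be sketched with Otsu's method in NumPy. The edge-based histogram sampling of the best-performing algorithm above and the Earth Engine workflow are not reproduced, and the synthetic backscatter values are assumptions of this example:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Unsupervised histogram threshold maximizing Otsu's
    between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * centers[:i]).sum() / w0  # class means
        m1 = (hist[i:] * centers[i:]).sum() / w1
        between = w0 * w1 * (m0 - m1) ** 2        # between-class variance
        if between > best_var:
            best_var, best_t = between, centers[i]
    return best_t

rng = np.random.default_rng(2)
# synthetic SAR backscatter in dB: water is dark, land is brighter
water = rng.normal(-20.0, 1.5, 5000)
land = rng.normal(-8.0, 2.0, 15000)
backscatter = np.concatenate([water, land])
t = otsu_threshold(backscatter)
water_mask = backscatter < t  # pixels below the threshold -> water
```

Because water is a specular reflector, it appears dark in SAR imagery, which is why a single threshold on the backscatter histogram can separate water from land.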
A review on sentiment analysis and emotion detection from text
Social networking platforms have become an essential means of communicating feelings to the entire world due to the rapid expansion of the Internet era. Many people use textual content, pictures, audio, and video to express their feelings or viewpoints. Text communication via Web-based networking media, however, can be overwhelming: every second, a massive amount of unstructured data is generated on the Internet by social media platforms. The data must be processed as rapidly as it is generated in order to comprehend human psychology, which can be accomplished using sentiment analysis, which recognizes polarity in texts. Sentiment analysis assesses whether the author has a negative, positive, or neutral attitude toward an item, administration, individual, or location. In some applications sentiment analysis is insufficient and emotion detection is required, which determines an individual's emotional/mental state precisely. This review paper provides insight into the levels of sentiment analysis, various emotion models, and the process of sentiment analysis and emotion detection from text. Finally, it discusses the challenges faced during sentiment and emotion analysis.
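The polarity-recognition step described above can be illustrated, at its very simplest, by a lexicon-based scorer. The word lists here are invented for the example; real systems use much richer lexicons or learned models and handle negation, sarcasm, and emoji:

```python
# Toy sentiment lexicons, invented for this example
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def polarity(text):
    """Classify text polarity by counting lexicon hits."""
    tokens = text.lower().split()
    score = (sum(tok in POSITIVE for tok in tokens)
             - sum(tok in NEGATIVE for tok in tokens))
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("I love this great phone"))            # positive
print(polarity("terrible battery and awful screen"))  # negative
```

The gap between this word-counting baseline and genuine emotion detection (which must distinguish, say, anger from sadness in equally "negative" text) is precisely the distinction the review draws.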
Exploring the Steps of Infrared (IR) Spectral Analysis: Pre-Processing, (Classical) Data Modelling, and Deep Learning
Infrared (IR) spectroscopy has greatly improved the ability to study biomedical samples because IR spectroscopy measures how molecules interact with infrared light, providing a measurement of the vibrational states of the molecules. Therefore, the resulting IR spectrum provides a unique vibrational fingerprint of the sample. This characteristic makes IR spectroscopy an invaluable and versatile technology for detecting a wide variety of chemicals and is widely used in biological, chemical, and medical scenarios. These include, but are not limited to, micro-organism identification, clinical diagnosis, and explosive detection. However, IR spectroscopy is susceptible to various interfering factors such as scattering, reflection, and interference, which manifest themselves as baseline, band distortion, and intensity changes in the measured IR spectra. Combined with the absorption information of the molecules of interest, these interferences prevent direct data interpretation based on the Beer–Lambert law. Instead, more advanced data analysis approaches, particularly artificial intelligence (AI)-based algorithms, are required to remove the interfering contributions and, more importantly, to translate the spectral signals into high-level biological/chemical information. This leads to the tasks of spectral pre-processing and data modeling, the main topics of this review. In particular, we will discuss recent developments in both tasks from the perspectives of classical machine learning and deep learning.
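The baseline interference mentioned above can be illustrated with a simple polynomial baseline correction in NumPy. The plain polynomial fit and the synthetic spectrum are assumptions of this sketch; practical pipelines often use methods such as asymmetric least squares or EMSC instead:

```python
import numpy as np

def baseline_correct(wavenumbers, spectrum, degree=3):
    """Subtract a fitted low-order polynomial baseline.

    Fitting is done on centred wavenumbers for numerical stability.
    """
    xc = wavenumbers - wavenumbers.mean()
    coeffs = np.polyfit(xc, spectrum, degree)
    return spectrum - np.polyval(coeffs, xc)

wn = np.linspace(900, 1800, 500)                      # wavenumber axis (cm^-1)
band = np.exp(-((wn - 1650) ** 2) / (2 * 15.0 ** 2))  # synthetic absorption band
drift = 1e-6 * (wn - 900) ** 2 + 1e-3 * wn            # sloping, curved baseline
corrected = baseline_correct(wn, band + drift)
```

Removing such slowly varying baselines is what restores the proportionality between band intensity and concentration that the Beer–Lambert law assumes.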