Search Results

3,544 result(s) for "independent component"
Independent component analysis: An introduction
Independent component analysis (ICA) is a widely used blind source separation technique that has been applied across many fields. However, ICA is usually treated as a black box, without an understanding of its internal details. This paper therefore presents the basics of ICA, showing how it works, to serve as a comprehensive source for researchers interested in the field. The paper starts by introducing the definition and underlying principles of ICA. Step-by-step numerical examples then explain the preprocessing steps of ICA and its mixing and unmixing processes. Finally, different ICA algorithms, challenges, and applications are presented.
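
A minimal sketch of the mixing and unmixing processes summarized above, assuming scikit-learn's FastICA as the unmixing algorithm (the paper surveys several ICA algorithms rather than prescribing this one; the signals and mixing matrix below are illustrative):

    import numpy as np
    from sklearn.decomposition import FastICA

    t = np.linspace(0, 8, 2000)

    # Two statistically independent source signals.
    s1 = np.sin(2 * np.pi * t)              # sinusoid
    s2 = np.sign(np.sin(3 * np.pi * t))     # square wave
    S = np.c_[s1, s2]

    # Mix them with a (normally unknown) mixing matrix A: x = A s.
    A = np.array([[1.0, 0.5],
                  [0.7, 1.3]])
    X = S @ A.T

    # Unmix: recover the sources up to permutation and scaling.
    ica = FastICA(n_components=2, random_state=0)
    S_est = ica.fit_transform(X)   # estimated independent components
    A_est = ica.mixing_            # estimated mixing matrix
    print(S_est.shape, A_est.shape)
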
Source-based morphometry: a decade of covarying structural brain patterns
In this paper, we review and discuss brain imaging studies which have used the source-based morphometry (SBM) approach over the past decade. SBM is a data-driven linear multivariate approach for decomposing structural brain imaging data into commonly covarying imaging components and subject-specific loading parameters. It is a well-established technique which has predominantly been used to study neuroanatomic differences between healthy controls and patients with neuropsychiatric diseases. We start by discussing the advantages of this technique over univariate analysis for imaging studies, followed by a discussion of results from recent studies which have successfully applied this methodology. We also present recent extensions of this framework, including nonlinear SBM and biclustered independent component analysis (B-ICA), and conclude with possible directions for future work.
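
As a rough illustration of the kind of decomposition SBM performs, the sketch below splits a hypothetical subjects x voxels gray-matter matrix into spatial components and subject-specific loadings; the synthetic data and the use of scikit-learn's FastICA are assumptions, not the reviewed pipeline:

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(1)
    n_subjects, n_voxels, n_components = 50, 5000, 5

    gm_matrix = rng.normal(size=(n_subjects, n_voxels))   # placeholder gray-matter data

    ica = FastICA(n_components=n_components, random_state=1)
    loadings = ica.fit_transform(gm_matrix)   # subject-specific loadings (subjects x components)
    components = ica.components_              # covarying spatial patterns (components x voxels)

    # Loadings could then be compared between patients and controls,
    # e.g. with a two-sample test per component (not shown).
    print(loadings.shape, components.shape)
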
Improving the performance of independent component regression
In this study, we comprehensively explore the application of principal component analysis (PCA) and independent component analysis (ICA), with a focus on their practical utility. We compare the two methods both theoretically and practically, using real and simulated data. Because PCA and ICA algorithms are often treated as black boxes, they are frequently seen as complex; we therefore break down some of the theory behind ICA. We then compare principal component regression (PCR) and independent component regression (ICR) on both real and simulated datasets, analyzing the data and explaining where each method (ICA or PCA) performs better. Finally, we propose solutions to improve the performance of ICR and PCR for datasets with structures suited to ICA and PCA, respectively.
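
A minimal sketch of the PCR/ICR comparison described above, using synthetic data and scikit-learn pipelines; the dataset, component counts, and scoring are illustrative assumptions only:

    import numpy as np
    from sklearn.decomposition import PCA, FastICA
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(2)
    X = rng.laplace(size=(300, 20))                              # heavy-tailed predictors
    y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=300)

    # Reduce the predictors with PCA or ICA, then fit least squares on the features.
    pcr = make_pipeline(PCA(n_components=5), LinearRegression())
    icr = make_pipeline(FastICA(n_components=5, random_state=2), LinearRegression())

    print("PCR R^2:", cross_val_score(pcr, X, y, cv=5).mean())
    print("ICR R^2:", cross_val_score(icr, X, y, cv=5).mean())
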
Identifying canonical and replicable multi‐scale intrinsic connectivity networks in 100k+ resting‐state fMRI datasets
Despite the known benefits of data‐driven approaches, the lack of approaches for identifying functional neuroimaging patterns that capture both individual variations and inter‐subject correspondence limits the clinical utility of rsfMRI and its application to single‐subject analyses. Here, using rsfMRI data from over 100k individuals across private and public datasets, we identify replicable multi‐spatial‐scale canonical intrinsic connectivity network (ICN) templates via the use of multi‐model‐order independent component analysis (ICA). We also study the feasibility of estimating subject‐specific ICNs via spatially constrained ICA. The results show that the subject‐level ICN estimations vary as a function of the ICN itself, the data length, and the spatial resolution. In general, large‐scale ICNs require less data to achieve specific levels of (within‐ and between‐subject) spatial similarity with their templates. Importantly, increasing data length can reduce an ICN's subject‐level specificity, suggesting longer scans may not always be desirable. We also find a positive linear relationship between data length and spatial smoothness (possibly due to averaging over intrinsic dynamics), suggesting studies examining optimized data length should consider spatial smoothness. Finally, consistency in spatial similarity between ICNs estimated using the full data and subsets across different data lengths suggests lower within‐subject spatial similarity in shorter data is not wholly defined by lower reliability in ICN estimates, but may be an indication of meaningful brain dynamics which average out as data length increases.
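
The within- and between-subject spatial similarity discussed above can be sketched as a simple Pearson correlation between an estimated subject-level ICN map and its template; the maps below are random placeholders, and the spatially constrained ICA step itself is not reimplemented:

    import numpy as np

    def spatial_similarity(subject_map, template_map):
        """Pearson correlation between two flattened spatial maps."""
        return float(np.corrcoef(subject_map.ravel(), template_map.ravel())[0, 1])

    rng = np.random.default_rng(3)
    template = rng.normal(size=(61, 73, 61))                           # template ICN map
    subject = template + rng.normal(scale=0.5, size=template.shape)    # noisy subject-level map

    print(f"spatial similarity: {spatial_similarity(subject, template):.2f}")
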
Refined GRACE/GFO‐Derived Terrestrial Water Storage Anomaly in Middle East Recovered by ICA‐Based Forward Modeling Approach
The Gravity Recovery and Climate Experiment (GRACE) mission and its successor, GRACE Follow-On (GFO), effectively monitor terrestrial water storage anomaly (TWSA). However, their constrained spatial resolution imposes limitations, with leakage and attenuation potentially impacting the accuracy of regional TWSA. While GRACE/GFO observations capture the ongoing TWSA decline in the Middle East due to excessive groundwater extraction, the nearby Caspian Sea's long-term water loss, combined with the seasonal signals from the coastal sea, complicates accurate TWSA estimation through signal attenuation and leakage. To address these issues, we propose a combined approach, that is, independent component analysis (ICA)-based forward modeling (IFM), to discern and isolate the leakage effect and improve the recovery of the TWSA signal. We demonstrate the impact of signal attenuation and leakage through simulation, and validate the effectiveness of IFM. This method is also confirmed through steric-corrected altimetry estimates in the Caspian Sea, Red Sea, and Persian Gulf, and further validated in Greenland and Lake Victoria. Our results show considerable leakage in GRACE/GFO TWSA estimates for Saudi Arabia and Iran. Leakage from the Red Sea and Persian Gulf introduces a 28.6% bias in Saudi Arabia's TWSA trend, while leakage from the Caspian Sea results in a 36.4% bias in Iran. After IFM recovery, the TWSA decline rates for Saudi Arabia, Iraq, and Iran are 11.48 ± 0.32, 3.56 ± 0.44, and 7.75 ± 0.45 km³/yr, respectively. This study demonstrates the effectiveness of IFM in deriving a refined TWSA signal, providing valuable insights for water resource management in arid regions.
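
A highly simplified sketch of the forward-modeling idea, with a Gaussian smoother standing in for GRACE truncation and filtering and with the ICA separation of leaking signals omitted; everything below is an illustrative assumption rather than the authors' IFM implementation:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def forward_model(observed, n_iter=50, sigma=3.0):
        """Iteratively adjust a 'true' field so that, after smoothing, it matches the observed field."""
        estimate = observed.copy()
        for _ in range(n_iter):
            predicted = gaussian_filter(estimate, sigma)   # simulate attenuation/leakage
            estimate = estimate + (observed - predicted)   # add the misfit back
        return estimate

    rng = np.random.default_rng(4)
    observed_twsa = gaussian_filter(rng.normal(size=(90, 180)), 3.0)   # leakage-affected field
    recovered_twsa = forward_model(observed_twsa)
    print(recovered_twsa.shape)
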
Joint decorrelation, a versatile tool for multichannel data analysis
We review a simple yet versatile approach for the analysis of multichannel data, focusing in particular on brain signals measured with EEG, MEG, ECoG, LFP or optical imaging. Sensors are combined linearly with weights that are chosen to provide optimal signal-to-noise ratio. Signal and noise can be variably defined to match the specific need, e.g. reproducibility over trials, frequency content, or differences between stimulus conditions. We demonstrate how the method can be used to remove power line or cardiac interference, enhance stimulus-evoked or stimulus-induced activity, isolate narrow-band cortical activity, and so on. The approach involves decorrelating both the original and filtered data by joint diagonalization of their covariance matrices. We trace its origins; offer an easy-to-understand explanation; review a range of applications; and chart failure scenarios that might lead to misleading results, in particular due to overfitting. In addition to its flexibility and effectiveness, a major appeal of the method is that it is easy to understand.
• Joint Decorrelation is a powerful, easy to use tool for multichannel data analysis.
• It finds optimal weights to be applied to signals to maximize a criterion.
• It can factor out noise, enhance weak sources, reveal oscillatory activity, etc.
• It has been found effective for EEG, MEG, ECoG, LFP and optical imaging data.
• We give examples of useful applications, and review failure scenarios and caveats.
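
A minimal sketch of the joint-diagonalization step described above, using a generalized eigenvalue decomposition of two covariance matrices; the synthetic "signal" and "noise" conditions are placeholders for whatever criterion (trials, frequency bands, stimulus conditions) a real analysis would use:

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(5)
    n_sensors, n_samples = 8, 10000

    noise = rng.normal(size=(n_samples, n_sensors))
    source = np.sin(np.linspace(0, 200 * np.pi, n_samples))[:, None]   # narrow-band source
    data = noise + source @ rng.normal(size=(1, n_sensors))            # source buried in noise

    C_signal = np.cov(data, rowvar=False)
    C_noise = np.cov(noise, rowvar=False)

    # The generalized eigendecomposition diagonalizes both covariances at once;
    # eigenvectors with the largest eigenvalues give the highest-SNR sensor weights.
    eigvals, weights = eigh(C_signal, C_noise)
    best_weights = weights[:, ::-1][:, :2]     # top two components
    components = data @ best_weights
    print(np.round(eigvals[::-1][:2], 2))
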
An Iterative ICA-Based Reconstruction Method to Produce Consistent Time-Variable Total Water Storage Fields Using GRACE and Swarm Satellite Data
Observing global terrestrial water storage changes (TWSCs) from (inter-)seasonal to (multi-)decadal time-scales is important for understanding the Earth as a system under natural and anthropogenic climate change. The primary goal of the Gravity Recovery And Climate Experiment (GRACE) satellite mission (2002–2017) and its follow-on mission (GRACE-FO, 2018–onward) is to provide time-variable gravity fields, which can be converted to TWSCs with ∼300 km spatial resolution; however, the one-year data gap between GRACE and GRACE-FO represents a critical discontinuity that cannot be bridged by alternative data or models of the same quality. To fill this gap, we applied time-variable gravity fields (2013–onward) from the Swarm Earth explorer mission, which have a low spatial resolution of ∼1500 km. A novel iterative reconstruction approach based on independent component analysis (ICA) was formulated to combine the GRACE and Swarm fields. The reconstructed TWSC fields for 2003–2018 were compared with a commonly applied reconstruction technique and with GRACE-FO TWSC fields; the results indicate considerable noise reduction and improved long-term consistency for the iterative ICA reconstruction. The reconstructed fields were then used to evaluate trends and seasonal mass changes (2003–2018) within the world's 33 largest river basins.
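
In the spirit of the reconstruction described above, the sketch below learns spatial modes from a GRACE-like record with ICA and fits their coefficients to coarse Swarm-like fields during the gap; the random data, dimensions, and single non-iterative pass are assumptions for illustration, not the authors' algorithm:

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(6)
    n_months_grace, n_months_gap, n_grid, n_modes = 160, 12, 2000, 6

    grace = rng.normal(size=(n_months_grace, n_grid))      # GRACE-like TWSC fields
    swarm_gap = rng.normal(size=(n_months_gap, n_grid))    # coarse Swarm-like fields in the gap

    ica = FastICA(n_components=n_modes, random_state=6)
    temporal = ica.fit_transform(grace)    # temporal independent components (months x modes)
    spatial = ica.mixing_                  # spatial patterns of the modes (grid x modes)

    # Estimate gap-period coefficients of the GRACE modes from the Swarm fields,
    # then rebuild full-resolution fields from those modes.
    coeffs, *_ = np.linalg.lstsq(spatial, (swarm_gap - ica.mean_).T, rcond=None)
    reconstructed_gap = coeffs.T @ spatial.T + ica.mean_
    print(temporal.shape, reconstructed_gap.shape)
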
Nature-inspired metaheuristics model for gene selection and classification of biomedical microarray data
Identifying a small subset of informative genes from a gene expression dataset is an important step for sample classification in bioinformatics and machine learning. This process has two objectives: first, to minimize the number of selected genes, and second, to maximize the classification accuracy of the classifier. In this paper, a hybrid machine learning framework based on the nature-inspired cuckoo search (CS) algorithm is proposed to address this problem. The framework combines the CS algorithm with an artificial bee colony (ABC) within the exploitation and exploration phases of a genetic algorithm (GA); these strategies maintain an appropriate balance between exploitation and exploration during the search. In preprocessing, independent component analysis (ICA) extracts the important genes from the dataset. The proposed gene selection algorithms, together with a Naive Bayes (NB) classifier and leave-one-out cross-validation (LOOCV), are then applied to find a small set of informative genes that maximizes classification accuracy. For a comprehensive performance study, the proposed algorithms are applied to six benchmark gene expression datasets. The experimental comparison shows that the proposed framework (an ICA and CS-based hybrid algorithm with an NB classifier) performs a deeper search in the iterative process, which helps avoid premature convergence and produces better results than previously published feature selection algorithms for the NB classifier.
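
The evaluation stage described above (ICA feature extraction, a Naive Bayes classifier, and LOOCV) can be sketched as follows; the nature-inspired selection wrapper (CS/ABC/GA) is not reimplemented, and synthetic data stand in for the microarray sets:

    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(7)
    X = rng.normal(size=(60, 500))        # samples x genes (placeholder expression data)
    y = rng.integers(0, 2, size=60)       # binary class labels

    model = make_pipeline(FastICA(n_components=10, random_state=7), GaussianNB())
    accuracy = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
    print(f"LOOCV accuracy: {accuracy:.2f}")
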
Interseismic Coupling and Slow Slip Events on the Cascadia Megathrust
In this study, we model geodetic strain accumulation along the Cascadia subduction zone between 2007.0 and 2017.632 using position time series from 352 continuous GPS stations. First, we use the secular linear motion to determine interseismic locking along the megathrust. We derive two end-member models, assuming that the megathrust is a priori either locked or creeping; they differ essentially along the trench, where the inversion is poorly constrained by the data. In either case, significant locking of the megathrust updip of the coastline is needed. The downdip limit of the locked portion lies ∼20–80 km updip from the coast for the locked a priori, but very close to the coast for the creeping a priori. Second, we use a variational Bayesian Independent Component Analysis (vbICA) decomposition to model time variations of geodetic strain, an approach which effectively separates the geodetic strain signals due to tectonic and non-tectonic sources. The kinematics of Slow Slip Events (SSEs) is retrieved by linearly inverting, for slip on the megathrust, the Independent Components related to these transient phenomena. The procedure allows the detection and modelling of 64 SSEs which match the tremor activity in space and time. SSEs and tremors occur well inland from the coastline and closely follow the estimated location of the mantle wedge corner. The transition zone between the locked portion of the megathrust and the zone of tremors creeps rather steadily at the long-term slip rate and probably buffers the effect of SSEs on the seismogenic portion of the megathrust.
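
As an illustrative sketch of decomposing GPS position time series into temporally independent components, the snippet below uses ordinary FastICA and random data in place of the study's vbICA and real series, only to show the shape of the decomposition:

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(8)
    n_epochs, n_channels, n_components = 1500, 90, 8   # daily epochs, E/N/U channels per station

    positions = rng.normal(size=(n_epochs, n_channels))   # detrended position time series (placeholder)

    ica = FastICA(n_components=n_components, random_state=8)
    temporal_ics = ica.fit_transform(positions)   # time functions (epochs x components)
    spatial_resp = ica.mixing_                    # station responses (channels x components)

    # Components whose time functions show transient steps would be kept and their
    # spatial responses linearly inverted for slip on the megathrust (not shown).
    print(temporal_ics.shape, spatial_resp.shape)
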
Random Forest (RF) based identification of rice powder mixture using terahertz spectroscopy
Adulteration is a severe problem in the agricultural sector. Dishonest traders adulterate products to increase their profits, while on the consumer side adulterated agro products create health risks and can cause severe, even incurable, diseases. There is therefore a need to detect varying levels of adulteration in agro products. To detect rice adulteration, spectral data are collected for mixed rice powders (blends of low-quality and high-quality rice powder), and a Fast Independent Component Analysis (FICA) algorithm is used to extract the informative features. A Random Forest classifier is then applied to determine whether a mixture is of low or high quality. The approach achieves high accuracy with a single model and is easy to implement.
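
A rough sketch of the pipeline described above (FastICA feature extraction followed by Random Forest classification), with synthetic spectra standing in for the terahertz measurements:

    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(9)
    spectra = rng.normal(size=(200, 300))    # samples x spectral points (placeholder)
    labels = rng.integers(0, 2, size=200)    # 0 = low-quality, 1 = high-quality mixture

    X_train, X_test, y_train, y_test = train_test_split(
        spectra, labels, test_size=0.25, random_state=9)

    model = make_pipeline(
        FastICA(n_components=15, random_state=9),
        RandomForestClassifier(n_estimators=200, random_state=9),
    )
    model.fit(X_train, y_train)
    print("test accuracy:", model.score(X_test, y_test))
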