Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
1,574 result(s) for "Automatic Data Processing - methods"
A Review of Data Fusion Techniques
The integration of data and knowledge from several sources is known as data fusion. This paper summarizes the state of the data fusion field and describes the most relevant studies. We first enumerate and explain different classification schemes for data fusion. Then, the most common algorithms are reviewed. These methods and algorithms are presented using three different categories: (i) data association, (ii) state estimation, and (iii) decision fusion.
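As a minimal illustration of category (iii), decision fusion can be as simple as majority voting over independent per-sensor classifications. This sketch is illustrative only and is not drawn from the paper:

```python
from collections import Counter

def majority_vote(decisions):
    """Fuse per-sensor class decisions by majority vote.

    decisions: list of labels, one per sensor/classifier.
    Returns the most common label (ties broken by first appearance).
    """
    return Counter(decisions).most_common(1)[0][0]

# Three sensors classify the same observed object; the fused decision
# is the label the majority agrees on.
fused = majority_vote(["car", "truck", "car"])
```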
Journal Article
mProphet: automated data processing and statistical validation for large-scale SRM experiments
by
Hengartner, Michael O
,
Aebersold, Ruedi
,
Picotti, Paola
in
Algorithms
2011
mProphet, a computational tool for statistically validating selected reaction monitoring (SRM) mass spectrometry data, is described.
Selected reaction monitoring (SRM) is a targeted mass spectrometric method that is increasingly used in proteomics for the detection and quantification of sets of preselected proteins at high sensitivity, reproducibility and accuracy. Currently, data from SRM measurements are mostly evaluated subjectively by manual inspection on the basis of ad hoc criteria, precluding the consistent analysis of different data sets and an objective assessment of their error rates. Here we present mProphet, a fully automated system that computes accurate error rates for the identification of targeted peptides in SRM data sets and maximizes specificity and sensitivity by combining relevant features in the data into a statistical model.
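The core idea — combining several peak-group features into one discriminant score, then estimating error rates against decoy transitions — can be sketched as below. The feature names and weights are invented for illustration; mProphet itself learns the feature combination semi-supervised:

```python
def discriminant_score(features, weights):
    """Combine peak-group features into a single score (linear model).

    features/weights: dicts keyed by feature name; both are
    illustrative placeholders, not mProphet's actual features.
    """
    return sum(weights[k] * features[k] for k in weights)

def fdr_at_threshold(target_scores, decoy_scores, threshold):
    """Estimate a false discovery rate at a score threshold as
    (# decoys passing) / (# targets passing)."""
    n_target = sum(s >= threshold for s in target_scores)
    n_decoy = sum(s >= threshold for s in decoy_scores)
    return n_decoy / n_target if n_target else 0.0
```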
Journal Article
Computing with networks of nonlinear mechanical oscillators
by
Coulombe, Jean C.
,
York, Mark C. A.
,
Sylvestre, Julien
in
Algorithms
,
Analysis
,
Anharmonicity
2017
As it is getting increasingly difficult to achieve gains in the density and power efficiency of microelectronic computing devices because of lithographic techniques reaching fundamental physical limits, new approaches are required to maximize the benefits of distributed sensors, micro-robots or smart materials. Biologically inspired devices, such as artificial neural networks, can process information with a high level of parallelism to efficiently solve difficult problems, even when implemented using conventional microelectronic technologies. We describe a mechanical device, which operates in a manner similar to artificial neural networks, to efficiently solve two difficult benchmark problems (computing the parity of a bit stream, and classifying spoken words). The device consists of a network of masses coupled by linear springs and attached to a substrate by non-linear springs, thus forming a network of anharmonic oscillators. As the masses can directly couple to forces applied on the device, this approach combines sensing and computing functions in a single power-efficient device with compact dimensions.
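The equations of motion for such a network can be sketched as follows. The cubic substrate nonlinearity, the damping term and all parameter values here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def network_accelerations(x, v, coupling, k_nl, damping=0.1, mass=1.0):
    """Accelerations for a network of anharmonic oscillators: masses
    coupled by linear springs (via the stiffness matrix `coupling`,
    assumed negative-definite) and tied to the substrate by a
    non-linear spring, modeled here as a cubic restoring force.
    """
    linear = coupling @ x            # inter-mass linear spring forces
    substrate = -k_nl * x ** 3       # non-linear substrate restoring force
    return (linear + substrate - damping * v) / mass
```

A time-stepping integrator (e.g. velocity Verlet) driven by input forces on selected masses would complete the simulation; the paper's device realizes this dynamics physically rather than numerically.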
Journal Article
Work and information processing in a solvable model of Maxwell’s demon
by
Jarzynski, Christopher
,
Mandal, Dibyendu
in
Automatic Data Processing - methods
,
Engines
,
Entropy
2012
We describe a minimal model of an autonomous Maxwell demon, a device that delivers work by rectifying thermal fluctuations while simultaneously writing information to a memory register. We solve exactly for the steady-state behavior of our model, and we construct its phase diagram. We find that our device can also act as a “Landauer eraser”, using externally supplied work to remove information from the memory register. By exposing an explicit, transparent mechanism of operation, our model offers a simple paradigm for investigating the thermodynamics of information processing by small systems.
Journal Article
A Comparison of Gene Set Analysis Methods in Terms of Sensitivity, Prioritization and Specificity
by
Bhatti, Gaurav
,
Tarca, Adi L.
,
Romero, Roberto
in
Automatic Data Processing - methods
,
Binding sites
,
Bioinformatics
2013
Identification of functional sets of genes associated with conditions of interest from omics data was first reported in 1999, and since then a plethora of enrichment methods has been published for the systematic analysis of gene set collections, including Gene Ontology and biological pathways. Despite their widespread usage in reducing the complexity of omics experiment results, their performance is poorly understood. Leveraging the existence of disease-specific gene sets in the KEGG and Metacore® databases, we compared the performance of sixteen methods under relaxed assumptions while using 42 real datasets (over 1,400 samples). Most of the methods ranked the gene sets designed for specific diseases highly whenever samples from affected individuals were compared against controls via microarrays. The top methods for gene set prioritization were different from the top ones in terms of sensitivity, and four of the sixteen methods had large false positive rates as assessed by permuting the phenotype of the samples. The best overall methods among those that generated reasonably low false positive rates when permuting phenotypes were PLAGE, GLOBALTEST, and PADOG. The best method in the category that generated higher-than-expected false positives was MRGSE.
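For context, the classical over-representation test underlying many early enrichment methods (not one of the sixteen compared in the paper) is a hypergeometric tail probability on the overlap between a gene set and a list of differentially expressed genes:

```python
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """P(X >= k): probability of seeing at least k gene-set members
    among n differentially expressed genes drawn from a universe of
    N genes, K of which belong to the gene set.
    """
    total = comb(N, n)
    return sum(
        comb(K, i) * comb(N - K, n - i)
        for i in range(k, min(K, n) + 1)
    ) / total
```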
Journal Article
A Hybrid CPU-GPU Accelerated Framework for Fast Mapping of High-Resolution Human Brain Connectome
by
Xu, Ningyi
,
Wang, Yu
,
Xie, Teng
in
Algorithms
,
Architectural engineering
,
Automatic Data Processing - methods
2013
Recently, a combination of non-invasive neuroimaging techniques and graph theoretical approaches has provided a unique opportunity for understanding the patterns of the structural and functional connectivity of the human brain (referred to as the human brain connectome). Currently, a very large amount of brain imaging data has been collected, and high-resolution connectome research places very high demands on computational capabilities. In this paper, we propose a hybrid CPU-GPU framework to accelerate the computation of the human brain connectome. We applied this framework to a publicly available resting-state functional MRI dataset from 197 participants. For each subject, we first computed Pearson's correlation coefficient between each pair of gray-matter voxel time series, and then we constructed unweighted undirected brain networks with 58 k nodes and a sparsity range from 0.02% to 0.17%. Next, graph properties of the functional brain networks were quantified, analyzed and compared with those of 15 corresponding random networks. With our proposed accelerating framework, the above process took 80–150 minutes per network, depending on the network sparsity. Further analyses revealed that high-resolution functional brain networks have efficient small-world properties, significant modular structure, a power-law degree distribution and highly connected nodes in the medial frontal and parietal cortical regions. These results are largely compatible with previous human brain network studies. Taken together, our proposed framework can substantially enhance the applicability and efficacy of high-resolution (voxel-based) brain network analysis, and has the potential to accelerate the mapping of the human brain connectome in normal and disease states.
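The network-construction step described above (pairwise Pearson correlations thresholded to a target sparsity) can be sketched with NumPy. This is an illustrative reimplementation, not the authors' CPU-GPU accelerated framework:

```python
import numpy as np

def binarize_to_sparsity(timeseries, sparsity):
    """Build an unweighted, undirected network at a target sparsity.

    timeseries: (n_nodes, n_timepoints) array, one row per node.
    sparsity: fraction of the strongest off-diagonal correlations
    to keep as edges (ties at the cutoff may add a few extra edges).
    """
    r = np.corrcoef(timeseries)            # Pearson correlation per node pair
    np.fill_diagonal(r, -np.inf)           # exclude self-correlations
    iu = np.triu_indices_from(r, k=1)      # unique node pairs
    n_edges = int(round(sparsity * len(iu[0])))
    cutoff = np.sort(r[iu])[::-1][n_edges - 1]  # keep the n_edges strongest
    upper = np.triu((r >= cutoff).astype(int), 1)
    return upper + upper.T                 # symmetric 0/1 adjacency matrix
```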
Journal Article
Multiple Imputation for General Missing Data Patterns in the Presence of High-dimensional Data
2016
Multiple imputation (MI) has been widely used for handling missing data in biomedical research. In the presence of high-dimensional data, regularized regression has been used as a natural strategy for building imputation models, but limited research has been conducted for handling general missing data patterns where multiple variables have missing values. Using the idea of multiple imputation by chained equations (MICE), we investigate two approaches of using regularized regression to impute missing values of high-dimensional data that can handle general missing data patterns. We compare our MICE methods with several existing imputation methods in simulation studies. Our simulation results demonstrate the superiority of the proposed MICE approach based on an indirect use of regularized regression in terms of bias. We further illustrate the proposed methods using two data examples.
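The chained-equations idea can be sketched with ridge regression standing in for the regularized imputation models. `mice_ridge` and its parameters are invented for illustration (a single-imputation sketch without the multiple-draw step), not the paper's implementation:

```python
import numpy as np

def mice_ridge(X, n_iter=10, alpha=1.0):
    """Chained-equations imputation: cycle through incomplete columns,
    each time regressing the column on all others (ridge regression,
    closed form) over the originally observed rows, then predicting
    the missing entries.

    X: 2-D float array with np.nan marking missing entries.
    """
    X = X.astype(float).copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])   # mean-initialize
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            rows = miss[:, j]
            if not rows.any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.hstack([others[~rows], np.ones((int((~rows).sum()), 1))])
            b = X[~rows, j]
            # ridge solution: w = (A^T A + alpha I)^(-1) A^T b
            w = np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ b)
            X[rows, j] = np.hstack(
                [others[rows], np.ones((int(rows.sum()), 1))]
            ) @ w
    return X
```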
Journal Article
Impact of accelerometer data processing decisions on the sample size, wear time and physical activity level of a large cohort study
by
Keadle, Sarah Kozey
,
Freedson, Patty S
,
Lee, I-Min
in
Accelerometers
,
Accelerometry - instrumentation
,
Accelerometry - methods
2014
Background
Accelerometers objectively assess physical activity (PA) and are currently used in several large-scale epidemiological studies, but there is no consensus for processing the data. This study compared the impact of wear-time assessment methods and using either vertical (V)-axis or vector magnitude (VM) cut-points on accelerometer output.
Methods
Participants (7,650 women, mean age 71.4 y) were mailed an accelerometer (ActiGraph GT3X+), instructed to wear it for 7 days, record dates and times the monitor was worn on a log, and return the monitor and log via mail. Data were processed using three wear-time methods (logs, Troiano or Choi algorithms) and V-axis or VM cut-points.
Results
Using algorithms alone resulted in "mail-days" incorrectly identified as "wear-days" (27-79% of subjects had >7 days of valid data). Using only dates from the log and the Choi algorithm yielded: 1) larger samples with valid data than using log dates and times, 2) similar wear-times as using log dates and times, 3) more wear-time (V, 48.1 min more; VM, 29.5 min more) than using only log dates and the Troiano algorithm. Wear-time algorithm impacted sedentary time (~30-60 min lower for Troiano vs. Choi) but not moderate-to-vigorous (MV) PA time. Using V-axis cut-points yielded ~60 min more sedentary time and ~10 min less MVPA time than using VM cut-points.
Conclusions
Combining log-dates and the Choi algorithm was optimal, minimizing missing data and researcher burden. Estimates of time in physical activity and sedentary behavior are not directly comparable between V-axis and VM cut-points. These findings will inform consensus development for accelerometer data processing in ongoing epidemiologic studies.
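The cut-point classification the study compares can be sketched as below; the threshold values (counts per minute) are illustrative placeholders, not the validated cut-points used in the study:

```python
def classify_epochs(counts_per_min, sedentary_cut=100, mvpa_cut=1952):
    """Label each 1-minute epoch from accelerometer counts.

    Epochs below sedentary_cut are sedentary, at or above mvpa_cut are
    moderate-to-vigorous (MVPA), and everything between is light.
    The default thresholds are placeholders for illustration.
    """
    labels = []
    for c in counts_per_min:
        if c < sedentary_cut:
            labels.append("sedentary")
        elif c >= mvpa_cut:
            labels.append("MVPA")
        else:
            labels.append("light")
    return labels
```

Because V-axis and VM counts sit on different scales, the same minute of data can cross a cut-point under one axis choice but not the other, which is why the study finds the two estimates are not directly comparable.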
Journal Article
Using a High-Speed Movie Camera to Evaluate Slice Dropping in Clinical Image Interpretation with Stack Mode Viewers
by
Yakami, Masahiro
,
Yamamoto, Akira
,
Yanagisawa, Morio
in
Automatic Data Processing - methods
,
Cameras
,
Computer Communication Networks
2013
The purpose of this study is to objectively verify the rate of slice omission during paging on picture archiving and communication system (PACS) viewers by recording the images shown on the computer displays of these viewers with a high-speed movie camera. This study was approved by the institutional review board. A sequential number from 1 to 250 was superimposed on each slice of a series of clinical Digital Imaging and Communication in Medicine (DICOM) data. The slices were displayed using several DICOM viewers, including in-house developed freeware and clinical PACS viewers. The freeware viewer and one of the clinical PACS viewers included functions to prevent slice dropping. The series was displayed in stack mode and paged in both automatic and manual paging modes. The display was recorded with a high-speed movie camera and played back at a slow speed to check whether slices were dropped. The paging speeds were also measured. With a paging speed faster than half the refresh rate of the display, some viewers dropped up to 52.4% of the slices, while other well-designed viewers did not, if used with the correct settings. Slice dropping during paging was objectively confirmed using a high-speed movie camera. To prevent slice dropping, the viewer must be specially designed for the purpose and must be used with the correct settings, or the paging speed must be slower than half of the display refresh rate.
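The study's half-refresh-rate threshold translates into simple arithmetic; the dropped-fraction formula below is a back-of-the-envelope model of that finding, not a result from the paper:

```python
def max_safe_paging_speed(refresh_rate_hz):
    """Fastest paging rate (slices/second) at which no slice is
    skipped, per the study's finding: half the display refresh rate."""
    return refresh_rate_hz / 2.0

def min_dropped_fraction(paging_speed, refresh_rate_hz):
    """Rough lower bound on the fraction of slices that cannot be
    displayed when paging faster than the safe rate above."""
    shown = min(paging_speed, refresh_rate_hz / 2.0)
    return 1.0 - shown / paging_speed
```

For example, on a 60 Hz display, paging faster than 30 slices per second risks dropping slices in viewers without explicit drop-prevention logic.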
Journal Article