Catalogue Search | MBRL
11,044 result(s) for "Quantification"
Relaxometry network based on MRI R2* mapping revealing brain iron accumulation patterns in Parkinson's disease
by Zang, Zhenxiang; Song, Tianbin; Lu, Weizhao
in Iron accumulation; Parkinson's disease; R2 quantification
2024
• Iron accumulation in Parkinson's disease involves several regions.
• Aberrant iron accumulation starts from substantia nigra and cerebellum.
• Aberrant iron accumulation further extends to the pallidum and cortical regions.
• Sequential iron accumulation is associated with disease progression in Parkinson's disease.
Excessive iron accumulation in the brain has been implicated in Parkinson's disease (PD). However, the patterns and probable sequences of iron accumulation across the PD brain remain largely unknown. This study aimed to explore the sequence of iron accumulation across the PD brain using R2* mapping and a relaxometry covariance network (RCN) approach.
R2* quantification maps were obtained from PD patients (n = 34) and healthy controls (n = 25). RCN was configured on R2* maps to identify covariance differences in iron levels between the two groups. Regions with excessive iron accumulation and large covariance changes in PD patients compared to controls were defined as propagators of iron. In the PD group, causal RCN analysis was performed on the R2* maps sequenced according to disease duration to investigate the dynamics of iron accumulations from the propagators. The associations between individual connections of the RCN and clinical information were analyzed in PD patients.
The left substantia nigra pars reticulata (SNpr), left substantia nigra pars compacta (SNpc), and lobule VII of the vermis (VER7) were identified as primary regions for iron accumulation and propagation (propagator). As the disease duration increased, iron accumulation in these three propagators demonstrated positive causal effects on the bilateral pallidum, bilateral gyrus rectus, right middle frontal gyrus, and medial and anterior orbitofrontal cortex (OFC). Furthermore, individual connections of VER7 with the left gyrus rectus and anterior OFC were positively associated with disease duration.
Our results indicate that aberrant iron accumulation in PD involves several regions, starting mainly from the SN and cerebellum and extending to the pallidum and cortices. These findings provide preliminary information on the sequence of iron accumulation in PD, which may advance our understanding of the disease.
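The RCN construction described above lends itself to a compact illustration: correlate region-averaged R2* values across subjects within each group, then compare the resulting edge weights between groups. The sketch below is a minimal, hypothetical version assuming regional R2* values have already been extracted; the synthetic arrays, region count, and thresholding rule are illustrative stand-ins, not the authors' pipeline.

```python
# Sketch of a relaxometry covariance network (RCN): correlate regional
# R2* values across subjects and compare edge strength between groups.
# Regional values below are synthetic placeholders (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_regions = 10
r2star_pd = rng.normal(25.0, 3.0, size=(34, n_regions))   # 34 PD patients
r2star_hc = rng.normal(22.0, 3.0, size=(25, n_regions))   # 25 healthy controls

# Edge weights: inter-regional correlation of R2* across subjects.
rcn_pd = np.corrcoef(r2star_pd, rowvar=False)
rcn_hc = np.corrcoef(r2star_hc, rowvar=False)

# Edges whose covariance differs most between groups flag candidate
# "propagator" regions (here, a simple absolute-difference threshold).
edge_diff = np.abs(rcn_pd - rcn_hc)
np.fill_diagonal(edge_diff, 0.0)
candidates = np.argwhere(edge_diff > np.percentile(edge_diff, 95))
print("candidate propagator edges (region index pairs):")
print(candidates)
```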
Journal Article
Pulp & Paper Manufacturing: The best kept secret on the global sustainability stage
2025
When the public is asked to name the most globally sustainable industries, they typically respond with solar, wind, geothermal, or EVs. Yet few would imagine that the pulp & paper industry, often caricatured as a relic, is in fact one of the largest and most quietly sustainable manufacturing sectors. Its entire infrastructure and operations are built on replenishable forests and powered by renewable energy streams to produce recyclable, biodegradable products. The pulp & paper industry has a story that deserves retelling in the age of sustainability metrics and ESG frameworks. Our editorial takes a short, simple journey to reframe pulp & paper not as a legacy industry but as a model for sustainable manufacturing, using a clear, quantifiable system to demonstrate its global environmental impact.
Journal Article
Comparison between manta trawl and in situ pump filtration methods, and guidance for visual identification of microplastics in surface waters
by Karlsson, Therese M.; Hassellöv, Martin; Kärrman, Anna
in abundance; Aquatic Pollution; Atmospheric Protection/Air Quality Control/Air Pollution
2020
Owing to the development and adoption of a variety of methods for sampling and identifying microplastics, there is now data showing the presence of microplastics in surface waters from all over the world. The difference between the methods, however, hampers comparisons, and to date, most studies are qualitative rather than quantitative. In order to allow for a quantitative comparison of microplastics abundance, it is crucial to understand the differences between sampling methods. Therefore, a manta trawl and an in situ filtering pump were compared during realistic, but controlled, field tests. Identical microplastic analyses of all replicates allowed the differences between the methods with respect to (1) precision, (2) concentrations, and (3) composition to be assessed. The results show that the pump gave higher accuracy with respect to volume than the trawl. The trawl, however, sampled higher concentrations, which appeared to be due to a more efficient sampling of particles on the sea surface microlayer, such as expanded polystyrene and air-filled microspheres. The trawl also sampled a higher volume, which decreased statistical counting uncertainties. A key finding in this study was that, regardless of sampling method, it is critical that a sufficiently high volume is sampled to provide enough particles for statistical evaluation. Due to the patchiness of this type of contaminant, our data indicate that a minimum of 26 particles per sample should be recorded to allow for concentration comparisons and to avoid false null values. The necessary amount of replicates to detect temporal or spatial differences is also discussed. For compositional differences and size distributions, even higher particle counts would be necessary. Quantitative measurements and comparisons would also require an unbiased approach towards both visual and spectroscopic identification. To facilitate the development of such methods, a visual protocol that can be further developed to fit different needs is introduced and discussed. Some of the challenges encountered while using FTIR microspectroscopic particle identification are also critically discussed in relation to specific compositions found.
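The minimum-count recommendation follows from counting statistics: if particles are captured roughly as a Poisson process, the relative uncertainty of a count N scales as 1/√N. A small back-of-the-envelope sketch (an assumed model for intuition, not the authors' full statistical analysis):

```python
# Relative counting uncertainty for microplastic counts under Poisson
# statistics: sigma/N ~ 1/sqrt(N). Illustrates why low particle counts
# make concentration comparisons unreliable (assumed model, for intuition).
import math

for n in (5, 10, 26, 100):
    rel_err = 1.0 / math.sqrt(n)
    print(f"N = {n:>3}: relative uncertainty ≈ {rel_err:.0%}")

# At the ~26 particles recommended in the study, counting uncertainty
# alone is already about 20%.
```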
Journal Article
What's New with Numbers? Sociological Approaches to the Study of Quantification
2019
Calculation and quantification have been critical features of modern societies, closely linked to science, markets, and administration. In the past thirty years, the pace, purpose, and scope of quantification have greatly expanded, and there has been a corresponding increase in scholarship on quantification. We offer an assessment of the widely dispersed literature on quantification across four domains where quantification and quantification scholarship have particularly flourished: administration, democratic rule, economics, and personal life. In doing so, we seek to stimulate more cross-disciplinary debate and exchange. We caution against unifying accounts of quantification and highlight the importance of tracking quantification across different sites in order to appreciate its essential ambiguity and conduct more systematic investigations of interactions between different quantification regimes.
Journal Article
Deep learning for post-processing ensemble weather forecasts
2021
Quantifying uncertainty in weather forecasts is critical, especially for predicting extreme weather events. This is typically accomplished with ensemble prediction systems, which consist of many perturbed numerical weather simulations, or trajectories, run in parallel. These systems are associated with a high computational cost and often involve statistical post-processing steps to inexpensively improve their raw prediction qualities. We propose a mixed model that uses only a subset of the original weather trajectories combined with a post-processing step using deep neural networks. These enable the model to account for non-linear relationships that are not captured by current numerical models or post-processing methods. Applied to global data, our mixed models achieve a relative improvement in ensemble forecast skill (CRPS) of over 14%. Furthermore, in selected case studies we demonstrate that the improvement is larger for extreme weather events. We also show that our post-processing can use fewer trajectories to achieve results comparable to the full ensemble. By using fewer trajectories, the computational cost of an ensemble prediction system can be reduced, allowing it to run at higher resolution and produce more accurate forecasts.
This article is part of the theme issue ‘Machine learning for weather and climate modelling’.
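The skill metric quoted above, CRPS, has a standard closed form for a finite ensemble. A minimal sketch follows, with made-up ensemble values and observation; the paper's models and data are not reproduced here:

```python
# Continuous Ranked Probability Score (CRPS) for a finite ensemble,
# using the standard kernel form:
#   CRPS = mean|x_i - y| - 0.5 * mean|x_i - x_j|
# Lower is better. Ensemble members and observation are illustrative.
import numpy as np

def crps_ensemble(members: np.ndarray, obs: float) -> float:
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

ensemble = np.array([281.2, 282.0, 280.5, 283.1, 281.7])  # e.g. 2 m temperature (K)
print(f"CRPS = {crps_ensemble(ensemble, obs=281.9):.3f}")
```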
Journal Article
Quantifying Streambed Grain Size, Uncertainty, and Hydrobiogeochemical Parameters Using Machine Learning Model YOLO
by Chen, Rongyao; Renteria, Lupita; Goldman, Amy E.
in Accuracy; Artificial intelligence; Biogeochemistry
2024
Streambed grain sizes control river hydro‐biogeochemical (HBGC) processes and functions. However, measuring their quantities, distributions, and uncertainties is challenging due to the diversity and heterogeneity of natural streams. This work presents a photo‐driven, artificial intelligence (AI)‐enabled, and theory‐based workflow for extracting the quantities, distributions, and uncertainties of streambed grain sizes from photos. Specifically, we first trained You Only Look Once, an object detection AI, using 11,977 grain labels from 36 photos collected from nine different stream environments. We demonstrated its accuracy with a coefficient of determination of 0.98, a Nash–Sutcliffe efficiency of 0.98, and a mean absolute relative error of 6.65% in predicting the median grain size of 20 ground‐truth photos representing nine typical stream environments. The AI is then used to extract the grain size distributions and determine their characteristic grain sizes, including the 10th, 50th, 60th, and 84th percentiles, for 1,999 photos taken at 66 sites within a watershed in the Northwest US. The results indicate that the 10th, median, 60th, and 84th percentiles of the grain sizes follow log‐normal distributions, with most likely values of 2.49, 6.62, 7.68, and 10.78 cm, respectively. The average uncertainties associated with these values are 9.70%, 7.33%, 9.27%, and 11.11%, respectively. These data allow for the computation of the quantities, distributions, and uncertainties of streambed HBGC parameters, including Manning's coefficient, Darcy‐Weisbach friction factor, top layer interstitial velocity magnitude, and nitrate uptake velocity. Additionally, major sources of uncertainty in grain sizes and their impact on HBGC parameters are examined.
Plain Language Summary
Streambed grain sizes control river hydro-biogeochemical function by modulating flow resistance, the speed of water exchange, and nutrient transport at the water-sediment interface. Consequently, quantifying grain sizes and size-dependent hydro-biogeochemical parameters is critical for predicting a river's function. In natural streams, however, measuring these sizes and parameters is challenging because grain sizes vary from millimeters to a few meters, change from small creeks to big streams, and can be concealed by complex non-grain materials such as water, ice, mud, and grasses. All these factors make size measurement a time-consuming and highly uncertain task. We address these challenges by demonstrating a workflow that combines computer-vision artificial intelligence (AI), smartphone photos, and new uncertainty quantification theories. The AI performs well across various sizes, locations, and stream environments, as indicated by an accuracy metric of 0.98. We apply the AI to extract the grain sizes and their characteristic percentiles for 1,999 photos. These characteristic grain sizes are then input into existing and our new theories to derive the quantities, distributions, and uncertainties of hydro-biogeochemical parameters. The high accuracy of the AI and the success of extracting grain sizes and hydro-biogeochemical parameters demonstrate the potential to advance river science with computer-vision AI and mobile devices.
Key Points
Stream sediments bigger than 44 microns can be detected from smartphone photos by You Only Look Once with a Nash–Sutcliffe efficiency of 0.98
Quantities, distributions, and uncertainties of streambed grain sizes can be determined from photos
Impact of grain size uncertainty on hydrobiogeochemical parameters is examined
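As context for how a characteristic grain size feeds an HBGC parameter, the sketch below applies a widely used Strickler-type relation, n ≈ d50^(1/6)/21.1 with d50 in metres, to the study's most likely median grain size. The relation is a textbook empirical link assumed here for illustration; it is not necessarily the formulation the authors use.

```python
# Estimate Manning's roughness coefficient from the median grain size
# using a Strickler-type relation: n ≈ d50**(1/6) / 21.1 (d50 in metres).
# This is a textbook empirical link, not necessarily the paper's own
# formulation; d50 is the study's most likely median grain size.
d50_m = 0.0662                      # median grain size: 6.62 cm
manning_n = d50_m ** (1.0 / 6.0) / 21.1
print(f"Manning's n for d50 = {d50_m * 100:.2f} cm: {manning_n:.3f}")
```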
Journal Article
Solving parametric PDE problems with artificial neural networks
by KHOO, YUEHAW; YING, LEXING; LU, JIANFENG
in Applied mathematics; Artificial neural networks; Coefficients
2021
The curse of dimensionality is commonly encountered in numerical partial differential equations (PDEs), especially when uncertainties have to be modelled into the equations as random coefficients. However, very often the variability of physical quantities derived from a PDE can be captured by a few features of the space of coefficient fields. Based on this observation, we propose using a neural network to parameterise the physical quantity of interest as a function of the input coefficients. The representability of such a quantity using a neural network can be justified by viewing the network as performing time evolution to find the solutions to the PDE. We further demonstrate the simplicity and accuracy of the approach through notable examples of PDEs in engineering and physics.
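A minimal sketch of the idea under an assumed toy setup: a cheap synthetic function stands in for the PDE solve, and a small network regresses the quantity of interest against sampled coefficients. Nothing here reproduces the authors' architecture or examples.

```python
# Sketch: learn a map from PDE coefficient samples to a scalar quantity
# of interest (QoI) with a small neural network. The "PDE" is replaced
# by a cheap synthetic QoI so the example is self-contained.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_samples, n_coeff = 2000, 16

# Random coefficient vectors (e.g. expansion coefficients of a random
# field) and a synthetic smooth QoI standing in for a PDE solve.
a = torch.randn(n_samples, n_coeff)
qoi = torch.sin(a[:, :4].sum(dim=1, keepdim=True)) \
      + 0.1 * a[:, 4:8].sum(dim=1, keepdim=True)

model = nn.Sequential(nn.Linear(n_coeff, 64), nn.Tanh(),
                      nn.Linear(64, 64), nn.Tanh(),
                      nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(a), qoi)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.4f}")
```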
Journal Article
Use and Misuse of Cq in qPCR Data Analysis and Reporting
by van den Hoff, Maurice J. B.; Ruiz-Villalba, Adrián; Ruijter, Jan M.
in Data analysis; Efficiency; Gene expression
2021
In the analysis of quantitative PCR (qPCR) data, the quantification cycle (Cq) indicates the position of the amplification curve with respect to the cycle axis. Because Cq is directly related to the starting concentration of the target, and the difference in Cq values is related to the starting concentration ratio, the only results of qPCR analysis reported are often Cq, ΔCq or ΔΔCq values. However, reporting of Cq values ignores the fact that Cq values may differ between runs and machines, and, therefore, cannot be compared between laboratories. Moreover, Cq values are highly dependent on the PCR efficiency, which differs between assays and may differ between samples. Interpreting reported Cq values, assuming a 100% efficient PCR, may lead to assumed gene expression ratios that are 100-fold off. This review describes how differences in quantification threshold setting, PCR efficiency, starting material, PCR artefacts, pipetting errors and sampling variation are at the origin of differences and variability in Cq values and discusses the limits to the interpretation of observed Cq values. These issues can be avoided by calculating efficiency-corrected starting concentrations per reaction. The reporting of gene expression ratios and fold difference between treatments can then easily be based on these starting concentrations.
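The recommended efficiency-corrected analysis can be written as N0 = Nq / E^Cq, where Nq is the fluorescence at the quantification threshold and E is the amplification efficiency per cycle (E = 2 for a 100% efficient PCR). A small sketch with illustrative Cq and efficiency values shows how the implied fold change shifts with E:

```python
# Efficiency-corrected starting concentration per reaction:
#   N0 = Nq / E**Cq
# where Nq is the fluorescence at the quantification threshold and E the
# amplification efficiency (E = 2 means 100% efficient). Values below
# are illustrative, not from the review.
def starting_concentration(cq: float, efficiency: float, nq: float = 1.0) -> float:
    return nq / efficiency ** cq

# Same Cq difference, different efficiencies: the implied fold change diverges.
cq_control, cq_treated = 24.0, 21.0
for e in (2.0, 1.9, 1.8):
    fold = starting_concentration(cq_treated, e) / starting_concentration(cq_control, e)
    print(f"E = {e:.1f}: fold change = {fold:.2f}")
```

For a 3-cycle Cq difference, the assumed-100%-efficiency answer (8-fold) already drifts noticeably at E = 1.9 or 1.8; over larger Cq differences this is how the order-of-magnitude errors mentioned above arise.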
Journal Article
Survey of Multifidelity Methods in Uncertainty Propagation, Inference, and Optimization
by Willcox, Karen; Peherstorfer, Benjamin; Gunzburger, Max
in MATHEMATICS AND COMPUTING; SURVEY and REVIEW
2018
In many situations across computational science and engineering, multiple computational models are available that describe a system of interest. These different models have varying evaluation costs and varying fidelities. Typically, a computationally expensive high-fidelity model describes the system with the accuracy required by the current application at hand, while lower-fidelity models are less accurate but computationally cheaper than the high-fidelity model. Outer-loop applications, such as optimization, inference, and uncertainty quantification, require multiple model evaluations at many different inputs, which often leads to computational demands that exceed available resources if only the high-fidelity model is used. This work surveys multifidelity methods that accelerate the solution of outer-loop applications by combining high-fidelity and low-fidelity model evaluations, where the low-fidelity evaluations arise from an explicit low-fidelity model (e.g., a simplified physics approximation, a reduced model, a data-fit surrogate) that approximates the same output quantity as the high-fidelity model. The overall premise of these multifidelity methods is that low-fidelity models are leveraged for speedup while the high-fidelity model is kept in the loop to establish accuracy and/or convergence guarantees. We categorize multifidelity methods according to three classes of strategies: adaptation, fusion, and filtering. The paper reviews multifidelity methods in the outer-loop contexts of uncertainty propagation, inference, and optimization.
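One fusion strategy of the kind the survey covers can be illustrated with a two-model control-variate estimator: a few high-fidelity evaluations correct a cheap low-fidelity mean. The models, sample counts, and coefficient below are synthetic stand-ins, not an implementation from the survey:

```python
# Sketch of a two-fidelity control-variate estimator: a few expensive
# high-fidelity (HF) evaluations correct a cheap low-fidelity (LF) mean.
# Both "models" below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
hf = lambda x: np.exp(-x**2) * np.sin(3 * x)   # "expensive" model
lf = lambda x: (1 - x**2) * np.sin(3 * x)      # cheap approximation

x_hf = rng.uniform(-1, 1, 50)                  # few HF samples
x_lf = rng.uniform(-1, 1, 5000)                # many LF samples

y_hf, y_lf_paired = hf(x_hf), lf(x_hf)
alpha = np.cov(y_hf, y_lf_paired)[0, 1] / np.var(y_lf_paired, ddof=1)

# Control-variate estimate of E[hf(X)]: LF handles the bulk sampling,
# the paired HF/LF evaluations anchor the correction.
est = y_hf.mean() + alpha * (lf(x_lf).mean() - y_lf_paired.mean())
print(f"multifidelity estimate: {est:.4f}  (HF-only: {y_hf.mean():.4f})")
```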
Journal Article
Uncertainty quantification in classical molecular dynamics
2021
Molecular dynamics simulation is now a widespread approach for understanding complex systems on the atomistic scale. It finds applications from physics and chemistry to engineering, life and medical science. In the last decade, the approach has begun to advance from being a computer-based means of rationalizing experimental observations to producing apparently credible predictions for a number of real-world applications within industrial sectors such as advanced materials and drug discovery. However, key aspects concerning the reproducibility of the method have not kept pace with the speed of its uptake in the scientific community. Here, we present a discussion of uncertainty quantification for molecular dynamics simulation designed to endow the method with better error estimates that will enable it to be used to report actionable results. The approach adopted is a standard one in the field of uncertainty quantification, namely using ensemble methods, in which a sufficiently large number of replicas are run concurrently, from which reliable statistics can be extracted. Indeed, because molecular dynamics is intrinsically chaotic, the need to use ensemble methods is fundamental and holds regardless of the duration of the simulations performed. We discuss the approach and illustrate it in a range of applications from materials science to ligand–protein binding free energy estimation.
This article is part of the theme issue ‘Reliability and reproducibility in computational science: implementing verification, validation and uncertainty quantification in silico’.
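A minimal sketch of the ensemble approach advocated here, with a stand-in "replica" (a noisy observable) in place of a real MD run: execute many replicas, then report the mean of the observable with a confidence interval.

```python
# Ensemble-based uncertainty quantification: run many replicas of a
# chaotic simulation, then report mean ± confidence interval. The
# "replica" below is a synthetic stand-in for an MD run; real replicas
# would differ only in their initial conditions (e.g. velocities).
import numpy as np

def run_replica(seed: int) -> float:
    # Stand-in for one MD replica returning a scalar observable,
    # e.g. a binding free energy estimate (illustrative values).
    r = np.random.default_rng(seed)
    return -10.0 + r.normal(0.0, 1.5)

n_replicas = 25
values = np.array([run_replica(s) for s in range(n_replicas)])
mean = values.mean()
sem = values.std(ddof=1) / np.sqrt(n_replicas)
print(f"observable = {mean:.2f} ± {1.96 * sem:.2f} (95% CI, {n_replicas} replicas)")
```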
Journal Article