Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
44,516 result(s) for "Statistical modeling"
Climate Change May Alter Rainfall‐Partitioning in Ways Unlikely To Be Detected in the Coming Decades
2025
Droughts around the world have resulted in less annual streamflow relative to annual rainfall. The decrease in streamflow cannot be explained by changes in land‐use, precipitation, or temperature alone. Rather, the decrease in annual streamflow has been linked to changes in annual rainfall‐partitioning. Climate change is predicted to induce similar, if not worse, conditions than previous droughts. This could make the disproportionate shifts in annual streamflow more frequent or intense. Currently, most methods assume annual rainfall‐partitioning remains constant and thus are unlikely to accurately predict how climate change could impact streamflow. Our aim is to conduct a thought‐experiment to examine the various ways climate change could alter annual rainfall‐partitioning and to determine what types of changes in rainfall‐partitioning are likely to be statistically detectable in the coming decades. Using a synthetic streamflow model, we show how changing the rainfall‐runoff intercept, rainfall‐runoff slope, lag‐1 autocorrelation, standard deviation per rainfall depth, and skewness per rainfall depth alters rainfall‐partitioning. Of all the rainfall‐partitioning changes examined, only the rainfall‐runoff intercept and slope are likely to be statistically detectable in the coming decades. Changes impacting the lag‐1 autocorrelation of annual streamflow, standard deviation per rainfall depth, and skewness per rainfall depth are unlikely to be statistically detectable in the coming decades. Although unlikely to be statistically identified, these changes still alter both the rainfall‐runoff and streamflow‐time relationships. Rainfall‐partitioning changes that alter streamflow yet are unlikely to be statistically detected in the coming decades will make the prediction and allocation of water resources more challenging.
Journal Article
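The abstract above does not give the synthetic streamflow model itself; the sketch below shows one minimal way such a model might look — a linear rainfall‐runoff relation plus lag‐1 autocorrelated (AR(1)) noise. All parameter values are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def synthetic_streamflow(rain, intercept, slope, rho, sigma):
    """Annual streamflow from a linear rainfall-runoff relation plus
    lag-1 autocorrelated (AR(1)) noise; negative flows are clipped."""
    n = len(rain)
    eps = np.empty(n)
    eps[0] = rng.normal(0.0, sigma)
    for t in range(1, n):
        # this year's anomaly keeps a fraction rho of last year's
        eps[t] = rho * eps[t - 1] + rng.normal(0.0, sigma * np.sqrt(1 - rho**2))
    return np.clip(intercept + slope * rain + eps, 0.0, None)

rain = rng.gamma(shape=8.0, scale=100.0, size=50)  # 50 yr of annual rainfall (mm)
flow = synthetic_streamflow(rain, intercept=-150.0, slope=0.4, rho=0.3, sigma=40.0)
runoff_ratio = flow.sum() / rain.sum()  # share of rainfall leaving as streamflow
```

Changing the intercept or slope shifts the rainfall‐runoff relationship directly, while changing rho or sigma alters the noise structure — the distinction the abstract's detectability argument rests on.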
Integrated statistical modeling method: part I—statistical simulations for symmetric distributions
by
Kang, Young-Jin
,
Noh, Yoojeong
,
Lim, O-Kaung
in
Adequacy
,
Computational Mathematics and Numerical Analysis
,
Computer simulation
2019
The use of parametric and nonparametric statistical modeling methods differs depending on data sufficiency. For sufficient data, the parametric statistical modeling method is preferred owing to its high convergence to the population distribution. Conversely, for insufficient data, the nonparametric method is preferred owing to its high flexibility and conservative modeling of the given data. However, it is difficult for users to select either a parametric or nonparametric modeling method because the adequacy of using one of these methods depends on how well the given data represent the population model, which is unknown to users. For insufficient data or limited prior information on random variables, the interval approach, which uses interval information of data or random variables, can be used. However, it is still difficult to use in uncertainty analysis and design owing to imprecise probabilities. In this study, to overcome this problem, an integrated statistical modeling (ISM) method, which combines the parametric, nonparametric, and interval approaches, is proposed. The ISM method uses the two-sample Kolmogorov–Smirnov (K–S) test to determine whether to use the parametric or nonparametric method according to data sufficiency. Sequential statistical modeling (SSM) and kernel density estimation with estimated bounded data (KDE-ebd) are used as the parametric and nonparametric methods combined with the interval approach, respectively. To verify the modeling accuracy, conservativeness, and convergence of the proposed method, it is compared with the original SSM and KDE-ebd across various sample sizes and distribution types in simulation tests. Through an engineering reliability analysis example, it is shown that the proposed ISM method has the highest accuracy and reliability in statistical modeling, regardless of data sufficiency. The ISM method is applicable to real engineering data; it is conservative in the reliability analysis for insufficient data, unlike the SSM, and converges to the exact probability of failure more rapidly than KDE-ebd as data increase.
Journal Article
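The decision rule in this abstract — a two-sample K–S test steering the choice between a parametric and a nonparametric route — can be sketched as a toy. Comparing the data against a sample drawn from a fitted normal candidate is an assumption for illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def choose_modeling_route(data, alpha=0.05):
    """Pick a modeling route with a two-sample K-S test: compare the data
    against a sample drawn from a fitted normal candidate. If the test
    rejects, fall back to a nonparametric (e.g. KDE-based) model."""
    mu, sigma = np.mean(data), np.std(data, ddof=1)
    fitted_sample = rng.normal(mu, sigma, size=len(data))
    _, p = stats.ks_2samp(data, fitted_sample)
    return "parametric" if p >= alpha else "nonparametric"

normal_data = rng.normal(10.0, 2.0, size=500)
skewed_data = rng.exponential(2.0, size=500)
route_a = choose_modeling_route(normal_data)  # usually "parametric"
route_b = choose_modeling_route(skewed_data)  # exponential data reject the normal fit
```

The skewed sample fails the normal candidate, so the rule routes it to the nonparametric branch — the same mechanism the ISM method uses to adapt to data sufficiency.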
3D Analysis of the Proximal Femur Compared to 2D Analysis for Hip Fracture Risk Prediction in a Clinical Population
by
Quenneville, Cheryl E
,
Jazinizadeh, Fatemeh
in
Biomedical materials
,
Cadavers
,
Clinical medicine
2021
Due to the adverse impacts of hip fractures on patients’ lives, it is crucial to enhance the identification of people at high risk through accessible clinical techniques. Reconstructing the 3D geometry and BMD distribution of the proximal femur could improve hip fracture risk predictions; however, it carries a high computational burden, and it is not clear whether it outperforms 2D model analysis. Therefore, the purpose of this study was to compare the ability of 2D and 3D model reconstruction to predict hip fracture risk in a clinical population. The DXA scans and CT scans of 16 cadaveric femurs were used to create training sets for the 2D and 3D model reconstruction based on statistical shape and appearance modeling. Subsequently, these methods were used to predict the risk of sustaining a hip fracture in a clinical population of 150 subjects (50 fractured and 100 non-fractured) monitored for five years in the Canadian Multicentre Osteoporosis Study. 3D model reconstruction improved the identification of patients who sustained a hip fracture relative to standard clinical practice (by 40%). Also, predictions from the 2D statistical model did not differ significantly from the 3D ones (p > 0.76). These results indicate that, for enhancing hip fracture risk prediction in clinical practice, 2D statistical modeling offers comparable performance with a lower computational load.
Journal Article
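Statistical shape and appearance modeling, as used above, is typically built on principal component analysis of landmark coordinates: a mean shape plus a few orthogonal modes of variation. The sketch below uses random stand-in data (a real training set would be 16 registered femur outlines) and is a generic PCA shape model, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in training set: 16 shapes, each a flattened vector of 20
# 2D landmark coordinates (real data would be registered femur outlines)
n_shapes, n_coords = 16, 40
shapes = rng.normal(size=(n_shapes, n_coords))

# Statistical shape model: mean shape plus principal modes of variation
mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
_, s, modes = np.linalg.svd(centered, full_matrices=False)  # rows of `modes` are orthonormal
variances = s**2 / (n_shapes - 1)  # variance captured by each mode, descending

# Any training shape is the mean plus a weighted sum of modes;
# truncating to the first k modes gives a compact approximation
k = 3
weights = centered[0] @ modes[:k].T
approx = mean_shape + weights @ modes[:k]
```

The 2D vs. 3D comparison in the study amounts to building this kind of model on 2D landmark vectors versus much larger 3D shape-and-density vectors, which is where the computational burden difference comes from.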
Compound Drought and Temperature Events Intensify Wheat Yield Loss in Australia
by
Li, Siyi
,
Liu, De Li
,
Huete, Alfredo
in
Agricultural production
,
Australia's crop belt
,
biophysical‐statistical modeling approach
2026
The escalation in extreme weather events has raised concerns for agriculture. The quantification of the impacts of extreme events on crop yield has predominantly concentrated on individual events like drought or heat. Numerous instances have showcased the destructive effects of compound extreme events on crop yields, surpassing those of individual events. However, the extent of their influence is region‐specific and not fully understood in Australia's crop belt. Using a biophysical‐statistical modeling approach, we quantified the individual impacts of drought, heat, frost, and compound drought and extreme temperature (DET) events on wheat yield variations in Australia. We first developed indices for these different extreme events during the wheat reproductive period based on the APSIM (Agricultural Production System sIMulator) model and then used these indices in multiple linear regression models to quantify their impacts on wheat yield variations. We found that, during 1990–2021, drought, heat, and frost events explained 48% of yield variation, while the percentage increased to 54% after including DET events, with some regions even up to 86%. In extreme low‐yield years, the relative importance of DET events surpassed the summed importance of individual drought, heat, and frost events, reaching 52% in years with yields below the 10th percentile. Our findings highlight the need to factor compound extreme weather events into climate risk management to inform the mitigation of yield losses or crop failure.

Plain Language Summary: Global warming has brought more extreme weather events like drought, heatwaves, and frost. These extreme events seriously threaten agricultural production and food security, especially multiple co‐occurring weather events, which usually cause far more destructive effects than individual ones. In this study, we used a combined modeling approach to precisely quantify the impacts of individual and co‐occurring extreme weather events on wheat yield variation over the past three decades in Australia. We found that these extreme weather events were responsible for more than half of the wheat yield variation, with multiple co‐occurring extreme weather events particularly responsible for severe wheat yield losses in Australia. These findings highlight the need to factor co‐occurring extreme weather events into climate risk management to inform the mitigation of yield losses or crop failure.

Key Points: We developed a combined modeling approach to quantify the effects of compound drought and extreme temperature (DET) on wheat yield. Annual average DET intensity contributed an additional 6% of Australia's wheat yield variation beyond univariate drought, heat, and frost intensities. In extreme low‐yield years, annual average DET intensity dominated wheat yield loss, with a relative contribution exceeding 50%.
Journal Article
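The statistical half of the approach described above — regressing yield variation on event indices and comparing explained variance with and without a compound DET index — can be sketched with synthetic data. Every index, coefficient, and sample size below is a made-up illustration, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical standardized event indices for 32 seasons (1990-2021)
n_years = 32
drought = rng.normal(size=n_years)
heat    = rng.normal(size=n_years)
frost   = rng.normal(size=n_years)
det     = 0.5 * drought + 0.5 * heat + rng.normal(scale=0.5, size=n_years)

# Yield anomaly responds negatively to every index (coefficients made up)
yield_anom = (-0.6 * drought - 0.4 * heat - 0.2 * frost - 0.5 * det
              + rng.normal(scale=0.5, size=n_years))

def r_squared(X, y):
    """Explained variance of an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1.0 - (y - X1 @ beta).var() / y.var()

r2_base = r_squared(np.column_stack([drought, heat, frost]), yield_anom)
r2_full = r_squared(np.column_stack([drought, heat, frost, det]), yield_anom)
```

The gap between r2_full and r2_base is the analogue of the paper's 48% to 54% gain from including DET events, though in-sample R² can only go up when a predictor is added, so the paper's inference presumably rests on more than this raw comparison.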
Probabilistic reconstructions of local temperature and soil moisture from tree-ring data with potentially time-varying climatic response
by
Evans, M. N.
,
Hughes, M. K.
,
Tingley, M. P.
in
Atmospheric temperature
,
Calibration
,
California
2015
We explore a probabilistic, hierarchical Bayesian approach to the simultaneous reconstruction of local temperature and soil moisture from tree-ring width observations. The model explicitly allows for differing calibration and reconstruction interval responses of the ring-width series to climate due to slow changes in climatology coupled with the biological climate thresholds underlying tree-ring growth. A numerical experiment performed using synthetically generated data demonstrates that bimodality can occur in posterior estimates of past climate when the data do not contain enough information to determine whether temperature or moisture limitation controlled reconstruction-interval tree-ring variability. This manifestation of nonidentifiability is a result of the many-to-one mapping from bivariate climate to time series of tree-ring widths. The methodology is applied to reconstruct temperature and soil moisture conditions over the 1080–1129 C.E. interval at Methuselah Walk in the White Mountains of California, where co-located isotopic dendrochronologies suggest that observed moisture limitations on tree growth may have been alleviated. Our model allows for assimilation of both data sources, and computation of the probability of a change in the climatic controls on ring-width relative to those observed in the calibration period. While the probability of a change in control is sensitive to the choice of prior distribution, the inference that conditions were moist and cool at Methuselah Walk during the 1080–1129 C.E. interval is robust. Results also illustrate the power of combining multiple proxy data sets to reduce uncertainty in reconstructions of paleoclimate.
Journal Article
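The many-to-one mapping behind the nonidentifiability discussed above can be shown with a toy threshold-growth model in the spirit of VS-Lite-style tree-ring forward models (all thresholds here are illustrative): two distinct climates, one temperature-limited and one moisture-limited, produce the same ring width, so width alone cannot separate them and the posterior over climate can be bimodal.

```python
import numpy as np

def ring_width(temp, moist):
    """Toy threshold growth: width is set by the more limiting of a
    temperature response and a moisture response (ramp functions)."""
    g_t = np.clip(temp / 10.0, 0.0, 1.0)   # temperature response, 0..1
    g_m = np.clip(moist / 1.0, 0.0, 1.0)   # moisture response, 0..1
    return min(g_t, g_m)

w_cool_wet = ring_width(temp=4.0, moist=0.9)  # temperature-limited growth
w_warm_dry = ring_width(temp=9.0, moist=0.4)  # moisture-limited growth
# Same observed width from two very different climates: the mapping from
# (temp, moisture) to width is many-to-one
```

This is why the study's assimilation of a second, independent proxy (isotopic dendrochronologies) helps: it breaks the tie between the two climate modes.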
Alternative measures to evaluate the accuracy and bias of genomic predictions with censored records
2023
This study aimed to propose and compare metrics of accuracy and bias of genomic prediction of breeding values for traits with censored data. Genotypic and censored-phenotypic information were simulated for four traits with QTL heritability and polygenic heritability, respectively: C1: 0.07-0.07, C2: 0.07-0.00, C3: 0.27-0.27, and C4: 0.27-0.00. Genomic breeding values were predicted using the Mixed Cox and Truncated Normal models. The accuracy of the models was estimated based on the Pearson (PC), maximal (MC), and Pearson correlation for censored data (PCC), while the genomic bias was calculated via simple linear regression (SLR) and Tobit (TB). MC and PCC were statistically superior to PC for the trait C3 with 10 and 40% censored information; for 70% censorship, PCC yielded better results than MC and PC. For the other traits, the proposed measures were superior or statistically equal to PC. The coefficients associated with the marginal effects (TB) presented estimates close to those obtained for the SLR method, while the coefficient related to the latent variable showed an almost unchanged pattern as censorship increased in most cases. From a statistical point of view, the use of methodologies for censored data should be prioritized, even for low censoring percentages.
Journal Article
Assessment of the Impacts of Urbanization on Landslide Susceptibility in Hakha City, a Mountainous Region of Western Myanmar
2023
In July 2015, more than 100 landslides caused by Cyclone Komen resulted in damage to approximately 1000 buildings in the mountainous region of Hakha City, Myanmar. This study aimed to identify potential landslide susceptibility for newly developed resettlement areas in Hakha City before and after urbanization. The study evaluated landslide susceptibility through statistical modeling and compared the level of susceptibility before and after urbanization in the region. The information value model was used to predict landslide susceptibility before and after urbanization, using 10 parameter maps as independent variables and 1 landslide inventory map as the dependent variable. Four landslide types were identified in the study area: shallow earth slide, deep slide, earth slump, and debris flow. Susceptibility analyses were conducted separately for each type to better recognize the different aspects of landslide susceptibility in planned urban areas. By comparing the results of the susceptibility index before and after urbanization, suitable urban areas with lower landslide susceptibility could be identified. The results showed that high-potential landslide susceptibility increased by 10%, 16%, and 5% after urbanization compared with before urbanization in three Town Plans, respectively. Therefore, Town Plan 3 is selected as the most suitable location for the resettlement area in terms of low risk of landslides.
Journal Article
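The information value model used above weights each class of a parameter map by the log-ratio of its landslide density to the study area's overall landslide density; positive values mark classes where landslides are over-represented. The sketch below uses a toy raster and a commonly cited formulation, which may differ in detail from the study's implementation.

```python
import numpy as np

def information_value(class_ids, landslide, eps=1e-9):
    """IV_i = ln(landslide density in class i / overall landslide density).
    class_ids: integer class label per pixel; landslide: 0/1 per pixel."""
    class_ids = np.asarray(class_ids)
    landslide = np.asarray(landslide, dtype=float)
    overall_density = landslide.sum() / len(class_ids)
    iv = {}
    for c in np.unique(class_ids):
        mask = class_ids == c
        class_density = landslide[mask].sum() / mask.sum()
        iv[int(c)] = float(np.log((class_density + eps) / (overall_density + eps)))
    return iv

# Toy raster: class 2 (say, steep slopes) hosts most of the landslides
classes = [1, 1, 1, 1, 1, 2, 2, 2, 2, 2]
slides  = [0, 0, 0, 0, 1, 1, 1, 1, 0, 1]
iv = information_value(classes, slides)
```

The pixel-level susceptibility index is then the sum of IVs across all 10 parameter maps, which is what the study compares before and after urbanization.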
Analysis of a dose-response assay in Scaptotrigona bipunctata bees, Lepeletier, 1836 (Hymenoptera: Apidae) using the logistic regression model under the Bayesian approach
by
Lobos, Cristian Marcelo Villegas
,
Pereira, Naiara Climas
,
Silva, Breno Gabriel da
in
Agrochemicals
,
Apidae
,
Bayesian analysis
2021
This paper presents the results of a dose-response study in Scaptotrigona bipunctata bees, Lepeletier, 1836 (Hymenoptera: Apidae) exposed to the insecticide Fastac Duo. The aim was to evaluate the lethal concentration that causes the death of 50% of bees (LC50) and to investigate the odds of mortality after exposure to different concentrations, using the logistic regression model under the Bayesian approach. In this approach, prior information can be incorporated, yielding more accurate inferential results. Three independent dose-response experiments were analyzed, differing in their exposure durations according to guidelines from the Organisation for Economic Co-operation and Development (OECD); each assay contained four replicates at the concentration levels investigated, including a control. After exposure to the agrochemical, it was identified that the higher the concentration, the greater the odds of mortality. Regarding the estimated lethal concentrations for each experiment, the following values were found: 0.03 g a.i. L-1 for 24 hours, 0.04 g a.i. L-1 for 48 hours, and 0.06 g a.i. L-1 for 72 hours, showing that experiments with longer exposure times yielded higher LC50 values. In conclusion, the study demonstrated an alternative approach to classical methods for dose-response studies in Scaptotrigona bipunctata bees exposed to the insecticide Fastac Duo.
Journal Article
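The paper fits its logistic dose-response model in a Bayesian framework; as a simpler frequentist sketch, a maximum-likelihood logistic fit on log dose yields the LC50 point estimate as LC50 = 10^(-b0/b1). All mortality counts below are hypothetical, not the paper's data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Hypothetical replicates: dose (g a.i. L-1), bees exposed, bees dead
dose = np.array([0.01, 0.02, 0.04, 0.08, 0.16])
n    = np.array([40, 40, 40, 40, 40])
dead = np.array([4, 12, 21, 32, 38])
x = np.log10(dose)

def neg_log_lik(beta):
    """Binomial negative log-likelihood of logit(p) = b0 + b1*log10(dose)."""
    p = np.clip(expit(beta[0] + beta[1] * x), 1e-12, 1 - 1e-12)
    return -np.sum(dead * np.log(p) + (n - dead) * np.log(1 - p))

res = minimize(neg_log_lik, x0=[0.0, 1.0])
b0, b1 = res.x
lc50 = 10 ** (-b0 / b1)  # dose at which the fitted mortality is 50%
```

A Bayesian version would place priors on b0 and b1 and report a posterior distribution for LC50 rather than a point estimate, which is where the paper's ability to incorporate prior information comes in.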