Catalogue Search | MBRL
Explore the vast range of titles available.
149 result(s) for "expected Bayesian estimation"
E-Bayesian and H-Bayesian Inferences for a Simple Step-Stress Model with Competing Failure Model under Progressively Type-II Censoring
by Wang, Ying; Chen, Yan; Yan, Zaizai
in Bayesian analysis; Bayesian estimate; Bayesian statistical decision theory
2022
In this paper, we discuss the statistical analysis of a simple step-stress accelerated competing failure model under progressively Type-II censoring. It is assumed that there is more than one cause of failure and that the lifetime of the experimental units at each stress level follows an exponential distribution. The distribution functions under different stress levels are connected through the cumulative exposure model. The maximum likelihood, Bayesian, Expected Bayesian, and Hierarchical Bayesian estimates of the model parameters are derived under different loss functions and compared through Monte Carlo simulations. We also obtain the average length and the coverage probability of the 95% confidence intervals and highest posterior density credible intervals of the parameters. The numerical studies show that the proposed Expected Bayesian and Hierarchical Bayesian estimations perform better in terms of the average estimates and mean squared errors, respectively. Finally, the methods of statistical inference discussed here are illustrated with a numerical example.
Journal Article
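The E-Bayesian step described in the abstract above — averaging a Bayes estimate over a hyper-prior on the prior's hyperparameters — can be sketched for its simplest ingredient, an exponential failure rate. A minimal sketch assuming a Gamma(a, b) prior, squared-error loss, and a Uniform(0, c) hyper-prior on b; the function names and all numeric values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def bayes_exp_rate(n, T, a, b):
    """Bayes estimate of an exponential rate under squared-error loss:
    with a Gamma(a, b) prior, the posterior is Gamma(n + a, T + b),
    so the posterior mean is (n + a) / (T + b)."""
    return (n + a) / (T + b)

def e_bayes_exp_rate(n, T, a, c):
    """E-Bayesian estimate: the Bayes estimate averaged over a
    Uniform(0, c) hyper-prior on b (the integral has a closed form)."""
    return (n + a) / c * np.log(1.0 + c / T)

# Check the closed form against a direct numerical average over b.
n, T, a, c = 20, 50.0, 2.0, 5.0            # n failures, total time on test T
bs = (np.arange(200000) + 0.5) * (c / 200000)
numeric_avg = bayes_exp_rate(n, T, a, bs).mean()
```

Under this setup the E-Bayesian estimate is just the posterior-mean formula with the hyperparameter integrated out; the papers listed here study several such hyper-prior and loss-function choices.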
Entropy Estimation of Generalized Half-Logistic Distribution (GHLD) Based on Type-II Censored Samples
2014
This paper derives the entropy of the generalized half-logistic distribution based on Type-II censored samples, obtains entropy estimators from Bayes estimators of the unknown parameter, and compares these estimators in terms of mean squared error and bias through Monte Carlo simulations.
Journal Article
The Evolution of the Goddard Profiling Algorithm to a Fully Parametric Scheme
by Wang, Nai-Yu; Kummerow, Christian D.; Kulie, Mark
in Algorithms; Bayesian analysis; Bayesian theory
2015
The Goddard profiling algorithm has evolved from a pseudoparametric algorithm used in the current TRMM operational product (GPROF 2010) to a fully parametric approach used operationally in the GPM era (GPROF 2014). The fully parametric approach uses a Bayesian inversion for all surface types. The algorithm thus abandons rainfall screening procedures and instead uses the full brightness temperature vector to obtain the most likely precipitation state. This paper offers a complete description of the GPROF 2010 and GPROF 2014 algorithms and assesses the sensitivity of the algorithm to assumptions related to channel uncertainty as well as ancillary data. Uncertainties in precipitation are generally less than 1%–2% for realistic assumptions in channel uncertainties. Consistency among different radiometers is extremely good over oceans. Consistency over land is also good if the diurnal cycle is accounted for by sampling GMI product only at the time of day that different sensors operate. While accounting for only a modest amount of the total precipitation, snow-covered surfaces exhibit differences of up to 25% between sensors traceable to the availability of high-frequency (166 and 183 GHz) channels. In general, comparisons against early versions of GPM’s Ku-band radar precipitation estimates are fairly consistent but absolute differences will be more carefully evaluated once GPROF 2014 is upgraded to use the full GPM-combined radar–radiometer product for its a priori database. The combined algorithm represents a physically constructed database that is consistent with both the GPM radars and the GMI observations, and thus it is the ideal basis for a Bayesian approach that can be extended to an arbitrary passive microwave sensor.
Journal Article
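The Bayesian inversion described above weights a priori database profiles by how well they match the observed brightness-temperature vector. A toy sketch with a synthetic database and an assumed diagonal channel-error covariance; none of the numbers or array shapes reflect the operational GPROF algorithm:

```python
import numpy as np

def bayes_retrieve(tb_obs, tb_db, rain_db, sigma):
    """Toy Bayesian retrieval: weight each a priori database profile by a
    Gaussian kernel on the brightness-temperature misfit (diagonal channel
    errors), then return the weighted-mean rain rate."""
    d2 = (((tb_db - tb_obs) / sigma) ** 2).sum(axis=1)
    w = np.exp(-0.5 * d2)
    return (w * rain_db).sum() / w.sum()

rng = np.random.default_rng(0)
tb_db = rng.uniform(150.0, 290.0, size=(5000, 4))  # synthetic Tb database (K)
rain_db = rng.gamma(2.0, 1.5, size=5000)           # synthetic rain rates (mm/h)
sigma = np.full(4, 2.0)                            # assumed channel errors (K)
rate = bayes_retrieve(tb_db[10], tb_db, rain_db, sigma)
```

Shrinking the assumed channel errors concentrates the weights on the closest database entry, which is how channel-uncertainty assumptions propagate into the retrieved precipitation.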
Toward a New Flood Assessment Paradigm: From Exceedance Probabilities to the Expected Maximum Floods and Damages
2024
To assess flood risks, we seek to estimate the probability distribution of the worst possible single‐event over a contiguous period of N years rather than the cumulative losses expected over a planning horizon. For this we use the probability distribution FN of extreme flood events over a multi‐year period, which is different from using the conventional single‐valued exceedance probability of 1/N years. FN can be used to estimate the hazard and then proceed to the estimation of risk, which we define as the "largest expected damage" over the set period. It also allows for a more coherent determination of design values, which descend from fully acknowledging the aleatoric uncertainty of the underlying natural river flow process. The epistemic uncertainty is removed by marginalizing the aleatoric‐epistemic uncertainty joint distribution over the parameter space. The advantage of the proposed Bayesian approach, which can be summarized in 12 steps, is demonstrated for the 2021 River Ahr flood in Germany, which caused casualties and huge material damage. Adopting the multi‐year maxima extreme value distribution can potentially lead to the reclassification of vulnerability levels for flood‐prone areas and reconsideration of current policy‐making and flood risk communication. Plain Language Summary To assess risks from extreme flood events, we seek to estimate the probability distribution of the worst possible single‐event over a contiguous period of, for example, N = 100 or more years and the cumulative losses expected over the planning horizon for a hydraulic protection measure. For this we use the probability distribution of extreme flood events over a multi‐year period, which is different from using the conventional single‐valued exceedance probability of 1/N years. This distribution can be used to estimate the "largest expected damage" over the entire planning horizon.
It also allows for a more coherent determination of design values, which we obtain from fully acknowledging the climatological (and not epistemic) uncertainty of extreme flow events. The advantage of the proposed approach is demonstrated for the 2021 River Ahr flood in Germany, which caused casualties and huge material damage. Using the multi‐year maxima extreme value distribution instead of return periods should lead to a more consistent classification of vulnerability levels for flood‐prone areas and to reconsideration of current policy‐making and flood risk communication.
Key Points:
- Approach examines the probability distribution of the largest flood during a multi‐year planning horizon
- Yields different estimates of the maximum flood in a multi‐year period than exceedance probabilities
- Bayesian inference enables consideration of both aleatory and epistemic uncertainties
- Expected maximum damages can be computed using the expected maximum flood
- New assessment approach demonstrated through a case study in Germany
Journal Article
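The central object in the abstract above, the distribution of the N-year maximum, follows from the annual-maximum CDF F under independence as F_N(x) = F(x)^N. A minimal sketch assuming Gumbel annual maxima with illustrative parameters, showing why the conventional 1/N-exceedance design value is exceeded by the actual N-year maximum with probability near 63%:

```python
import numpy as np

def annual_max_cdf(x, mu, beta):
    """CDF of the annual maximum flood, assumed Gumbel(mu, beta)."""
    return np.exp(-np.exp(-(x - mu) / beta))

def n_year_max_cdf(x, mu, beta, N):
    """CDF of the largest single event in N independent years:
    F_N(x) = F(x) ** N."""
    return annual_max_cdf(x, mu, beta) ** N

mu, beta, N = 100.0, 20.0, 100      # illustrative Gumbel parameters
# Conventional design value: the flow with annual exceedance probability 1/N.
q_design = mu - beta * np.log(-np.log(1.0 - 1.0 / N))
# Probability that the actual N-year maximum exceeds that design value.
p_exceed = 1.0 - n_year_max_cdf(q_design, mu, beta, N)
```

Here p_exceed equals 1 - (1 - 1/N)^N, about 0.63 for N = 100, which is the gap between the single-valued exceedance probability and the multi-year-maximum view the paper advocates.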
A practical guide to estimating the light extinction coefficient with nonlinear models—a case study on maize
by Ciampitti, Ignacio A.; Otegui, María E.; Hefley, Trevor J.
in Bayesian analysis; Bayesian models; Bayesian theory
2021
Background
The fraction of intercepted photosynthetically active radiation (fPARi) is typically described with a non-linear function of leaf area index (LAI) and k, the light extinction coefficient. The parameter k is used to make statistical inference, as an input into crop models, and for phenotyping. It may be estimated using a variety of statistical techniques that differ in assumptions, which ultimately influence the numerical value of k and the associated uncertainty estimates. A systematic search of peer-reviewed publications for maize (Zea mays L.) revealed: (i) incompleteness in reported estimation techniques; and (ii) that most studies relied on dated techniques with unrealistic assumptions, such as log-transformed linear models (LogTLM) or normally distributed data. These findings suggest that knowledge of the variety of, and trade-offs among, statistical estimation techniques is lacking, which hinders the use of modern approaches such as Bayesian estimation (BE) and techniques with appropriate assumptions, e.g. assuming beta-distributed data.
Results
The parameter k was estimated for seven maize genotypes with five different methods: least squares estimation (LSE), LogTLM, maximum likelihood estimation (MLE) assuming a normal distribution, MLE assuming a beta distribution, and BE assuming a beta distribution. Methods were compared according to their appropriateness for statistical inference, the properties of their point estimates, and predictive performance. LogTLM produced the worst predictions for fPARi, whereas both LSE and MLE with a normal distribution yielded unrealistic predictions (i.e. fPARi < 0 or > 1) and the greatest coefficients for k. Models with beta-distributed fPARi (either MLE or Bayesian) were recommended for obtaining point estimates.
Conclusion
Each estimation technique has underlying assumptions which may yield different estimates of k and change inference, such as the magnitude of, and rankings among, genotypes. Thus, for reproducibility, researchers must fully report the statistical model, assumptions, and estimation technique. LogTLMs are the most frequently implemented but should be avoided when estimating k. Modeling fPARi with a beta distribution was absent from the literature but is recommended, applying either MLE or BE. This workflow and technique comparison can be applied to other plant canopy models, such as the vertical distribution of nitrogen, carbohydrates, photosynthesis, etc. Users should select the method balancing benefits and trade-offs to match the purpose of the study.
Journal Article
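The model underlying the abstract above is the Beer–Lambert relation fPARi = 1 − exp(−k·LAI). A dependency-light sketch on simulated data, contrasting the log-transformed linear fit (LogTLM) with direct least squares via a coarse grid search; the true k, noise model, and grid are illustrative assumptions, and the paper's beta-likelihood and Bayesian fits are not reproduced here:

```python
import numpy as np

def fpar(lai, k):
    """Beer-Lambert light interception: fPARi = 1 - exp(-k * LAI)."""
    return 1.0 - np.exp(-k * lai)

rng = np.random.default_rng(1)
k_true = 0.55                                   # illustrative value
lai = rng.uniform(0.5, 6.0, 200)
y = np.clip(fpar(lai, k_true) + rng.normal(0.0, 0.02, lai.size),
            1e-6, 1.0 - 1e-6)

# LogTLM: ln(1 - fPARi) = -k * LAI, fit through the origin.
k_logtlm = -np.sum(lai * np.log(1.0 - y)) / np.sum(lai * lai)

# Direct least squares on the nonlinear model via a grid search
# (a stand-in for an optimizer, keeping the sketch dependency-free).
grid = np.linspace(0.05, 1.5, 2901)
sse = ((y[None, :] - fpar(lai[None, :], grid[:, None])) ** 2).sum(axis=1)
k_lse = grid[np.argmin(sse)]
```

The log transform inflates additive noise near fPARi ≈ 1, which is one reason the paper recommends likelihoods with appropriate distributional assumptions over LogTLM.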
Controlling the reinforcement in Bayesian non-parametric mixture models
by Prünster, Igor; Lijoi, Antonio; Mena, Ramsés H.
in Bayesian analysis; Bayesian clustering; Bayesian method
2007
The paper deals with the problem of determining the number of components in a mixture model. We take a Bayesian non-parametric approach and adopt a hierarchical model with a suitable non-parametric prior for the latent structure. A commonly used model for such a problem is the mixture of Dirichlet process model. Here, we replace the Dirichlet process with a more general non-parametric prior obtained from a generalized gamma process. The basic feature of this model is that it yields a partition structure for the latent variables which is of Gibbs type. This relates to the well-known (exchangeable) product partition models. Compared with the usual mixture of Dirichlet process model, the advantage of the generalization that we examine lies in the availability of an additional parameter σ belonging to the interval (0,1): it is shown that such a parameter greatly influences the clustering behaviour of the model. A value of σ that is close to 1 generates a large number of clusters, most of which are of small size. Then, a reinforcement mechanism which is driven by σ acts on the mass allocation by penalizing clusters of small size and favouring those few groups containing a large number of elements. These features turn out to be very useful in the context of mixture modelling. Since it is difficult to specify a priori the reinforcement rate, it is reasonable to specify a prior for σ. Hence, the strength of the reinforcement mechanism is controlled by the data.
Journal Article
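The clustering role of the discount parameter σ described above can be illustrated with a different Gibbs-type prior that shares it, the Pitman–Yor process, through its sequential predictive ("Chinese restaurant") rule. This is a stand-in for the paper's generalized gamma prior, not an implementation of it, and all numeric settings are illustrative:

```python
import numpy as np

def pitman_yor_partition(n, sigma, theta, rng):
    """Sample a random partition of n items from a Pitman-Yor process
    (discount sigma in [0, 1), strength theta) via its sequential
    predictive rule; returns the list of cluster sizes."""
    sizes = []
    for _ in range(n):
        k = len(sizes)
        # Existing cluster j is chosen w.p. proportional to (size_j - sigma);
        # a new cluster w.p. proportional to (theta + sigma * k).
        probs = np.array([s - sigma for s in sizes] + [theta + sigma * k])
        j = rng.choice(k + 1, p=probs / probs.sum())
        if j == k:
            sizes.append(1)
        else:
            sizes[j] += 1
    return sizes

rng = np.random.default_rng(2)
k_small_sigma = np.mean([len(pitman_yor_partition(500, 0.1, 1.0, rng))
                         for _ in range(20)])
k_large_sigma = np.mean([len(pitman_yor_partition(500, 0.9, 1.0, rng))
                         for _ in range(20)])
```

As the abstract describes, σ close to 1 produces many clusters, most of them small, while small σ keeps the number of clusters low.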
Expected Bayesian estimation for exponential model based on simple step stress with Type-I hybrid censored data
by Adel Fahad Alrasheedi; Rabie, A.; Nagy, M.
in Bayesian analysis; Comparative studies; Estimates
2022
The procedure of selecting values of the hyper-parameters of prior distributions in Bayesian estimation has produced many problems and has drawn the attention of many authors; the expected Bayesian (E-Bayesian) estimation method was therefore introduced to overcome these problems. In this study, these approaches are applied to the step-stress acceleration model under exponential Type-I hybrid censored data. Estimates of the distribution parameters are derived. To compare the E-Bayesian estimates with the other estimates, a comparative simulation study was conducted. Four different loss functions are used to derive the Bayesian and E-Bayesian estimators, and three alternative hyper-parameter distributions are used in the E-Bayesian estimation. Finally, a real-world data example is examined for demonstration and comparison purposes.
Journal Article
A Review of Bayesian Spatiotemporal Models in Spatial Epidemiology
by Wang, Yufeng; Xue, Feng; Chen, Xue
in Bayesian analysis; Bayesian spatiotemporal model; Bayesian theory
2024
Spatial epidemiology investigates the patterns and determinants of health outcomes over both space and time. Within this field, Bayesian spatiotemporal models have gained popularity due to their capacity to incorporate spatial and temporal dependencies, uncertainties, and intricate interactions. However, the complexity of modelling and computation associated with Bayesian spatiotemporal models varies across different diseases. Presently, comprehensive overviews of Bayesian spatiotemporal models and their applications in epidemiology are limited. This article aims to address this gap through a thorough review. The review commences by delving into the historical development of Bayesian spatiotemporal models concerning disease mapping, prediction, and regression analysis. Subsequently, the article compares these models in terms of spatiotemporal data distribution, general spatiotemporal data models, environmental covariates, parameter estimation methods, and model fitting standards. Following this, essential preparatory processes are outlined, encompassing data acquisition, data preprocessing, and available statistical software. The article further categorizes and summarizes the application of Bayesian spatiotemporal models in spatial epidemiology. Lastly, a critical examination of the advantages and disadvantages of these models, along with considerations for their application, is provided. This comprehensive review aims to enhance comprehension of the dynamic spatiotemporal distribution and prediction of epidemics. By facilitating effective disease scrutiny, especially in the context of the global COVID-19 pandemic, the review holds significant academic merit and practical value. It also aims to contribute to the development of improved ecological and epidemiological prevention and control strategies.
Journal Article
Bayesian Estimation Using Expected LINEX Loss Function: A Novel Approach with Applications
by Okasha, Hassan; Alotaibi, Refah; Wang, Liang
in Bayesian analysis; Bayesian estimation; Bayesian risk
2022
The loss function plays an important role in Bayesian analysis and decision theory. In this paper, a new Bayesian approach is introduced for parameter estimation under the asymmetric linear-exponential (LINEX) loss function. In order to provide a robust estimation and avoid making subjective choices, the proposed method assumes that the parameter of the LINEX loss function has a probability distribution. The Bayesian estimator is then obtained by taking the expectation of the common LINEX-based Bayesian estimator over the probability distribution. This alternative proposed method is applied to estimate the exponential parameter by considering three different distributions of the LINEX parameter, and the associated Bayes risks are also obtained in consequence. Extensive simulation studies are conducted in order to compare the performance of the proposed new estimators. In addition, three real data sets are analyzed to investigate the applicability of the proposed results. The results of the simulation and real data analysis show that the proposed estimation works satisfactorily and performs better than the conventional standard Bayesian approach in terms of minimum mean square error and Bayes risk.
Journal Article
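For a Gamma posterior on an exponential rate, the LINEX Bayes estimator has a closed form, and the "expected" variant in the abstract above averages it over a distribution of the LINEX shape parameter a. A sketch with illustrative posterior parameters and a uniform grid over a; the specific numbers and the grid-averaging are assumptions, not the three parameter distributions studied in the paper:

```python
import numpy as np

def linex_bayes_exp_rate(alpha, beta, a):
    """LINEX Bayes estimator of an exponential rate lam with a
    Gamma(alpha, beta) posterior: -(1/a) * ln E[exp(-a*lam)]
    = (alpha / a) * ln(1 + a / beta)."""
    return (alpha / a) * np.log(1.0 + a / beta)

def expected_linex_estimate(alpha, beta, a_grid):
    """'Expected LINEX' estimator in the spirit of the abstract: average
    the LINEX Bayes estimator over a distribution of the shape parameter
    a (here, uniform over a grid)."""
    return linex_bayes_exp_rate(alpha, beta, a_grid).mean()

alpha, beta = 12.0, 30.0                 # illustrative posterior parameters
a_grid = np.linspace(0.1, 2.0, 1001)
est = expected_linex_estimate(alpha, beta, a_grid)
```

As a → 0 the LINEX estimator recovers the posterior mean α/β, and for a > 0 it shrinks below it, reflecting the loss asymmetry that the averaging over a is meant to make less subjective.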
A formal likelihood function for parameter and predictive inference of hydrologic models with correlated, heteroscedastic, and non-Gaussian errors
2010
Estimation of parameter and predictive uncertainty of hydrologic models has traditionally relied on several simplifying assumptions. Residual errors are often assumed to be independent and to be adequately described by a Gaussian probability distribution with a mean of zero and a constant variance. Here we investigate to what extent estimates of parameter and predictive uncertainty are affected when these assumptions are relaxed. A formal generalized likelihood function is presented, which extends the applicability of previously used likelihood functions to situations where residual errors are correlated, heteroscedastic, and non‐Gaussian with varying degrees of kurtosis and skewness. The approach focuses on a correct statistical description of the data and the total model residuals, without separating out various error sources. Application to Bayesian uncertainty analysis of a conceptual rainfall‐runoff model simultaneously identifies the hydrologic model parameters and the appropriate statistical distribution of the residual errors. When applied to daily rainfall‐runoff data from a humid basin we find that (1) residual errors are much better described by a heteroscedastic, first‐order, auto‐correlated error model with a Laplacian distribution function characterized by heavier tails than a Gaussian distribution; and (2) compared to a standard least‐squares approach, proper representation of the statistical distribution of residual errors yields tighter predictive uncertainty bands and different parameter uncertainty estimates that are less sensitive to the particular time period used for inference. 
Application to daily rainfall‐runoff data from a semiarid basin with more significant residual errors and systematic underprediction of peak flows shows that (1) multiplicative bias factors can be used to compensate for some of the largest errors and (2) a skewed error distribution yields improved estimates of predictive uncertainty in this semiarid basin with near‐zero flows. We conclude that the presented methodology provides improved estimates of parameter and total prediction uncertainty and should be useful for handling complex residual errors in other hydrologic regression models as well.
Journal Article
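The structure of such a generalized likelihood — heteroscedastic scaling, AR(1) decorrelation of the residuals, and a heavier-tailed innovation density — can be sketched in a few lines. A simplified illustration, not the paper's full formulation; the linear scale model, Laplace innovations, and all parameter values are assumptions:

```python
import numpy as np

def gl_loglik(obs, sim, sigma0, sigma1, phi, b):
    """Stripped-down generalized log-likelihood: residual scale linear in
    the simulation (sigma_t = sigma0 + sigma1 * sim_t), AR(1)
    decorrelation of the standardized residuals, Laplace(0, b)
    innovations, plus the Jacobian of the standardization."""
    sd = sigma0 + sigma1 * sim
    std_res = (obs - sim) / sd
    eta = std_res[1:] - phi * std_res[:-1]        # AR(1) innovations
    return (-np.log(2.0 * b) - np.abs(eta) / b).sum() - np.log(sd[1:]).sum()

# Synthetic data generated to match the assumed error model.
rng = np.random.default_rng(3)
sim = rng.uniform(1.0, 10.0, 300)                 # 'model' discharge
res = np.zeros(300)
for t in range(1, 300):
    res[t] = 0.6 * res[t - 1] + rng.laplace(0.0, 0.3)
obs = sim + res * (0.1 + 0.05 * sim)

ll_true = gl_loglik(obs, sim, 0.1, 0.05, 0.6, 0.3)   # correct phi
ll_no_ar = gl_loglik(obs, sim, 0.1, 0.05, 0.0, 0.3)  # ignores autocorrelation
```

With autocorrelated residuals, the AR(1)-aware likelihood scores higher than the independence version, which is the kind of misspecification effect the paper shows can distort parameter and predictive uncertainty.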