7,263 results for "RANDOM EFFECTS MODEL"
Renewable Energy Pathways toward Accelerating Hydrogen Fuel Production: Evidence from Global Hydrogen Modeling
Fossil fuel consumption has raised concerns about energy security and climate change, promoting hydrogen as a viable option for decarbonizing global energy systems. Hydrogen could substitute for fossil fuels in the future given the economic, political, and environmental concerns associated with fossil-based energy production. Currently, however, the majority of hydrogen is produced from fossil fuels, particularly natural gas, which is not a renewable source of energy. It is therefore crucial to increase efforts to produce hydrogen from renewable sources rather than through the existing fossil-based approaches. This study investigates how renewable energy can accelerate future hydrogen fuel production under three hydrogen economy-related energy regimes, including nuclear restrictions and hydrogen and city gas blending, and under scenarios that consider the geographic distribution of carbon reduction targets. A random effects regression model is applied to panel data from a global energy system model that optimizes for cost and carbon targets. The results demonstrate that an increase in renewable energy sources can significantly accelerate the growth of future hydrogen production under all the considered policy regimes. The policy implication is that promoting renewable energy investment, together with a fairer allocation of carbon reduction efforts, will help to ensure a future hydrogen economy that supports a sustainable, low-carbon society.
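To make the modeling step concrete, below is a minimal sketch of a random-effects (random-intercept) panel regression of the kind the abstract describes, using statsmodels' MixedLM. The column names (h2_production, renewables_share, carbon_target, region) are hypothetical placeholders, not variables from the paper.

```python
# Minimal random-effects panel regression sketch (hypothetical column names).
import statsmodels.formula.api as smf

def fit_random_effects(panel):
    """Fit a random-intercept model: region-level random effects absorb
    unobserved heterogeneity while the fixed effects estimate the average
    associations across regions."""
    model = smf.mixedlm(
        "h2_production ~ renewables_share + carbon_target",
        data=panel,
        groups=panel["region"],
    )
    return model.fit()

# fit_random_effects(panel).summary() reports the fixed-effect coefficients
# and the estimated variance of the region-level random intercept.
```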
The correlated pseudomarginal method
The pseudomarginal algorithm is a Metropolis–Hastings-type scheme which samples asymptotically from a target probability density when we can only estimate an unnormalized version of it unbiasedly. In a Bayesian context, it is a state-of-the-art posterior simulation technique when the likelihood function is intractable but can be estimated unbiasedly by using Monte Carlo samples. However, for the performance of this scheme not to degrade as the number T of data points increases, it is typically necessary for the number N of Monte Carlo samples to grow proportionally to T in order to control the relative variance of the likelihood ratio estimator appearing in the algorithm's acceptance probability. The correlated pseudomarginal method is a modification of the pseudomarginal method in which the likelihood ratio estimator is computed from two correlated likelihood estimators. For random-effects models, we show under regularity conditions that the parameters of this scheme can be selected such that the relative variance of this likelihood ratio estimator is controlled when N increases sublinearly with T, and we provide guidelines on how to optimize the algorithm on the basis of a non-standard weak convergence analysis. The efficiency of computations for Bayesian inference relative to the pseudomarginal method empirically increases with T and exceeds two orders of magnitude in some examples.
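The key mechanism, correlating the auxiliary randomness behind the two likelihood estimates, is simple to sketch. The following is a schematic single iteration, assuming a user-supplied unbiased log-likelihood estimator log_lik_hat(theta, u) driven by standard-normal auxiliary variables u; it illustrates the correlation device only, not the paper's full algorithm or tuning guidelines.

```python
# One correlated pseudomarginal Metropolis-Hastings step (schematic).
import numpy as np

rng = np.random.default_rng(0)

def cpm_step(theta, u, log_post_cur, log_lik_hat, log_prior,
             step=0.1, rho=0.99):
    # Random-walk proposal for the parameters.
    theta_prop = theta + step * rng.standard_normal(theta.shape)
    # Crank-Nicolson move keeps u_prop ~ N(0, I) marginally while making the
    # current and proposed likelihood estimators strongly correlated.
    u_prop = rho * u + np.sqrt(1.0 - rho**2) * rng.standard_normal(u.shape)
    log_post_prop = log_prior(theta_prop) + log_lik_hat(theta_prop, u_prop)
    # Standard MH accept/reject applied to the *estimated* log-posterior.
    if np.log(rng.uniform()) < log_post_prop - log_post_cur:
        return theta_prop, u_prop, log_post_prop
    return theta, u, log_post_cur
```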
Prevalence of Undiagnosed Hypertension in Bangladesh: A Systematic Review and Meta‐Analysis
ABSTRACT Undiagnosed hypertension (UHTN) remains a significant public health concern in Bangladesh, leading to severe complications due to delayed diagnosis and management. This systematic review and meta‐analysis examined the prevalence of UHTN among adults aged 18 years and older, using data from studies conducted in Bangladesh and published between 2010 and 2024. A comprehensive search of major databases yielded 1,028 records, from which nine relevant studies, encompassing a total of 28,949 participants, were selected and evaluated for quality using the Newcastle–Ottawa Scale, providing valuable insights into the prevalence of UHTN within the Bangladeshi population. The pooled prevalence of UHTN was 11% (95% CI: 6%–19%) based on a random‐effects model, with substantial heterogeneity (I² = 99.5%, p < 0.0001). Subgroup analyses revealed higher prevalence in rural areas (13%; 95% CI: 4%–35%) compared with urban areas (12%; 95% CI: 1%–54%) and elevated occupational risk among bankers (17%; 95% CI: 0%–94%). While funnel plot asymmetry was noted, Egger's test (p = 0.3113) indicated no significant publication bias. Sensitivity analyses, including leave‐one‐out analysis, affirmed the robustness of the pooled estimate. The findings underscore notable geographic, occupational, and sociodemographic disparities in UHTN prevalence, highlighting the need for nationwide screening programs and targeted community awareness campaigns, particularly in underserved rural areas. Further research is imperative to explore causal factors and inform effective prevention and management strategies.
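For readers who want to reproduce this kind of pooling, here is a rough sketch of a DerSimonian-Laird random-effects pooling of prevalences on the logit scale; the study counts in the example are invented, and the paper's exact software and settings are not stated in the abstract.

```python
# DerSimonian-Laird pooling of prevalences on the logit scale (sketch).
import numpy as np

def dl_pool_prevalence(events, totals):
    p = events / totals
    y = np.log(p / (1 - p))                        # logit prevalence
    v = 1.0 / events + 1.0 / (totals - events)     # within-study variance
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-study variance
    w_star = 1.0 / (v + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))     # back to a proportion
    return expit(mu), expit(mu - 1.96 * se), expit(mu + 1.96 * se)

# Invented example counts for four studies:
print(dl_pool_prevalence(np.array([120, 80, 200, 45]),
                         np.array([1000, 900, 1500, 400])))
```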
A re-evaluation of random-effects meta-analysis
Meta-analysis in the presence of unexplained heterogeneity is frequently undertaken by using a random-effects model, in which the effects underlying different studies are assumed to be drawn from a normal distribution. Here we discuss the justification and interpretation of such models, by addressing in turn the aims of estimation, prediction and hypothesis testing. A particular issue that we consider is the distinction between inference on the mean of the random-effects distribution and inference on the whole distribution. We suggest that random-effects meta-analyses as currently conducted often fail to provide the key results, and we investigate the extent to which distribution-free, classical and Bayesian approaches can provide satisfactory methods. We conclude that the Bayesian approach has the advantage of naturally allowing for full uncertainty, especially for prediction. However, it is not without problems, including computational intensity and sensitivity to a priori judgements. We propose a simple prediction interval for classical meta-analysis and offer extensions to standard practice of Bayesian meta-analysis, making use of an example of studies of 'set shifting' ability in people with eating disorders.
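The simple prediction interval the authors propose for classical meta-analysis takes the form mu_hat ± t_{k-2} sqrt(tau² + se(mu_hat)²), covering the effect expected in a new study rather than just the mean effect. A minimal sketch follows; the numeric example values are invented.

```python
# Prediction interval for the effect in a new study (sketch).
import numpy as np
from scipy import stats

def prediction_interval(mu_hat, se_mu, tau2, k, level=0.95):
    # k studies leave k - 2 degrees of freedom for the t quantile.
    t_crit = stats.t.ppf(0.5 + level / 2.0, df=k - 2)
    half = t_crit * np.sqrt(tau2 + se_mu**2)  # between-study + estimation variance
    return mu_hat - half, mu_hat + half

print(prediction_interval(mu_hat=0.30, se_mu=0.08, tau2=0.04, k=10))
```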
A Matrix-Based Method of Moments for Fitting Multivariate Network Meta-Analysis Models with Multiple Outcomes and Random Inconsistency Effects
Random-effects meta-analyses are very commonly used in medical statistics. Recent methodological developments include multivariate (multiple outcomes) and network (multiple treatments) meta-analysis. Here, we provide a new model and corresponding estimation procedure for multivariate network meta-analysis, so that multiple outcomes and treatments can be included in a single analysis. Our new multivariate model is a direct extension of a univariate model for network meta-analysis that has recently been proposed. We allow two types of unknown variance parameters in our model, which represent between-study heterogeneity and inconsistency. Inconsistency arises when different forms of direct and indirect evidence are not in agreement, even after taking between-study heterogeneity into account. However, the consistency assumption is often made in practice, and so we also explain how to fit a reduced model under this assumption. Our estimation method extends several other commonly used methods for meta-analysis, including the method proposed by DerSimonian and Laird (1986). We investigate the use of our proposed methods in the context of both a simulation study and a real example.
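The pooled-effect step of such a multivariate model is easy to illustrate. The sketch below performs generalized-least-squares pooling of multivariate study estimates given a between-study covariance matrix T; estimating T (and the inconsistency variance) by the paper's matrix-based method of moments is omitted, so T is treated here as known, which is an assumption.

```python
# GLS pooling for a multivariate random-effects meta-analysis (sketch).
import numpy as np

def multivariate_re_pool(ys, Ss, T):
    """ys: list of length-p study estimate vectors; Ss: list of (p, p)
    within-study covariance matrices; T: (p, p) between-study covariance."""
    W = [np.linalg.inv(S + T) for S in Ss]      # study weights (S_i + T)^-1
    W_sum = np.add.reduce(W)
    num = np.add.reduce([w @ y for w, y in zip(W, ys)])
    mu = np.linalg.solve(W_sum, num)            # pooled multivariate effect
    cov_mu = np.linalg.inv(W_sum)               # its covariance matrix
    return mu, cov_mu
```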
Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization
We propose an ℓ1-penalized estimation procedure for high-dimensional linear mixed-effects models. The models are useful whenever there is a grouping structure among high-dimensional observations, that is, for clustered data. We prove a consistency and an oracle optimality result and we develop an algorithm with provable numerical convergence. Furthermore, we demonstrate the performance of the method on simulated data and on a real high-dimensional data set.
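A heavily simplified way to see the idea: if the variance components, and hence each group's marginal covariance V_g, were known, the sparse fixed effects could be obtained by whitening each group's data and running an ordinary Lasso. The paper's actual procedure estimates the variance components jointly; the sketch below treats them as given, which is an assumption.

```python
# Whitened-Lasso caricature of l1-penalized mixed-effects estimation.
import numpy as np
from sklearn.linear_model import Lasso

def whitened_lasso(groups, alpha=0.1):
    """groups: list of (X_g, y_g, V_g), with V_g the marginal covariance
    implied by the random effects plus noise for group g (assumed known)."""
    Xw, yw = [], []
    for X, y, V in groups:
        L = np.linalg.cholesky(np.linalg.inv(V))  # V^-1 = L L^T
        Xw.append(L.T @ X)                        # L^T whitens the data:
        yw.append(L.T @ y)                        # Cov(L^T y) = I
    lasso = Lasso(alpha=alpha, fit_intercept=False)
    lasso.fit(np.vstack(Xw), np.concatenate(yw))
    return lasso.coef_                            # sparse fixed-effect estimate
```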
Using temporal variability to improve spatial mapping with application to satellite data
The National Aeronautics and Space Administration (NASA) has a remote-sensing program with a large array of satellites whose mission is earth-system science. To carry out this mission, NASA produces data at various levels; level-2 data have been calibrated to the satellite's footprint at high temporal resolution, although there is often a lot of missing data. Level-3 data are produced on a regular latitude-longitude grid over the whole globe at a coarser spatial and temporal resolution (such as a day, a month, or a repeat-cycle of the satellite), and there are still missing data. This article demonstrates that spatio-temporal statistical models can be made operational and provide a way to estimate level-3 values over the whole grid and attach to each value a measure of its uncertainty. Specifically, a hierarchical statistical model is presented that includes a spatio-temporal random effects (STRE) model as a dynamical component and a temporally independent spatial component for the fine-scale variation. Optimal spatio-temporal predictions and their mean squared prediction errors are derived in terms of a fixed-dimensional Kalman filter. The predictions provide estimates of missing values and filter out unwanted noise. The resulting fixed-rank filter is scalable, in that it can handle very large data sets. Its functionality relies on estimation of the model's parameters, which is presented in detail. It is demonstrated how both past and current remote-sensing observations on aerosol optical depth (AOD) can be combined, yielding an optimal statistical predictor of AOD on the log scale along with its prediction standard error.
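At the heart of the approach is a Kalman filter on a fixed-dimensional (low-rank) random-effects state, which is what keeps the method scalable. The step below is a generic textbook Kalman forecast/update, with placeholder matrices rather than the paper's STRE parameterization.

```python
# One forecast/update step of a fixed-dimensional Kalman filter (generic).
import numpy as np

def kalman_step(eta, P, y, S, H, R, sigma2=1.0):
    """eta, P: current state mean and covariance; y: today's observations;
    S: observation matrix; H, R: state transition and innovation covariance;
    sigma2: fine-scale observation noise variance (placeholder value)."""
    eta_f = H @ eta                      # forecast mean
    P_f = H @ P @ H.T + R                # forecast covariance
    innov_cov = S @ P_f @ S.T + sigma2 * np.eye(len(y))
    K = P_f @ S.T @ np.linalg.inv(innov_cov)   # Kalman gain
    eta_u = eta_f + K @ (y - S @ eta_f)        # update with observed residual
    P_u = (np.eye(len(eta)) - K @ S) @ P_f
    return eta_u, P_u
```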
Classical and Bayesian random-effects meta-analysis models with sample quality weights in gene expression studies
Background: Random-effects (RE) models are commonly applied to account for heterogeneity in effect sizes in gene expression meta-analysis. The degree of heterogeneity may differ due to inconsistencies in sample quality, and high heterogeneity can arise in meta-analyses containing poor-quality samples. We applied sample-quality weights to adjust the study heterogeneity in the DerSimonian and Laird (DSL) and two-step DSL (DSLR2) RE models and in the Bayesian random-effects (BRE) models with unweighted and weighted data, Gibbs and Metropolis-Hastings (MH) sampling algorithms, a weighted common effect, and a weighted between-study variance. We evaluated the performance of the models through simulations and illustrated application of the methods using Alzheimer’s gene expression datasets. Results: Models that adjust the within-study variance for sample quality (wP6) provided an appropriate reduction in differentially expressed (DE) genes compared to other weighting functions in the classical RE models. The BRE model with a uniform(0,1) prior was appropriate for detecting DE genes as compared to the models with other prior distributions. The precision of DE gene detection in the heterogeneous data was increased with the wP6-weighted DSLR2 model compared to the wP6-weighted DSL model. Among the weighted BRE models, the wP6 weighted-data and unweighted-data models, and both the Gibbs- and MH-based models, performed similarly. The wP6-weighted common-effect model performed similarly to the unweighted model in the homogeneous data, but performed worse in the heterogeneous data. The wP6-weighted data were appropriate for detecting DE genes with high precision, while the wP6-weighted between-study variance models were appropriate for detecting DE genes with high overall accuracy. Without the weights, as the number of genes in the microarray increased, DSLR2 performed stably, while the overall accuracy of the BRE model was reduced. When the weighted models were applied to the Alzheimer’s gene expression data, the number of DE genes decreased in all metadata sets with the wP6-weighted DSLR2 and the wP6-weighted between-study variance models. The 446 DE genes identified by the wP6-weighted between-study variance model are potentially down-regulated genes that may contribute to good classification of Alzheimer’s samples. Conclusions: The application of sample-quality weights can increase the precision and accuracy of the classical RE and BRE models; however, model performance varied depending on data features, levels of sample quality, and the adjustment of parameter estimates.
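The general mechanism of a sample-quality weight is easy to sketch: inflate each study's within-study variance v_i by 1/q_i for a quality score q_i in (0, 1] before a DerSimonian-Laird fit. The sketch below illustrates this weighting idea only; the paper's wP6 weight is a specific construction not reproduced here.

```python
# Quality-weighted DerSimonian-Laird fit (illustrative mechanism only).
import numpy as np

def quality_weighted_dl(y, v, q):
    v_adj = v / q                                # lower quality => larger variance
    w = 1.0 / v_adj
    mu_f = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_f) ** 2)              # heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)      # adjusted between-study variance
    w_star = 1.0 / (v_adj + tau2)
    return np.sum(w_star * y) / np.sum(w_star), tau2
```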
Cross-sectional human immunodeficiency virus incidence estimation accounting for heterogeneity across communities
Accurate estimation of human immunodeficiency virus (HIV) incidence rates is crucial for the monitoring of HIV epidemics, the evaluation of prevention programs, and the design of prevention studies. Traditional cohort approaches to measure HIV incidence require repeatedly testing large cohorts of HIV-uninfected individuals with an HIV diagnostic test (e.g., enzyme-linked immunosorbent assay) for long periods of time to identify new infections, which can be prohibitively costly, time-consuming, and subject to loss to follow-up. Cross-sectional approaches based on the usual HIV diagnostic test and biomarkers of recent infection offer important advantages over standard cohort approaches in terms of time, cost, and attrition. Cross-sectional samples usually consist of individuals from different communities. However, small sample sizes limit the ability to estimate community-specific incidence, and existing methods typically ignore heterogeneity in incidence across communities. We propose a permutation test for the null hypothesis of no heterogeneity in incidence rates across communities, develop a random-effects model to account for this heterogeneity and to estimate community-specific incidence, and provide one way to estimate the coefficient of variation. We evaluate the performance of the proposed methods through simulation studies and apply them to the data from the National Institute of Mental Health Project ACCEPT, a phase 3 randomized controlled HIV prevention trial in Sub-Saharan Africa, to estimate the overall and community-specific HIV incidence rates.
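The proposed permutation test can be sketched as follows: permute community labels among individuals and compare the observed between-community spread of incidence proxies to its permutation distribution. The data layout and the variance statistic here are illustrative assumptions, not the paper's exact construction.

```python
# Permutation test for heterogeneity of incidence across communities (sketch).
import numpy as np

rng = np.random.default_rng(1)

def permutation_test(recent, community, n_perm=2000):
    """recent: 0/1 flags for biomarker-recent infection per individual;
    community: integer community label per individual."""
    def spread(labels):
        rates = [recent[labels == c].mean() for c in np.unique(labels)]
        return np.var(rates)                 # between-community variability
    observed = spread(community)
    perms = [spread(rng.permutation(community)) for _ in range(n_perm)]
    p_value = np.mean([s >= observed for s in perms])
    return observed, p_value
```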
Unraveling the Drivers of Tuberculosis: A Retrospective Panel Data Study Across 70 Developing Countries
ABSTRACT Background and Aims: Tuberculosis (TB) remains a major global cause of death, particularly in developing countries. This study aims to identify key risk factors contributing to high TB incidence in these nations, analyze regional variations, and assess how risk factors differ across continents. Methods: We conducted a retrospective analysis using data from 70 developing countries spanning 2000 to 2020, sourced from World Bank Open Data. Variables included TB incidence, HIV prevalence, smoking rates, literacy rates, undernourishment, and population density. A random‐effects model was employed to examine the associations between these factors and TB incidence. Results: HIV prevalence (coefficient = 37.53, 95% CI: 34.28–40.79), smoking (3.51, 2.99–4.02), undernourishment (1.56, 1.02–2.10), and population density (0.16, 0.07–0.24) showed significant positive associations with TB incidence. The literacy rate was negatively, though not significantly, associated with TB incidence (−0.11, −0.54 to 0.33). These findings highlight the strong influence of socio‐demographic and health‐related factors on TB burden. Conclusion: TB continues to pose a serious health challenge in developing countries. HIV control, reduction of undernourishment and smoking, and managing population density are critical to reducing TB incidence. Regional differences underscore the need for tailored prevention strategies.
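Read as a linear predictor, the reported coefficients support quick back-of-the-envelope calculations, as in the sketch below; the covariate shifts and the per-100,000 interpretation are assumptions for illustration.

```python
# Back-of-the-envelope use of the reported random-effects coefficients.
hiv_coef, smoking_coef, undernourishment_coef = 37.53, 3.51, 1.56
# Expected change in TB incidence for a 1-point rise in HIV prevalence, a
# 2-point rise in smoking, and a 3-point drop in undernourishment,
# all else equal (assumed units):
delta = hiv_coef * 1.0 + smoking_coef * 2.0 - undernourishment_coef * 3.0
print(delta)  # 39.87
```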