238 results for "multimodel"
Scientist’s guide to developing explanatory statistical models using causal analysis principles
Recent discussions of model selection and multimodel inference highlight a general challenge for researchers: how to convey the explanatory content of a hypothesized model or set of competing models clearly. The advice from statisticians for scientists employing multimodel inference is to develop a well-thought-out set of candidate models for comparison, though precise instructions for how to do that are typically not given. A coherent body of knowledge, which falls under the general term causal analysis, now exists for examining the explanatory scientific content of candidate models. Much of the literature on causal analysis has been developed recently and, we suspect, may not be familiar to many ecologists. This body of knowledge comprises a set of graphical tools and axiomatic principles to support scientists in their endeavors to create “well-formed hypotheses,” as statisticians are asking them to do. Causal analysis is complementary to methods such as structural equation modeling, which provides the means for evaluating proposed hypotheses against data. In this paper, we summarize and illustrate a set of principles that can guide scientists in their quest to develop explanatory hypotheses for evaluation. The principles presented in this paper can close the communication gap between statisticians, who urge scientists to develop well-thought-out coherent models, and scientists, who would like practical advice for exactly how to do that.
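The graphical tools referred to in this abstract are causal diagrams (directed acyclic graphs). As a minimal sketch of the idea, with hypothetical variable names not taken from the paper, a causal hypothesis can be encoded and queried in plain Python:

```python
# Hypothetical causal hypothesis encoded as a parent -> children adjacency map.
# Variable names are illustrative only, not from the paper.
dag = {
    "rainfall": ["plant_growth", "herbivore_density"],
    "soil_quality": ["plant_growth"],
    "plant_growth": ["herbivore_density"],
    "herbivore_density": [],
}

def ancestors(graph, node):
    """All upstream causes of `node` under the hypothesized DAG."""
    parents = {c: set() for c in graph}
    for p, children in graph.items():
        for c in children:
            parents[c].add(p)
    seen, stack = set(), list(parents[node])
    while stack:
        a = stack.pop()
        if a not in seen:
            seen.add(a)
            stack.extend(parents[a])
    return sorted(seen)

print(ancestors(dag, "herbivore_density"))
# -> ['plant_growth', 'rainfall', 'soil_quality']
```

Encoding a hypothesis this explicitly is what makes it a "well-formed hypothesis": every claimed cause is either in the diagram or deliberately excluded.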
A multimodel random forest ensemble method for an improved assessment of Chinese terrestrial vegetation carbon density
Assessing the terrestrial vegetation carbon density (TVCD) is crucial for evaluating the national carbon balance. However, current national‐scale TVCD assessments show strong disparities, despite the sound estimation methods of their underlying models. Here, we attribute this contradiction to a flaw in the methods of using multimodel simulation results, which ignore the connections between results, leading to an overoptimistic evaluation of the multimodel ensemble mean (MMEM) method. Thus, using the state‐of‐the‐art multimodel random forest ensemble (MMRFE) method to integrate the results of 10 models, we reproduced Chinese TVCD data during 1982–2010. Compared with the nationally averaged TVCD field investigation data (27 ± 26 Mg C/ha), we found that five of the models overestimated TVCD by 7.4%–85.2%, and the remaining models underestimated it by 3.7%–77.8%. The MMEM method produced an overestimation of 2%, but the MMRFE method produced an underestimation of only 0.2%. Additionally, the summary Taylor diagrams of the TVCD at the national and ecosystem (forest, shrub, grass and crop ecosystems) scales all showed that the MMRFE TVCD produced the smallest standard deviations and root mean square deviations and the highest correlation coefficients. Furthermore, the MMRFE TVCDs were all significantly positively correlated with the normalized difference vegetation index (NDVI) and shared its increasing trend, whereas the MMEM TVCD varied in the opposite direction. This result implied that the spatiotemporal variation modes of the MMRFE TVCD were consistent with those of the NDVI. The results suggested that, compared with the traditional MMEM method, the MMRFE TVCD and its spatiotemporal variation modes were more similar to the real TVCD. In conclusion, the MMRFE method can effectively improve the accuracy of national‐scale TVCD estimation and reduce the uncertainty of large‐scale terrestrial vegetation carbon estimation.
Notably, we provide a new method that uses a machine learning approach to mine multimodel terrestrial carbon information to reduce the uncertainty in the estimation of terrestrial ecosystem carbon components.
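The contrast between MMEM and MMRFE can be sketched as follows. This is not the authors' implementation: it uses synthetic data and scikit-learn's RandomForestRegressor purely to illustrate the idea of learning a mapping from several model outputs to observations rather than averaging them.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in: 10 model simulations of carbon density at 500 sites,
# each biased and noisy around an unknown "true" field value (illustrative only).
true_tvcd = rng.gamma(shape=2.0, scale=13.0, size=500)      # ~26 Mg C/ha mean
biases = rng.uniform(0.5, 1.6, size=10)                      # per-model bias
sims = np.stack([b * true_tvcd + rng.normal(0, 3, 500) for b in biases], axis=1)

# Traditional multimodel ensemble mean (MMEM): average the 10 models,
# ignoring any structure in how the models relate to the observations.
mmem = sims.mean(axis=1)

# MMRFE idea: learn a mapping from the 10 model outputs to field observations,
# letting the random forest weight and combine models nonlinearly. Out-of-bag
# predictions give an honest estimate without a separate holdout set.
rf = RandomForestRegressor(n_estimators=200, random_state=0, oob_score=True)
rf.fit(sims, true_tvcd)
mmrfe = rf.oob_prediction_

print("MMEM RMSE:", np.sqrt(np.mean((mmem - true_tvcd) ** 2)))
print("MMRFE (OOB) RMSE:", np.sqrt(np.mean((mmrfe - true_tvcd) ** 2)))
```

The key design difference is that the ensemble mean treats all models as exchangeable, while the regression-based ensemble can exploit systematic biases in individual models.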
H∞ Control Design for Nonlinear Systems via Multimodel Approach
Nonlinear systems are integral to contemporary engineering applications, yet their regulation remains a significant challenge due to complex and highly dynamic behaviors. Robust control frameworks, particularly H∞ methods, provide systematic tools to ensure stability and performance in the presence of disturbances and modeling uncertainties. This study proposes an integrated design methodology that combines H∞ loop-shaping techniques with multimodel approaches to achieve resilient control of nonlinear systems. The control law is structured around the H∞ loop-shaping scheme, which shapes the open-loop dynamics to meet desired robustness and performance specifications. The multimodel strategy further enhances adaptability by accommodating diverse operating conditions and capturing variations in system behavior. Several control architectures are presented that unify H∞ loop-shaping with multimodel representations, offering a flexible framework for nonlinear system control. The design methodology also ensures desirable transient responses, thereby improving practical applicability for complex systems. A simulation study is conducted to validate the proposed approaches; the results confirm the effectiveness of multimodel H∞ control systems, underscoring their potential as a robust solution for complex nonlinear applications.
Multi-model fusion-based sensor fault tolerant second order sliding mode control application on a chemical reactor
The decoupled multimodel framework offers an effective alternative for the modeling, control, and diagnosis of nonlinear systems. Its structure relies on a set of interacting sub-models, with the degree of interaction varying across systems and directly influencing the control design. This paper proposes a novel fault-tolerant control strategy that combines second-order sliding mode control with a multimodel fusion approach to compensate for sensor faults. A second-order sliding mode multiobserver is developed to simultaneously estimate the system states and sensor faults. The proposed methodology is validated through a practical implementation on a chemical transesterification reactor, demonstrating its effectiveness and robustness.
Is your ad hoc model selection strategy affecting your multimodel inference?
Ecologists routinely fit complex models with multiple parameters of interest, where hundreds or more competing models are plausible. To limit the number of fitted models, ecologists often define a model selection strategy composed of a series of stages in which certain features of a model are compared while other features are held constant. Defining these multi‐stage strategies requires making a series of decisions, which may potentially impact inferences but have not been critically evaluated. We begin by identifying key features of strategies, introducing descriptive terms when they did not already exist in the literature. Strategies differ in how they define and order model-building stages. Sequential‐by‐sub‐model strategies focus on one sub‐model (parameter) at a time, with modeling of subsequent sub‐models dependent on the sub‐model structures selected in previous stages. Secondary candidate set strategies model sub‐models independently and combine the top set of models from each sub‐model for selection in a final stage. Build‐up approaches define stages across sub‐models and increase in complexity at each stage. Strategies also differ in how the top set of models is selected in each stage and whether they use null or more complex sub‐model structures for non‐target sub‐models. We tested the performance of different model selection strategies using four data sets and three model types. For each data set, we determined the "true" distribution of AIC weights by fitting all plausible models. Then, we calculated the number of models that would have been fitted and the portion of "true" AIC weight we recovered under different model selection strategies. Sequential‐by‐sub‐model strategies often performed poorly. Based on our results, we recommend using build‐up or secondary candidate set strategies, which were more reliable, and carrying all models within 5–10 AIC of the top model forward to subsequent stages.
The structure of non‐target sub‐models was less important. Multi‐stage approaches cannot compensate for a lack of critical thought in selecting covariates and building models to represent competing a priori hypotheses. However, even when competing hypotheses for different sub‐models are limited, thousands or more models may be possible, so strategies to explore candidate model space reliably and efficiently will be necessary.
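The carry-forward rule recommended above can be illustrated with a small sketch. Model names and likelihood values are hypothetical; AIC = −2 log L + 2k, and Akaike weights follow from the AIC differences within the candidate set:

```python
import math

# Hypothetical candidate models from one stage of a model selection strategy:
# (name, log-likelihood, number of parameters). Values are illustrative only.
candidates = [
    ("detection ~ 1",           -412.6, 2),
    ("detection ~ temp",        -405.1, 3),
    ("detection ~ temp + wind", -404.3, 4),
    ("detection ~ temp * wind", -404.0, 5),
]

def aic(loglik, k):
    return -2.0 * loglik + 2.0 * k

aics = {name: aic(ll, k) for name, ll, k in candidates}
best = min(aics.values())
delta = {name: a - best for name, a in aics.items()}

# Akaike weights: relative likelihood of each model, normalized over the set.
rel = {name: math.exp(-0.5 * d) for name, d in delta.items()}
total = sum(rel.values())
weights = {name: r / total for name, r in rel.items()}

# Carry forward every model within 5 AIC of the stage's best model,
# rather than only the single top model, before moving to the next stage.
carried = [name for name, d in delta.items() if d <= 5.0]
print(carried)
# -> ['detection ~ temp', 'detection ~ temp + wind', 'detection ~ temp * wind']
```

Carrying a set of near-top models forward is what lets later stages recover AIC weight that a winner-takes-all rule would discard.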
Model averaging and muddled multimodel inferences
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. 
The standardized estimates, or equivalently the t statistics on unstandardized estimates, also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.
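The partial standard deviations described above can be sketched with synthetic data. The form used here, s_j · sqrt(1/VIF_j) · sqrt((n−1)/(n−p)), follows the abstract's description of shrinking each predictor's SD according to its multicollinearity with the others; the data and magnitudes are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3

# Synthetic predictors with deliberate multicollinearity (x2 ~ 0.9 * x0).
X = rng.normal(size=(n, p))
X[:, 2] = 0.9 * X[:, 0] + 0.1 * rng.normal(size=n)

def vif(X, j):
    """Variance inflation factor of predictor j: 1 / (1 - R^2) from
    regressing x_j on the remaining predictors (with intercept)."""
    y = X[:, j]
    others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(others, y, rcond=None)
    resid = y - others @ beta
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

def partial_sd(X, j):
    """Partial standard deviation: ordinary SD shrunk by multicollinearity."""
    s = X[:, j].std(ddof=1)
    return s * np.sqrt(1.0 / vif(X, j)) * np.sqrt((n - 1) / (n - p))

for j in range(p):
    print(f"x{j}: VIF={vif(X, j):.2f}  sd={X[:, j].std(ddof=1):.2f}  "
          f"partial sd={partial_sd(X, j):.2f}")
```

For the collinear pair the partial SD collapses well below the ordinary SD, which is exactly the signal that the coefficients' scales are not commensurate across models.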
MMI: Multimodel inference or models with management implications?
We consider a variety of regression modeling strategies for analyzing observational data associated with typical wildlife studies, including all subsets and stepwise regression, a single full model, and Akaike's Information Criterion (AIC)-based multimodel inference. Although there are advantages and disadvantages to each approach, we suggest that there is no unique best way to analyze data. Further, we argue that, although multimodel inference can be useful in natural resource management, the importance of considering causality and accurately estimating effect sizes is greater than simply considering a variety of models. Determining causation is far more valuable than simply indicating how the response variable and explanatory variables covaried within a data set, especially when the data set did not arise from a controlled experiment. Understanding the causal mechanism will provide much better predictions beyond the range of data observed. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
A simple function for full‐subsets multiple regression in ecology with R
Full‐subsets information theoretic approaches are becoming an increasingly popular tool for exploring predictive power and variable importance where a wide range of candidate predictors are being considered. Here, we describe a simple function in the statistical programming language R that can be used to construct, fit, and compare a complete model set of possible ecological or environmental predictors, given a response variable of interest and a starting generalized additive (mixed) model fit. Main advantages include not requiring a complete model to be fit as the starting point for candidate model set construction (meaning that a greater number of predictors can potentially be explored than might be available through functions such as dredge); model sets that include interactions between factors and continuous nonlinear predictors; and automatic removal of models with correlated predictors (based on a user-defined criterion for exclusion). The function takes continuous predictors, which are fitted using smoothers via either gam, gamm (mgcv) or gamm4, as well as factor variables which are included on their own or as two‐level interaction terms within the gam smooth (via use of the “by” argument), or with themselves. The function allows any model to be constructed and used as a null model, and takes a range of arguments that allow control over the model set being constructed, including specifying cyclic and linear continuous predictors, specification of the smoothing algorithm used, and the maximum complexity allowed for smooth terms. The use of the function is demonstrated via case studies that highlight how appropriate model sets can be easily constructed and the broader utility of the approach for exploratory ecology. This study describes a function developed in R that can be used by ecologists to analyze their data using a full‐subsets information theoretic approach.
Main advances beyond existing packages include not requiring a complete model as the starting point for candidate model set construction, allowing a greater number of predictors to be explored; model sets that include interactions between factors and continuous nonlinear predictors; and automatic removal of models with correlated predictors.
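The paper's function is written in R; as a language-neutral sketch of the core idea (enumerating all predictor subsets and dropping any subset containing a correlated pair above a user-defined cutoff), the construction looks like this in Python. Predictor names and the cutoff value are hypothetical:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

# Synthetic predictors for illustration; "depth" and "slope" are made
# strongly correlated so the exclusion rule has something to remove.
data = {
    "depth": rng.normal(size=100),
    "temp": rng.normal(size=100),
    "relief": rng.normal(size=100),
}
data["slope"] = data["depth"] * 0.95 + rng.normal(scale=0.1, size=100)

def full_subsets(predictors, corr_cutoff=0.28):
    """All non-empty predictor subsets, dropping any whose members are
    pairwise correlated above the cutoff (illustrative threshold)."""
    names = sorted(predictors)
    keep = []
    for r in range(1, len(names) + 1):
        for combo in itertools.combinations(names, r):
            ok = all(
                abs(np.corrcoef(predictors[a], predictors[b])[0, 1]) < corr_cutoff
                for a, b in itertools.combinations(combo, 2)
            )
            if ok:
                keep.append(combo)
    return keep

models = full_subsets(data)
assert ("depth", "slope") not in models   # correlated pair excluded
print(len(models), "candidate models")
```

Each surviving tuple would then be fitted (as a GAM in the paper's setting) and the resulting model set compared by an information criterion.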
AIC model selection and multimodel inference in behavioral ecology: some background, observations, and comparisons
We briefly outline the information-theoretic (I-T) approaches to valid inference including a review of some simple methods for making formal inference from all the hypotheses in the model set (multimodel inference). The I-T approaches can replace the usual t tests and ANOVA tables that are so inferentially limited, but still commonly used. The I-T methods are easy to compute and understand and provide formal measures of the strength of evidence for both the null and alternative hypotheses, given the data. We give an example to highlight the importance of deriving alternative hypotheses and representing these as probability models. Fifteen technical issues are addressed to clarify various points that have appeared incorrectly in the recent literature. We offer several remarks regarding the future of empirical science and data analysis under an I-T framework.
Museum specimens reveal loss of pollen host plants as key factor driving wild bee decline in The Netherlands
Evidence for declining populations of both wild and managed bees has raised concern about a potential global pollination crisis. Strategies to mitigate bee loss generally aim to enhance floral resources. However, we do not really know whether loss of preferred floral resources is the key driver of bee decline because accurate assessment of host plant preferences is difficult, particularly for species that have become rare. Here we examine whether population trends of wild bees in The Netherlands can be explained by trends in host plants, and how this relates to other factors such as climate change. We determined host plant preference of bee species using pollen loads on specimens in entomological collections that were collected before the onset of their decline, and used atlas data to quantify population trends of bee species and their host plants. We show that decline of preferred host plant species was one of two main factors associated with bee decline. Bee body size, the other main factor, was negatively related to population trend, which, because larger bee species have larger pollen requirements than smaller species, may also point toward food limitation as a key factor driving wild bee loss. Diet breadth and other potential factors such as length of flight period or climate change sensitivity were not important in explaining twentieth century bee population trends. These results highlight the species-specific nature of wild bee decline and indicate that mitigation strategies will only be effective if they target the specific host plants of declining species.
Significance: Growing concern about bee declines and associated loss of pollination services has increased the urgency to identify the underlying causes. So far, the identification of the key drivers of decline of bee populations has largely been based on speculation.
We assessed the relative importance of a range of proposed factors responsible for wild bee decline and show that loss of preferred host plant species is one of the main factors associated with the decline of bee populations in The Netherlands. Interestingly, species foraging on crop plant families have stable or increasing populations. These results indicate that mitigation strategies for loss of wild bees will only be effective if they target the specific host plants of declining bee species.