    Filters
    Reset
  • Discipline
      Discipline
      Clear All
      Discipline
  • Is Peer Reviewed
      Is Peer Reviewed
      Clear All
      Is Peer Reviewed
  • Item Type
      Item Type
      Clear All
      Item Type
  • Subject
      Subject
      Clear All
      Subject
  • Year
      Year
      Clear All
      From:
      -
      To:
  • More Filters
32 results for "KUHN, ESTELLE"
Grassland root demography responses to multiple climate change drivers depend on root morphology
Aims We examine how root system demography and morphology are affected by air warming and multiple, simultaneous climate change drivers. Methods Using minirhizotrons, we studied root growth, morphology, median longevity, risk of mortality and the standing root pool in the upper soil horizon of a temperate grassland ecosystem for 3 years. Grassland monoliths were subjected to four climate treatments in a replicated additive design: control (C); elevated temperature (T); combined T and summer precipitation reduction (TD); and combined TD and elevated atmospheric CO₂ (TDCO₂). Results Air warming (C vs T) and the combined climate change treatment (C vs TDCO₂) had a positive effect on root growth rate and the standing root pool. However, root responses to climate treatment varied with diameter size class. For fine roots (≤ 0.1 mm), new root length and mortality increased under warming but decreased in response to elevated CO₂ (TD vs TDCO₂); for coarse roots (> 0.2 mm), length and mortality increased under both elevated CO₂ and the combined climate change drivers. Conclusions Our data suggest that the standing root pool in our grassland system may increase under future climatic conditions. The contrasting behaviour of fine and coarse roots may reflect differential root activity of these extreme diameter classes under future climate.
Construction of Bayesian deformable models via a stochastic approximation algorithm: A convergence study
The problem of the definition and estimation of generative models based on deformable templates from raw data is of particular importance for modeling non-aligned data affected by various types of geometric variability. This is especially true in shape modeling in the computer vision community and in probabilistic atlas building in computational anatomy. A first coherent statistical framework modeling geometric variability as hidden variables was described in Allassonnière, Amit and Trouvé [J. R. Stat. Soc. Ser. B Stat. Methodol. 69 (2007) 3-29]. The present paper gives a theoretical proof of convergence of effective stochastic approximation expectation-maximization strategies for estimating such models and demonstrates the robustness of this approach to noise through numerical experiments in the context of handwritten digit modeling.
Optimization of multi-environment trials for genomic selection based on crop models
Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which uses genome-wide information to predict the breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy across environments is to combine ecophysiological and genetic modelling through crop growth models (CGMs) that incorporate genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method for optimizing the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined for this purpose and evaluated on simulated and real data, using wheat phenology as an example. The MET defined with OptiMET allowed the genetic parameters to be estimated with lower error, leading to higher QTL detection power and higher prediction accuracies. In terms of quality of the parameter estimates, a MET defined with OptiMET was on average more efficient than a random MET composed of twice as many environments. OptiMET is thus a valuable tool for determining optimal experimental conditions to best exploit METs and the phenotyping tools currently being developed.
Convergence of the Wang-Landau algorithm
We analyze the convergence properties of the Wang-Landau algorithm. This sampling method belongs to the general class of adaptive importance sampling strategies which use the free energy along a chosen reaction coordinate as a bias. Such algorithms are very helpful to enhance the sampling properties of Markov Chain Monte Carlo algorithms, when the dynamics is metastable. We prove the convergence of the Wang-Landau algorithm and an associated central limit theorem.
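The adaptive biasing idea behind the Wang-Landau algorithm can be illustrated on a toy system where the density of states is known exactly. The sketch below is a minimal, hedged example (the toy system and all parameter values are our own choices, not taken from the paper): it runs Wang-Landau on the number of heads among n coin flips, whose exact density of states is the binomial coefficient C(n, E).

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

def wang_landau(n=10, f_final=1e-4, flatness=0.8, sweep=10000):
    """Wang-Landau estimate of log g(E), where E = number of heads
    among n coin flips (true density of states: g(E) = C(n, E))."""
    state = rng.integers(0, 2, n)            # current configuration
    E = int(state.sum())                     # current "energy" level
    log_g = np.zeros(n + 1)                  # running log g(E) estimate
    hist = np.zeros(n + 1)                   # visit histogram
    f = 1.0                                  # modification factor (log scale)
    while f > f_final:
        for _ in range(sweep):
            i = rng.integers(n)              # propose flipping one coin
            E_new = E + (1 - 2 * int(state[i]))
            # accept with prob min(1, g(E)/g(E_new)): flattens the walk in E
            if np.log(rng.random()) < log_g[E] - log_g[E_new]:
                state[i] ^= 1
                E = E_new
            log_g[E] += f                    # bias against revisiting E
            hist[E] += 1
        if hist.min() > flatness * hist.mean():
            f /= 2.0                         # histogram flat enough: refine f
            hist[:] = 0.0
    return log_g

log_g = wang_landau()
g = np.exp(log_g - log_g.max())              # remove the arbitrary offset
g *= 2 ** 10 / g.sum()                       # normalize so sum g(E) = 2**n
```

The penalty `log_g[E] += f` is exactly the free-energy-style bias discussed in the abstract: it pushes the walker away from already well-sampled levels, which is what restores mobility when the unbiased dynamics would be metastable.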
Recovering Power in Association Mapping Panels with Variable Levels of Linkage Disequilibrium
Association mapping has permitted the discovery of major QTL in many species. It can be applied to existing populations; as a consequence, it is generally necessary to account for structure and relatedness among individuals in the statistical model to control false positives. We analytically studied power in association studies by computing the noncentrality parameter of the tests and its relationship with parameters characterizing diversity (genetic differentiation between groups and allele frequencies) and kinship between individuals. Investigation of three maize diversity panels genotyped with the 50k SNP array highlighted contrasting average power among panels and revealed power gaps of classical mixed models in regions of high linkage disequilibrium (LD). These gaps could be related to the fact that the markers are used both for testing association and for estimating relatedness. We therefore considered two alternative approaches to estimating the kinship matrix in order to recover power in regions of high LD. In the first, we estimated the kinship using all markers not located on the same chromosome as the tested SNP. In the second, the correlation between markers was taken into account to weight the contribution of each marker to the kinship. Simulations revealed that both approaches controlled false positives efficiently and were more powerful than classical models.
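The first alternative described above, estimating kinship from all markers except those on the tested SNP's chromosome, can be sketched in a few lines. This is a hedged illustration of the leave-one-chromosome-out idea only: the VanRaden-style relationship matrix, the function names and the toy panel below are our own illustrative choices, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

def vanraden_kinship(X):
    """VanRaden-style genomic relationship matrix from an (n x m)
    matrix of 0/1/2 allele dosages (illustrative choice of estimator)."""
    p = X.mean(axis=0) / 2.0                  # marker allele frequencies
    Z = X - 2.0 * p                           # center each marker by 2p
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

def loco_kinship(X, chrom, test_chrom):
    """Kinship from all markers NOT on the tested SNP's chromosome, so
    the tested region does not contribute to the relatedness correction."""
    return vanraden_kinship(X[:, chrom != test_chrom])

# toy panel: 50 individuals, 200 SNPs spread over 5 chromosomes
X = rng.integers(0, 3, size=(50, 200)).astype(float)
chrom = np.repeat(np.arange(5), 40)
K_all = vanraden_kinship(X)                   # kinship from all markers
K_loco = loco_kinship(X, chrom, test_chrom=0) # kinship excluding chrom 0
```

When scanning the genome, `K_loco` would be recomputed once per chromosome and reused for every SNP on it, so the extra cost over a single global kinship is modest.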
Assessing the correlation structure in cow udder quarter infection times through extensions of the correlated frailty model
The association between infection times of the udder quarters of a dairy cow is essential information for the preventive control of udder infections in a dairy herd. Extensions of the correlated frailty model are proposed to investigate and compare different correlation structures among the four udder quarter infection times clustered within a cow. Such complex frailty models can be fitted with the SAEM-MCMC algorithm. It is demonstrated that substantial correlation exists between the udder quarter infection times, with the correlation within the front and within the rear udder quarters being larger than that between front and rear quarters. This signifies that an infected udder quarter is a risk factor for the other udder quarters, especially those in the same region, i.e., front or rear.
Estimating Fisher Information Matrix in Latent Variable Models based on the Score Function
The Fisher information matrix (FIM) is a key quantity in statistics, required for example to evaluate the asymptotic precision of parameter estimates, to compute test statistics or asymptotic distributions in statistical testing, and to evaluate post-model-selection inference results or optimality criteria in experimental design. However, its exact computation is often nontrivial. In particular, in many latent variable models it is intricate due to the presence of unobserved variables, so the observed FIM is usually used in this context to estimate the FIM. Several methods have been proposed to approximate the observed FIM when it cannot be evaluated analytically. Among the most frequently used approaches are Monte Carlo methods and iterative algorithms derived from the missing information principle. All these methods require computing second derivatives of the complete-data log-likelihood, which is a computational disadvantage. In this paper, we present a new approach to estimating the FIM in latent variable models. The advantage of our method is that only the first derivatives of the log-likelihood are needed, contrary to other approaches based on the observed FIM. Indeed, we consider the empirical estimate of the covariance matrix of the score. We prove that this estimate of the FIM is unbiased, consistent and asymptotically Gaussian. Moreover, we highlight that neither estimate is uniformly better than the other in terms of asymptotic covariance matrix. When the proposed estimate cannot be evaluated analytically, we present a stochastic approximation estimation algorithm to compute it; this algorithm provides the FIM estimate as a by-product of the parameter estimates. We emphasize that the proposed algorithm only requires computing the first derivatives of the complete-data log-likelihood with respect to the parameters. We prove that the estimation algorithm is consistent and asymptotically Gaussian as the number of iterations goes to infinity. We evaluate the finite-sample properties of the proposed estimate and of the observed FIM through simulation studies in linear mixed-effects models and mixture models. We also investigate the convergence properties of the estimation algorithm in nonlinear mixed-effects models, and we compare the performance of the proposed algorithm to that of other existing methods.
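The core idea, estimating the FIM as the empirical covariance of the score, needs only first derivatives. As a hedged illustration we apply it to a fully observed Gaussian model where the score is explicit and the true FIM is known; in the latent-variable setting of the paper the score itself would have to be approximated, which is what the stochastic approximation algorithm handles.

```python
import numpy as np

rng = np.random.default_rng(2)

def score_gaussian(x, mu, sigma2):
    """Per-observation score d log-lik / d(mu, sigma2) of N(mu, sigma2)."""
    s_mu = (x - mu) / sigma2
    s_s2 = -0.5 / sigma2 + 0.5 * (x - mu) ** 2 / sigma2 ** 2
    return np.stack([s_mu, s_s2], axis=1)     # shape (n, 2)

def fim_from_score(x, mu, sigma2):
    """FIM estimate as the empirical covariance matrix of the score:
    the first-derivative-only estimator discussed in the abstract."""
    S = score_gaussian(x, mu, sigma2)
    S = S - S.mean(axis=0)                    # center the scores
    return S.T @ S / len(x)

x = rng.normal(1.0, 2.0, size=200_000)        # mu = 1, sigma2 = 4
I_hat = fim_from_score(x, mu=1.0, sigma2=4.0)
# theoretical per-observation FIM: diag(1/sigma2, 1/(2*sigma2**2))
```

No second derivative of the log-likelihood appears anywhere, which is precisely the computational advantage claimed over observed-FIM approaches.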
Estimation and variable selection in high dimension in nonlinear mixed-effects models
We consider nonlinear mixed-effects models that include high-dimensional covariates to model the variability of individual parameters. The objective is to identify relevant covariates among a large set under a sparsity assumption and to estimate the model parameters. To handle the high-dimensional setting, we consider a regularized estimator, namely the maximum likelihood estimator penalized with the l1-penalty. We rely on the eBIC model choice criterion to select an optimal reduced model, and then estimate the parameters by maximizing the likelihood of the reduced model. In practice, we compute the l1-penalized maximum likelihood estimator through a weighted proximal stochastic gradient descent algorithm with an adaptive learning rate. This choice allows us to consider very general models, in particular models that do not belong to the curved exponential family. We first demonstrate the good convergence properties of this optimization algorithm in a simple linear toy model through a simulation study. We then compare the performance of the proposed methodology with that of the glmmLasso procedure in a linear mixed-effects model, and we also illustrate its performance in a nonlinear mixed-effects logistic growth model, both through simulation. We finally highlight the benefit of the proposed integrated single-step procedure relative to two other two-step approaches to variable selection.
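The proximal step at the heart of such an algorithm is soft-thresholding, the proximal operator of the l1-penalty. As a hedged sketch, the code below uses plain deterministic proximal gradient descent (ISTA) on an l1-penalized least-squares toy problem; the paper's actual algorithm is a weighted, stochastic variant with an adaptive learning rate applied to mixed-effects likelihoods, and all data and parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_lasso(X, y, lam, n_iter=1000):
    """Deterministic proximal gradient (ISTA) for l1-penalized least
    squares: a gradient step on the smooth loss, then the l1 prox."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz const of grad
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n       # grad of (1/2n)||y - X b||^2
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# sparse ground truth: only 3 of 20 covariates are relevant
X = rng.normal(size=(200, 20))
beta_true = np.zeros(20)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.normal(size=200)
beta_hat = proximal_gradient_lasso(X, y, lam=0.05)
```

Because the prox sets small coordinates exactly to zero, the irrelevant covariates drop out of the fitted model, which is what makes the eBIC comparison over reduced models workable.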
Estimation and variable selection in high dimension in a causal joint model of survival times and longitudinal outcomes with random effects
We consider a joint survival and mixed-effects model to explain the survival time from longitudinal data and high-dimensional covariates in a population. The longitudinal data are modeled using a nonlinear mixed-effects model to account for the inter-individual variability in the population. The corresponding regression function serves as a link function incorporated into the survival model; in this way, the longitudinal data are related to the survival time. We consider a Cox model that takes into account both the high-dimensional covariates and the link function. There are two main objectives: first, to identify the relevant covariates that contribute to explaining survival time, and second, to estimate all unknown parameters of the joint model. For the first objective, we consider the estimate defined by maximizing the marginal log-likelihood regularized with an l1-penalty term. To tackle the optimization problem, we implement an adaptive stochastic gradient algorithm to handle the latent variables of the nonlinear mixed-effects model, combined with a proximal operator to manage the non-differentiability of the penalty. We rely on an eBIC model choice criterion to select an optimal value of the regularization parameter. Once the relevant covariates are selected, we re-estimate the parameters in the reduced model by maximizing the likelihood using an adaptive stochastic gradient descent. We provide simulations that showcase the performance of the proposed variable selection and parameter estimation method in the joint model, and we investigate the effect of censoring and of correlation between the individual parameters in the mixed model.
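The survival part of such a joint model rests on the Cox partial likelihood, with the longitudinal link function entering as an additional covariate. The sketch below is a minimal illustration under simplifying assumptions (a single time-fixed covariate, no censoring, no ties); in the joint model the covariate vector would also carry the nonlinear mixed-effects prediction for each individual.

```python
import numpy as np

rng = np.random.default_rng(5)

def cox_partial_loglik(beta, times, events, covariates):
    """Cox partial log-likelihood (no ties assumed). In the joint model,
    `covariates` would also include the mixed-model link term."""
    eta = covariates @ beta
    order = np.argsort(-times)                 # sort by descending time
    eta_ord, events_ord = eta[order], events[order]
    # cumulative log-sum-exp = log sum_{j in risk set} exp(eta_j)
    log_risk = np.logaddexp.accumulate(eta_ord)
    return np.sum((eta_ord - log_risk)[events_ord == 1])

# toy data: one covariate with true log-hazard ratio 1, no censoring
n = 500
z = rng.normal(size=(n, 1))
times = rng.exponential(scale=np.exp(-z[:, 0]))   # hazard = exp(1.0 * z)
events = np.ones(n, dtype=int)
ll_true = cox_partial_loglik(np.array([1.0]), times, events, z)
ll_zero = cox_partial_loglik(np.array([0.0]), times, events, z)
```

Censoring would enter simply through the `events` indicator: censored individuals contribute to risk sets but add no event term of their own.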
Estimation of ratios of normalizing constants using stochastic approximation: the SARIS algorithm
Computing ratios of normalizing constants plays an important role in statistical modeling. Two important examples are hypothesis testing in latent variable models and model comparison in Bayesian statistics: in both, the likelihood ratio and the Bayes factor are defined as ratios of the normalizing constants of posterior distributions. We propose in this article a novel methodology that estimates this ratio using the stochastic approximation principle. Our estimator is consistent and asymptotically Gaussian, its asymptotic variance is smaller than that of the popular optimal bridge sampling estimator, and it is much more robust to poor overlap between the two unnormalized distributions considered. Thanks to its online definition, our procedure can be integrated into an estimation process in latent variable models, thereby reducing the computational effort. The performance of the estimator is illustrated through a simulation study and compared to two other estimators: the ratio importance sampling and optimal bridge sampling estimators.
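For concreteness, here is a hedged sketch not of SARIS itself but of the optimal bridge sampling baseline it is compared against, on a toy pair of Gaussian kernels whose normalizing constants (and hence the true ratio) are known; the iterative update is the standard Meng-Wong fixed point, and the densities and sample sizes are our own choices.

```python
import numpy as np

rng = np.random.default_rng(4)

def q1(x):
    """Unnormalized N(0, 1) kernel; Z1 = sqrt(2*pi)."""
    return np.exp(-0.5 * x ** 2)

def q2(x):
    """Unnormalized N(0, 4) kernel; Z2 = sqrt(8*pi), so Z1/Z2 = 0.5."""
    return np.exp(-x ** 2 / 8.0)

def bridge_sampling_ratio(x1, x2, n_iter=50):
    """Iterative 'optimal' bridge sampling estimate of Z1/Z2 from
    samples x1 ~ p1 and x2 ~ p2 (Meng-Wong fixed-point iteration)."""
    n1, n2 = len(x1), len(x2)
    s1, s2 = n1 / (n1 + n2), n2 / (n1 + n2)
    r = 1.0                                   # initial guess for the ratio
    for _ in range(n_iter):
        num = np.mean(q1(x2) / (s1 * q1(x2) + s2 * r * q2(x2)))
        den = np.mean(q2(x1) / (s1 * q1(x1) + s2 * r * q2(x1)))
        r = num / den                         # fixed-point update of r
    return r

x1 = rng.normal(0.0, 1.0, 50_000)             # samples from p1
x2 = rng.normal(0.0, 2.0, 50_000)             # samples from p2
r_hat = bridge_sampling_ratio(x1, x2)
```

This batch estimator needs all samples up front and good overlap between the two distributions; the abstract's point is that an online stochastic approximation scheme can relax both constraints.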