1,557 result(s) for "Generalized additive models"
Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models
Recent work by Reiss and Ogden provides a theoretical basis for sometimes preferring restricted maximum likelihood (REML) to generalized cross-validation (GCV) for smoothing parameter selection in semiparametric regression. However, existing REML or marginal likelihood (ML) based methods for semiparametric generalized linear models (GLMs) use iterative REML or ML estimation of the smoothing parameters of working linear approximations to the GLM. Such indirect schemes need not converge and fail to do so in a non-negligible proportion of practical analyses. By contrast, very reliable prediction error criteria smoothing parameter selection methods are available, based on direct optimization of GCV, or related criteria, for the GLM itself. Since such methods directly optimize properly defined functions of the smoothing parameters, they have much more reliable convergence properties. The paper develops the first such method for REML or ML estimation of smoothing parameters. A Laplace approximation is used to obtain an approximate REML or ML for any GLM, which is suitable for efficient direct optimization. This REML or ML criterion requires that Newton-Raphson iteration, rather than Fisher scoring, be used for GLM fitting, and a computationally stable approach to this is proposed. The REML or ML criterion itself is optimized by a Newton method, with the derivatives required obtained by a mixture of implicit differentiation and direct methods. The method will cope with numerical rank deficiency in the fitted model and in fact provides a slight improvement in numerical robustness on the earlier method of Wood for prediction error criteria based smoothness selection. Simulation results suggest that the new REML and ML methods offer some improvement in mean-square error performance relative to GCV or Akaike's information criterion in most cases, without the small number of severe undersmoothing failures to which Akaike's information criterion and GCV are prone. 
This is achieved at the same computational cost as GCV or Akaike's information criterion. The new approach also eliminates the convergence failures of previous REML- or ML-based approaches for penalized GLMs and usually has lower computational cost than these alternatives. Example applications are presented in adaptive smoothing, scalar on function regression and generalized additive model selection.
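The core idea of optimizing a "whole model" criterion directly over the smoothing parameters, rather than iterating a working-model scheme, can be sketched in the Gaussian special case with GCV, one of the prediction error criteria mentioned above. The basis, knots, simulated data, and function names below are illustrative assumptions, not the paper's Laplace-approximate REML method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a smooth signal plus noise
n = 200
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

# Hypothetical penalized truncated-power spline basis; the ridge penalty S
# acts only on the knot terms
knots = np.linspace(0.05, 0.95, 20)
X = np.column_stack([np.ones(n), x, x**2, x**3] +
                    [np.clip(x - k, 0, None) ** 3 for k in knots])
S = np.diag([0.0] * 4 + [1.0] * len(knots))

def penalized_fit(lam):
    """Solve the penalized normal equations; return coefficients and EDF."""
    A = X.T @ X + lam * S
    beta = np.linalg.solve(A, X.T @ y)
    edf = np.trace(X @ np.linalg.solve(A, X.T))  # trace of influence matrix
    return beta, edf

def gcv(lam):
    """GCV(lambda) = n * RSS / (n - EDF)^2, a direct function of lambda."""
    beta, edf = penalized_fit(lam)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * rss / (n - edf) ** 2

# Direct optimization of the criterion over lambda itself (grid search here;
# the paper optimizes its REML/ML criterion by a Newton method)
lams = 10.0 ** np.linspace(-6, 4, 41)
best_lam = min(lams, key=gcv)
```

Because `gcv` is a properly defined function of the smoothing parameter, any standard optimizer applied to it must converge, which is the reliability argument the abstract makes against indirect working-model schemes.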
Fast stable direct fitting and smoothness selection for generalized additive models
Existing computationally efficient methods for penalized likelihood generalized additive model fitting employ iterative smoothness selection on working linear models (or working mixed models). Such schemes fail to converge for a non-negligible proportion of models, with failure being particularly frequent in the presence of concurvity. If smoothness selection is performed by optimizing 'whole model' criteria these problems disappear, but until now attempts to do this have employed finite-difference-based optimization schemes which are computationally inefficient and can suffer from false convergence. The paper develops the first computationally efficient method for direct generalized additive model smoothness selection. It is highly stable, but by careful structuring achieves a computational efficiency that leads, in simulations, to lower mean computation times than the schemes that are based on working model smoothness selection. The method also offers a reliable way of fitting generalized additive mixed models.
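At a fixed smoothing parameter, the inner fitting step for a non-Gaussian model is penalized iteratively reweighted least squares (P-IRLS). A minimal sketch for a Poisson model with log link follows; the polynomial basis with a ridge penalty is an illustrative stand-in for a proper spline basis, and the data and names are assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Poisson data with a smooth log-mean
n = 300
x = np.linspace(0, 1, n)
eta_true = 1.0 + np.sin(2 * np.pi * x)
y = rng.poisson(np.exp(eta_true))

# Simple basis: polynomial terms, ridge penalty on the higher-order ones
X = np.column_stack([x ** p for p in range(8)])
S = np.diag([0.0, 0.0] + [1.0] * 6)
lam = 1e-2

def pirls(X, y, S, lam, n_iter=50, tol=1e-8):
    """Penalized IRLS for a Poisson model with log link.

    Each step solves a penalized weighted least squares problem for the
    working response z = eta + (y - mu) / mu with weights w = mu.
    """
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean() + 1e-9)   # start from the intercept-only fit
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu          # working response
        XtW = X.T * mu                   # Poisson/log-link IRLS weights
        beta_new = np.linalg.solve(XtW @ X + lam * S, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

beta_hat = pirls(X, y, S, lam)
```

The instability the abstract refers to arises when the *smoothing parameters* are re-estimated on each such working model; fitting at fixed smoothing parameters, as here, is the stable inner loop.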
Generalized additive models for location, scale and shape for high dimensional data-a flexible approach based on boosting
Generalized additive models for location, scale and shape (GAMLSSs) are a popular semiparametric modelling approach that, in contrast with conventional generalized additive models, regress not only the expected mean but also every distribution parameter (e.g. location, scale and shape) to a set of covariates. Current fitting procedures for GAMLSSs are infeasible for high dimensional data set-ups and require variable selection based on (potentially problematic) information criteria. The present work describes a boosting algorithm for high dimensional GAMLSSs that was developed to overcome these limitations. Specifically, the new algorithm was designed to allow the simultaneous estimation of predictor effects and variable selection. The algorithm proposed was applied to Munich rental guide data, which are used by landlords and tenants as a reference for the average rent of a flat depending on its characteristics and spatial features. The net rent predictions that resulted from the high dimensional GAMLSSs were found to be highly competitive and covariate-specific prediction intervals showed a major improvement over classical generalized additive models.
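The boosting idea, simultaneous estimation and variable selection for every distribution parameter, can be sketched for a Normal(mu, sigma) model with componentwise linear base learners. This is a toy version of a cyclic algorithm, not the authors' implementation; the data are simulated, not the Munich rent data, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Heteroscedastic toy data: mean depends on x0, log-scale on x1,
# x2..x4 are pure noise covariates
n, p = 500, 5
X = rng.normal(size=(n, p))
mu_true = 2.0 * X[:, 0]
sigma_true = np.exp(0.5 + 0.7 * X[:, 1])
y = rng.normal(mu_true, sigma_true)

def best_linear_update(grad, X):
    """Componentwise base learner: regress the negative gradient on each
    single covariate; return the best-fitting column and its slope."""
    best = None
    for j in range(X.shape[1]):
        xj = X[:, j]
        b = (xj @ grad) / (xj @ xj)
        rss = np.sum((grad - b * xj) ** 2)
        if best is None or rss < best[2]:
            best = (j, b, rss)
    return best[0], best[1]

# Cyclic boosting: alternate small steps on the mean predictor eta_mu and
# the log-scale predictor eta_s, each fitted to its own negative gradient
eta_mu = np.full(n, y.mean())
eta_s = np.full(n, np.log(y.std()))
nu = 0.1                                   # step length
coef_mu = np.zeros(p)
coef_s = np.zeros(p)
for _ in range(300):
    sigma = np.exp(eta_s)
    g_mu = (y - eta_mu) / sigma**2          # -d(nll)/d(eta_mu)
    j, b = best_linear_update(g_mu, X)
    eta_mu += nu * (b * X[:, j] + g_mu.mean())  # small intercept correction
    coef_mu[j] += nu * b

    sigma = np.exp(eta_s)
    g_s = ((y - eta_mu) / sigma) ** 2 - 1.0  # -d(nll)/d(eta_s)
    j, b = best_linear_update(g_s, X)
    eta_s += nu * (b * X[:, j] + g_s.mean())
    coef_s[j] += nu * b
```

Variable selection falls out of the base-learner competition: covariates never chosen keep a zero coefficient, so stopping the loop early yields a sparse model for each distribution parameter.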
Using Remote-Sensing Environmental and Fishery Data to Map Potential Yellowfin Tuna Habitats in the Tropical Pacific Ocean
Changes in marine environments affect fishery resources at different spatial and temporal scales in marine ecosystems. Predictions from species distribution models are available to parameterize the environmental characteristics that influence the biology, range, and habitats of the species of interest. This study used generalized additive models (GAMs) fitted to two spatiotemporal fishery data sources, namely 1° spatial grid and observer record longline fishery data from 2006 to 2010, to investigate the relationship between catch rates of yellowfin tuna and oceanographic conditions by using multispectral satellite images and to develop a habitat preference model. The results revealed that the cumulative deviances obtained using the selected GAMs were 33.6% and 16.5% in the 1° spatial grid and observer record data, respectively. The environmental factors in the study were significant in the selected GAMs, and sea surface temperature explained the highest deviance. The results suggest that areas with a higher sea surface temperature, a sea surface height anomaly of approximately −10.0 to 20 cm, and a chlorophyll-a concentration of approximately 0.05–0.25 mg/m3 yield higher catch rates of yellowfin tuna. The 1° spatial grid data had higher cumulative deviances, and the predicted relative catch rates also exhibited a high correlation with observed catch rates. However, maps based on the observer record data provided higher-resolution views of the predicted relative catch rates at close range. Thus, these results suggest that catch-rate models fitted to the 1° spatial grid data that incorporate relevant environmental variables can be used to infer possible responses in the distribution of highly migratory species, while the observer record data can be used to detect subtle changes in the target fishing grounds.
Analyzing weight evolution in mice infected by Trypanosoma cruzi
In a longitudinal study, the trajectories of a subject's mean response do not always behave linearly, which calls for tools that account for the non-linearity of individual trajectories and allow random effects to be associated with each individual. Generalized additive mixed models (GAMMs) address this problem: in this class of models, subject-specific random effects can be assigned, and the linear term is rewritten as a sum of unknown smooth functions that are not parametrically specified, estimated here using the P-spline smoothing technique. This article introduces the methodology through a dataset from an experiment involving 57 Swiss mice infected by Trypanosoma cruzi, whose weights were monitored for 12 weeks. The analyses showed significant differences in the weight trajectories of individuals by treatment group, and the assumptions required to validate the model were met. We therefore conclude that the methodology is well suited to modelling longitudinal data: in addition to allowing both fixed and random effects, these models permit complex correlation structures for the residuals.
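The P-spline ingredient of the model above, a B-spline basis with a difference penalty on adjacent coefficients, is straightforward to construct. A minimal sketch on simulated growth-curve-like data follows; the basis size, penalty order, smoothing parameter, and data are illustrative choices, not the article's:

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(3)

# Simulated growth-curve-like data (illustrative, not the mouse dataset)
n = 100
t = np.linspace(0, 12, n)                        # weeks
f_true = 20.0 + 10.0 * (1.0 - np.exp(-t / 3.0))  # saturating growth
y = f_true + rng.normal(scale=1.0, size=n)

def bspline_basis(x, n_knots=15, degree=3):
    """Cubic B-spline design matrix on equally spaced knots."""
    inner = np.linspace(x.min(), x.max(), n_knots)
    knots = np.r_[[inner[0]] * degree, inner, [inner[-1]] * degree]
    n_basis = len(knots) - degree - 1
    B = np.empty((len(x), n_basis))
    for i in range(n_basis):
        c = np.zeros(n_basis)
        c[i] = 1.0
        B[:, i] = BSpline(knots, c, degree)(x)
    return B

B = bspline_basis(t)
D = np.diff(np.eye(B.shape[1]), n=2, axis=0)  # second-order differences
S = D.T @ D                                    # P-spline penalty matrix
lam = 1.0

# Penalized least squares fit of the smooth
beta = np.linalg.solve(B.T @ B + lam * S, B.T @ y)
fit = B @ beta
```

In the mixed-model formulation used by GAMMs, the penalized coefficients of such a basis are treated as random effects, which is what lets subject-specific terms and smooths be estimated in one framework.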
Meta-analysis of generalized additive models in neuroimaging studies
• Allows combination of nonlinear models without sharing data.
• Increases power and accuracy in neuroimaging studies.
• Illustrated in a case study from the Lifebrain consortium.
• Available in an open-source R package.
Analyzing data from multiple neuroimaging studies has great potential in terms of increasing statistical power, enabling detection of effects of smaller magnitude than would be possible when analyzing each study separately and also allowing systematic investigation of between-study differences. Restrictions due to privacy or proprietary data as well as more practical concerns can make it hard to share neuroimaging datasets, such that analyzing all data in a common location might be impractical or impossible. Meta-analytic methods provide a way to overcome this issue, by combining aggregated quantities like model parameters or risk ratios. Most meta-analytic tools focus on parametric statistical models, and methods for meta-analyzing semi-parametric models like generalized additive models have not been well developed. Parametric models are often not appropriate in neuroimaging, where for instance age-brain relationships may take forms that are difficult to accurately describe using such models. In this paper we introduce meta-GAM, a method for meta-analysis of generalized additive models which does not require individual participant data, and hence is suitable for increasing statistical power while upholding privacy and other regulatory concerns. We extend previous works by enabling the analysis of multiple model terms as well as multivariate smooth functions. In addition, we show how meta-analytic p-values can be computed for smooth terms. The proposed methods are shown to perform well in simulation experiments, and are demonstrated in a real data analysis on hippocampal volume and self-reported sleep quality data from the Lifebrain consortium. We argue that application of meta-GAM is especially beneficial in lifespan neuroscience and imaging genetics.
The methods are implemented in an accompanying R package metagam, which is also demonstrated.
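The aggregation step can be sketched without any package: each study contributes its fitted smooth and pointwise standard errors on a shared grid, and a fixed-effect inverse-variance weighting combines them. The per-study fits and standard errors below are simulated stand-ins for real GAM output, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Common evaluation grid (e.g. age) and a true underlying smooth
grid = np.linspace(20, 80, 61)
f_true = np.exp(-((grid - 50.0) / 15.0) ** 2)

# Aggregated per-study output: fitted smooth and pointwise SE on the grid
# (in a meta-GAM workflow these come from each site's own fit)
ses = [0.05, 0.10, 0.20]
fits = [f_true + rng.normal(scale=se, size=grid.size) for se in ses]

# Fixed-effect meta-analysis at each grid point: inverse-variance weights
w = np.array([1.0 / se**2 for se in ses])
F = np.vstack(fits)                              # (studies, grid)
f_meta = (w[:, None] * F).sum(axis=0) / w.sum()  # pooled smooth estimate
se_meta = np.sqrt(1.0 / w.sum())                 # pooled pointwise SE
```

Only the curves and their standard errors cross site boundaries, never individual participant data, which is the privacy argument made in the abstract.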
Particulate Matter Concentrations over South Korea: Impact of Meteorology and Other Pollutants
Air pollution is a serious challenge in South Korea and worldwide, and negatively impacts human health and mortality rates. To assess air quality and the spatiotemporal characteristics of atmospheric particulate matter (PM), PM concentrations were compared with meteorological conditions and the concentrations of other airborne pollutants over South Korea from 2015 to 2020, using different linear and non-linear models such as linear regression, generalized additive, and multivariable linear regression models. The results showed that meteorological conditions played a significant role in the formation, transportation, and deposition of air pollutants. PM2.5 levels peaked in January, while PM10 levels peaked in April. Both were at their lowest levels in July. Further, PM2.5 was the highest during winter, followed by spring, autumn, and summer, whereas PM10 was the highest in spring followed by winter, autumn, and summer. PM concentrations were negatively correlated with temperature, relative humidity, and precipitation. Wind speed had an inverse relationship with air quality; zonal and vertical wind components were positively and negatively correlated with PM, respectively. Furthermore, CO, black carbon, SO2, and SO4 had a positive relationship with PM. The impact of transboundary air pollution on PM concentration in South Korea was also elucidated using air mass trajectories.
Assimilating ecological theory with empiricism: Using constrained generalized additive models to enhance survival analyses
Integrating ecological theory with empirical methods is ubiquitous in ecology using hierarchical Bayesian models. However, there has been little development focused on integration of ecological theory into models for survival analysis. Survival is a fundamental process, linking individual fitness with population dynamics, but incorporating life history strategies to inform survival estimation can be challenging because mortality processes occur at multiple scales. We develop an approach to survival analysis, incorporating model constraints based on a species' life history strategy using functional analytical tools. Specifically, we structurally separate intrinsic patterns of mortality that arise from age‐specific processes (e.g. increasing survival during early life stages due to growth or maturation, versus senescence) from extrinsic mortality patterns that arise over different periods of time (e.g. seasonal temporal shifts). We use shape constrained generalized additive models (CGAMs) to obtain age‐specific hazard functions that incorporate theoretical information based on classical survivorship curves into the age component of the model and capture extrinsic factors in the time component. We compare the performance of our modelling approach to standard survival modelling tools that do not explicitly incorporate species life history strategy in the model structure, using metrics of predictive power, accuracy, efficiency and computation time. We applied these models to two case studies that reflect different functional shapes for the underlying survivorship curves, examining age‐period survival for white‐tailed deer Odocoileus virginianus in Wisconsin, USA and Columbian sharp‐tailed grouse Tympanuchus phasianellus columbianus in Colorado, USA. We found that models that included shape constraints for the age effects in the hazard curves using CGAMs outperformed models that did not include explicit functional constraints. 
We demonstrate a data‐driven and easily extendable approach to survival analysis by showing its utility to obtain hazard rates and survival probabilities, accounting for heterogeneity across ages and over time, for two very different species. We show how integration of ecological theory using constrained generalized additive models, with empirical statistical methods, enhances survival analyses.
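The shape-constraint idea can be sketched for a monotone (senescent) hazard: parameterize the logit hazard as an intercept plus cumulative nonnegative steps, so monotonicity holds by construction, and maximize the binomial likelihood under bound constraints. This is a toy discrete-age version under simulated data, not the authors' CGAM formulation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(5)

# Simulated age-specific mortality: hazard rises with age (senescence)
ages = np.arange(1, 16)
h_true = expit(-4.0 + 0.25 * ages)
n_risk = np.full(ages.size, 400)          # animals at risk at each age
deaths = rng.binomial(n_risk, h_true)

def nll(theta):
    """Binomial negative log-likelihood with a monotone-increasing
    logit-hazard: logit h = theta[0] + cumsum of nonnegative steps."""
    logit_h = theta[0] + np.concatenate([[0.0], np.cumsum(theta[1:])])
    h = np.clip(expit(logit_h), 1e-12, 1 - 1e-12)
    return -np.sum(deaths * np.log(h) + (n_risk - deaths) * np.log(1 - h))

k = ages.size
x0 = np.zeros(k)
x0[0] = -3.0
bounds = [(None, None)] + [(0.0, None)] * (k - 1)  # steps >= 0 enforce shape
res = minimize(nll, x0, method="L-BFGS-B", bounds=bounds)
h_hat = expit(res.x[0] + np.concatenate([[0.0], np.cumsum(res.x[1:])]))
```

The constrained estimate cannot produce a biologically implausible dip in late-life hazard, which mirrors the abstract's point about building life history theory into the age component of the model.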
Unveiling Tonal Contrasts in the Baltic Region: Exploring Stød in Livonian Spontaneous Speech
This paper presents findings for the tonal contrast and phonation differences between words with and without stød in Livonian spontaneous speech. Livonian differentiates between two contrastive phonological tones: the broken tone or stød and the plain tone. Stød is similar to the Danish stød in some respects and is said to be part of the tone systems of the phonologies of languages in the Baltic region. The findings show that the tonal contrast between words with and without stød tends to be neutralised in Livonian spontaneous speech, but there are individual differences between speakers and also differences between men and women. The most common non-modal phonation period categories in words with stød are creaky and tense. The results also indicate that stød disappears when the word has no prominence.
Decomposing trends in Swedish bird populations using generalized additive mixed models
1. Estimating trends of populations distributed across wide areas is important for conservation and management of animals. Surveys in the form of annually repeated counts across a number of sites are used in many monitoring programmes, and from these, nonlinear trends may be estimated using generalized additive models (GAM). 2. I use generalized additive mixed models (GAMM) to decompose population change into a long-term, smooth, trend component and a component for short-term fluctuations. The long-term population trend is modelled as a smooth function of time and short-term fluctuations as temporal random effects. The methods are applied to analyse trends in goldcrest and greenfinch populations in Sweden using data from the Swedish Breeding Bird Survey. I use simulations to investigate statistical properties of the model. 3. The model separates short-term fluctuations from longer term population change. Depending on the amount of noise in the population fluctuations, estimated long-term trends can differ markedly from estimates based on standard GAMs. For the goldcrest with wide among-year fluctuations, trends estimated with GAMs suggest that the population has in recent years recovered from a decline. When short-term fluctuations are filtered out, analyses suggest that the population has been in steady decline since the beginning of the survey. 4. Simulations suggest that trend estimation using the GAMM model reduces spurious detection of long-term population change found with estimates from a GAM model, but gives similar mean square errors. The simulations therefore suggest that the GAMM model, which decomposes population change, estimates uncertainty of long-term trends more accurately at little cost in detecting them. 5. Policy implications.
Filtering out short-term fluctuations in the estimation of long-term smooth trends using temporal random effects in a generalized additive mixed model provides more robust inference about the long-term trends compared to when such random effects are not used. This can have profound effects on management decisions, as illustrated in an example for goldcrest in the Swedish Breeding Bird Survey. In the example, if temporal random effects were not used, red listing would be highly influenced by the specific year in which it was done. When temporal random effects are used, red listing is stable over time. The methods are available in an R package, poptrend.
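The decomposition can be sketched as a single penalized least squares problem: a lightly penalized smooth basis carries the long-term trend, while heavily shrunk year-specific effects absorb the short-term fluctuations. The basis choice, penalties, and simulated data below are illustrative assumptions, not the implementation in the R package mentioned above:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated annual indices: smooth long-term decline plus wide
# year-to-year fluctuations (goldcrest-like toy data)
T = 30
t = np.arange(T)
trend_true = 2.0 - 0.03 * t                  # smooth log-abundance trend
year_eff = rng.normal(scale=0.25, size=T)    # short-term fluctuations
y = trend_true + year_eff + rng.normal(scale=0.05, size=T)

# Smooth component: low-order polynomial basis, ridge penalty on the
# curvature terms; short-term component: one random effect per year
ts = (t - t.mean()) / t.std()
B = np.column_stack([np.ones(T), ts, ts**2, ts**3])
X = np.hstack([B, np.eye(T)])
lam_smooth, lam_year = 1.0, 2.0
P = np.diag([0.0, 0.0, lam_smooth, lam_smooth] + [lam_year] * T)

# Joint penalized fit; the year-effect ridge keeps the model identifiable
beta = np.linalg.solve(X.T @ X + P, X.T @ y)
trend_hat = B @ beta[:4]    # long-term component
year_hat = beta[4:]         # shrunk year-specific deviations
```

Because the year effects soak up the among-year noise, the smooth component is not dragged up or down by a run of unusually good or bad years, which is the mechanism behind the stability of red-listing decisions described above.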