8 result(s) for "Johannesson, Gardar"
Fixed rank kriging for very large spatial data sets
Spatial statistics for very large spatial data sets is challenging. The size of the data set, n, causes problems in computing optimal spatial predictors such as kriging, since its computational cost is of order n^3. In addition, a large data set is often defined on a large spatial domain, so the spatial process of interest typically exhibits non-stationary behaviour over that domain. A flexible family of non-stationary covariance functions is defined by using a set of basis functions that is fixed in number, which leads to a spatial prediction method that we call fixed rank kriging. Specifically, fixed rank kriging is kriging within this class of non-stationary covariance functions. It relies on computational simplifications when n is very large, for obtaining the spatial best linear unbiased predictor and its mean-squared prediction error for a hidden spatial process. A method based on minimizing a weighted Frobenius norm yields best estimators of the covariance function parameters, which are then substituted into the fixed rank kriging equations. The new methodology is applied to a very large data set of total column ozone data, observed over the entire globe, where n is of the order of hundreds of thousands.
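The computational saving the abstract refers to can be illustrated with the Sherman-Morrison-Woodbury identity: with r fixed basis functions, solving against the n x n covariance S K S' + sigma^2 I costs O(n r^2) rather than O(n^3). The sketch below uses synthetic 1-D data and Gaussian basis functions; all names and parameter values are hypothetical illustrations, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 2000, 10                      # n observations, r fixed-rank basis functions
sigma2 = 0.25                        # measurement-error variance (toy value)

# Gaussian basis functions on [0, 1] (one possible fixed basis)
s = np.sort(rng.uniform(0, 1, n))    # observation locations
centers = np.linspace(0, 1, r)
S = np.exp(-0.5 * ((s[:, None] - centers[None, :]) / 0.1) ** 2)   # n x r

K = np.eye(r)                        # r x r basis-coefficient covariance (toy)
z = rng.normal(0, 1, n)              # observations

# Sherman-Morrison-Woodbury: apply (S K S' + sigma2 I)^{-1} to z using only
# r x r solves, never forming or inverting the n x n covariance directly
A = np.linalg.inv(K) + S.T @ S / sigma2            # r x r
w = z / sigma2 - S @ np.linalg.solve(A, S.T @ z) / sigma2**2

# Cross-check against the direct O(n^3) computation (feasible at this toy n)
C = S @ K @ S.T + sigma2 * np.eye(n)
w_direct = np.linalg.solve(C, z)
```

Since the identity is exact, `w` and `w_direct` agree to numerical precision; the low-rank path is what remains tractable when n reaches the hundreds of thousands mentioned in the abstract.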
Parametric sensitivity analysis of precipitation at global and local scales in the Community Atmosphere Model CAM5
We investigate the sensitivity of precipitation characteristics (mean, extreme, and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both the Latin hypercube and Quasi‐Monte Carlo sampling approaches to effectively explore the high‐dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other set consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. In the cloud ensemble, six parameters having the greatest influences on the global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to fewer parameters. Precipitation does not always respond monotonically to parameter change. The influence of individual parameters does not depend on the sampling approaches or concomitant parameters selected. Generally, the Generalized Linear Model is able to explain more of the parametric sensitivity of global precipitation than local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75–90% in total). The total variance shows a significant seasonal variability in midlatitude continental regions, but is very small in tropical continental regions.
Key Points:
  • Parametric sensitivity of precipitation and its dependence are quantified
  • Precipitation does not respond linearly and monotonically to parameter change
  • Much of the sensitivity is from independent parameters instead of their interactions
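The Latin hypercube design used to build the ensembles stratifies each parameter's range so that every stratum is sampled exactly once. A minimal sampler, as a hypothetical illustration rather than the study's code:

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng=None):
    """Latin hypercube sample on the unit hypercube: each parameter's
    range is split into n_samples equal strata, and every stratum is
    hit exactly once per parameter."""
    rng = np.random.default_rng(rng)
    # one uniform point inside each stratum, per parameter
    u = (rng.uniform(size=(n_samples, n_params))
         + np.arange(n_samples)[:, None]) / n_samples
    # independently shuffle the strata of each parameter (column)
    for j in range(n_params):
        rng.shuffle(u[:, j])
    return u

# e.g. the cloud ensemble: 1100 runs over 22 perturbed parameters
X = latin_hypercube(1100, 22, rng=0)
# stratum index of each sample, per parameter
counts = np.floor(X * 1100).astype(int)
```

Each column of `counts` contains every index 0..1099 exactly once, which is the stratification guarantee that makes Latin hypercube designs efficient for high-dimensional parameter screening.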
Bayesian Inference and Markov Chain Monte Carlo Sampling to Reconstruct a Contaminant Source on a Continental Scale
A methodology combining Bayesian inference with Markov chain Monte Carlo (MCMC) sampling is applied to a real accidental radioactive release that occurred on a continental scale at the end of May 1998 near Algeciras, Spain. The source parameters (i.e., source location and strength) are reconstructed from a limited set of measurements of the release. Annealing and adaptive procedures are implemented to ensure a robust and effective parameter-space exploration. The simulation setup is similar to an emergency response scenario, with the simplifying assumptions that the source geometry and release time are known. The Bayesian stochastic algorithm provides likely source locations within 100 km from the true source, after exploring a domain covering an area of approximately 1800 km × 3600 km. The source strength is reconstructed with a distribution of values of the same order of magnitude as the upper end of the range reported by the Spanish Nuclear Security Agency. By running the Bayesian MCMC algorithm on a large parallel cluster the inversion results could be obtained in a few hours as required for emergency response to continental-scale releases. With additional testing and refinement of the methodology (e.g., tests that also include the source geometry and release time among the unknown source parameters), as well as with the continuous and rapid growth of computational power, the approach can potentially be used for real-world emergency response in the near future.
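The Bayesian MCMC source reconstruction can be caricatured in a few lines: a toy forward model stands in for the atmospheric dispersion code, and random-walk Metropolis explores the two source parameters (location and strength). Everything below — the forward model, priors, step size, and run length — is a hypothetical illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the dispersion forward model: predicted concentration
# at each sensor for a source at location mu with strength q
sensors = np.linspace(-5, 5, 20)
def forward(mu, q):
    return q * np.exp(-0.5 * (sensors - mu) ** 2)

true_mu, true_q, noise = 1.5, 2.0, 0.1
data = forward(true_mu, true_q) + rng.normal(0, noise, sensors.size)

def log_posterior(mu, q):
    if not (-5 < mu < 5 and 0 < q < 10):       # flat priors on a bounded box
        return -np.inf
    resid = data - forward(mu, q)
    return -0.5 * np.sum(resid ** 2) / noise ** 2

# Random-walk Metropolis over (location, strength)
chain = np.empty((20000, 2))
theta = np.array([0.0, 1.0])                    # deliberately wrong start
lp = log_posterior(*theta)
for i in range(chain.shape[0]):
    prop = theta + rng.normal(0, 0.05, 2)
    lp_prop = log_posterior(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain[i] = theta

mu_hat, q_hat = chain[5000:].mean(axis=0)       # posterior mean after burn-in
```

The posterior samples concentrate near the true source; the paper's annealing and adaptive proposals address exactly the convergence and exploration issues this naive random walk would face on a real 1800 km × 3600 km domain.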
Sensitivity analysis and emulation for functional data using Bayesian adaptive splines
When a computer code is used to simulate a complex system, one of the fundamental tasks is to assess the sensitivity of the simulator to the different input parameters. In the case of computationally expensive simulators, this is often accomplished via a surrogate statistical model, a statistical output emulator. An effective emulator is one that provides good approximations to the computer code output for wide ranges of input values. In addition, an emulator should be able to handle large dimensional simulation output for a relevant number of inputs; it should flexibly capture heterogeneities in the variability of the response surface; it should be fast to evaluate for arbitrary combinations of input parameters, and it should provide an accurate quantification of the emulation uncertainty. In this paper we discuss the Bayesian approach to multivariate adaptive regression splines (BMARS) as an emulator for a computer model that outputs curves. We introduce modifications to traditional BMARS approaches that allow for fitting large amounts of data and allow for more efficient MCMC sampling. We emphasize the ease with which sensitivity analysis can be performed in this situation. We present a sensitivity analysis of a computer model of the deformation of a protective plate used in pressure-driven experiments. Our example serves as an illustration of the ability of BMARS emulators to fulfill all the necessities of computability, flexibility and reliable calculation on relevant measures of sensitivity.
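A non-Bayesian caricature of the MARS building block may help: truncated-linear "hinge" basis functions placed at knots, fit here by ordinary least squares. BMARS instead treats the number and placement of such terms as unknown and samples them via MCMC; the simulator, knots, and noise level below are toy choices, not the paper's application.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "simulator" output: a smooth curve in one input, plus noise
x = rng.uniform(0, 1, 200)
y = np.sin(4 * x) + 0.05 * rng.normal(size=x.size)

# MARS-style hinge (truncated linear) basis at fixed knots
knots = np.linspace(0.1, 0.9, 8)
B = np.column_stack(
    [np.ones_like(x)]
    + [np.maximum(x - t, 0.0) for t in knots]    # (x - t)_+ hinges
    + [np.maximum(t - x, 0.0) for t in knots]    # (t - x)_+ hinges
)
coef, *_ = np.linalg.lstsq(B, y, rcond=None)     # least-squares fit
fitted = B @ coef
rmse = np.sqrt(np.mean((fitted - y) ** 2))       # near the noise level
```

The piecewise-linear fit tracks the curve to roughly the noise floor; in the Bayesian version, posterior samples over such basis sets also yield the emulation uncertainty and sensitivity measures the abstract emphasizes.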
Dynamic multi-resolution spatial models
Data from remote-sensing platforms play an important role in monitoring environmental processes, such as the distribution of stratospheric ozone. Remotely sensed data are typically spatial, temporal, and massive. Existing prediction methods such as kriging are computationally infeasible. The multi-resolution spatial model (MRSM) captures nonstationary spatial dependence and produces fast optimal estimates using a change-of-resolution Kalman filter. However, past data can provide valuable information about the current status of the process being investigated. In this article, we incorporate the temporal dependence into the process by developing a dynamic MRSM. An application of the dynamic MRSM to a month of daily total column ozone data is presented, and on a given day the results of posterior inference are compared to those for the spatial-only MRSM. It is apparent that there are advantages to using the dynamic MRSM in regions where data are missing, such as when a whole swath of satellite data is missing.
Multiresolution statistical modeling in space and time with application to remote sensing of the environment
Analyzing massive spatial and space-time environmental datasets can be demanding. A central example used in this dissertation is the analysis of Total Column Ozone (TCO) data remotely sensed from a satellite. There are a number of issues that need to be resolved. These include computational issues, the challenge of modeling and predicting nonstationary spatial processes, and developing realistic temporal dynamics for space-time processes. In this dissertation, we look at the problem of (1) representing and fitting a large-scale spatial trend surface to massive, global datasets; (2) variance-covariance modeling and estimation for multi-resolution spatial models (MRSMs); and (3) developing a dynamic MRSM with special emphasis on the development of the temporal dynamics. One can argue that the large-scale spatial features of massive, fine-resolution spatial data can be obtained from the coarser-resolution aspects of the data. Consequently, we propose a sequential-aggregation procedure that yields more manageable data at coarser resolutions and use these for spatial trend surface fitting. Assuming that the trend surface belongs to the class of linear combinations of smooth basis functions, we investigate a new trend-surface-fitting approach based on penalized weighted-least-squares regression, where the penalty term is data-adaptive. Extensive comparisons are made to standard fitting procedures based on a day's worth of TCO data. Multi-resolution spatial models (MRSMs) have been shown to be successful at modeling massive spatial datasets. The MRSM models the spatial dependence indirectly through a coarse-to-fine-resolution process model, where it is necessary to specify the variance parameters. We propose a spatially smooth model for the variance parameters, outline computationally fast, resolution-specific-likelihood-based methods for parameter estimation, and apply the statistical methodology to a day's worth of TCO data. The MRSM is a spatial-only model. 
An extension of the MRSM is given that incorporates temporal dynamics at the coarsest spatial resolution of interest, yielding a dynamic (space-time) MRSM. A physics-based flow model is proposed for the coarse-resolution dynamics, which maintains the computational advantage of the MRSM. An application to a month's worth of TCO data is given.
Spatial-temporal nonlinear filtering based on hierarchical statistical models
A hierarchical statistical model is made up generically of a data model, a process model, and occasionally a prior model for all the unknown parameters. The process model, known as the state equations in the filtering literature, is where most of the scientist's physical/chemical/biological knowledge about the problem is used. In the case of a dynamically changing configuration of objects moving through a spatial domain of interest, that knowledge is summarized through equations of motion with random perturbations. In this paper, our interest is in dynamically filtering noisy observations on these objects, where the state equations are nonlinear. Two recent methods of filtering, the Unscented Particle filter (UPF) and the Unscented Kalman filter, are presented and compared to the better known Extended Kalman filter. Other sources of nonlinearity arise when we wish to estimate nonlinear functions of the objects' positions; it is here where the UPF shows its superiority, since optimal estimates and associated variances are straightforward to obtain. The longer computing time needed for the UPF is often not a big issue, with the ever faster processors that are available. This paper is a review of spatial-temporal nonlinear filtering, and we illustrate it in a Command and Control setting where the objects are highly mobile weapons, and the nonlinear function of object locations is a two-dimensional surface known as the danger-potential field.
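The unscented transform underlying both the UKF and UPF can be sketched directly: propagate 2n+1 deterministically chosen sigma points through the nonlinearity and re-estimate the mean and covariance from them, instead of linearizing as the Extended Kalman filter does. The polar-to-Cartesian example and tuning constant below are illustrative choices, not from the paper.

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear f using
    2n+1 sigma points (the core step of the UKF/UPF)."""
    n = mean.size
    L = np.linalg.cholesky((n + kappa) * cov)        # scaled matrix square root
    sigma = np.vstack([mean, mean + L.T, mean - L.T])  # 2n+1 sigma points
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))  # sigma-point weights
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])              # push points through f
    y_mean = w @ y
    d = y - y_mean
    y_cov = (w[:, None] * d).T @ d
    return y_mean, y_cov

# Polar-to-Cartesian conversion, a classic tracking nonlinearity
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
m = np.array([1.0, np.pi / 2])            # range 1, bearing 90 degrees
P = np.diag([0.01, 0.1])                  # range and bearing variances
y_mean, y_cov = unscented_transform(m, P, f)
```

Here the unscented mean is pulled inside the unit circle, as the true mean of the transformed distribution is, whereas simple linearization about m would leave it at exactly (0, 1); this bias correction is one reason the unscented methods outperform the Extended Kalman filter on strongly nonlinear state equations.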
Start codon variant in LAG3 is associated with decreased LAG-3 expression and increased risk of autoimmune thyroid disease
Autoimmune thyroid disease (AITD) is a common autoimmune disease. In a GWAS meta-analysis of 110,945 cases and 1,084,290 controls, 290 sequence variants at 225 loci are associated with AITD. Of these variants, 115 are previously unreported. Multiomics analysis yields 235 candidate genes outside the MHC-region and the findings highlight the importance of genes involved in T-cell regulation. A rare 5'-UTR variant (rs781745126-T, MAF = 0.13% in Iceland) in LAG3 has the largest effect (OR = 3.42, P = 2.2 × 10^-16) and generates a novel start codon for an open reading frame upstream of the canonical protein translation initiation site. rs781745126-T reduces mRNA and surface expression of the inhibitory immune checkpoint LAG-3 co-receptor on activated lymphocyte subsets and halves LAG-3 levels in plasma among heterozygotes. All three homozygous carriers of rs781745126-T have AITD, of whom one also has two other T-cell-mediated diseases, namely vitiligo and type 1 diabetes. rs781745126-T associates nominally with vitiligo (OR = 5.1, P = 6.5 × 10^-3) but not with type 1 diabetes. Thus, the effect of rs781745126-T is akin to drugs that inhibit LAG-3, which unleash immune responses and can have thyroid dysfunction and vitiligo as adverse events. This illustrates how a multiomics approach can reveal potential drug targets and safety concerns. Here, the authors conduct a GWAS for autoimmune thyroid disease, finding 225 loci including a start codon variant in LAG3 that confers a 3.4-fold risk of autoimmune thyroid disease and reduces expression and plasma levels of LAG-3.