356 result(s) for "Global atmospheric research program"
Can the Impact of Aerosols on Deep Convection be Isolated from Meteorological Effects in Atmospheric Observations?
Influence of pollution on dynamics of deep convection continues to be a controversial topic. Arguably, only carefully designed numerical simulations can clearly separate the impact of aerosols from the effects of meteorological factors that affect moist convection. This paper argues that such a separation is virtually impossible using observations because of the insufficient accuracy of atmospheric measurements and the fundamental nature of the interaction between deep convection and its environment. To support this conjecture, results from numerical simulations are presented that apply modeling methodology previously developed by the author. The simulations consider small modifications, difficult to detect in observations, of the initial sounding, surface fluxes, and large-scale forcing tendencies. All these represent variations of meteorological conditions that affect deep convective dynamics independently of aerosols. The setup follows the case of daytime convective development over land based on observations during the Large-Scale Biosphere–Atmosphere (LBA) field project in Amazonia. The simulated observable macroscopic changes of convection, such as the surface precipitation and upper-tropospheric cloudiness, are similar to or larger than those resulting from changes of cloud condensation nuclei from pristine to polluted conditions studied previously using the same modeling case. Observations from Phase III of the Global Atmospheric Research Program Atlantic Tropical Experiment (GATE) are also used to support the argument concerning the impact of the large-scale forcing. The simulations suggest that the aerosol impacts on dynamics of deep convection cannot be isolated from meteorological effects, at least for the daytime development of unorganized deep convection considered in this study.
Who Is (More) Rational?
Revealed preference theory offers a criterion for decision-making quality: if decisions are high quality then there exists a utility function the choices maximize. We conduct a large-scale experiment to test for consistency with utility maximization. Consistency scores vary markedly within and across socioeconomic groups. In particular, consistency is strongly related to wealth: A standard deviation increase in consistency is associated with 15-19 percent more household wealth. This association is quantitatively robust to conditioning on correlates of unobserved constraints, preferences, and beliefs. Consistency with utility maximization under laboratory conditions thus captures decision-making ability that applies across domains and influences important real-world outcomes.
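The consistency criterion this abstract invokes can be made concrete with the Generalized Axiom of Revealed Preference (GARP): choices are consistent with utility maximization when the revealed-preference relation contains no cycle with a strict step. A minimal pure-Python sketch, with hypothetical prices and bundles (not the experiment's data):

```python
def dot(p, x):
    return sum(a * b for a, b in zip(p, x))

def garp_consistent(prices, choices):
    """Check GARP for observed (price vector, chosen bundle) pairs.

    Bundle x_i is directly revealed preferred to x_j when x_i was
    chosen while x_j was affordable: p_i . x_i >= p_i . x_j.
    GARP fails if x_i is (transitively) revealed preferred to x_j
    while x_j was chosen when x_i was strictly cheaper.
    """
    n = len(choices)
    # Direct revealed preference matrix.
    r = [[dot(prices[i], choices[i]) >= dot(prices[i], choices[j])
          for j in range(n)] for i in range(n)]
    # Transitive closure (Warshall's algorithm).
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    # Look for a cycle closed by a strict direct preference.
    for i in range(n):
        for j in range(n):
            if r[i][j] and dot(prices[j], choices[j]) > dot(prices[j], choices[i]):
                return False
    return True
```

A consistency *score* (as in the paper, e.g. Afriat's efficiency index) relaxes the budget comparisons by a factor until violations disappear; the boolean check above is the exact-consistency special case.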
THORPEX RESEARCH AND THE SCIENCE OF PREDICTION
The Observing System Research and Predictability Experiment (THORPEX) was a 10-yr, international research program organized by the World Meteorological Organization’s World Weather Research Program. THORPEX was motivated by the need to accelerate the rate of improvement in the accuracy of 1-day to 2-week forecasts of high-impact weather for the benefit of society, the economy, and the environment. THORPEX, which took place from 2005 to 2014, was the first major international program focusing on the advancement of global numerical weather prediction systems since the Global Atmospheric Research Program, which took place almost 40 years earlier, from 1967 through 1982. The scientific achievements of THORPEX were accomplished through bringing together scientists from operational centers, research laboratories, and the academic community to collaborate on research that would ultimately advance operational predictive skill. THORPEX included an unprecedented effort to make operational products readily accessible to the broader academic research community, with community efforts focused on problems where challenging science intersected with the potential to accelerate improvements in predictive skill. THORPEX also collaborated with other major programs to identify research areas of mutual interest, such as topics at the intersection of weather and climate. THORPEX research has 1) increased our knowledge of the global-to-regional influences on the initiation, evolution, and predictability of high-impact weather; 2) provided insight into how predictive skill depends on observing strategies and observing systems; 3) improved data assimilation and ensemble forecast systems; 4) advanced knowledge of high-impact weather associated with tropical and polar circulations and their interactions with midlatitude flows; and 5) expanded society’s use of weather information through applied and social science research.
Species distribution modelling of marine benthos: a North Sea case study
Species distribution models (SDMs) were applied to predict the distribution of benthic species in the North Sea. An understanding of species distribution patterns is essential to gain insight into ecological processes in marine ecosystems and to guide ecosystem management strategies. Therefore, we compared 9 different SDM methods, including GLM, GBM, FDA, SVM, RF, MAXENT, BIOCLIM, GARP and MARS, by using 10 environmental variables to model the distribution of 20 marine benthic species. Most of the models showed good or very good performance in terms of predictive power and accuracy, with highest mean area under the curve (AUC) values of 0.845 and 0.840, obtained for the MAXENT and GBM models, respectively. The poorest performance was shown by the BIOCLIM model, which had a mean AUC of 0.708. Nevertheless, the mapped distribution patterns varied remarkably depending on the model used, with up to 32.5% differences in predictions between models. For species with a narrow distribution range, the models showed a better performance based on the AUC than for species with a broad distribution range, which can most likely be attributed to the restricted spatial scale and the model evaluation procedure. Of the environmental variables, bottom water temperature and depth had the greatest effect on the distribution of 14 benthic species, based on MAXENT results. We examine the potential utility of this strategy for predicting future distribution of benthic species in response to climate change.
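The AUC values used above to rank SDM methods have a simple rank-based definition: the probability that a randomly drawn presence scores higher than a randomly drawn absence. A minimal pure-Python sketch (toy labels and scores, not the study's data):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U identity:
    the fraction of presence/absence pairs ranked correctly,
    counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

On this scale, 0.5 is no better than chance, so the reported means of 0.845 (MAXENT) and 0.708 (BIOCLIM) correspond to roughly 85% versus 71% of presence/absence pairs ranked correctly.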
Spatially autocorrelated sampling falsely inflates measures of accuracy for presence-only niche models
Environmental niche models that utilize presence-only data have been increasingly employed to model species distributions and test ecological and evolutionary predictions. The ideal method for evaluating the accuracy of a niche model is to train a model with one dataset and then test model predictions against an independent dataset. However, a truly independent dataset is often not available, and instead random subsets of the total data are used for 'training' and 'testing' purposes. The goal of this study was to determine how spatially autocorrelated sampling affects measures of niche model accuracy when using subsets of a larger dataset for accuracy evaluation. The distribution of Centaurea maculosa (spotted knapweed; Asteraceae) was modelled in six states in the western United States: California, Oregon, Washington, Idaho, Wyoming and Montana. Two types of niche modelling algorithms - the genetic algorithm for rule-set prediction (GARP) and maximum entropy modelling (as implemented with Maxent) - were used to model the potential distribution of C. maculosa across the region. The effect of spatially autocorrelated sampling was examined by applying a spatial filter to the presence-only data (to reduce autocorrelation) and then comparing predictions made using the spatial filter with those using a random subset of the data, equal in sample size to the filtered data. The accuracy of predictions from both algorithms was sensitive to the spatial autocorrelation of sampling effort in the occurrence data. Spatial filtering led to lower values of the area under the receiver operating characteristic curve plot but higher similarity statistic (I) values when compared with predictions from models built with random subsets of the total data, meaning that spatial autocorrelation of sampling effort between training and test data led to inflated measures of accuracy. 
The findings indicate that care should be taken when interpreting the results from presence-only niche models when training and test data have been randomly partitioned but occurrence data were non-randomly sampled (in a spatially autocorrelated manner). The higher accuracies obtained without the spatial filter are a result of spatial autocorrelation of sampling effort between training and test data inflating measures of prediction accuracy. If independently surveyed data for testing predictions are unavailable, then it may be necessary to explicitly account for the spatial autocorrelation of sampling effort between randomly partitioned training and test subsets when evaluating niche model predictions.
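The spatial filter discussed above can be implemented as a simple greedy thinning of occurrence points, keeping only records separated by a minimum distance. A sketch under illustrative coordinates and threshold (the study's filter parameters are not given here):

```python
import math

def spatial_thin(points, min_dist):
    """Greedily keep occurrence points at least min_dist apart,
    reducing spatial autocorrelation of sampling effort before
    partitioning data into training and test subsets."""
    kept = []
    for p in points:
        if all(math.dist(p, q) >= min_dist for q in kept):
            kept.append(p)
    return kept
```

Comparing models built from the thinned set against models built from an equal-sized random subset, as the study does, isolates the effect of clustered sampling on accuracy measures.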
Does the interpolation accuracy of species distribution models come at the expense of transferability?
Model transferability (extrapolative accuracy) is one important feature in species distribution models, required in several ecological and conservation biological applications. This study uses 10 modelling techniques and nationwide data on both (1) species distribution of birds, butterflies, and plants and (2) climate and land cover in Finland to investigate whether good interpolative prediction accuracy for models comes at the expense of transferability, i.e. markedly worse performance in new areas. Models' interpolation and extrapolation performance was primarily assessed using AUC (the area under the curve of a receiver operating characteristic plot) and Kappa statistics, with supplementary comparisons examining model sensitivity and specificity values. Our AUC and Kappa results show that extrapolation to new areas is a greater challenge for all included modelling techniques than simple filling of gaps in a well-sampled area, but there are also differences among the techniques in the degree of transferability. Among the machine-learning modelling techniques, MAXENT, generalized boosting methods (GBM), and artificial neural networks (ANN) showed good transferability, while the performance of GARP and random forest (RF) decreased notably in extrapolation. Among the regression-based methods, generalized additive models (GAM) and generalized linear models (GLM) showed good transferability. A desirable combination of good prediction accuracy and good transferability was evident for three modelling techniques: MAXENT, GBM, and GAM. However, examination of model sensitivity and specificity revealed that model types may differ in their tendencies to either increased over-prediction of presences or absences in extrapolation, and some of the methods show contrasting changes in sensitivity vs. specificity (e.g. ANN and GARP).
Among the three species groups, the best transferability was seen with birds, followed closely by butterflies, whereas reliable extrapolation for plant species distribution models appears to be a major challenge at least at this scale. Overall, detailed knowledge of the behaviour of different techniques in various study settings and with different species groups is of utmost importance in predictive modelling.
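The Kappa statistic used alongside AUC above scores binary presence/absence maps as agreement corrected for chance. A minimal sketch over toy observed/predicted vectors (not the Finnish survey data):

```python
def cohens_kappa(observed, predicted):
    """Cohen's kappa for binary presence/absence predictions:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(observed)
    po = sum(o == p for o, p in zip(observed, predicted)) / n
    p_obs1 = sum(observed) / n     # observed prevalence
    p_pred1 = sum(predicted) / n   # predicted prevalence
    pe = p_obs1 * p_pred1 + (1 - p_obs1) * (1 - p_pred1)
    return (po - pe) / (1 - pe)
```

Unlike AUC, kappa depends on the threshold used to convert continuous model output into presence/absence, which is why the study also inspects sensitivity and specificity separately.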
GARP (LRRC32) Is Essential for the Surface Expression of Latent TGF-β on Platelets and Activated FOXP3⁺ Regulatory T Cells
TGF-β family members are highly pleiotropic cytokines with diverse regulatory functions. TGF-β is normally found in the latent form associated with latency-associated peptide (LAP). This latent complex can associate with latent TGFβ-binding protein (LTBP) to produce a large latent form. Latent TGF-β is also found on the surface of activated FOXP3⁺ regulatory T cells (Tregs), but it is unclear how it is anchored to the cell membrane. We show that GARP or LRRC32, a leucine-rich repeat molecule of unknown function, is critical for tethering TGF-β to the cell surface. We demonstrate that platelets and activated Tregs co-express latent TGF-β and GARP on their membranes. The knockdown of GARP mRNA with siRNA prevented surface latent TGF-β expression on activated Tregs and recombinant latent TGF-β1 is able to bind directly with GARP. Confocal microscopy and immunoprecipitation strongly support their interactions. The role of TGF-β on Tregs appears to have dual functions, both for Treg-mediated suppression and infectious tolerance mechanism.
Predicting Species Distributions from Small Numbers of Occurrence Records: A Test Case Using Cryptic Geckos in Madagascar
Aim Techniques that predict species potential distributions by combining observed occurrence records with environmental variables show much potential for application across a range of biogeographical analyses. Some of the most promising applications relate to species for which occurrence records are scarce, due to cryptic habits, locally restricted distributions or low sampling effort. However, the minimum sample sizes required to yield useful predictions remain difficult to determine. Here we developed and tested a novel jackknife validation approach to assess the ability to predict species occurrence when fewer than 25 occurrence records are available. Location Madagascar. Methods Models were developed and evaluated for 13 species of secretive leaf-tailed geckos (Uroplatus spp.) that are endemic to Madagascar, for which available sample sizes range from 4 to 23 occurrence localities (at 1 km² grid resolution). Predictions were based on 20 environmental data layers and were generated using two modelling approaches: a method based on the principle of maximum entropy (Maxent) and a genetic algorithm (GARP). Results We found high success rates and statistical significance in jackknife tests with sample sizes as low as five when the Maxent model was applied. Results for GARP at very low sample sizes (less than c. 10) were poorer. When sample sizes were experimentally reduced for those species with the most records, variability among predictions using different combinations of localities demonstrated that models were greatly influenced by exactly which observations were included. Main conclusions We emphasize that models developed using this approach with small sample sizes should be interpreted as identifying regions that have similar environmental conditions to where the species is known to occur, and not as predicting actual limits to the range of a species.
The jackknife validation approach proposed here enables assessment of the predictive ability of models built using very small sample sizes, although use of this test with larger sample sizes may lead to overoptimistic estimates of predictive power. Our analyses demonstrate that geographical predictions developed from small numbers of occurrence records may be of great value, for example in targeting field surveys to accelerate the discovery of unknown populations and species.
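The leave-one-out jackknife described above can be sketched independently of any particular SDM: withhold each occurrence locality in turn, fit on the remainder, and score whether the withheld locality is predicted present. The stand-in "model" below (presence within a fixed radius of the training centroid) is purely illustrative, not Maxent or GARP:

```python
import math

def jackknife_success(occurrences, predict):
    """Leave-one-out jackknife for small-sample models: withhold
    each occurrence, fit on the rest, and return the fraction of
    withheld localities predicted present."""
    hits = 0
    for i, test_pt in enumerate(occurrences):
        train = occurrences[:i] + occurrences[i + 1:]
        hits += predict(train, test_pt)
    return hits / len(occurrences)

def near_centroid(train, pt, radius=2.0):
    # Hypothetical stand-in model: presence within a fixed radius
    # of the training-point centroid.
    cx = sum(p[0] for p in train) / len(train)
    cy = sum(p[1] for p in train) / len(train)
    return math.dist((cx, cy), pt) <= radius
```

With n occurrence records this yields n train/test splits, which is what makes statistical significance assessable even at sample sizes as low as five.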
The effect of sample size and species characteristics on performance of different species distribution modeling methods
Species distribution models should provide conservation practitioners with estimates of the spatial distributions of species requiring attention. These species are often rare and have limited known occurrences, posing challenges for creating accurate species distribution models. We tested four modeling methods (Bioclim, Domain, GARP, and Maxent) across 18 species with different levels of ecological specialization using six different sample size treatments and three different evaluation measures. Our assessment revealed that Maxent was the most capable of the four modeling methods in producing useful results with sample sizes as small as 5, 10 and 25 occurrences. The other methods performed reasonably well (Domain and GARP) to poorly (Bioclim) when presented with datasets of small sample sizes. We show that multiple evaluation measures are necessary to determine accuracy of models produced with presence-only data. Further, we found that accuracy of models is greater for species with small geographic ranges and limited environmental tolerance, ecological characteristics of many rare species. Our results indicate that reasonable models can be made for some rare species, a result that should encourage conservationists to add distribution modeling to their toolbox.