42 results for "Emmott, Stephen"
Ten billion
"Just 10,000 years ago, there were only one million humans on Earth. By 1800, just over two hundred years ago, there were one billion of us. By 1960, there were three billion. There are now over seven billion of us. By 2050, there will be at least nine billion other people--and, sometime near the end of this century, there will be at least ten billion of us. There is simply no known way to provide this many people with clothes, food, and fresh water. And any action we take to address these issues will turn up the thermostat on global warming. Stephen Emmott has dedicated his career to researching the effects of humans on the Earth's natural systems. This is his call to arms, an urgent plea to re-imagine the interconnected web of our global problems in a new light"-- Provided by publisher.
Emergent Global Patterns of Ecosystem Structure and Function from a Mechanistic General Ecosystem Model
Anthropogenic activities are causing widespread degradation of ecosystems worldwide, threatening the ecosystem services upon which all human life depends. Improved understanding of this degradation is urgently needed to improve avoidance and mitigation measures. One tool to assist these efforts is predictive models of ecosystem structure and function that are mechanistic: based on fundamental ecological principles. Here we present the first mechanistic General Ecosystem Model (GEM) of ecosystem structure and function that is both global and applies in all terrestrial and marine environments. Functional forms and parameter values were derived from the theoretical and empirical literature where possible. Simulations of the fate of all organisms with body masses between 10 µg and 150,000 kg (a range of 14 orders of magnitude) across the globe led to emergent properties at individual (e.g., growth rate), community (e.g., biomass turnover rates), ecosystem (e.g., trophic pyramids), and macroecological scales (e.g., global patterns of trophic structure) that are in general agreement with current data and theory. These properties emerged from our encoding of the biology of, and interactions among, individual organisms without any direct constraints on the properties themselves. Our results indicate that ecologists have gathered sufficient information to begin to build realistic, global, and mechanistic models of ecosystems, capable of predicting a diverse range of ecosystem properties and their response to human pressures.
10 billion
"Ten Billion is a snapshot of a planet, and our species, approaching a crisis: how we got here, what's happening now, and where this leaves us for the rest of this century. Ten Billion is anything but a 'green' book. And it's not another book about the climate. Ten Billion is a book about us."
Selector function of MHC I molecules is determined by protein plasticity
The selection of peptides for presentation at the surface of most nucleated cells by major histocompatibility complex class I molecules (MHC I) is crucial to the immune response in vertebrates. However, the mechanisms of the rapid selection of high affinity peptides by MHC I from amongst thousands of mostly low affinity peptides are not well understood. We developed computational systems models encoding distinct mechanistic hypotheses for two molecules, HLA-B*44:02 (B*4402) and HLA-B*44:05 (B*4405), which differ by a single residue yet lie at opposite ends of the spectrum in their intrinsic ability to select high affinity peptides. We used in vivo biochemical data to infer that a conformational intermediate of MHC I is significant for peptide selection. We used molecular dynamics simulations to show that peptide selector function correlates with protein plasticity and confirmed this experimentally by altering the plasticity of MHC I with a single point mutation, which altered in vivo selector function in a predictable way. Finally, we investigated the mechanisms by which the co-factor tapasin influences MHC I plasticity. We propose that tapasin modulates MHC I plasticity by dynamically coupling the peptide binding region and α3 domain of MHC I allosterically, resulting in enhanced peptide selector function.
Universal, easy access to geotemporal information: FetchClimate
Geotemporal information, information associated with geographical space and time, has always been critical to climate and environmental science. However, this information is certainly not universally or easily accessible. In fact, obtaining and using geotemporal information often comes with considerable technical overhead, impeding research progress. To address this, we introduce FetchClimate: a cloud service designed to provide easy, universal access to geotemporal information. FetchClimate enables and accelerates the use of geotemporal information by allowing it to be accessed programmatically from a Web service (for example, from the statistical software R) or non-programmatically using a Web browser. We intend the service to accelerate the pace of ecological and environmental research by eliminating the technical overhead currently needed to obtain geotemporal information. The software, online manual, and user support are freely available at .
Ten Simple Rules for Effective Computational Research
In particular, the use and development of software tools is becoming vital for investigating scientific hypotheses, and a wide range of scientists are finding software development playing a more central role in their day-to-day research. While many guides to software development exist, they are often aimed at computer scientists [6] or concentrate on large open-source projects [7]; the present guide is aimed specifically at the vast majority of scientific researchers: those without formal training in computer science.
A Peptide Filtering Relation Quantifies MHC Class I Peptide Optimization
Major Histocompatibility Complex (MHC) class I molecules enable cytotoxic T lymphocytes to destroy virus-infected or cancerous cells, thereby preventing disease progression. MHC class I molecules provide a snapshot of the contents of a cell by binding to protein fragments arising from intracellular protein turnover and presenting these fragments at the cell surface. Competing fragments (peptides) are selected for cell-surface presentation on the basis of their ability to form a stable complex with MHC class I, by a process known as peptide optimization. A better understanding of the optimization process is important for our understanding of immunodominance, the predominance of some T lymphocyte specificities over others, which can determine the efficacy of an immune response, the danger of immune evasion, and the success of vaccination strategies. In this paper we present a dynamical systems model of peptide optimization by MHC class I. We incorporate the chaperone molecule tapasin, which has been shown to enhance peptide optimization to different extents for different MHC class I alleles. Using a combination of published and novel experimental data to parameterize the model, we arrive at a relation of peptide filtering, which quantifies peptide optimization as a function of peptide supply and peptide unbinding rates. From this relation, we find that tapasin enhances peptide unbinding to improve peptide optimization without significantly delaying the transit of MHC to the cell surface, and differences in peptide optimization across MHC class I alleles can be explained by allele-specific differences in peptide binding. Importantly, our filtering relation may be used to dynamically predict the cell surface abundance of any number of competing peptides by MHC class I alleles, providing a quantitative basis to investigate viral infection or disease at the cellular level. We exemplify this by simulating optimization of the distribution of peptides derived from Human Immunodeficiency Virus Gag-Pol polyprotein.
Troubling Trends in Scientific Software Use
"Blind trust" is dangerous when choosing software to support research. Software pervades every domain of science (1-3), perhaps nowhere more decisively than in modeling. In key scientific areas of great societal importance, models and the software that implement them define both how science is done and what science is done (4, 5). Across all science, this dependence has led to concerns around the need for open access to software (6, 7), centered on the reproducibility of research (1, 8-10). From fields such as high-performance computing, we learn key insights and best practices for how to develop, standardize, and implement software (11). Open and systematic approaches to the development of software are essential for all sciences. But for many scientists this is not sufficient. We describe problems with the adoption and use of scientific software.
Time to model all life on Earth
To help transform our understanding of the biosphere, ecologists -- like climate scientists -- should simulate whole ecosystems, argue Drew Purves and colleagues.
Scientists and software - surveying the species distribution modelling community
Aim: Software use is ubiquitous in the species distribution modelling (SDM) domain; nearly every scientist working on SDM either uses or develops specialist SDM software. However, little is formally known about the prevalence or preference of one software package over another. We seek to provide, for the first time, a 'snapshot' of SDM users, the methods they use and the questions they answer. Location: Global. Methods: We conducted a survey of over 300 SDM scientists to capture a snapshot of the community, and used an extensive literature search of SDM papers to investigate the characteristics of the SDM community and its interactions with software developers in terms of co-authoring research publications. Results: Our results show that those members of the community who develop software, and who are directly connected with developers, are among the most highly connected and published authors in the field. We further show that the two most popular software packages for SDM lie at opposite ends of the 'use-complexity' continuum. Main conclusions: Given the importance of SDM research in a changing environment, with its increasing use in the policy domain, it is vital to be aware of what software and methodologies are being implemented. Here, we present a snapshot of the SDM community, the software and the methods being used.