MBRL Search Results

Filters
  • Discipline
  • Is Peer Reviewed
  • Reading Level
  • Content Type
  • Year (From / To)
  • More Filters: Item Type, Is Full-Text Available, Subject, Country Of Publication, Publisher, Source, Target Audience, Donor, Language, Place of Publication, Contributors, Location

7,243 results for "Biological models Data processing."
Collecting experiments : making Big Data biology
Databases have revolutionized nearly every aspect of our lives. Information of all sorts is being collected on a massive scale, from Google to Facebook and well beyond. But as the amount of information in databases explodes, we are forced to reassess our ideas about what knowledge is, how it is produced, to whom it belongs, and who can be credited for producing it. Every scientist working today draws on databases to produce scientific knowledge. Databases have become more common than microscopes, voltmeters, and test tubes, and the increasing amount of data has led to major changes in research practices and profound reflections on the proper professional roles of data producers, collectors, curators, and analysts. Collecting Experiments traces the development and use of data collections, especially in the experimental life sciences, from the early twentieth century to the present. It shows that the current revolution is best understood as the coming together of two older ways of knowing--collecting and experimenting, the museum and the laboratory. Ultimately, Bruno J. Strasser argues that by serving as knowledge repositories, as well as indispensable tools for producing new knowledge, these databases function as digital museums for the twenty-first century.
Topological Data Analysis of Biological Aggregation Models
We apply tools from topological data analysis to two mathematical models inspired by biological aggregations such as bird flocks, fish schools, and insect swarms. Our data consists of numerical simulation output from the models of Vicsek and D'Orsogna. These models are dynamical systems describing the movement of agents who interact via alignment, attraction, and/or repulsion. Each simulation time frame is a point cloud in position-velocity space. We analyze the topological structure of these point clouds, interpreting the persistent homology by calculating the first few Betti numbers. These Betti numbers count connected components, topological circles, and trapped volumes present in the data. To interpret our results, we introduce a visualization that displays Betti numbers over simulation time and topological persistence scale. We compare our topological results to order parameters typically used to quantify the global behavior of aggregations, such as polarization and angular momentum. The topological calculations reveal events and structure not captured by the order parameters.
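
As a rough illustration of the kind of analysis described above (not the authors' code, and all parameter values are arbitrary), the Python sketch below runs a minimal 2D Vicsek-style simulation and then counts the connected components of the final position point cloud at a single distance scale, which is the zeroth Betti number of the proximity graph at that scale; persistent homology across all scales would use a dedicated TDA library such as GUDHI or Ripser. The polarization order parameter mentioned in the abstract is computed for comparison.

    # Minimal Vicsek-style flocking simulation plus a single-scale Betti-0 count.
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    rng = np.random.default_rng(0)
    n, box, speed, radius, noise, steps = 200, 10.0, 0.1, 1.0, 0.3, 100

    pos = rng.uniform(0, box, size=(n, 2))          # positions in a periodic box
    theta = rng.uniform(-np.pi, np.pi, size=n)      # headings

    for _ in range(steps):
        tree = cKDTree(pos, boxsize=box)
        pairs = tree.query_pairs(r=radius, output_type='ndarray')
        # Each agent aligns with the mean heading of itself and its neighbors.
        sin_sum, cos_sum = np.sin(theta).copy(), np.cos(theta).copy()
        np.add.at(sin_sum, pairs[:, 0], np.sin(theta)[pairs[:, 1]])
        np.add.at(sin_sum, pairs[:, 1], np.sin(theta)[pairs[:, 0]])
        np.add.at(cos_sum, pairs[:, 0], np.cos(theta)[pairs[:, 1]])
        np.add.at(cos_sum, pairs[:, 1], np.cos(theta)[pairs[:, 0]])
        theta = np.arctan2(sin_sum, cos_sum) + noise * rng.uniform(-np.pi, np.pi, n)
        pos = (pos + speed * np.column_stack((np.cos(theta), np.sin(theta)))) % box

    # Betti-0 at scale `radius`: connected components of the proximity graph.
    pairs = cKDTree(pos, boxsize=box).query_pairs(r=radius, output_type='ndarray')
    adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    b0, _ = connected_components(adj, directed=False)
    polarization = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
    print(f"Betti-0 at scale {radius}: {b0}, polarization: {polarization:.3f}")
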
SciPy 1.0: fundamental algorithms for scientific computing in Python
SciPy is an open-source scientific computing library for the Python programming language. Since its initial release in 2001, SciPy has become a de facto standard for leveraging scientific algorithms in Python, with over 600 unique code contributors, thousands of dependent packages, over 100,000 dependent repositories and millions of downloads per year. In this work, we provide an overview of the capabilities and development practices of SciPy 1.0 and highlight some recent technical developments. This Perspective describes the development and capabilities of SciPy 1.0, an open source scientific computing library for the Python programming language.
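
As a minimal, illustrative example of the kind of task SciPy handles in this context (all values are arbitrary), the sketch below integrates a classic Lotka-Volterra predator-prey model with scipy.integrate.solve_ivp.

    # Integrate a Lotka-Volterra predator-prey model (illustrative parameters).
    import numpy as np
    from scipy.integrate import solve_ivp

    def lotka_volterra(t, y, a=1.0, b=0.1, c=1.5, d=0.075):
        prey, pred = y
        return [a * prey - b * prey * pred,    # prey growth minus predation
                -c * pred + d * prey * pred]   # predator losses plus conversion

    sol = solve_ivp(lotka_volterra, t_span=(0, 50), y0=[10.0, 5.0],
                    dense_output=True, rtol=1e-8)
    t = np.linspace(0, 50, 500)
    prey, pred = sol.sol(t)
    print(f"prey oscillates between {prey.min():.2f} and {prey.max():.2f}")
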
A semantic matching energy function for learning with multi-relational data
Large-scale relational learning becomes crucial for handling the huge amounts of structured data generated daily in many application domains ranging from computational biology and information retrieval to natural language processing. In this paper, we present a new neural network architecture designed to embed multi-relational graphs into a flexible continuous vector space in which the original data is kept and enhanced. The network is trained to encode the semantics of these graphs in order to assign high probabilities to plausible components. We empirically show that it reaches competitive performance in link prediction on standard datasets from the literature as well as on data from a real-world knowledge base (WordNet). In addition, we show how our method can be applied to perform word-sense disambiguation in a context of open-text semantic parsing, where the goal is to learn to assign a structured meaning representation to almost any sentence of free text, demonstrating that it can scale up to tens of thousands of nodes and thousands of types of relation.
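
The sketch below is only a toy illustration of embedding-based triple scoring on random, untrained vectors, not the authors' architecture or training procedure: entities and relation types get vector embeddings, a (subject, relation, object) triple is scored by combining subject and object embeddings with the relation embedding, and ranking all candidate objects mimics the link-prediction setting mentioned in the abstract.

    # Toy embedding-based scorer for (subject, relation, object) triples.
    import numpy as np

    rng = np.random.default_rng(0)
    dim, n_entities, n_relations = 16, 1000, 20

    E = rng.normal(scale=0.1, size=(n_entities, dim))    # entity embeddings
    R = rng.normal(scale=0.1, size=(n_relations, dim))   # relation embeddings
    W_left = rng.normal(scale=0.1, size=(dim, 2 * dim))  # combines subject + relation
    W_right = rng.normal(scale=0.1, size=(dim, 2 * dim)) # combines object + relation

    def score(subj, rel, obj):
        """Higher score = more plausible triple (here untrained, so random)."""
        left = W_left @ np.concatenate([E[subj], R[rel]])
        right = W_right @ np.concatenate([E[obj], R[rel]])
        return float(left @ right)

    # Rank every candidate object for a (subject, relation, ?) query.
    subj, rel = 3, 7
    scores = np.array([score(subj, rel, o) for o in range(n_entities)])
    print("top-5 candidate objects:", np.argsort(-scores)[:5])
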
Protocol Update for large-scale genome and gene function analysis with the PANTHER classification system (v.14.0)
The PANTHER classification system (http://www.pantherdb.org) is a comprehensive system that combines genomes, gene function classifications, pathways and statistical analysis tools to enable biologists to analyze large-scale genome-wide experimental data. The current system (PANTHER v.14.0) covers 131 complete genomes organized into gene families and subfamilies; evolutionary relationships between genes are represented in phylogenetic trees, multiple sequence alignments and statistical models (hidden Markov models (HMMs)). The families and subfamilies are annotated with Gene Ontology (GO) terms, and sequences are assigned to PANTHER pathways. A suite of tools has been built to allow users to browse and query gene functions and analyze large-scale experimental data with a number of statistical tests. PANTHER is widely used by bench scientists, bioinformaticians, computer scientists and systems biologists. Since the protocol for using this tool (v.8.0) was originally published in 2013, there have been substantial improvements and updates in the areas of data quality, data coverage, statistical algorithms and user experience. This Protocol Update provides detailed instructions on how to analyze genome-wide experimental data in the PANTHER classification system. Here the authors provide an update to their 2013 protocol for using the PANTHER classification system, detailing how to analyze genome-wide experimental data with the newest version of PANTHER (v.14.0), with improvements in the areas of data quality, data coverage, statistical algorithms and user experience.
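
PANTHER itself is run through pantherdb.org or its web services; the sketch below only illustrates, on made-up gene counts, the kind of over-representation statistic that underlies such an analysis (a one-sided Fisher's exact test on a 2x2 contingency table), without the multiple-testing correction a real genome-wide run would apply.

    # Fisher's exact test for over-representation of one GO term (made-up counts).
    from scipy.stats import fisher_exact

    # Hypothetical counts: 40 of 500 uploaded genes carry the term,
    # versus 400 of 20,000 genes in the reference genome.
    study_with, study_without = 40, 500 - 40
    ref_with, ref_without = 400, 20000 - 400

    table = [[study_with, study_without],
             [ref_with, ref_without]]
    odds_ratio, p_value = fisher_exact(table, alternative='greater')
    print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.2e}")
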
Ecological niches and geographic distributions
This book provides a first synthetic view of an emerging area of ecology and biogeography, linking individual- and population-level processes to geographic distributions and biodiversity patterns. Problems in evolutionary ecology, macroecology, and biogeography are illuminated by this integrative view. The book focuses on correlative approaches known as ecological niche modeling, species distribution modeling, or habitat suitability modeling, which use associations between known occurrences of species and environmental variables to identify environmental conditions under which populations can be maintained. The spatial distribution of environments suitable for the species can then be estimated: a potential distribution for the species. This approach has broad applicability to ecology, evolution, biogeography, and conservation biology, as well as to understanding the geographic potential of invasive species and infectious diseases, and the biological implications of climate change. The authors lay out conceptual foundations and general principles for understanding and interpreting species distributions with respect to geography and environment. Focus is on development of niche models. While serving as a guide for students and researchers, the book also provides a theoretical framework to support future progress in the field.
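
As a toy illustration of one of the simplest correlative approaches in this family (a BIOCLIM-style environmental envelope, on made-up data), the sketch below builds an envelope from environmental values at occurrence localities and flags candidate sites that fall inside it; the book covers considerably richer modeling methods.

    # BIOCLIM-style environmental envelope on synthetic occurrence data.
    import numpy as np

    rng = np.random.default_rng(1)

    # Environmental values (e.g. mean temperature in deg C, annual precipitation
    # in mm) at known occurrence localities of a species.
    occ_env = np.column_stack([rng.normal(22, 2, 80),
                               rng.normal(1200, 150, 80)])

    # Envelope: the central 90% of each variable across the occurrences.
    lo = np.percentile(occ_env, 5, axis=0)
    hi = np.percentile(occ_env, 95, axis=0)

    # Candidate sites across a region are "suitable" if every variable lies
    # inside the envelope; the suitable set sketches a potential distribution.
    sites_env = np.column_stack([rng.normal(20, 5, 1000),
                                 rng.normal(1000, 400, 1000)])
    suitable = np.all((sites_env >= lo) & (sites_env <= hi), axis=1)
    print(f"{suitable.sum()} of {len(sites_env)} candidate sites fall inside the envelope")
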
The Hidden Geometry of Complex, Network-Driven Contagion Phenomena
The global spread of epidemics, rumors, opinions, and innovations is a complex, network-driven dynamic process. The combined multiscale nature and intrinsic heterogeneity of the underlying networks make it difficult to develop an intuitive understanding of these processes, to distinguish relevant from peripheral factors, to predict their time course, and to locate their origin. However, we show that complex spatiotemporal patterns can be reduced to surprisingly simple, homogeneous wave propagation patterns, if conventional geographic distance is replaced by a probabilistically motivated effective distance. In the context of global, air-traffic-mediated epidemics, we show that effective distance reliably predicts disease arrival times. Even if epidemiological parameters are unknown, the method can still deliver relative arrival times. The approach can also identify the spatial origin of spreading processes and successfully be applied to data of the worldwide 2009 H1N1 influenza pandemic and 2003 SARS epidemic.
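
A minimal sketch of the effective-distance idea on a made-up mobility-flux matrix: each directed link from node n to node m is assigned length 1 - ln(P_mn), where P_mn is the fraction of travelers leaving n who go to m, and the effective distance between two nodes is the shortest path over these link lengths (computed here with scipy.sparse.csgraph.dijkstra).

    # Effective distance on a toy mobility network.
    import numpy as np
    from scipy.sparse.csgraph import dijkstra

    # Hypothetical daily passenger flux F[n, m] from node n to node m.
    F = np.array([[  0, 500, 100,   0],
                  [400,   0, 300,  50],
                  [100, 300,   0, 200],
                  [  0,  60, 250,   0]], dtype=float)

    P = F / F.sum(axis=1, keepdims=True)              # fraction of departures n -> m
    with np.errstate(divide='ignore'):
        D = np.where(P > 0, 1.0 - np.log(P), np.inf)  # link-level effective lengths
    np.fill_diagonal(D, 0.0)

    # Effective distance between every ordered pair of nodes.
    D_eff = dijkstra(D, directed=True)
    print(np.round(D_eff, 2))
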
Chemometric analysis in Raman spectroscopy from experimental design to machine learning–based modeling
Raman spectroscopy is increasingly being used in biology, forensics, diagnostics, pharmaceutics and food science applications. This growth is triggered not only by improvements in the computational and experimental setups but also by the development of chemometric techniques. Chemometric techniques are the analytical processes used to detect and extract information from subtle differences in Raman spectra obtained from related samples. This information could be used to find out, for example, whether a mixture of bacterial cells contains different species, or whether a mammalian cell is healthy or not. Chemometric techniques include spectral processing (ensuring that the spectra used for the subsequent computational processes are as clean as possible) as well as the statistical analysis of the data required for finding the spectral differences that are most useful for differentiation between, for example, different cell types. For Raman spectra, this analysis process is not yet standardized, and there are many confounding pitfalls. This protocol provides guidance on how to perform a Raman spectral analysis: how to avoid these pitfalls, and strategies to circumvent problematic issues. The protocol is divided into four parts: experimental design, data preprocessing, data learning and model transfer. We exemplify our workflow using three example datasets where the spectra from individual cells were collected in single-cell mode, and one dataset where the data were collected from a raster scanning–based Raman spectral imaging experiment of mouse tissue. Our aim is to help move Raman-based technologies from proof-of-concept studies toward real-world applications. Raman spectroscopy is increasingly being used in biological assays and studies. This protocol provides guidance for performing chemometric analysis to detect and extract information relating to the chemical differences between biological samples.
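
As a generic sketch (not the protocol's full workflow), the code below runs a preprocessing and exploratory step on synthetic spectra: Savitzky-Golay smoothing, vector normalization, then a PCA via the SVD to look for group differences. Baseline correction, model validation and model transfer, which the protocol covers in detail, are omitted.

    # Smoothing, normalization and PCA on synthetic Raman-like spectra.
    import numpy as np
    from scipy.signal import savgol_filter

    rng = np.random.default_rng(2)
    wavenumbers = np.linspace(600, 1800, 800)

    def synthetic_spectrum(peak):
        """One noisy spectrum with a Gaussian band centered at `peak` cm^-1."""
        band = np.exp(-0.5 * ((wavenumbers - peak) / 15.0) ** 2)
        return band + 0.05 * rng.normal(size=wavenumbers.size)

    # Two hypothetical sample groups that differ in one band position.
    spectra = np.array([synthetic_spectrum(1003) for _ in range(20)] +
                       [synthetic_spectrum(1030) for _ in range(20)])

    smoothed = savgol_filter(spectra, window_length=11, polyorder=3, axis=1)
    normalized = smoothed / np.linalg.norm(smoothed, axis=1, keepdims=True)

    # PCA: center the data, then project onto the leading right singular vectors.
    centered = normalized - normalized.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[:2].T
    print("mean PC1 score, group 1 vs group 2:",
          np.round([scores[:20, 0].mean(), scores[20:, 0].mean()], 3))
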