Search Results

1,985 results for "Biology, Experimental Data processing"
Collecting experiments : making Big Data biology
Databases have revolutionized nearly every aspect of our lives. Information of all sorts is being collected on a massive scale, from Google to Facebook and well beyond. But as the amount of information in databases explodes, we are forced to reassess our ideas about what knowledge is, how it is produced, to whom it belongs, and who can be credited for producing it. Every scientist working today draws on databases to produce scientific knowledge. Databases have become more common than microscopes, voltmeters, and test tubes, and the increasing amount of data has led to major changes in research practices and profound reflections on the proper professional roles of data producers, collectors, curators, and analysts. Collecting Experiments traces the development and use of data collections, especially in the experimental life sciences, from the early twentieth century to the present. It shows that the current revolution is best understood as the coming together of two older ways of knowing--collecting and experimenting, the museum and the laboratory. Ultimately, Bruno J. Strasser argues that by serving as knowledge repositories, as well as indispensable tools for producing new knowledge, these databases function as digital museums for the twenty-first century.
A population-based phenome-wide association study of cardiac and aortic structure and function
Differences in cardiac and aortic structure and function are associated with cardiovascular diseases and a wide range of other types of disease. Here we analyzed cardiovascular magnetic resonance images from a population-based study, the UK Biobank, using an automated machine-learning-based analysis pipeline. We report a comprehensive range of structural and functional phenotypes for the heart and aorta across 26,893 participants, and explore how these phenotypes vary according to sex, age and major cardiovascular risk factors. We extended this analysis with a phenome-wide association study, in which we tested for correlations of a wide range of non-imaging phenotypes of the participants with imaging phenotypes. We further explored the associations of imaging phenotypes with early-life factors, mental health and cognitive function using both observational analysis and Mendelian randomization. Our study illustrates how population-based cardiac and aortic imaging phenotypes can be used to better define cardiovascular disease risks as well as heart–brain health interactions, highlighting new opportunities for studying disease mechanisms and developing image-based biomarkers.

Using magnetic resonance images of the heart and aorta from 26,893 individuals in the UK Biobank, a phenome-wide association study associates cardiovascular imaging phenotypes with a wide range of demographic, lifestyle and clinical features.
On the responsible use of digital data to tackle the COVID-19 pandemic
Large-scale collection of data could help curb the COVID-19 pandemic, but it should not neglect privacy and public trust. Best practices should be identified to maintain responsible data-collection and data-processing standards at a global scale.
Digital technologies in the public-health response to COVID-19
Digital technologies are being harnessed to support the public-health response to COVID-19 worldwide, including population surveillance, case identification, contact tracing and evaluation of interventions on the basis of mobility data and communication with the public. These rapid responses leverage billions of mobile phones, large online datasets, connected devices, relatively low-cost computing resources and advances in machine learning and natural language processing. This Review aims to capture the breadth of digital innovations for the public-health response to COVID-19 worldwide and their limitations, and barriers to their implementation, including legal, ethical and privacy barriers, as well as organizational and workforce barriers. The future of public health is likely to become increasingly digital, and we review the need for the alignment of international strategies for the regulation, evaluation and use of digital technologies to strengthen pandemic management, and future preparedness for COVID-19 and other infectious diseases.

The COVID-19 pandemic has resulted in an accelerated development of applications for digital health, including symptom monitoring and contact tracing. Their potential is wide-ranging and must be integrated into conventional approaches to public health for best effect.
Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence
Artificial intelligence (AI)-based methods have emerged as powerful tools to transform medical care. Although machine learning classifiers (MLCs) have already demonstrated strong performance in image-based diagnoses, analysis of diverse and massive electronic health record (EHR) data remains challenging. Here, we show that MLCs can query EHRs in a manner similar to the hypothetico-deductive reasoning used by physicians and unearth associations that previous statistical methods have not found. Our model applies an automated natural language processing system using deep learning techniques to extract clinically relevant information from EHRs. In total, 101.6 million data points from 1,362,559 pediatric patient visits presenting to a major referral center were analyzed to train and validate the framework. Our model demonstrates high diagnostic accuracy across multiple organ systems and is comparable to experienced pediatricians in diagnosing common childhood diseases. Our study provides a proof of concept for implementing an AI-based system as a means to aid physicians in tackling large amounts of data, augmenting diagnostic evaluations, and providing clinical decision support in cases of diagnostic uncertainty or complexity. Although this impact may be most evident in areas where healthcare providers are in relative shortage, the benefits of such an AI system are likely to be universal.

A natural language processing system can support physicians in diagnostic assessments by extracting clinical information from electronic medical records to accurately predict diagnosis in pediatric patients.
The Global Burden of Disease Study at 30 years
The Global Burden of Disease Study (GBD) began 30 years ago with the goal of providing timely, valid and relevant assessments of critical health outcomes. Over this period, the GBD has become progressively more granular. The latest iteration provides assessments of thousands of outcomes for diseases, injuries and risk factors in more than 200 countries and territories and at the subnational level in more than 20 countries. The GBD is now produced by an active collaboration of over 8,000 scientists and analysts from more than 150 countries. With each GBD iteration, the data, data processing and methods used for data synthesis have evolved, with the goal of enhancing transparency and comparability of measurements and communicating various sources of uncertainty. The GBD has many limitations, but it remains a dynamic, iterative and rigorous attempt to provide meaningful health measurement to a wide range of stakeholders. This Perspective reflects on the past, present and future of the dynamic, expanding public health endeavor that is the Global Burden of Disease Study.
The triumphs and limitations of computational methods for scRNA-seq
The rapid progress of protocols for sequencing single-cell transcriptomes over the past decade has been accompanied by equally impressive advances in the computational methods for analysis of such data. As capacity and accuracy of the experimental techniques grew, the emerging algorithm developments revealed increasingly complex facets of the underlying biology, from cell type composition to gene regulation to developmental dynamics. At the same time, rapid growth has forced continuous reevaluation of the underlying statistical models, experimental aims, and sheer volumes of data processing that are handled by these computational tools. Here, I review key computational steps of single-cell RNA sequencing (scRNA-seq) analysis, examine assumptions made by different approaches, and highlight successes, remaining ambiguities, and limitations that are important to keep in mind as scRNA-seq becomes a mainstream technique for studying biology.

This review provides an overview of recent computational developments in scRNA-seq analysis and highlights packages and tools applied in executing these analyses.
A guide to deep learning in healthcare
Here we present deep-learning techniques for healthcare, centering our discussion on deep learning in computer vision, natural language processing, reinforcement learning, and generalized methods. We describe how these computational techniques can impact a few key areas of medicine and explore how to build end-to-end systems. Our discussion of computer vision focuses largely on medical imaging, and we describe the application of natural language processing to domains such as electronic health record data. Similarly, reinforcement learning is discussed in the context of robotic-assisted surgery, and generalized deep-learning methods for genomics are reviewed.