Search Results

Filters: Discipline, Is Peer Reviewed, Reading Level, Content Type, Year, More Filters (Item Type, Is Full-Text Available, Subject, Country of Publication, Publisher, Source, Donor, Language, Place of Publication, Contributors, Location)
70,419 results for "Life sciences Data processing."
Python for the life sciences : a gentle introduction to Python for life scientists
\"Treat yourself to a lively, intuitive, and easy-to-follow introduction to computer programming in Python. The book was written specifically for biologists with little or no prior experience of writing code - with the goal of giving them not only a foundation in Python programming, but also the confidence and inspiration to start using Python in their own research. Virtually all of the examples in the book are drawn from across a wide spectrum of life science research, from simple biochemical calculations and sequence analysis, to modeling the dynamic interactions of genes and proteins in cells, or the drift of genes in an evolving population. Best of all, \"Python for the life sciences\" shows you how to implement all of these projects in Python, one of the most popular programming languages for scientific computing. If you are a life scientist interested in learning Python to jump-start your research, this is the book for you\"--Back cover.
Networks of networks in biology : concepts, tools and applications
Biological systems are extremely complex and have emergent properties that cannot be explained or even predicted by studying their individual parts in isolation. The reductionist approach, although successful in the early days of molecular biology, underestimates this complexity. As the amount of available data grows, it will become increasingly important to be able to analyse and integrate these large data sets. This book introduces novel approaches and solutions to the Big Data problem in biomedicine, and presents new techniques in the field of graph theory for handling and processing multi-type large data sets. By discussing cutting-edge problems and techniques, researchers from a wide range of fields will be able to gain insights for exploiting big heterogeneous data in the life sciences through the concept of 'network of networks'.
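The 'network of networks' idea can be pictured as separate graph layers (say, a protein-interaction layer and a gene-regulation layer) coupled by inter-layer links. The minimal sketch below is an assumption-laden toy, not the book's framework; the class name and layer labels are invented for illustration.

```python
# Toy multilayer network: each layer is its own undirected graph, plus
# inter-layer edges coupling nodes across layers. Hypothetical sketch only.
from collections import defaultdict

class MultiLayerNetwork:
    def __init__(self):
        # layer -> node -> set of intra-layer neighbours
        self.layers = defaultdict(lambda: defaultdict(set))
        # (layer, node) -> set of coupled (layer, node) pairs
        self.interlayer = defaultdict(set)

    def add_edge(self, layer, u, v):
        self.layers[layer][u].add(v)
        self.layers[layer][v].add(u)

    def link_layers(self, lu, u, lv, v):
        self.interlayer[(lu, u)].add((lv, v))
        self.interlayer[(lv, v)].add((lu, u))

    def degree(self, layer, node):
        # intra-layer neighbours plus inter-layer couplings
        return len(self.layers[layer][node]) + len(self.interlayer[(layer, node)])

net = MultiLayerNetwork()
net.add_edge("ppi", "TP53", "MDM2")           # protein-protein interaction layer
net.add_edge("regulation", "TP53", "CDKN1A")  # gene-regulation layer
net.link_layers("ppi", "TP53", "regulation", "TP53")
print(net.degree("ppi", "TP53"))  # → 2 (one PPI partner + one inter-layer link)
```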
Database technology for life sciences and medicine
This book presents innovative approaches from database researchers supporting the challenging process of knowledge discovery in biomedicine. Ranging from how to effectively store and organize biomedical data via data quality and case studies to sophisticated data mining methods, this book provides the state-of-the-art of database technology for life sciences and medicine.
Grid computing in life sciences
This is the second volume in the series of proceedings from the International Workshop on Life Science Grid. It is one of the few, if not the only, dedicated proceedings volumes that gather together the presentations of leaders in the emerging sub-discipline of grid computing for the life sciences. The volume covers the latest developments, trends and trajectories in life science grid computing from top names in bioinformatics and computational biology: A Konagaya; J C Wooley, a National Science Foundation (NSF) and DoE thought leader in supercomputing and life science computing, and one of the key people in the NSF CIBIO initiative; P Arzberger of PRAGMA fame; and R Sinnott of UK e-Science.
‘Fit-for-purpose?’ – challenges and opportunities for applications of blockchain technology in the future of healthcare
Blockchain is a shared distributed digital ledger technology that can better facilitate data management, provenance and security, and has the potential to transform healthcare. Importantly, blockchain represents a data architecture whose application goes far beyond Bitcoin – the cryptocurrency that relies on blockchain and has popularized the technology. In the health sector, blockchain is being aggressively explored by various stakeholders to optimize business processes, lower costs, improve patient outcomes, enhance compliance, and enable better use of healthcare-related data. However, critical in assessing whether blockchain can fulfill the hype of a technology characterized as ‘revolutionary’ and ‘disruptive’ is the need to ensure that blockchain design elements consider actual healthcare needs from the diverse perspectives of consumers, patients, providers, and regulators. In addition to answering the real needs of healthcare stakeholders, blockchain approaches must also be responsive to the unique challenges faced in healthcare compared to other sectors of the economy. In this sense, ensuring that a health blockchain is ‘fit-for-purpose’ is pivotal. This concept forms the basis for this article, where we share views from a multidisciplinary group of practitioners at the forefront of blockchain conceptualization, development, and deployment.
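The ledger property the abstract leans on – tamper-evident provenance – can be sketched with a minimal hash chain: each record commits to its predecessor, so altering any past entry invalidates every later hash. This is an illustrative toy under stated assumptions; a real health blockchain adds consensus, access control, and distribution, none of which appear here.

```python
# Minimal hash-chained ledger: each block's hash covers its data and the
# previous block's hash, so rewriting history breaks verification.
import hashlib, json

def make_block(data, prev_hash):
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(chain):
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "genesis"
        payload = json.dumps({"data": block["data"], "prev": block["prev"]},
                             sort_keys=True)
        if block["prev"] != expected_prev:
            return False
        if block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
    return True

chain = [make_block({"patient": "A", "event": "consent granted"}, "genesis")]
chain.append(make_block({"patient": "A", "event": "record shared"}, chain[-1]["hash"]))
print(verify(chain))                            # → True
chain[0]["data"]["event"] = "consent revoked"   # tamper with history
print(verify(chain))                            # → False
```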
Quantifying and contextualizing the impact of bioRxiv preprints through automated social media audience segmentation
Engagement with scientific manuscripts is frequently facilitated by Twitter and other social media platforms. As such, the demographics of a paper's social media audience provide a wealth of information about how scholarly research is transmitted, consumed, and interpreted by online communities. By paying attention to public perceptions of their publications, scientists can learn whether their research is stimulating positive scholarly and public thought. They can also become aware of potentially negative patterns of interest from groups that misinterpret their work in harmful ways, either willfully or unintentionally, and devise strategies for altering their messaging to mitigate these impacts. In this study, we collected 331,696 Twitter posts referencing 1,800 highly tweeted bioRxiv preprints and leveraged topic modeling to infer the characteristics of various communities engaging with each preprint on Twitter. We agnostically learned the characteristics of these audience sectors from keywords each user's followers provide in their Twitter biographies. We estimate that 96% of the preprints analyzed are dominated by academic audiences on Twitter, suggesting that social media attention does not always correspond to greater public exposure. We further demonstrate how our audience segmentation method can quantify the level of interest from nonspecialist audience sectors such as mental health advocates, dog lovers, video game developers, vegans, bitcoin investors, conspiracy theorists, journalists, religious groups, and political constituencies. Surprisingly, we also found that 10% of the preprints analyzed have sizable (>5%) audience sectors that are associated with right-wing white nationalist communities. Although none of these preprints appear to intentionally espouse any right-wing extremist messages, cases exist in which extremist appropriation comprises more than 50% of the tweets referencing a given preprint. 
These results present unique opportunities for improving and contextualizing the public discourse surrounding scientific research.
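The segmentation idea above – inferring audience sectors from keywords in followers' Twitter biographies – can be caricatured with a keyword-matching sketch. The authors used topic modeling; this toy version substitutes hand-picked sector keyword sets (all invented here for illustration) and simple majority matching.

```python
# Toy audience segmentation: assign each follower bio to the sector whose
# keyword set it overlaps most, then report each sector's share.
SECTOR_KEYWORDS = {
    "academic":   {"phd", "professor", "postdoc", "researcher"},
    "journalist": {"journalist", "reporter", "editor"},
    "advocate":   {"advocate", "mentalhealth", "awareness"},
}

def classify_bio(bio):
    words = set(bio.lower().replace("#", "").split())
    scores = {s: len(words & kw) for s, kw in SECTOR_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"

def audience_shares(bios):
    counts = {}
    for bio in bios:
        sector = classify_bio(bio)
        counts[sector] = counts.get(sector, 0) + 1
    return {s: n / len(bios) for s, n in counts.items()}

bios = ["PhD student, genomics researcher",
        "Science journalist and editor",
        "#MentalHealth advocate",
        "Dog lover"]
print(audience_shares(bios))
```

A topic model replaces the fixed keyword sets with sectors learned from the bios themselves, which is what lets the study discover unexpected audiences rather than only predefined ones.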
Improving big citizen science data: Moving beyond haphazard sampling
Citizen science is mainstream: millions of people contribute data to a growing array of citizen science projects annually, forming massive datasets that will drive research for years to come. Many citizen science projects implement a "leaderboard" framework, ranking the contributions based on number of records or species, encouraging further participation. But is every data point equally "valuable"? Citizen scientists collect data with distinct spatial and temporal biases, leading to unfortunate gaps and redundancies, which create statistical and informational problems for downstream analyses. Up to this point, the haphazard structure of the data has been seen as an unfortunate but unchangeable aspect of citizen science data. However, we argue here that this issue can actually be addressed: we provide a very simple, tractable framework that could be adapted by broadscale citizen science projects to allow citizen scientists to optimize the marginal value of their efforts, increasing the overall collective knowledge.
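The marginal-value idea can be sketched as steering a volunteer toward the survey cell where one more record adds the most information. The scoring rule below (1 / (1 + existing records), so heavily sampled cells are down-weighted) is a hypothetical stand-in, not the paper's actual framework.

```python
# Toy marginal-value sampling: prefer the cell with the fewest existing
# records, since a new observation there is least redundant.
def marginal_value(records_in_cell):
    return 1 / (1 + records_in_cell)

def best_cell(record_counts):
    # record_counts: {cell_id: number of existing records}
    return max(record_counts, key=lambda c: marginal_value(record_counts[c]))

counts = {"urban_park": 250, "suburban_creek": 40, "remote_bog": 2}
print(best_cell(counts))  # → 'remote_bog'
```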
A complete data processing workflow for cryo-ET and subtomogram averaging
Electron cryotomography is currently the only method capable of visualizing cells in three dimensions at nanometer resolutions. While modern instruments produce massive amounts of tomography data containing extremely rich structural information, data processing is very labor intensive and the results are often limited by the skills of the personnel rather than the data. We present an integrated workflow that covers the entire tomography data processing pipeline, from automated tilt series alignment to subnanometer resolution subtomogram averaging. Resolution enhancement is made possible through the use of per-particle per-tilt contrast transfer function correction and alignment. The workflow greatly reduces human bias, increases throughput and more closely approaches data-limited resolution for subtomogram averaging in both purified macromolecules and cells.
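An end-to-end workflow of the shape described – tilt-series alignment, then per-particle per-tilt CTF correction, then subtomogram averaging – can be sketched as a chain of stages, each consuming the previous stage's output. The stage names follow the abstract, but the function bodies here are placeholders, not the authors' implementation.

```python
# Placeholder pipeline: stages are composed in order, and each stage wraps
# the previous result so the provenance of each step is visible.
def align_tilt_series(tilt_series):
    return {"aligned": tilt_series}

def ctf_correct(aligned):
    # stands in for per-particle, per-tilt contrast transfer function correction
    return {"ctf_corrected": aligned}

def subtomogram_average(particles):
    return {"average": particles}

def run_pipeline(data, stages):
    for stage in stages:
        data = stage(data)
    return data

result = run_pipeline(["tilt_+60", "tilt_0", "tilt_-60"],
                      [align_tilt_series, ctf_correct, subtomogram_average])
print("average" in result)  # → True
```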
A Review of Feature Reduction Techniques in Neuroimaging
Machine learning techniques are increasingly being used in making relevant predictions and inferences on individual subjects' neuroimaging scan data. Previous studies have mostly focused on categorical discrimination of patients and matched healthy controls and, more recently, on prediction of individual continuous variables such as clinical scores or age. However, these studies are greatly hampered by the large number of predictor variables (voxels) and low number of observations (subjects), also known as the curse-of-dimensionality or small-n-large-p problem. As a result, feature reduction techniques such as feature subset selection and dimensionality reduction are used to remove redundant predictor variables and experimental noise, a process which mitigates the curse-of-dimensionality and small-n-large-p effects. Feature reduction is an essential step before training a machine learning model to avoid overfitting, thereby improving model prediction accuracy and generalization ability. In this review, we discuss feature reduction techniques used with machine learning in neuroimaging studies.
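One of the simplest members of the feature-subset-selection family the review surveys is variance-based selection: keep the k features (voxels) with the highest variance across subjects and drop near-constant ones. The pure-Python sketch below is a minimal illustration under that assumption; real studies would use dedicated tooling.

```python
# Variance-based feature subset selection for small-n-large-p data:
# rank features by variance across subjects and keep the top k.
def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def top_k_features(X, k):
    # X: list of subjects, each a list of feature values (n small, p large)
    n_features = len(X[0])
    variances = [variance([row[j] for row in X]) for j in range(n_features)]
    ranked = sorted(range(n_features), key=lambda j: variances[j], reverse=True)
    return sorted(ranked[:k])  # indices of the retained features

X = [[0.0, 5.0, 1.0],
     [0.0, 9.0, 1.1],
     [0.0, 1.0, 0.9]]
print(top_k_features(X, 2))  # → [1, 2]  (feature 0 is constant, so dropped)
```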