Search Results
Filters:
  • Discipline
  • Is Peer Reviewed
  • Reading Level
  • Content Type
  • Year (From / To)
  • More Filters: Item Type, Is Full-Text Available, Subject, Publisher, Source, Donor, Language, Place of Publication, Contributors, Location
211,789 results for "public software"
Prospects for combined analyses of hadronic emission from γ-ray sources in the Milky Way with CTA and KM3NeT
The Cherenkov Telescope Array and the KM3NeT neutrino telescopes are major upcoming facilities in the fields of γ-ray and neutrino astronomy, respectively. Possible simultaneous production of γ rays and neutrinos in astrophysical accelerators of cosmic-ray nuclei motivates a combination of their data. We assess the potential of a combined analysis of CTA and KM3NeT data to determine the contribution of hadronic emission processes in known Galactic γ-ray emitters, comparing this result to the cases of two separate analyses. In doing so, we demonstrate the capability of Gammapy, an open-source software package for the analysis of γ-ray data, to also process data from neutrino telescopes. For a selection of prototypical γ-ray sources within our Galaxy, we obtain models for primary proton and electron spectra in the hadronic and leptonic emission scenario, respectively, by fitting published γ-ray spectra. Using these models and instrument response functions for both detectors, we employ the Gammapy package to generate pseudo data sets, where we assume 200 h of CTA observations and 10 years of KM3NeT detector operation. We then apply a three-dimensional binned likelihood analysis to these data sets, separately for each instrument and jointly for both. We find that the largest benefit of the combined analysis lies in the possibility of a consistent modelling of the γ-ray and neutrino emission. Assuming a purely leptonic scenario as input, we obtain, for the most favourable source, an average expected 68% credible interval that constrains the contribution of hadronic processes to the observed γ-ray emission to below 15%.
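The joint analysis described above relies on Gammapy's ability to fit one set of source models across several datasets at once. Below is a minimal, hedged sketch of that pattern: the dataset file names, model parameters, and source position are hypothetical placeholders, and it assumes pseudo datasets for each instrument have already been prepared with the respective instrument response functions.

```python
# Hedged sketch of a joint binned-likelihood fit in Gammapy (v1.x API).
# File names and model parameters below are hypothetical placeholders.
from gammapy.datasets import Datasets, MapDataset
from gammapy.modeling import Fit
from gammapy.modeling.models import (
    PointSpatialModel,
    PowerLawSpectralModel,
    SkyModel,
)

# Pseudo datasets, one per instrument, each carrying its own IRFs
cta = MapDataset.read("cta_pseudo_dataset.fits.gz", name="cta")
km3net = MapDataset.read("km3net_pseudo_dataset.fits.gz", name="km3net")

# A single physical source model shared by both instruments
model = SkyModel(
    spectral_model=PowerLawSpectralModel(
        index=2.3, amplitude="1e-12 cm-2 s-1 TeV-1", reference="1 TeV"
    ),
    spatial_model=PointSpatialModel(lon_0="0 deg", lat_0="0 deg", frame="galactic"),
    name="source",
)

datasets = Datasets([cta, km3net])
datasets.models = [model]

# The joint fit sums the per-dataset likelihood terms, so both
# instruments constrain the same spectral parameters.
result = Fit().run(datasets=datasets)
print(result)
```

Running the same fit on a one-element Datasets object corresponds to the paper's separate per-instrument analyses.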
Blockchain and crypto currency: building a high quality marketplace for crypto data
This open access book contributes to the creation of a cyber ecosystem supported by blockchain technology in which technology and people can coexist in harmony. Blockchains have shown that trusted records, or ledgers, of permanent data can be stored on the Internet in a decentralized manner. Decentralizing the recording process is expected to significantly reduce the cost of transactions. By creating a ledger of data, a blockchain makes it possible to designate the owner of each piece of data, to trade data pieces, and to market them. This book examines the formation of markets for various types of data through the theory of market quality proposed and developed by M. Yano. Blockchains are expected to give data itself the status of a new production factor. By bringing ownership of data into the hands of data producers, blockchains can reduce the possibility of information leakage, enhance the sharing and use of IoT data, and prevent data monopoly and misuse. The industry will have a bright future as better technology is developed and a healthy infrastructure is created to support the blockchain market.
Machine Learning and Deep Learning frameworks and libraries for large-scale data mining: a survey
The combined impact of new computing resources and techniques, together with an increasing avalanche of large datasets, is transforming many research areas and may lead to technological breakthroughs that can be used by billions of people. In recent years, Machine Learning and especially its subfield Deep Learning have seen impressive advances. Techniques developed within these two fields are now able to analyze and learn from huge amounts of real-world examples in disparate formats. While the number of Machine Learning algorithms is extensive and growing, their implementations through frameworks and libraries are also extensive and growing. Software development in this field is fast-paced, with a large number of open-source packages coming from academia, industry, start-ups, and wider open-source communities. This survey presents a comprehensive overview of, and comparisons between, recent cutting-edge Artificial Intelligence software, along with trends in its development and usage. It also provides an overview of massive parallelism support that is capable of scaling computation effectively and efficiently in the era of Big Data.
Orchestrating single-cell analysis with Bioconductor
Recent technological advancements have enabled the profiling of a large number of genome-wide features in individual cells. However, single-cell data present unique challenges that require the development of specialized methods and software infrastructure to successfully derive biological insights. The Bioconductor project has rapidly grown to meet these demands, hosting community-developed open-source software distributed as R packages. Featuring state-of-the-art computational methods, standardized data infrastructure and interactive data visualization tools, this Perspective presents an overview and online book (https://osca.bioconductor.org) of single-cell methods released as part of the Bioconductor project, for prospective users and developers.
A spectrum of free software tools for processing the VCF variant call format: vcflib, bio-vcf, cyvcf2, hts-nim and slivar
Since its introduction in 2011, the variant call format (VCF) has been widely adopted for processing DNA and RNA variants in practically all population studies, as well as in somatic and germline mutation studies. The VCF format can represent single nucleotide variants, multi-nucleotide variants, insertions and deletions, and simple structural variants called and anchored against a reference genome. Here we present a spectrum of over 125 useful, complementary free and open source software tools and libraries, which we wrote and made available through the vcflib, bio-vcf, cyvcf2, hts-nim and slivar projects. These tools are applied for comparison, filtering, normalisation, smoothing and annotation of VCF, as well as for output of statistics, visualisation, and transformations of variant files. They run every day in critical biomedical pipelines and countless shell scripts. Our tools are part of the wider bioinformatics ecosystem, and we highlight best practices. We briefly discuss the design of VCF, lessons learnt, and how we can address more complex variation through pangenome graph formats, variation that cannot easily be represented by the VCF format.
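As an illustration of how these libraries are used in pipelines, here is a short, hedged example with cyvcf2 that streams a VCF and keeps only PASS records above a quality cutoff; the file names and the QUAL threshold of 30 are arbitrary choices for the sketch, not values from the paper.

```python
# Minimal cyvcf2 filtering sketch; input/output names and the QUAL
# cutoff are illustrative only.
from cyvcf2 import VCF, Writer

vcf = VCF("input.vcf.gz")
out = Writer("filtered.vcf", vcf)  # reuses the input file's header

for variant in vcf:
    # In cyvcf2, FILTER is None when the record is PASS
    if variant.FILTER is None and variant.QUAL is not None and variant.QUAL >= 30:
        out.write_record(variant)

out.close()
vcf.close()
```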
Using Digital Image Analysis to Quantify Small Arthropod Vectors
Quantifying arthropod vectors can be a time-consuming process. Here, we describe a technique to count large samples of small arthropods using ImageJ. ImageJ is open-source image-processing software, produced by the National Institutes of Health, with a straightforward interface that has proven useful in quantifying small organisms (i.e., cells, pollen, eggs). In 2017, we deployed CDC light traps baited with carbon dioxide among seven sites to capture black flies (Diptera: Simuliidae). Samples of the captured specimens were photographed, and then quantified manually and automatically using ImageJ. We compared the accuracy of three types of automated counts to manual counts of black flies using an information-theoretic approach. We found that changing the particle size produced counts closest to those obtained by manual counts. Even over a large range of values, from tens to thousands of flies, our automated counts were often identical to and almost always within 5% of the manual counts. When different, automated counts were usually slightly less than manual counts, and thus conservative estimates. This automated technique is simple, repeatable, requires minimal training, and can reduce the time needed to quantify small arthropods such as black flies. Key words: Simuliidae, black flies, ImageJ, arthropod counts, automatic counts
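ImageJ's particle counting is driven through its GUI, but the underlying idea (threshold the photograph, then count connected regions above a minimum particle size) is easy to sketch. The following hedged Python example uses scikit-image as a stand-in for that workflow; the file name, the dark-on-light assumption, and the minimum size of 50 pixels are illustrative, not the study's calibrated settings.

```python
# Hedged stand-in for ImageJ's threshold + Analyze Particles workflow,
# written with scikit-image; parameters are illustrative only.
from skimage import filters, io, measure, morphology

img = io.imread("trap_sample.png", as_gray=True)  # hypothetical photo

# Otsu threshold; assume dark flies on a light background
binary = img < filters.threshold_otsu(img)

# Drop specks below the minimum particle size (cf. the paper's finding
# that tuning particle size matters most)
binary = morphology.remove_small_objects(binary, min_size=50)

labels = measure.label(binary)           # connected components
count = len(measure.regionprops(labels))
print(f"Automated count: {count} flies")
```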
EntropyHub: An open-source toolkit for entropic time series analysis
An increasing number of studies across many research fields, from biomedical engineering to finance, are employing measures of entropy to quantify the regularity, variability or randomness of time series and image data. Entropy, as it relates to information theory and dynamical systems theory, can be estimated in many ways, with newly developed methods being continuously introduced in the scientific literature. Despite the growing interest in entropic time series and image analysis, there is a shortage of validated, open-source software tools that enable researchers to apply these methods. To date, packages for performing entropy analysis are often run using graphical user interfaces, lack the necessary supporting documentation, or do not include functions for more advanced entropy methods, such as cross-entropy, multiscale cross-entropy or bidimensional entropy. In light of this, this paper introduces EntropyHub, an open-source toolkit for performing entropic time series analysis in MATLAB, Python and Julia. EntropyHub (version 0.1) provides an extensive range of more than forty functions for estimating cross-, multiscale, multiscale cross-, and bidimensional entropy, each including a number of keyword arguments that allow the user to specify multiple parameters in the entropy calculation. Instructions for installation, descriptions of function syntax, and examples of use are fully detailed in the supporting documentation, available on the EntropyHub website, www.EntropyHub.xyz. Compatible with Windows, Mac and Linux operating systems, EntropyHub is hosted on GitHub, as well as on the native package repositories for MATLAB, Python and Julia. The goal of EntropyHub is to integrate the many established entropy methods into one complete resource, providing tools that make advanced entropic time series analysis straightforward and reproducible.
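To give a sense of the keyword-argument style the abstract describes, here is a hedged sketch using EntropyHub's Python package to estimate sample entropy of a toy signal; the signal, embedding dimension m, and tolerance r are arbitrary illustrative choices.

```python
# Toy sample-entropy estimate with EntropyHub's Python package.
# The signal is synthetic; m and r are common illustrative choices.
import numpy as np
import EntropyHub as EH

rng = np.random.default_rng(0)
sig = rng.standard_normal(1000)  # arbitrary toy time series

# Sample entropy; A and B are the matched-template counts
# underlying the estimate
Samp, A, B = EH.SampEn(sig, m=2, r=0.2 * np.std(sig))
print(Samp)
```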
Fiji: an open-source platform for biological-image analysis
Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis; it updates the underlying ImageJ architecture and adds modern software design elements to expand the capabilities of the platform. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between the computer science and biology research communities.
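Fiji's scripting support means ImageJ commands can be driven from short scripts rather than the GUI. A hedged Jython (Python-syntax) sketch of a typical segment-and-count script follows; the sample image URL and parameter strings are illustrative placeholders.

```python
# Jython script for Fiji's Script Editor; uses the classic ImageJ1 API.
# Image source and parameter strings are illustrative placeholders.
from ij import IJ

imp = IJ.openImage("https://imagej.nih.gov/ij/images/blobs.gif")
IJ.run(imp, "Gaussian Blur...", "sigma=2")   # denoise
IJ.setAutoThreshold(imp, "Default")          # automatic threshold
IJ.run(imp, "Convert to Mask", "")           # binarize
IJ.run(imp, "Analyze Particles...", "size=50-Infinity summarize")
imp.show()
```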
PyGaze: An open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments
The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility. PyGaze can be used for visual and auditory stimulus presentation; for response collection via keyboard, mouse, joystick, and other external hardware; and for the online detection of eye movements using a custom algorithm. A wide range of eyetrackers of different brands (EyeLink, SMI, and Tobii systems) is supported. The novelty of PyGaze lies in providing an easy-to-use layer on top of the many different software libraries that are required for implementing eyetracking experiments. Essentially, PyGaze is a software bridge for eyetracking research.
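The "least possible effort" claim is easiest to see in code. Below is a hedged sketch of a minimal PyGaze session built from the toolbox's documented Display, Screen, and EyeTracker classes; it assumes the tracker brand and display settings are chosen through PyGaze's configuration constants rather than in the script itself.

```python
# Minimal PyGaze session sketch; assumes tracker type and display
# settings come from PyGaze's constants/defaults.
from pygaze.display import Display
from pygaze.screen import Screen
from pygaze.eyetracker import EyeTracker

disp = Display()
scr = Screen()
tracker = EyeTracker(disp)

tracker.calibrate()                  # run the tracker's calibration routine

scr.draw_fixation(fixtype='cross')   # draw a fixation cross
disp.fill(scr)
disp.show()

tracker.start_recording()
tracker.log("fixation onset")        # timestamped message in the data file
# ... present stimuli and collect responses here ...
tracker.stop_recording()

tracker.close()
disp.close()
```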