Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
7,429 result(s) for "Software development tools"
Learning FPGAs : digital design for beginners with Mojo and Lucid HDL
by
Rajewski, Justin, author
in
Field programmable gate arrays -- Design and construction.
,
Electronic digital computers -- Design and construction.
,
Computers -- Circuits -- Design and construction.
2017
\"Learn how to design digital circuits with FPGAs (field-programmable gate arrays), the devices that reconfigure themselves to become the very hardware circuits you set out to program. With this practical guide, author Justin Rajewski shows you hands-on how to create FPGA projects, whether you're a programmer, engineer, product designer, or maker. You'll quickly go from the basics to designing your own processor. Designing digital circuits used to be a long and costly endeavor that only big companies could pursue. FPGAs make the process much easier, and now they're affordable enough even for hobbyists. If you're familiar with electricity and basic electrical components, this book starts simply and progresses through increasingly complex projects\"--Publisher's description.
Building the professional competence of future programmers using methods and tools of flexible development of software applications
by
Glazunova, Olena G.
,
Korolchuk, Valentyna I.
,
Parkhomenko, Oleksandra V.
in
Applications programs
,
Best practice
,
Education
2022
To maintain their professional expertise, modern programmers must constantly follow new technologies, learn new problem-solving methods (best practices), exchange experience, use auxiliary tools that accelerate the development process, and be able to work in a team while developing their knowledge and skills. The task of modern IT education is to meet the demands of the information technology market by giving specialists proper training that provides them with the relevant professional competences. The present paper analyzes modern methodologies and tools of flexible software development and defines the professional competences related to software development based on the higher education standard for Bachelor's-level specialists in the Software Engineering specialty. The novelty of the research lies in the justification of a competency-based approach to the training of future programmers. This approach involves the use of methods and tools for flexible development of software applications in three stages of project tasks of different types and complexity, formed in accordance with particular professional competences. In Phase 1, students studied flexible methodologies and tools for developing software applications in the Software Design academic discipline. In Phase 2, flexible methodologies and software development tools were used during academic and technological practical training; in particular, students carried out a group project according to the Scrum methodology, using Kanban approaches. In Phase 3, students worked individually on the Bachelor's thesis under the guidance of a teacher. The article describes the organization of the work process on the principles of flexible development and flexible learning and presents the results of experimental research, which showed an increase in the level of professional competences in software development. A statistical analysis of the results of the experiment has been carried out and their significance has been proved.
Journal Article
Bayesian reaction optimization as a tool for chemical synthesis
2021
Reaction optimization is fundamental to synthetic chemistry, from optimizing the yield of industrial processes to selecting conditions for the preparation of medicinal candidates [1]. Likewise, parameter optimization is omnipresent in artificial intelligence, from tuning virtual personal assistants to training social media and product recommendation systems [2]. Owing to the high cost associated with carrying out experiments, scientists in both areas set numerous (hyper)parameter values by evaluating only a small subset of the possible configurations. Bayesian optimization, an iterative response-surface-based global optimization algorithm, has demonstrated exceptional performance in the tuning of machine learning models [3]. Bayesian optimization has also recently been applied in chemistry [4-9]; however, its application and assessment for reaction optimization in synthetic chemistry have not been investigated. Here we report the development of a framework for Bayesian reaction optimization and an open-source software tool that allows chemists to easily integrate state-of-the-art optimization algorithms into their everyday laboratory practices. We collect a large benchmark dataset for a palladium-catalysed direct arylation reaction, perform a systematic study of Bayesian optimization compared to human decision-making in reaction optimization, and apply Bayesian optimization to two real-world optimization efforts (Mitsunobu and deoxyfluorination reactions). Benchmarking is accomplished via an online game that links the decisions made by expert chemists and engineers to real experiments run in the laboratory. Our findings demonstrate that Bayesian optimization outperforms human decision-making in both average optimization efficiency (number of experiments) and consistency (variance of outcome against initially available data). Overall, our studies suggest that adopting Bayesian optimization methods into everyday laboratory practices could facilitate more efficient synthesis of functional chemicals by enabling better-informed, data-driven decisions about which experiments to run.
Bayesian optimization is applied in chemical synthesis towards the optimization of various organic reactions and is found to outperform scientists in both average optimization efficiency and consistency.
Journal Article
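The abstract above describes Bayesian optimization as an iterative loop: fit a response-surface (surrogate) model to the experiments run so far, use an acquisition function to choose the next experiment, and repeat. The Python sketch below illustrates that loop over a discrete set of candidate reaction conditions using a Gaussian-process surrogate and expected improvement. It is not the open-source tool reported in the paper; the candidate grid, the run_experiment stand-in, and the experiment budget are invented for illustration.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Hypothetical encoded reaction conditions (e.g. temperature, equivalents, loading).
candidates = rng.uniform(0.0, 1.0, size=(200, 3))

def run_experiment(x):
    """Stand-in for a real yield measurement; replace with laboratory data."""
    return float(-np.sum((x - 0.6) ** 2) + 0.05 * rng.normal())

# Start from a small random design, then iterate: fit the surrogate, pick the
# candidate with the highest expected improvement, "run" it, and repeat.
observed_idx = list(rng.choice(len(candidates), size=5, replace=False))
y = [run_experiment(candidates[i]) for i in observed_idx]

for _ in range(15):  # hypothetical experiment budget
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(candidates[observed_idx], y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = max(y)
    improvement = mu - best
    with np.errstate(divide="ignore", invalid="ignore"):
        z = improvement / sigma
        ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)
    ei[np.isin(np.arange(len(candidates)), observed_idx)] = -np.inf  # no repeats
    nxt = int(np.argmax(ei))
    observed_idx.append(nxt)
    y.append(run_experiment(candidates[nxt]))

print("best simulated yield:", max(y))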
MetPy
by
Marsh, Patrick T.
,
Leeman, John R.
,
Manser, Russell P.
in
Algorithms
,
Arrays
,
Atmospheric sciences
2022
MetPy is an open-source, Python-based package for meteorology, providing domain-specific functionality built extensively on top of the robust scientific Python software stack, which includes libraries like NumPy, SciPy, Matplotlib, and xarray. The goal of the project is to bring the weather analysis capabilities of GEMPAK (and similar software tools) into a modern computing paradigm. MetPy strives to employ best practices in its development, including software tests, continuous integration, and automated publishing of web-based documentation. As such, MetPy represents a sustainable, long-term project that fills a need for the meteorological community. MetPy’s development is substantially driven by its user community, both through feedback on a variety of open, public forums like Stack Overflow, and through code contributions facilitated by the GitHub collaborative software development platform. MetPy has recently seen the release of version 1.0, with robust functionality for analyzing and visualizing meteorological datasets. While previous versions of MetPy have already seen extensive use, the 1.0 release represents a significant milestone in terms of completeness and a commitment to long-term support for the programming interfaces. This article provides an overview of MetPy’s suite of capabilities, including its use of labeled arrays and physical unit information as its core data model, unit-aware calculations, cross sections, skew T and GEMPAK-like plotting, station model plots, and support for parsing a variety of meteorological data formats. The general road map for future planned development for MetPy is also discussed.
Journal Article
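As a quick illustration of the unit-aware calculation interface the MetPy article describes, the sketch below computes a dewpoint and a wind speed from quantities that carry physical units. The function names follow MetPy 1.x as recalled here (dewpoint_from_relative_humidity, wind_speed); treat them as assumptions and check the current documentation, and note that the input values are made up.

import metpy.calc as mpcalc
from metpy.units import units

# Quantities carry physical units, so conversions are explicit and checked.
temperature = 25.0 * units.degC
relative_humidity = 65.0 * units.percent
dewpoint = mpcalc.dewpoint_from_relative_humidity(temperature, relative_humidity)

u = 10.0 * units('m/s')
v = 5.0 * units('m/s')
speed = mpcalc.wind_speed(u, v)

# Results are unit-aware quantities and can be converted on output.
print(dewpoint.to(units.degC), speed.to(units.knots))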
LArSoft: toolkit for simulation, reconstruction and analysis of liquid argon TPC neutrino detectors
2017
LArSoft is a set of detector-independent software tools for the simulation, reconstruction and analysis of data from liquid argon (LAr) neutrino experiments. The common features of LAr time projection chambers (TPCs) enable sharing of algorithm code across detectors of very different size and configuration. LArSoft is currently used in production simulation and reconstruction by the ArgoNeuT, DUNE, LArIAT, MicroBooNE, and SBND experiments. The software suite offers a wide selection of algorithms and utilities, including those for associated photo-detectors and the handling of auxiliary detectors outside the TPCs. Available algorithms cover the full range of simulation and reconstruction, from raw waveforms to high-level reconstructed objects, event topologies and classification. The common code within LArSoft is contributed by adopting experiments, which also provide detector-specific geometry descriptions and code for the treatment of electronic signals. LArSoft is also a collaboration of experiments, Fermilab and associated software projects, which cooperate in setting requirements, priorities, and schedules. In this talk, we outline the general architecture of the software and its interaction with external libraries and detector-specific code. We also describe the dynamics of LArSoft software development between the contributing experiments, the projects supporting the software infrastructure that LArSoft relies on, and the core LArSoft support project.
Journal Article
Some bibliometric procedures for analyzing and evaluating research fields
by
Martínez, M. Ángeles
,
Cobo, M J
,
Gutiérrez-Salcedo, M
in
Bibliographies
,
Bibliometrics
,
Impact analysis
2018
Nowadays, measuring the quality and quantity of scientific production is an important necessity, since almost every research assessment decision depends, to a great extent, upon the scientific merits of the researchers involved. To that end, many different indicators have been proposed in the literature. Two main bibliometric procedures to explore a research field have been defined: performance analysis and science mapping. On the one hand, performance analysis aims at evaluating groups of scientific actors (countries, universities, departments, researchers) and the impact of their activity on the basis of bibliographic data. On the other hand, knowledge about the intellectual, social or conceptual structure of a research field can be extracted by means of science mapping analysis based on bibliographic networks. In this paper, we introduce some of the most important techniques and software tools to analyze the impact of a research field and its scientific structures. In particular, four bibliometric indices (h, g, hg and q²), the h-classics approach to identifying the classic papers of a research field, and three free science mapping software tools (CitNetExplorer, SciMAT and VOSviewer) are presented.
Journal Article
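Two of the performance indicators named in the abstract above, the h- and g-indices, reduce to simple rules over a ranked list of per-paper citation counts, as the short sketch below shows. The citation counts are invented, and the g-index variant shown is the basic one restricted to the papers actually published.

def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """Largest g such that the g most-cited papers total >= g**2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank ** 2:
            g = rank
    return g

papers = [42, 18, 12, 9, 7, 7, 4, 2, 1, 0]  # hypothetical citation counts
print(h_index(papers), g_index(papers))     # prints: 6 9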
How to treat uncertainties in life cycle assessment studies?
by
Baustert, Paul
,
Othoniel, Benoit
,
Igos, Elorri
in
Computer programs
,
Computer simulation
,
Context
2019
Purpose
The use of life cycle assessment (LCA) as a decision support tool can be hampered by the numerous uncertainties embedded in the calculation. The treatment of uncertainty is necessary to increase the reliability and credibility of LCA results. The objective is to provide an overview of the methods to identify, characterize, propagate (uncertainty analysis), understand the effects (sensitivity analysis), and communicate uncertainty in order to propose recommendations to a broad public of LCA practitioners.
Methods
This work was carried out via a literature review and an analysis of LCA tool functionalities. In order to facilitate the identification of uncertainty, its location within an LCA model was distinguished between quantity (any numerical data), model structure (relationships structure), and context (criteria chosen within the goal and scope of the study). The methods for uncertainty characterization, uncertainty analysis, and sensitivity analysis were classified according to the information provided, their implementation in LCA software, the time and effort required to apply them, and their reliability and validity. This review led to the definition of recommendations on three levels: basic (low efforts with LCA software), intermediate (significant efforts with LCA software), and advanced (significant efforts with non-LCA software).
Results and discussion
For the basic recommendations, minimum and maximum values (quantity uncertainty) and alternative scenarios (model structure/context uncertainty) are defined for critical elements in order to estimate the range of results. Result sensitivity is analyzed via one-at-a-time variations (with realistic ranges of quantities) and scenario analyses. Uncertainty should be discussed at least qualitatively in a dedicated paragraph. For the intermediate level, the characterization can be refined with probability distributions and an expert review for scenario definition. Uncertainty analysis can then be performed with the Monte Carlo method for the different scenarios. Quantitative information should appear in inventory tables and result figures. Finally, advanced practitioners can screen uncertainty sources more exhaustively, include correlations, estimate model error with validation data, and perform Latin hypercube sampling and global sensitivity analysis.
Conclusions
Through this pedagogic review of the methods and practical recommendations, the authors aim to increase the knowledge of LCA practitioners related to uncertainty and facilitate the application of treatment techniques. To continue in this direction, further research questions should be investigated (e.g., on the implementation of fuzzy logic and model uncertainty characterization) and the developers of databases, LCIA methods, and software tools should invest efforts in better implementing and treating uncertainty in LCA.
Journal Article
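The "intermediate level" recommendation above (probability distributions plus Monte Carlo propagation) can be illustrated with a toy impact calculation such as the sketch below. The inventory flows, distributions, and characterisation factors are invented; in a real study they would come from the LCA database and software being used.

import numpy as np

rng = np.random.default_rng(42)
n_runs = 10_000

# Toy inventory per functional unit, characterised by distributions rather
# than point values: electricity use (kWh) and diesel use (MJ).
electricity = rng.triangular(left=0.8, mode=1.0, right=1.3, size=n_runs)
diesel = rng.lognormal(mean=np.log(2.0), sigma=0.15, size=n_runs)

# Toy characterisation factors (kg CO2-eq per unit of each flow).
CF_ELECTRICITY = 0.45
CF_DIESEL = 0.075

impact = electricity * CF_ELECTRICITY + diesel * CF_DIESEL  # kg CO2-eq

low, median, high = np.percentile(impact, [2.5, 50, 97.5])
print(f"GWP: {median:.3f} kg CO2-eq (95% interval {low:.3f}-{high:.3f})")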
A review on methods and software for fuzzy cognitive maps
2019
Fuzzy cognitive maps (FCMs) keep growing in popularity within the scientific community. However, despite substantial advances in the theory and applications of FCMs, there is a lack of an up-to-date, comprehensive presentation of the state-of-the-art in this domain. In this review study we are filling that gap. First, we present basic FCM concepts and analyze their static and dynamic properties, and next we elaborate on existing algorithms used for learning the FCM structure. Second, we provide a goal-driven overview of numerous theoretical developments recently reported in this area. Moreover, we consider the application of FCMs to time series forecasting and classification. Finally, in order to support the readers in their own research, we provide an overview of the existing software tools enabling the implementation of both existing FCM schemes as well as prospective theoretical and/or practical contributions.
Journal Article
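For readers unfamiliar with the dynamics the review analyzes, the sketch below iterates a small fuzzy cognitive map to a fixed point: concept activations are propagated through a signed weight matrix and squashed by a sigmoid transfer function. The three-concept weight matrix and initial activations are invented, and this is only one of several update rules used in the FCM literature.

import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

# weights[j, i] = causal influence of concept j on concept i, in [-1, 1]
weights = np.array([
    [0.0,  0.6, -0.3],
    [0.4,  0.0,  0.7],
    [0.0, -0.5,  0.0],
])
state = np.array([0.8, 0.2, 0.5])  # initial activations in [0, 1]

for step in range(100):
    # Add the previous activation ("memory") to the weighted inputs, then squash.
    new_state = sigmoid(state + state @ weights)
    if np.max(np.abs(new_state - state)) < 1e-5:  # converged to a fixed point
        break
    state = new_state

print(f"steady state after {step} iterations:", np.round(state, 3))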
TSEBRA: transcript selector for BRAKER
2021
Background
BRAKER is a suite of automatic pipelines, BRAKER1 and BRAKER2, for the accurate annotation of protein-coding genes in eukaryotic genomes. Each pipeline trains statistical models of protein-coding genes based on provided evidence and then predicts protein-coding genes in genomic sequences using both the extrinsic evidence and the statistical models. For training and prediction, BRAKER1 and BRAKER2 incorporate complementary extrinsic evidence: BRAKER1 uses only RNA-seq data, while BRAKER2 uses only a database of cross-species proteins. The BRAKER suite has so far not been able to reliably exceed the accuracy of BRAKER1 and BRAKER2 when incorporating both types of evidence simultaneously. Currently, for a novel genome project where both RNA-seq and protein data are available, the best option is to run both pipelines independently and to pick the output that is likely better. As a result, one type of extrinsic evidence remains unexploited.
Results
We present TSEBRA, a software tool that selects gene predictions (transcripts) from the sets generated by BRAKER1 and BRAKER2. TSEBRA uses a set of rules to compare scores of overlapping transcripts based on their support by RNA-seq and homologous protein evidence. We show in computational experiments on the genomes of 11 species that TSEBRA achieves higher accuracy than either BRAKER1 or BRAKER2 running alone and that TSEBRA compares favorably with the combiner tool EVidenceModeler.
Conclusion
TSEBRA is an easy-to-use and fast software tool. It can be used in concert with the BRAKER pipeline to generate a gene prediction set supported by both RNA-seq and homologous protein evidence.
Journal Article
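TSEBRA's selection logic is rule-based and considerably richer than what fits here, but the general idea of keeping, among overlapping predictions, the transcript with stronger extrinsic support can be sketched as below. The Transcript records and the single combined support score are hypothetical simplifications, not TSEBRA's actual data model or rules.

from dataclasses import dataclass

@dataclass
class Transcript:
    tid: str
    start: int
    end: int
    support: float  # e.g. fraction of introns backed by RNA-seq/protein hints

def select_transcripts(transcripts):
    """Greedily keep the best-supported transcript in each overlapping cluster."""
    kept = []
    for tx in sorted(transcripts, key=lambda t: t.start):
        if kept and tx.start <= kept[-1].end:   # overlaps the last kept transcript
            if tx.support > kept[-1].support:   # replace it if better supported
                kept[-1] = tx
        else:
            kept.append(tx)
    return kept

predictions = [
    Transcript("braker1.t1", 100, 900, support=0.9),
    Transcript("braker2.t1", 150, 950, support=0.6),    # overlaps, lower support
    Transcript("braker2.t2", 2000, 2600, support=0.8),  # separate locus, kept
]
print([t.tid for t in select_transcripts(predictions)])  # -> ['braker1.t1', 'braker2.t2']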
Nucleus segmentation across imaging experiments: the 2018 Data Science Bowl
by
Carpenter, Anne E
,
Karhohs, Kyle W
,
Heng, Cher Keng
in
Biomedical data
,
Biomedical materials
,
Cell culture
2019
Segmenting the nuclei of cells in microscopy images is often the first step in the quantitative analysis of imaging data for biological and biomedical applications. Many bioimage analysis tools can segment nuclei in images but need to be selected and configured for every experiment. The 2018 Data Science Bowl attracted 3,891 teams worldwide to make the first attempt to build a segmentation method that could be applied to any two-dimensional light microscopy image of stained nuclei across experiments, with no human interaction. Top participants in the challenge succeeded in this task, developing deep-learning-based models that identified cell nuclei across many image types and experimental conditions without the need to manually adjust segmentation parameters. This represents an important step toward configuration-free bioimage analysis software tools.
The 2018 Data Science Bowl challenged competitors to develop an accurate tool for segmenting stained nuclei from diverse light microscopy images. The winners deployed innovative deep-learning strategies to realize configuration-free segmentation.
Journal Article
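For contrast with the configuration-free goal of the challenge described above, the sketch below shows the kind of classical, hand-tuned nucleus segmentation that typically has to be reconfigured for every experiment. The threshold method, minimum object size, and input file name are assumptions; the winning deep-learning entries are far more involved.

from skimage import io, filters, morphology, measure

image = io.imread("nuclei.tif", as_gray=True)              # hypothetical input image

mask = image > filters.threshold_otsu(image)               # global intensity threshold
mask = morphology.remove_small_objects(mask, min_size=50)  # drop speckle (tuned per dataset)
labels = measure.label(mask)                               # connected components = candidate nuclei

print(f"found {labels.max()} candidate nuclei")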