Catalogue Search | MBRL
5,092 result(s) for "Chemistry, Analytic Data processing."
Notes on statistics and data quality for analytical chemists
by Thompson, Michael; Lowthian, Philip J
in Analytical Chemistry; Chemistry, Analytic; Chemometrics
2011
This book is intended to help analytical chemists feel comfortable with the more commonly used statistical operations and to help them make effective use of the results. Emphasis is placed on computer-based methods applied to measurement and to the quality of the resulting data. The book is aimed at analytical chemists working in industry but is also appropriate for students taking first degrees or an MSc in analytical chemistry.
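As an illustration of the kind of "commonly used statistical operations" this record refers to, here is a minimal Python sketch of a 95% confidence interval for replicate measurements. The data, units, and workflow are invented for the example and are not taken from the book:

```python
# Hypothetical replicate measurements of an analyte concentration (mg/L).
import statistics

replicates = [10.12, 10.08, 10.15, 10.03, 10.11, 10.09]

mean = statistics.mean(replicates)
s = statistics.stdev(replicates)        # sample standard deviation
n = len(replicates)
t95 = 2.571                             # two-tailed Student's t, 95%, df = n - 1 = 5
half_width = t95 * s / n ** 0.5         # confidence half-interval

print(f"mean = {mean:.3f} mg/L")
print(f"s    = {s:.4f} mg/L (RSD = {100 * s / mean:.2f}%)")
print(f"95% CI: {mean:.3f} +/- {half_width:.3f} mg/L")
```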
How to Use Excel® in Analytical Chemistry
by Levie, Robert de
in Chemistry, Analytic; Chemistry, Analytic -- Statistical methods -- Data processing; Chemometrics
2001
Because of their intuitive layout, extensive mathematical capabilities, and convenient graphics, spreadsheets provide an easy, straightforward route to scientific computing. This textbook for undergraduate and entry-level graduate chemistry and chemical engineering students uses Excel, the most powerful spreadsheet available, to explore and solve problems in general and chemical data analysis. This is the only up-to-date text on the use of spreadsheets in chemistry. The book discusses topics including statistics, chemical equilibria, pH calculations, titrations, and instrumental methods such as chromatography, spectrometry, and electroanalysis. It contains many examples of data analysis and uses spreadsheets for numerical simulations and for testing analytical procedures. It treats modern data-analysis methods, such as linear and non-linear least squares and methods based on Fourier transformation, in great detail. The book shows how matrix methods can serve as powerful tools in data analysis and how easily they are implemented on a spreadsheet, and it describes in detail how to simulate chemical kinetics on a spreadsheet. It also introduces the reader to VBA, the macro language of Microsoft Office, which lets the user import higher-level computer programs into the spreadsheet.
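The linear least-squares calibration fit that this book builds up in Excel can be sketched compactly in Python. The numbers below are invented for illustration, and this is only a minimal analogue of the spreadsheet approach, not the book's own material:

```python
# Unweighted linear least-squares calibration: fit signal = slope*conc + intercept,
# then invert the line to estimate an unknown sample's concentration.
import numpy as np

conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])          # standard concentrations
signal = np.array([0.02, 0.41, 0.80, 1.22, 1.58, 2.02])   # instrument response

# cov=True also returns the parameter covariance matrix for uncertainties.
(slope, intercept), cov = np.polyfit(conc, signal, deg=1, cov=True)
slope_sd, intercept_sd = np.sqrt(np.diag(cov))

unknown_signal = 1.11
unknown_conc = (unknown_signal - intercept) / slope

print(f"slope     = {slope:.4f} +/- {slope_sd:.4f}")
print(f"intercept = {intercept:.4f} +/- {intercept_sd:.4f}")
print(f"unknown   = {unknown_conc:.2f} (same units as the standards)")
```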
Metabolomics for personalized medicine: the input of analytical chemistry from biomarker discovery to point-of-care tests
by Junot, Christophe; Castelli, Florence Anne; Rosati, Giulio
in Analytical chemistry; Biomarkers; Customization
2022
Metabolomics refers to the large-scale detection, quantification, and analysis of small molecules (metabolites) in biological media. Although metabolomics, alone or combined with other omics data, has already demonstrated its relevance for patient stratification in research projects and clinical studies, much remains to be done to move the approach into clinical practice. This is especially true for personalized/precision medicine, which aims to stratify patients according to their risk of developing diseases and to tailor medical treatments to individual characteristics in order to improve efficacy and limit toxicity. In this review article, we discuss the main analytical-chemistry challenges that need to be addressed to foster the implementation of metabolomics in the clinic and the use of its data in personalized medicine. First, there are well-known issues with untargeted metabolomics workflows at the levels of data production (lack of standardization), metabolite identification (a small proportion of annotated features and identified metabolites), and data processing (from automatic detection of features to multi-omics data integration) that hamper the interoperability and reusability of metabolomics data. Furthermore, the outputs of metabolomics workflows are complex molecular signatures of a few tens of metabolites, often with small abundance variations, obtained with expensive laboratory equipment. These molecular signatures must therefore be simplified so that they can be produced and used in the field. This last point, still poorly addressed by the metabolomics community, may become crucial in the near future with the increasing availability of molecular signatures of medical relevance and the growing societal demand for participatory medicine.
Journal Article
At the Confluence of Artificial Intelligence and Edge Computing in IoT-Based Applications: A Review and New Perspectives
by Zedadra, Ouarda; Fortino, Giancarlo; Kouahla, Mohamed Nadjib
in Algorithms; Artificial intelligence; Big Data
2023
Given its advantages in low latency, fast response, context-aware services, mobility, and privacy preservation, edge computing has emerged as the key support for intelligent applications and 5G/6G Internet of Things (IoT) networks. This technology extends the cloud by providing intermediate services at the edge of the network, improving the quality of service for latency-sensitive applications. Many AI-based solutions using machine learning, deep learning, and swarm intelligence have exhibited high potential for intelligent cognitive sensing, intelligent network management, big-data analytics, and security enhancement in edge-based smart applications. Despite its many benefits, there are still concerns about whether intelligent edge computing has the capabilities required to deal with the computational complexity of machine-learning techniques for big IoT data analytics. Resource constraints of edge computing, distributed computing, efficient orchestration, and synchronization of resources are all factors that require attention for quality-of-service improvement and cost-effective development of edge-based smart applications. In this context, this paper explores the confluence of AI and the edge across many application domains in order to leverage the potential of existing research around these factors and to identify new perspectives. The confluence of edge computing and AI improves the quality of user experience in emergency situations, such as the Internet of Vehicles, where critical inaccuracies or delays can lead to damage and accidents; these are also the factors most studies have used to evaluate the success of edge-based applications. In this review, we first provide an in-depth analysis of the state of the art of AI in edge-based applications, with a focus on eight application areas: smart agriculture, smart environment, smart grid, smart healthcare, smart industry, smart education, smart transportation, and security and privacy. We then present a qualitative comparison that emphasizes the main objective of the confluence, the roles and use of artificial intelligence at the network edge, and the key enabling technologies for edge analytics. Open challenges, future research directions, and perspectives are then identified and discussed, and conclusions are drawn.
Journal Article
Lipidomics from sample preparation to data analysis: a primer
by Züllig, Thomas; Trötzmüller, Martin; Köfeler, Harald C
in Cell membranes; Data analysis; Data processing
2020
Lipids are amongst the most important organic compounds in living organisms, where they serve as building blocks for cellular membranes as well as energy storage and signaling molecules. Lipidomics is the science of the large-scale determination of individual lipid species, and the underlying analytical technology that is used to identify and quantify the lipidome is generally mass spectrometry (MS). This review article provides an overview of the crucial steps in MS-based lipidomics workflows, including sample preparation, either liquid–liquid or solid-phase extraction, derivatization, chromatography, ion-mobility spectrometry, MS, and data processing by various software packages. The associated concepts are discussed from a technical perspective as well as in terms of their application. Furthermore, this article sheds light on recent advances in the technology used in this field and its current limitations. Particular emphasis is placed on data quality assurance and adequate data reporting; some of the most common pitfalls in lipidomics are discussed, along with how to circumvent them.
Journal Article
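To make the identification step described in the lipidomics primer above concrete, here is a toy Python sketch of matching measured m/z values against a small reference list within a ppm tolerance. The library entries, masses, and tolerance are illustrative assumptions, not taken from the article; real lipidomics software uses far larger databases plus retention-time and MS/MS evidence:

```python
# Illustrative mini-library: lipid species -> approximate reference m/z.
LIBRARY = {
    "PC 34:1 [M+H]+": 760.5851,
    "PE 36:2 [M+H]+": 744.5538,
    "TG 52:2 [M+NH4]+": 876.8015,
}

def match_mz(measured_mz: float, tol_ppm: float = 5.0):
    """Return library entries whose reference m/z lies within tol_ppm of measured_mz."""
    hits = []
    for name, ref_mz in LIBRARY.items():
        ppm_error = (measured_mz - ref_mz) / ref_mz * 1e6
        if abs(ppm_error) <= tol_ppm:
            hits.append((name, round(ppm_error, 2)))
    return hits

for mz in (760.5855, 744.5601):
    print(mz, "->", match_mz(mz) or "no match within 5 ppm")
```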
Advancing untargeted metabolomics using data-independent acquisition mass spectrometry technology
2019
Metabolomics quantitatively measures metabolites in a given biological system and facilitates the understanding of physiological and pathological activities. With the recent advancement of mass spectrometry (MS) technology, liquid chromatography-mass spectrometry (LC-MS) with data-independent acquisition (DIA) has emerged as a powerful technology for untargeted metabolomics because of its capability to acquire all MS2 spectra and its high quantitative accuracy. In this trend article, we first introduce the basic principles of several common DIA techniques, including MSE, all-ion fragmentation (AIF), SWATH, and MSX. We then summarize and compare the data-analysis strategies used to process DIA-based untargeted metabolomics data, including metabolite identification and quantification. We believe the advantages of the DIA technique will enable its broad application in untargeted metabolomics.
Journal Article
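As a concrete illustration of the quantification side of such workflows, here is a minimal Python sketch of an extracted ion chromatogram (XIC): sum the intensity within an m/z window in each scan, then integrate the resulting trace. The scan data and tolerance are fabricated for the example and are not from the article:

```python
# Scans are (retention_time, [(mz, intensity), ...]) in centroid form.
def xic(scans, target_mz, tol_mz=0.01):
    """Extracted ion chromatogram: per-scan summed intensity near target_mz."""
    return [(rt, sum(i for mz, i in peaks if abs(mz - target_mz) <= tol_mz))
            for rt, peaks in scans]

def trapezoid_area(trace):
    """Integrate the (rt, intensity) trace by the trapezoid rule."""
    return sum(0.5 * (i0 + i1) * (rt1 - rt0)
               for (rt0, i0), (rt1, i1) in zip(trace, trace[1:]))

scans = [
    (60.0, [(180.0634, 1.0e3), (250.1000, 4.0e2)]),
    (61.0, [(180.0631, 8.0e3)]),
    (62.0, [(180.0637, 2.5e4)]),
    (63.0, [(180.0635, 9.0e3)]),
    (64.0, [(181.0667, 1.2e3)]),   # isotope peak, outside the window
]

trace = xic(scans, target_mz=180.0634)
print("XIC:", trace)
print("peak area:", trapezoid_area(trace))
```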
The MaxQuant computational platform for mass spectrometry-based shotgun proteomics
2016
MaxQuant is a platform for mass spectrometry-based proteomics data analysis. It includes a peptide database search engine, called Andromeda, and has expanding capability to handle data from most quantitative proteomics experiments.
MaxQuant is one of the most frequently used platforms for mass spectrometry (MS)-based proteomics data analysis. Since its first release in 2008, it has grown substantially in functionality and can be used in conjunction with more MS platforms. Here we present an updated protocol covering the most important basic computational workflows, including those designed for quantitative label-free proteomics, MS1-level labeling, and isobaric labeling techniques. This protocol presents a complete description of the parameters used in MaxQuant, as well as of the configuration options of its integrated search engine, Andromeda. This protocol update describes an adaptation of an existing protocol that substantially modifies the technique. Important concepts of shotgun proteomics and their implementation in MaxQuant are briefly reviewed, including different quantification strategies and the control of false-discovery rates (FDRs), as well as the analysis of post-translational modifications (PTMs). The MaxQuant output tables, which contain information about the quantification of proteins and PTMs, are explained in detail. Furthermore, we provide a short version of the workflow that is applicable to data sets with simple and standard experimental designs. The MaxQuant algorithms are efficiently parallelized across multiple processors and scale well from desktop computers to servers with many cores. The software is written in C# and is freely available at http://www.maxquant.org.
Journal Article
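The protocol above mentions control of false-discovery rates. A generic target-decoy FDR estimate, the basic idea behind that control, can be sketched as follows; MaxQuant's actual machinery is considerably more elaborate, and the scores below are fabricated:

```python
# Sort matches by score (higher = better); at each threshold estimate
# FDR ~= decoys / targets among the matches accepted so far.
def fdr_curve(matches):
    """matches: list of (score, is_decoy). Returns (score, estimated_fdr) pairs."""
    targets = decoys = 0
    out = []
    for score, is_decoy in sorted(matches, key=lambda m: m[0], reverse=True):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        out.append((score, decoys / max(targets, 1)))
    return out

# Fabricated peptide-spectrum match scores; decoys tend to score lower.
matches = [(9.1, False), (8.7, False), (8.2, False), (7.9, True),
           (7.5, False), (7.1, False), (6.8, True), (6.2, True)]

for score, fdr in fdr_curve(matches):
    print(f"score >= {score:.1f}: estimated FDR = {fdr:.2f}")
```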
Using MicrobiomeAnalyst for comprehensive statistical, functional, and meta-analysis of microbiome data
2020
MicrobiomeAnalyst is an easy-to-use, web-based platform for comprehensive analysis of common data outputs generated from current microbiome studies. It enables researchers and clinicians with little or no bioinformatics training to explore a wide variety of well-established methods for microbiome data processing, statistical analysis, functional profiling and comparison with public datasets or known microbial signatures. MicrobiomeAnalyst currently contains four modules: Marker-gene Data Profiling (MDP), Shotgun Data Profiling (SDP), Projection with Public Data (PPD), and Taxon Set Enrichment Analysis (TSEA). This protocol will first introduce the MDP module by providing a step-wise description of how to prepare, process and normalize data; perform community profiling; identify important features; and conduct correlation and classification analysis. We will then demonstrate how to perform predictive functional profiling and introduce several unique features of the SDP module for functional analysis. The last two sections will describe the key steps involved in using the PPD and TSEA modules for meta-analysis and visual exploration of the results. In summary, MicrobiomeAnalyst offers a one-stop shop that enables microbiome researchers to thoroughly explore their preprocessed microbiome data via intuitive web interfaces. The complete protocol can be executed in ~70 min.
This protocol details MicrobiomeAnalyst, a user-friendly, web-based platform for comprehensive statistical, functional, and meta-analysis of microbiome data.
Journal Article
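As a toy illustration of the normalization step mentioned in the protocol above, here is total-sum scaling (converting raw counts to per-sample relative abundances) in Python. The OTU table is fabricated, and this is just one simple normalization among the several that platforms like MicrobiomeAnalyst expose:

```python
# sample -> {taxon: raw read count}, invented for the example.
counts = {
    "S1": {"Bacteroides": 420, "Prevotella": 60, "Faecalibacterium": 120},
    "S2": {"Bacteroides": 80, "Prevotella": 700, "Faecalibacterium": 20},
}

def total_sum_scale(table):
    """Divide each taxon count by its sample's total, yielding relative abundances."""
    return {sample: {t: c / sum(taxa.values()) for t, c in taxa.items()}
            for sample, taxa in table.items()}

for sample, taxa in total_sum_scale(counts).items():
    print(sample, {t: round(v, 3) for t, v in taxa.items()})
```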
The strength in numbers: comprehensive characterization of house dust using complementary mass spectrometric techniques
by Moschet, Christoph; Covaci, Adrian; Letzel, Thomas
in Additives; Chlorinated hydrocarbons; Chromatography
2019
Untargeted analysis of a composite house dust sample has been performed as part of a collaborative effort to evaluate progress in the field of suspect and nontarget screening and to build an extensive database of organic indoor-environment contaminants. Twenty-one participants reported results, which were curated by the organizers of the collaborative trial. In total, nearly 2350 compounds were identified (18%) or tentatively identified (25% at confidence level 2 and 58% at confidence level 3), making the collaborative trial a success. However, a relatively small share (37%) of all compounds was reported by more than one participant, which shows that there is plenty of room for improvement in suspect and nontarget screening. An even smaller share (5%) of the total number of compounds was detected using both liquid chromatography-mass spectrometry (LC-MS) and gas chromatography-mass spectrometry (GC-MS); the two MS techniques are thus highly complementary. Most of the compounds were detected using LC with electrospray ionization (ESI) MS and comprehensive 2D GC (GC×GC) with atmospheric pressure chemical ionization (APCI) and electron ionization (EI), respectively. Collectively, the three techniques accounted for more than 75% of the reported compounds. Glycols, pharmaceuticals, pesticides, and various biogenic compounds dominated among the compounds reported by LC-MS participants, while hydrocarbons, hydrocarbon derivatives, chlorinated paraffins, and chlorinated biphenyls were primarily reported by GC-MS participants. Plastics additives, flavors and fragrances, and personal care products were reported by both LC-MS and GC-MS participants. It was concluded that the use of multiple analytical techniques is required for a comprehensive characterization of house dust contaminants. Several recommendations are given for improved suspect and nontarget screening of house dust and other indoor-environment samples, including the use of open-source data-processing tools; one such tool allowed provisional identification of almost 500 compounds that had not been reported by participants.
Journal Article