Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
19 result(s) for "protocol-update"
Protocol Update for large-scale genome and gene function analysis with the PANTHER classification system (v.14.0)
2019
The PANTHER classification system (http://www.pantherdb.org) is a comprehensive system that combines genomes, gene function classifications, pathways and statistical analysis tools to enable biologists to analyze large-scale genome-wide experimental data. The current system (PANTHER v.14.0) covers 131 complete genomes organized into gene families and subfamilies; evolutionary relationships between genes are represented in phylogenetic trees, multiple sequence alignments and statistical models (hidden Markov models (HMMs)). The families and subfamilies are annotated with Gene Ontology (GO) terms, and sequences are assigned to PANTHER pathways. A suite of tools has been built to allow users to browse and query gene functions and analyze large-scale experimental data with a number of statistical tests. PANTHER is widely used by bench scientists, bioinformaticians, computer scientists and systems biologists. Since the protocol for using this tool (v.8.0) was originally published in 2013, there have been substantial improvements and updates in the areas of data quality, data coverage, statistical algorithms and user experience. This Protocol Update provides detailed instructions on how to analyze genome-wide experimental data in the PANTHER classification system.

Here the authors provide an update to their 2013 protocol for using the PANTHER classification system, detailing how to analyze genome-wide experimental data with the newest version of PANTHER (v.14.0), with improvements in the areas of data quality, data coverage, statistical algorithms and user experience.
Journal Article
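The statistical tests PANTHER applies to gene lists are over-representation tests of the kind sketched below. This is an illustrative Fisher's exact test on an invented 2×2 contingency table, not PANTHER's actual implementation; the function name and counts are assumptions for the example.

```python
from scipy.stats import fisher_exact

def go_enrichment(hits_in_term, hits_total, bg_in_term, bg_total):
    """Odds ratio and one-sided p-value for over-representation of a GO
    term in an uploaded gene list relative to a genome background."""
    table = [
        [hits_in_term, hits_total - hits_in_term],
        [bg_in_term - hits_in_term,
         (bg_total - bg_in_term) - (hits_total - hits_in_term)],
    ]
    return fisher_exact(table, alternative="greater")

# Invented example: 40 of 200 uploaded genes carry the term,
# versus 500 of 20,000 genes in the reference genome.
odds, p = go_enrichment(40, 200, 500, 20000)
print(f"odds ratio = {odds:.2f}, p = {p:.3g}")
```

In practice a correction for multiple testing (PANTHER offers false-discovery-rate control) is applied across all tested terms.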
Creation and analysis of biochemical constraint-based models using the COBRA Toolbox v.3.0
2019
Constraint-based reconstruction and analysis (COBRA) provides a molecular mechanistic framework for integrative analysis of experimental molecular systems biology data and quantitative prediction of physicochemically and biochemically feasible phenotypic states. The COBRA Toolbox is a comprehensive desktop software suite of interoperable COBRA methods. It has found widespread application in biology, biomedicine, and biotechnology because its functions can be flexibly combined to implement tailored COBRA protocols for any biochemical network. This protocol is an update to the COBRA Toolbox v.1.0 and v.2.0. Version 3.0 includes new methods for quality-controlled reconstruction, modeling, topological analysis, strain and experimental design, and network visualization, as well as network integration of chemoinformatic, metabolomic, transcriptomic, proteomic, and thermochemical data. New multi-lingual code integration also enables an expansion in COBRA application scope via high-precision, high-performance, and nonlinear numerical optimization solvers for multi-scale, multi-cellular, and reaction kinetic modeling, respectively. This protocol provides an overview of all these new features and can be adapted to generate and analyze constraint-based models in a wide variety of scenarios. The COBRA Toolbox v.3.0 provides an unparalleled depth of COBRA methods.

The COBRA Toolbox provides quality-controlled reconstruction, modeling, topological analysis, and network visualization, as well as network integration of chemoinformatic, metabolomic, transcriptomic, proteomic, and thermochemical data.
Journal Article
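At its core, the constraint-based analysis described above reduces to a linear program over steady-state reaction fluxes. A minimal flux-balance sketch using SciPy rather than the COBRA Toolbox itself, on an invented three-reaction network (the network, bounds and objective are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows: metabolites A, B; columns: reactions v1..v3)
# v1: uptake -> A, v2: A -> B, v3: B -> biomass (exported)
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
bounds = [(0, 10), (0, 1000), (0, 1000)]  # uptake v1 capped at 10 flux units
c = [0, 0, -1]                            # linprog minimizes, so maximize v3

# Steady state: S @ v = 0 for all internal metabolites
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)           # expect [10, 10, 10]
```

The steady-state constraint forces v1 = v2 = v3, so the biomass flux is limited by the uptake bound; real COBRA models differ only in scale, with thousands of reactions and genome-derived stoichiometry.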
Optimizing the standardized assays for determining the catalytic activity and kinetics of peroxidase-like nanozymes
2024
Nanozymes are nanomaterials with enzyme-like catalytic properties. They are attractive reagents because they do not share the limitations of natural enzymes (e.g., high cost, low stability and difficult storage). To test, optimize and compare nanozymes, it is important to establish fundamental principles and systematic standards to fully characterize their catalytic performance. Our 2018 protocol describes how to characterize the catalytic activity and kinetics of peroxidase nanozymes, the most widely used type of nanozyme. This approach was based on Michaelis–Menten enzyme kinetics and is now updated to take into account the unique physicochemical properties of nanomaterials that determine the catalytic kinetics of nanozymes. The updated procedure describes how to determine the number of active sites as well as other physicochemical properties such as surface area, shape and size. It also outlines how to calculate the hydroxyl adsorption energy from the crystal structure using the density functional theory method. The calculations now incorporate these measurements and computations to better characterize the catalytic kinetics of peroxidase nanozymes that have different shapes, sizes and compositions. This updated protocol better describes the catalytic performance of nanozymes and benefits nanozyme research, since further development requires precise control of activity by engineering the electronic structure, geometric structure and atomic configuration of the catalytic sites of nanozymes. The characterization of the catalytic activity of peroxidase nanozymes and the evaluation of their kinetics can be performed in 4 h. The procedure is suitable for users with expertise in nano- and materials technology.
Key points
Nanozymes are nanoparticles designed to have catalytic properties similar to those of natural enzymes. Design and optimization of nanozyme properties require analytical methods to characterize their physical properties as well as their catalytic activity and kinetics.
This is an updated protocol for measuring catalytic behavior that incorporates data from measured physical properties unique to each nanoparticle as well as density functional theory calculations into the Michaelis–Menten approach.
Developing optimal nanozymes requires standardized methods for measuring their catalytic activity and reaction kinetics. This protocol integrates enzyme-based Michaelis–Menten kinetics with measured physical properties and computational methods.
Journal Article
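The Michaelis–Menten treatment named in the key points amounts to fitting initial reaction rates against substrate concentration. A sketch of that fit with SciPy; the rate data, noise level and active-site concentration below are invented for illustration and are not values from the protocol:

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Initial velocity v0 as a function of substrate concentration s."""
    return vmax * s / (km + s)

# Synthetic initial-rate data (substrate in mM, velocity in uM/s)
s = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 5.0])
true_vmax, true_km = 2.0, 0.4
rng = np.random.default_rng(0)
v0 = michaelis_menten(s, true_vmax, true_km) * (1 + 0.02 * rng.standard_normal(s.size))

(vmax, km), _ = curve_fit(michaelis_menten, s, v0, p0=(1.0, 1.0))
# Per-active-site rate constant, assuming an active-site concentration of
# 0.01 uM (the protocol's point is that this must be measured, not assumed)
kcat = vmax / 0.01
print(f"Vmax = {vmax:.2f}, Km = {km:.2f}, kcat = {kcat:.1f} per s")
```

The update's key contribution is replacing the nominal nanoparticle concentration with a measured number of active sites (plus surface area, shape and size) when normalizing Vmax into kcat.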
Extraction of highly degraded DNA from ancient bones, teeth and sediments for high-throughput sequencing
by Glocke, Isabelle; Aximu-Petri, Ayinuer; Rohland, Nadin
in Automation; Bones; Deoxyribonucleic acid
2018
DNA preserved in ancient bones, teeth and sediments is typically highly fragmented and present only in minute amounts. Here, we provide a highly versatile silica-based DNA extraction protocol that enables the retrieval of short (≥35 bp) or even ultrashort (≥25 bp) DNA fragments from such material with minimal carryover of substances that inhibit library preparation for high-throughput sequencing. DNA extraction can be performed with either silica spin columns, which offer the most convenient choice for manual DNA extraction, or silica-coated magnetic particles. The latter allow a substantial cost reduction as well as automation on liquid-handling systems. This protocol update replaces a now-outdated version that was published 11 years ago, before high-throughput sequencing technologies became widely available. It has been thoroughly optimized to provide the highest DNA yields from highly degraded samples, as well as fast and easy handling, requiring not more than ~15 min of hands-on time per sample.
Journal Article
Optimizing the Cell Painting assay for image-based profiling
by Chandrasekaran, Srinivas Niranj; Concannon, John B.; Pilling, James E.
2023
In image-based profiling, software extracts thousands of morphological features of cells from multi-channel fluorescence microscopy images, yielding single-cell profiles that can be used for basic research and drug discovery. Powerful applications have been demonstrated, including clustering chemical and genetic perturbations on the basis of their similar morphological impact, identifying disease phenotypes by observing differences in profiles between healthy and diseased cells and predicting assay outcomes by using machine learning, among many others. Here, we provide an updated protocol for the most popular assay for image-based profiling, Cell Painting. Introduced in 2013, it uses six stains imaged in five channels and labels eight diverse components of the cell: DNA, cytoplasmic RNA, nucleoli, actin, Golgi apparatus, plasma membrane, endoplasmic reticulum and mitochondria. The original protocol was updated in 2016 on the basis of several years’ experience running it at two sites, after optimizing it by visual assessment of stain quality. Here, we describe the work of the Joint Undertaking for Morphological Profiling Cell Painting Consortium to improve upon the assay via quantitative optimization, measuring the assay’s ability to detect morphological phenotypes and group similar perturbations together. The assay gives very robust outputs despite various changes to the protocol, and two vendors’ dyes work equivalently well. We present Cell Painting version 3, in which some steps are simplified and several stain concentrations can be reduced, saving costs. Cell culture and image acquisition take 1–2 weeks for typically sized batches of ≤20 plates; feature extraction and data analysis take an additional 1–2 weeks.
This protocol is an update to Nat. Protoc. 11, 1757–1774 (2016): https://doi.org/10.1038/nprot.2016.105
We provide an updated protocol for image-based profiling with Cell Painting. A detailed procedure, with standardized conditions for the assay, is presented, along with a comprehensive description of parameters to be considered when optimizing the assay.
Journal Article
Manual and automated preparation of single-stranded DNA libraries for the sequencing of DNA from ancient biological remains and other sources of highly degraded DNA
by Gansauge, Marie-Theres; Aximu-Petri, Ayinuer; Nagel, Sarah
2020
It has been shown that highly fragmented DNA is most efficiently converted into DNA libraries for sequencing if both strands of the DNA fragments are processed independently. We present an updated protocol for library preparation from single-stranded DNA, which is based on the splinted ligation of an adapter oligonucleotide to the 3′ ends of single DNA strands, the synthesis of a complementary strand using a DNA polymerase and the addition of a 5′ adapter via blunt-end ligation. The efficiency of library preparation is determined individually for each sample using a spike-in oligonucleotide. The whole workflow, including library preparation, quantification and amplification, requires two work days for up to 16 libraries. Alternatively, we provide documentation and electronic protocols enabling automated library preparation of 96 samples in parallel on a Bravo NGS Workstation (Agilent Technologies). After library preparation, molecules with uninformative short inserts (shorter than ~30−35 base pairs) can be removed by polyacrylamide gel electrophoresis if desired.
Here the authors describe an updated protocol for single-stranded sequencing library preparation suitable for highly degraded DNA from ancient remains or other sources. The procedure can be performed manually or in an automated fashion.
Journal Article
Proteome-wide structural changes measured with limited proteolysis-mass spectrometry: an advanced protocol for high-throughput applications
by de Souza, Natalie; Pepelnjak, Monika; Malinovska, Liliana
2023
Proteins regulate biological processes by changing their structure or abundance to accomplish a specific function. In response to a perturbation, protein structure may be altered by various molecular events, such as post-translational modifications, protein–protein interactions, aggregation, allostery or binding to other molecules. The ability to probe these structural changes in thousands of proteins simultaneously in cells or tissues can provide valuable information about the functional state of biological processes and pathways. Here, we present an updated protocol for LiP-MS, a proteomics technique combining limited proteolysis with mass spectrometry, to detect protein structural alterations in complex backgrounds and on a proteome-wide scale. In LiP-MS, proteins undergo a brief proteolysis in native conditions followed by complete digestion in denaturing conditions, to generate structurally informative proteolytic fragments that are analyzed by mass spectrometry. We describe advances in the throughput and robustness of the LiP-MS workflow and implementation of data-independent acquisition–based mass spectrometry, which together achieve high reproducibility and sensitivity, even on large sample sizes. We introduce MSstatsLiP, an R package dedicated to the analysis of LiP-MS data for the identification of structurally altered peptides and differentially abundant proteins. The experimental procedures take 3 d; mass spectrometric measurement time and data processing depend on sample number, and statistical analysis typically requires ~1 d. These improvements expand the adaptability of LiP-MS and enable wide use in functional proteomics and translational applications.
LiP-MS is an approach based on limited proteolysis that can be used to probe structural changes in proteins. This update highlights recent changes to the method to improve throughput, proteome coverage and data analysis.
Journal Article
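The identification of structurally altered peptides described above is a per-peptide differential test with false-discovery-rate control. The protocol's own tool is MSstatsLiP (an R package); the sketch below is a generic Python stand-in on synthetic log-intensities, with an ordinary t-test and Benjamini–Hochberg correction, purely to illustrate the shape of the analysis:

```python
import numpy as np
from scipy.stats import ttest_ind

def bh_adjust(pvals):
    """Benjamini-Hochberg FDR adjustment of an array of p-values."""
    p = np.asarray(pvals)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / (np.arange(n) + 1)
    # enforce monotonicity from the largest p-value downwards
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    adj = np.empty(n)
    adj[order] = np.clip(ranked, 0, 1)
    return adj

rng = np.random.default_rng(1)
# Synthetic log2 peptide intensities: 3 control vs 3 perturbed replicates,
# 100 peptides, with the first 10 shifted by 2 log2 units ("altered")
ctrl = rng.normal(20, 0.3, (100, 3))
pert = rng.normal(20, 0.3, (100, 3))
pert[:10] += 2.0

t, p = ttest_ind(ctrl, pert, axis=1)
q = bh_adjust(p)
print("peptides significant at FDR < 0.05:", int((q < 0.05).sum()))
```

MSstatsLiP additionally models run-level effects and normalizes LiP signals by protein abundance, which this toy example omits.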
Simple, efficient and thorough shotgun proteomic analysis with PatternLab V
by Durán, Rosario; Santos, Marlon D. M.; Brunoro, Giselle V. F.
in Amino acid sequence; Browsing; Computer applications
Shotgun proteomics aims to identify and quantify the thousands of proteins in complex mixtures such as cell and tissue lysates and biological fluids. This approach uses liquid chromatography coupled with tandem mass spectrometry and typically generates hundreds of thousands of mass spectra that require specialized computational environments for data analysis. PatternLab for proteomics is a unified computational environment for analyzing shotgun proteomic data. PatternLab V (PLV) is the most comprehensive and crucial update so far, the result of intensive interaction with the proteomics community over several years. All PLV modules have been optimized and its graphical user interface has been completely updated for improved user experience. Major improvements were made to all aspects of the software, ranging from boosting the number of protein identifications to faster extraction of ion chromatograms. PLV provides modules for preparing sequence databases, protein identification, statistical filtering and in-depth result browsing for both labeled and label-free quantitation. The PepExplorer module can even pinpoint de novo sequenced peptides not already present in the database. PLV is of broad applicability and therefore suitable for challenging experimental setups, such as time-course experiments and data handling from unsequenced organisms. PLV interfaces with widely adopted software and community initiatives, e.g., Comet, Skyline, PEAKS and PRIDE. It is freely available at http://www.patternlabforproteomics.org.

PatternLab is a unified computational environment for analyzing shotgun proteomic data. Version 5 provides modules for preparing sequence databases, protein identification, statistical filtering and in-depth result browsing.
Journal Article
Functionalized tetrahedral DNA frameworks for the capture of circulating tumor cells
by Wang, Shaopeng; Li, Min; Lin, Meihua
in Analytical Chemistry
Identification and characterization of circulating tumor cells (CTCs) from blood samples of patients with cancer can help monitor parameters such as disease stage, disease progression and therapeutic efficiency. However, the sensitivity and specificity of current multivalent approaches used for CTC capture are limited by the lack of control over the ligands’ position. In this Protocol Update, we describe DNA-tetrahedral frameworks anchored with aptamers that can be configured with user-defined spatial arrangements and stoichiometries. The modified tetrahedral DNA frameworks, termed ‘n-simplexes’, can be used as probes to specifically target receptor−ligand interactions on the cell membrane. Here, we describe the synthesis and use of n-simplexes that target the epithelial cell adhesion molecule expressed on the surface of CTCs. The characterization of the n-simplexes includes measuring the binding affinity to the membrane receptors as a result of the spatial arrangement and stoichiometry of the aptamers. We further detail the capture of CTCs from patient blood samples. The procedure for the preparation and characterization of n-simplexes requires 11.5 h, CTC capture from clinical samples and data processing requires ~5 h per six samples and the downstream analysis of captured cells typically requires 5.5 h. The protocol is suitable for users with basic expertise in molecular biology and handling of clinical samples.
Key points
The procedure describes the design of targeted probes via orthogonal anchoring of aptamers, with varying stoichiometries, to the vertices of tetrahedral DNA frameworks. The frameworks, synthesized by using a single annealing step, are then characterized by using native polyacrylamide gel electrophoresis, atomic force microscopy, single-molecule fluorescence and transmission electron microscopy.
The targeted tetrahedral DNA frameworks improve CTC capture efficiency when compared with alternative nucleic acid–based probes in whole-blood samples with the supernatant removed via centrifugation.
The controlled positioning and stoichiometry of aptamers on tetrahedral DNA frameworks enables the synthesis of targeted probes for the capture of circulating tumor cells from blood samples.
Journal Article
Inducible mouse models of colon cancer for the analysis of sporadic and inflammation-driven tumor progression and lymph node metastasis
by Scheibe, Kristina; Greten, Florian R.; Boonsanay, Verawan
2021
Despite advances in the detection and therapy of colorectal cancer (CRC) in recent years, CRC has remained a major challenge in clinical practice. Although alternative methods for modeling CRC have been developed, animal models of CRC remain helpful when analyzing molecular aspects of pathogenesis and are often used to perform preclinical in vivo studies of potential therapeutics. This protocol updates our protocol published in 2007, which provided an azoxymethane (AOM)-based setup for investigations into sporadic (Step 5A) and, when combined with dextran sodium sulfate (Step 5B), inflammation-associated tumor growth. This update also extends the applications beyond those of the original protocol by including an option in which AOM is serially applied to mice with p53 deficiency in the intestinal epithelium (Step 5C). In this model, the combination of p53 deficiency and AOM promotes tumor development, including growth of invasive cancers and lymph node metastasis. It also provides details on analysis of colorectal tumor growth and metastasis, including analysis of partial epithelial-to-mesenchymal transition, cell isolation and co-culture studies, high-resolution mini-endoscopy, light-sheet fluorescence microscopy and micro-CT imaging in mice. The target audience for our protocol is researchers who plan in vivo studies to address mechanisms influencing sporadic or inflammation-driven tumor development, including the analysis of local invasiveness and lymph node metastasis. It is suitable for preclinical in vivo testing of novel drugs and other interventional strategies for clinical translation, plus the evaluation of emerging imaging devices/modalities. It can be completed within 24 weeks (using Step 5A/C) or 10 weeks (using Step 5B).
Chemicals are used to induce sporadic and inflammation-associated colon tumor growth in mouse models. When combined with p53 deficiency, tumor development is promoted, including the growth of invasive cancers and lymph node metastasis.
Journal Article