Search Results

376 results for "High-Throughput Nucleotide Sequencing - trends"
A comparison of single-cell trajectory inference methods
Trajectory inference approaches analyze genome-wide omics data from thousands of single cells and computationally infer the order of these cells along developmental trajectories. Although more than 70 trajectory inference tools have already been developed, it is challenging to compare their performance because the input they require and output models they produce vary substantially. Here, we benchmark 45 of these methods on 110 real and 229 synthetic datasets for cellular ordering, topology, scalability and usability. Our results highlight the complementarity of existing tools, and that the choice of method should depend mostly on the dataset dimensions and trajectory topology. Based on these results, we develop a set of guidelines to help users select the best method for their dataset. Our freely available data and evaluation pipeline (https://benchmark.dynverse.org) will aid in the development of improved tools designed to analyze increasingly large and complex single-cell datasets. The authors comprehensively benchmark the accuracy, scalability, stability and usability of 45 single-cell trajectory inference methods.
Review of Clinical Next-Generation Sequencing
Next-generation sequencing (NGS) is a technology being used by many laboratories to test for inherited disorders and tumor mutations. This technology is new for many practicing pathologists, who may not be familiar with the uses, methodology, and limitations of NGS. This review aims to familiarize pathologists with several aspects of NGS, including current and expanding uses; methodology, including wet-bench aspects, bioinformatics, and interpretation; validation and proficiency; limitations; and issues related to the integration of NGS data into patient care. It is based on peer-reviewed literature and personal experience using NGS in a clinical setting at a major academic center. The clinical applications of NGS will increase as the technology, bioinformatics, and resources evolve to address the limitations and improve the quality of results. The challenge for clinical laboratories is to ensure that testing is clinically relevant, cost-effective, and can be integrated into clinical care.
Three decades of nanopore sequencing
A long-held goal in sequencing has been to use a voltage-biased nanoscale pore in a membrane to measure the passage of a linear, single-stranded (ss) DNA or RNA molecule through that pore. With the development of enzyme-based methods that ratchet polynucleotides through the nanopore, nucleobase-by-nucleobase, measurements of changes in the current through the pore can now be decoded into a DNA sequence using an algorithm. In this Historical Perspective, we describe the key steps in nanopore strand-sequencing, from its earliest conceptualization more than 25 years ago to its recent commercialization and application.
Hereditary spastic paraplegia: from diagnosis to emerging therapeutic approaches
Hereditary spastic paraplegia (HSP) describes a heterogeneous group of genetic neurodegenerative diseases characterised by progressive spasticity of the lower limbs. The pathogenic mechanism, associated clinical features, and imaging abnormalities vary substantially according to the affected gene, and differentiating HSP from other genetic diseases associated with spasticity can be challenging. Next generation sequencing-based gene panels are now widely available but have limitations, and a molecular diagnosis is not made in most suspected cases. Symptomatic management continues to evolve, but with a greater understanding of the pathophysiological basis of individual HSP subtypes there are emerging opportunities to provide targeted molecular therapies and personalised medicine.
Recent trends in molecular diagnostics of yeast infections: from PCR to NGS
The incidence of opportunistic yeast infections in humans has been increasing over recent years. These infections are difficult to treat and diagnose, in part due to the large number and broad diversity of species that can underlie the infection. In addition, resistance to one or several antifungal drugs in infecting strains is increasingly being reported, severely limiting therapeutic options and showcasing the need for rapid detection of the infecting agent and its drug susceptibility profile. Current methods for species and resistance identification lack satisfactory sensitivity and specificity, and often require prior culturing of the infecting agent, which delays diagnosis. Recently developed high-throughput technologies such as next generation sequencing or proteomics are opening completely new avenues for more sensitive, accurate and fast diagnosis of yeast pathogens. These approaches are the focus of intensive research, but translation into the clinics requires overcoming important challenges. In this review, we provide an overview of existing and recently emerged approaches that can be used in the identification of yeast pathogens and their drug resistance profiles. Throughout the text we highlight the advantages and disadvantages of each methodology and discuss the most promising developments in their path from bench to bedside.
Next-generation sequencing in Charcot–Marie–Tooth disease: opportunities and challenges
Charcot–Marie–Tooth disease and the related disorders hereditary motor neuropathy and hereditary sensory neuropathy, collectively termed CMT, are the commonest group of inherited neuromuscular diseases, and they exhibit wide phenotypic and genetic heterogeneity. CMT is usually characterized by distal muscle atrophy, often with foot deformity, weakness and sensory loss. In the past decade, next-generation sequencing (NGS) technologies have revolutionized genomic medicine and, as these technologies are being applied to clinical practice, they are changing our diagnostic approach to CMT. In this Review, we discuss the application of NGS technologies, including disease-specific gene panels, whole-exome sequencing, whole-genome sequencing (WGS), mitochondrial sequencing and high-throughput transcriptome sequencing, to the diagnosis of CMT. We discuss the growing challenge of variant interpretation and consider how the clinical phenotype can be combined with genetic, bioinformatic and functional evidence to assess the pathogenicity of genetic variants in patients with CMT. WGS has several advantages over the other techniques that we discuss, which include unparalleled coverage of coding, non-coding and intergenic areas of both nuclear and mitochondrial genomes, the ability to identify structural variants and the opportunity to perform genome-wide dense homozygosity mapping. We propose an algorithm for incorporating WGS into the CMT diagnostic pathway.
Making the Leap from Research Laboratory to Clinic: Challenges and Opportunities for Next-Generation Sequencing in Infectious Disease Diagnostics
Next-generation DNA sequencing (NGS) has progressed enormously over the past decade, transforming genomic analysis and opening up many new opportunities for applications in clinical microbiology laboratories. The impact of NGS on microbiology has been revolutionary, with new microbial genomic sequences being generated daily, leading to the development of large databases of genomes and gene sequences. The ability to analyze microbial communities without culturing organisms has created the ever-growing field of metagenomics and microbiome analysis and has generated significant new insights into the relation between host and microbe. The medical literature contains many examples of how this new technology can be used for infectious disease diagnostics and pathogen analysis. The implementation of NGS in medical practice has been a slow process due to various challenges such as clinical trials, lack of applicable regulatory guidelines, and the adaptation of the technology to the clinical environment. In April 2015, the American Academy of Microbiology (AAM) convened a colloquium to begin to define these issues, and in this document, we present some of the concepts that were generated from these discussions.
Genome-wide genetic marker discovery and genotyping using next-generation sequencing
Key Points
• New methods that make use of high-throughput sequencing are enabling the simultaneous discovery and sequencing of thousands of genetic markers across whole genomes.
• These methods can be used to study wild populations of tens or hundreds of individuals for which genomic resources were not previously available. They also enable the rapid genotyping of hundreds of individuals in a mapping cross, for quantitative trait locus (QTL) mapping and marker-assisted selection.
• We describe best practices and make recommendations for a group of methods involving the use of restriction enzymes, namely reduced-representation libraries, complexity reduction of polymorphic sequences, restriction-site-associated DNA sequencing, multiplexed shotgun genotyping and genotyping by sequencing.
• We discuss the impact of several factors — such as the availability of genomic resources, the levels of polymorphism, the pooling of samples and the choice of restriction enzyme — on the design and implementation of high-throughput marker discovery and genotyping experiments.
• The analysis of data from these methods can be challenging, and new methods for processing high-throughput marker data are described.
• At present these methods are far more economical than whole-genome sequencing. We discuss how this situation is likely to change over the next few years, as sequencing costs continue to fall rapidly.

The authors describe best practices for a growing number of methods that use next-generation sequencing to rapidly discover and assess genetic markers across any genome, with applications from population genomics and quantitative trait locus mapping to marker-assisted selection. The advent of next-generation sequencing (NGS) has revolutionized genomic and transcriptomic approaches to biology. These new sequencing tools are also valuable for the discovery, validation and assessment of genetic markers in populations.
Here we review and discuss best practices for several NGS methods for genome-wide genetic marker development and genotyping that use restriction enzyme digestion of target genomes to reduce the complexity of the target. These new methods — which include reduced-representation sequencing using reduced-representation libraries (RRLs) or complexity reduction of polymorphic sequences (CRoPS), restriction-site-associated DNA sequencing (RAD-seq) and low coverage genotyping — are applicable to both model organisms with high-quality reference genome sequences and, excitingly, to non-model species with no existing genomic data.
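The complexity-reduction idea behind these restriction-enzyme methods can be sketched in a few lines: cut the genome in silico at every recognition site, then keep only fragments in a sequencable size window. The enzyme (EcoRI), cut offset, and size window below are illustrative assumptions, not parameters taken from the review.

```python
import re

def digest(genome: str, site: str = "GAATTC", cut_offset: int = 1) -> list:
    """Cut a sequence at every occurrence of a restriction site.

    EcoRI recognizes GAATTC and cuts after the first base (G^AATTC);
    both the enzyme and the offset here are illustrative choices.
    """
    fragments, prev = [], 0
    for m in re.finditer(site, genome):
        cut = m.start() + cut_offset  # position of the cut within the site
        fragments.append(genome[prev:cut])
        prev = cut
    fragments.append(genome[prev:])   # trailing fragment after the last site
    return fragments

def size_select(fragments, lo=200, hi=800):
    """Keep only fragments in a sequencable size window (the
    'reduced representation' shared across individuals)."""
    return [f for f in fragments if lo <= len(f) <= hi]
```

Because every individual is cut at the same sites, the selected fragments sample the same genomic loci across samples, which is what makes population-scale genotyping affordable.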
Highly accurate long reads are crucial for realizing the potential of biodiversity genomics
Background: Generating the most contiguous, accurate genome assemblies given available sequencing technologies is a long-standing challenge in genome science. With the rise of long-read sequencing, assembly challenges have shifted from merely increasing contiguity to correctly assembling complex, repetitive regions of interest, ideally in a phased manner. At present, researchers largely choose between two types of long-read data: longer, but less accurate sequences, or highly accurate, but shorter reads (i.e., >Q20 or 99% accurate). To better understand how these types of long-read data as well as scale of data (i.e., mean length and sequencing depth) influence genome assembly outcomes, we compared genome assemblies for a caddisfly, Hesperophylax magnus, generated with longer, but less accurate, Oxford Nanopore (ONT) R9.4.1 data and highly accurate PacBio HiFi (HiFi) data. Next, we expanded this comparison to consider the influence of highly accurate long-read sequence data on genome assemblies across 6750 plant and animal genomes. For this broader comparison, we used HiFi data as a surrogate for highly accurate long reads broadly, as we could identify when they were used from GenBank metadata.
Results: HiFi reads outperformed ONT reads in all assembly metrics tested for the caddisfly data set and allowed for accurate assembly of the repetitive ~20 kb H-fibroin gene. Across plants and animals, genome assemblies that incorporated HiFi reads were also more contiguous. For plants, the average HiFi assembly was 501% more contiguous (mean contig N50 = 20.5 Mb) than those generated with any other long-read data (mean contig N50 = 4.1 Mb). For animals, HiFi assemblies were 226% more contiguous (mean contig N50 = 20.9 Mb) versus other long-read assemblies (mean contig N50 = 9.3 Mb). In plants, we also found limited evidence that HiFi may offer a unique solution for overcoming genomic complexity that scales with assembly size.
Conclusions: Highly accurate long reads generated with HiFi or analogous technologies represent a key tool for maximizing genome assembly quality for a wide swath of plants and animals. This finding is particularly important when resources only allow for one type of sequencing data to be generated. Ultimately, to realize the promise of biodiversity genomics, we call for greater uptake of highly accurate long reads in future studies.
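The contiguity metric quoted above, contig N50, is simple to compute: it is the length L such that contigs of length at least L cover at least half the total assembly. A minimal sketch, together with the Phred-to-accuracy conversion behind ">Q20 or 99% accurate" (Q = -10·log10(error rate)):

```python
def contig_n50(lengths):
    """N50: the contig length L such that contigs >= L
    account for at least half of the total assembly size."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):  # largest contigs first
        running += length
        if running * 2 >= total:
            return length
    return 0  # empty assembly

def phred_accuracy(q):
    """Per-base accuracy implied by a Phred quality score:
    Q20 -> 1% error -> 99% accurate."""
    return 1 - 10 ** (-q / 10)
```

Note that N50 rewards a few very long contigs, which is why it is the headline metric for comparing long-read assemblies.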
Reconstructing ancient genomes and epigenomes
Key Points
• High-throughput sequencing technologies have revolutionized ancient DNA (aDNA) research by enabling the reconstruction of whole-genome sequences from traces of short and extremely degraded DNA fragments.
• DNA preservation is highly variable across samples and environments, as well as within single archaeological remains. The current temporal range for whole-genome sequencing covers the past million years.
• DNA extracts from ancient material are generally metagenomic assemblages that include DNA from the host and its associated microorganisms, as well as from a range of environmental microorganisms that colonize the sample after death.
• Various molecular approaches have been developed to improve access to aDNA in samples and reduce the sequencing costs of paleogenomics. These include extraction methods tailored to ultrashort DNA fragments, target enrichment for library inserts annealing to a panel of nucleic acid probes, and library building procedures targeting each DNA strand individually or incorporating only the most damaged DNA fragments.
• Analyses of aDNA are prone to contamination by modern DNA molecules, which generally show limited degradation and fragmentation. Therefore, ruling out contamination — for example, by exploiting patterns of DNA degradation and monitoring the heterozygosity levels observed at haploid loci — represents the cornerstone of every aDNA study.
• DNA degradation reactions taking place post-mortem introduce mutation and depth-of-coverage patterns in the sequence data that can be exploited to authenticate paleogenomes and reconstruct genome-wide nucleosome and methylation maps.

Sequencing genomes of ancient specimens, including human ancestors, can provide rich insights into evolutionary histories. However, ancient DNA samples are frequently degraded, damaged and contaminated with ancient and modern DNA from various sources.
This Review describes the methodological and bioinformatic advances that allow these challenges to be overcome in order to process and sequence ancient samples for genome reconstruction, as well as recent progress in characterizing ancient epigenomes.

Research involving ancient DNA (aDNA) has experienced a true technological revolution in recent years through advances in the recovery of aDNA and, particularly, through applications of high-throughput sequencing. Formerly restricted to the analysis of only limited amounts of genetic information, aDNA studies have now progressed to whole-genome sequencing for an increasing number of ancient individuals and extinct species, as well as to epigenomic characterization. Such advances have enabled the sequencing of specimens up to 1 million years old, which, owing to their extensive DNA damage and contamination, were previously not amenable to genetic analyses. In this Review, we discuss these varied technical challenges and solutions for sequencing ancient genomes and epigenomes.
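One authentication signal mentioned above, post-mortem cytosine deamination, shows up as an excess of C→T mismatches at the 5' ends of reads. A toy sketch of that check, using (reference, read) string pairs in place of a real BAM parser (the input format is an assumption for illustration only):

```python
def terminal_ct_rate(alignments, position=0):
    """Fraction of reads carrying T at `position` where the reference
    has C. Elevated rates at read starts, decaying toward the read
    interior, are a hallmark of post-mortem cytosine deamination and
    help distinguish authentic ancient DNA from modern contaminants.

    `alignments` is a list of (reference, read) pairs of equal length;
    this toy format stands in for parsing a real alignment file.
    """
    opportunities = mismatches = 0
    for ref, read in alignments:
        if ref[position] == "C":      # a site where deamination could show
            opportunities += 1
            if read[position] == "T":
                mismatches += 1
    return mismatches / opportunities if opportunities else 0.0
```

Comparing this rate at position 0 against an interior position gives a crude damage profile; modern contaminating reads should show no such terminal enrichment.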