15,232 results for "Medical Bioinformatics and Systems Biology"
Generalizable gesture classification of HDsEMG using volume representations of muscles averaged across multiple individuals
Human hands can perform far more gestures than the number of muscles controlling them, as most gestures result from coordinated combinations of muscle activations and relaxations. This complexity poses a key challenge for human-machine interfaces performing gesture classification based on electromyography (EMG). Rather than identifying all conceivable gestures, it may be simpler to instead identify the activity of the individual muscles which generate a variety of complicated gestures. Here we suggest a three-dimensional model with volume representations of individual digit extensor muscles, averaged across multiple individuals, and evaluate its application and performance in hand gesture classification. Time-domain peaks in high-density surface EMG data from different hand gestures were extracted and localized within the model, from which a gesture classification scheme was generated for both single and multi-label cases. The model was created and tested on a publicly available dataset with 19 participants, leveraging a leave-one-out approach to assess inter-subject generalizability, and multi-label data to assess generalizability to gestures not included in the creation of the model. For single-label classification performance, true positive rates were between 61.9 and 95.1%, with false positive rates between 0 and 24.1%, for different single-digit extensions. The multi-label test demonstrated some degree of generalizability in identifying completely new gesture compositions, while simultaneously maintaining the leave-one-out approach for inter-subject generalizability. A model generated with this approach could be used for gesture classification by anyone, without individual modelling data, with the potential to generalize to any number of gesture compositions.
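The leave-one-out evaluation described above can be illustrated with a minimal sketch. This is not the paper's model: the nearest-centroid classifier and the flat per-trial data layout are illustrative stand-ins; only the leave-one-subject-out protocol follows the abstract.

```python
import numpy as np

def leave_one_subject_out(features, labels, subjects):
    """Evaluate a toy nearest-centroid gesture classifier with
    leave-one-subject-out cross-validation: for each held-out subject,
    class centroids are averaged across the remaining subjects only."""
    accuracies = {}
    for held_out in np.unique(subjects):
        train = subjects != held_out
        test = subjects == held_out
        # one centroid per gesture class, from the training subjects
        classes = np.unique(labels[train])
        centroids = np.stack([features[train][labels[train] == c].mean(axis=0)
                              for c in classes])
        # classify each held-out trial by its nearest centroid
        d = np.linalg.norm(features[test][:, None, :] - centroids[None], axis=2)
        pred = classes[d.argmin(axis=1)]
        accuracies[held_out] = float((pred == labels[test]).mean())
    return accuracies
```

Because the held-out subject contributes nothing to the centroids, each accuracy value estimates inter-subject generalizability, mirroring the evaluation design in the abstract.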
Liver glycogen phosphorylase is upregulated in glioblastoma and provides a metabolic vulnerability to high dose radiation
Channelling of glucose via glycogen, known as the glycogen shunt, may play an important role in the metabolism of brain tumours, especially in hypoxic conditions. We aimed to dissect the role of glycogen degradation in the glioblastoma (GBM) response to ionising radiation (IR). Knockdown of the glycogen phosphorylase liver isoform (PYGL), but not the brain isoform (PYGB), decreased clonogenic growth and survival of GBM cell lines and sensitised them to IR doses of 10–12 Gy. Two to five days after IR exposure, PYGL knockdown GBM cells developed mitotic catastrophe and a giant multinucleated cell morphology with a senescence-like phenotype. The basal levels of the lysosomal enzyme acid alpha-glucosidase (GAA), essential for autolysosomal glycogen degradation, and of the lipidated forms of gamma-aminobutyric acid receptor-associated protein-like proteins (GABARAPL1 and GABARAPL2) increased in shPYGL U87MG cells, suggesting a compensatory mechanism of glycogen degradation. In response to IR, dysregulation of autophagy was shown by accumulation of p62 and of the lipidated forms of GABARAPL1 and GABARAPL2 in shPYGL U87MG cells. IR increased the mitochondrial mass and the colocalisation of mitochondria with lysosomes in shPYGL cells, indicating reduced mitophagy. These changes coincided with increased phosphorylation of AMP-activated protein kinase and acetyl-CoA carboxylase 2, slower ATP generation in response to glucose loading, and progressive loss of oxidative phosphorylation. The resulting metabolic deficiencies limited the ATP available for mitosis, producing the mitotic catastrophe observed in shPYGL cells following IR. PYGL mRNA and protein levels were higher in human GBM than in normal human brain tissue, and high PYGL mRNA expression in GBM correlated with poor patient survival. In conclusion, we show a major new role for glycogen metabolism in GBM. Inhibition of glycogen degradation sensitises GBM cells to high-dose IR, indicating that PYGL is a potential novel target for the treatment of GBM.
Modeling aspects of the language of life through transfer-learning protein sequences
Background Predicting protein function and structure from sequence is one important challenge for computational biology. For 26 years, most state-of-the-art approaches have combined machine learning and evolutionary information. However, for some applications retrieving related proteins is becoming too time-consuming. Additionally, evolutionary information is less powerful for small families, e.g. for proteins from the Dark Proteome. Both of these problems are addressed by the new methodology introduced here. Results We introduce a novel way to represent protein sequences as continuous vectors (embeddings) using the language model ELMo, taken from natural language processing. By modeling protein sequences, ELMo effectively captured the biophysical properties of the language of life from unlabeled big data (UniRef50). We refer to these new embeddings as SeqVec (Sequence-to-Vector) and demonstrate their effectiveness by training simple neural networks for two different tasks. At the per-residue level, secondary structure (Q3 = 79% ± 1, Q8 = 68% ± 1) and regions with intrinsic disorder (MCC = 0.59 ± 0.03) were predicted significantly better than through one-hot encoding or through Word2vec-like approaches. At the per-protein level, subcellular localization was predicted in ten classes (Q10 = 68% ± 1) and membrane-bound proteins were distinguished from water-soluble ones (Q2 = 87% ± 1). Although SeqVec embeddings generated the best predictions from single sequences, no solution improved over the best existing method using evolutionary information. Nevertheless, our approach improved over some popular methods using evolutionary information and, for some proteins, even beat the best. Thus, the embeddings prove to condense the underlying principles of protein sequences.
Overall, the important novelty is speed: where the lightning-fast HHblits needed on average about two minutes to generate the evolutionary information for a target protein, SeqVec created embeddings in 0.03 s on average. As this speed-up is independent of the size of the growing sequence databases, SeqVec provides a highly scalable approach to the analysis of big data in proteomics, e.g. microbiome or metaproteome analysis. Conclusion Transfer learning succeeded in extracting information from unlabeled sequence databases that is relevant for various protein prediction tasks. SeqVec modeled the language of life, namely the principles underlying protein sequences, better than any features suggested by textbooks and prediction methods. The exception is evolutionary information; however, that information is not available at the level of a single sequence.
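The per-residue versus per-protein distinction in the SeqVec abstract can be sketched as follows. ELMo itself is out of scope here; a fixed random lookup table per amino acid stands in for the real context-dependent language model, and the 8-dimensional size is an arbitrary illustrative choice.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(0)
# Toy stand-in for the ELMo language model: one fixed random vector per
# amino acid (real SeqVec embeddings are context-dependent).
EMBED = {aa: rng.normal(size=8) for aa in AMINO_ACIDS}

def embed_sequence(seq):
    """Per-residue embeddings: one vector per position, the input for
    per-residue tasks such as secondary-structure prediction."""
    return np.stack([EMBED[aa] for aa in seq])

def embed_protein(seq):
    """Per-protein embedding: average the per-residue vectors into a single
    fixed-length vector, the input for tasks such as localization."""
    return embed_sequence(seq).mean(axis=0)
```

A simple neural network trained on these fixed-length vectors, as the abstract describes, would then handle the actual prediction task.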
DPDDI: a deep predictor for drug-drug interactions
Background The treatment of complex diseases with multiple drugs is becoming increasingly popular. However, drug-drug interactions (DDIs) may give rise to the risk of unanticipated adverse effects and even unknown toxicity. DDI detection in the wet lab is expensive and time-consuming, so it is highly desirable to develop computational methods for predicting DDIs. Most existing computational methods predict DDIs by extracting chemical and biological features of drugs from diverse drug-related properties; however, some drug properties are costly to obtain and not available in many cases. Results In this work, we present a novel method (DPDDI) that predicts DDIs by extracting the network-structure features of drugs from the DDI network with a graph convolutional network (GCN), with a deep neural network (DNN) model as the predictor. The GCN learns low-dimensional feature representations of drugs by capturing the topological relationships of drugs in the DDI network. The DNN predictor concatenates the latent feature vectors of any two drugs into a feature vector for the corresponding drug pair and is trained to predict potential drug-drug interactions. Experimental results show that the newly proposed DPDDI method outperforms four other state-of-the-art methods; that the GCN-derived latent features include more DDI information than features derived from the chemical, biological or anatomical properties of drugs; and that the concatenation feature-aggregation operator is better than two other aggregation operators (inner product and summation). Case studies confirm that DPDDI achieves reasonable performance in predicting new DDIs. Conclusion We propose an effective and robust method, DPDDI, that predicts potential DDIs by utilizing DDI network information without considering drug properties (i.e., chemical and biological properties).
The method should also be useful in other DDI-related scenarios, such as the detection of unexpected side effects and the guidance of drug combinations.
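The two building blocks named in the DPDDI abstract, GCN propagation over the DDI network and concatenation of two drug embeddings into a pair feature, can be sketched generically. This is the standard GCN propagation rule, not DPDDI's exact architecture; layer sizes and the ReLU choice are illustrative.

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution step: add self-loops, symmetrically normalise
    the adjacency matrix, then apply a linear map and ReLU."""
    a_hat = adj + np.eye(adj.shape[0])            # self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ features @ weights, 0.0)

def pair_features(embeddings, i, j):
    """DPDDI-style pair representation: concatenate the latent vectors of
    two drugs; a DNN classifier would consume this vector."""
    return np.concatenate([embeddings[i], embeddings[j]])
```

The concatenation operator preserves both drugs' full latent vectors, which is one plausible reason the abstract finds it outperforms inner-product and summation aggregation.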
A genomic catalog of Earth’s microbiomes
The reconstruction of bacterial and archaeal genomes from shotgun metagenomes has enabled insights into the ecology and evolution of environmental and host-associated microbiomes. Here we applied this approach to >10,000 metagenomes collected from diverse habitats covering all of Earth’s continents and oceans, including metagenomes from human and animal hosts, engineered environments, and natural and agricultural soils, to capture extant microbial, metabolic and functional potential. This comprehensive catalog includes 52,515 metagenome-assembled genomes representing 12,556 novel candidate species-level operational taxonomic units spanning 135 phyla. The catalog expands the known phylogenetic diversity of bacteria and archaea by 44% and is broadly available for streamlined comparative analyses, interactive exploration, metabolic modeling and bulk download. We demonstrate the utility of this collection for understanding secondary-metabolite biosynthetic potential and for resolving thousands of new host linkages to uncultivated viruses. This resource underscores the value of genome-centric approaches for revealing genomic properties of uncultivated microorganisms that affect ecosystem processes. Cataloging microbial genomes from Earth’s environments expands the known phylogenetic diversity of bacteria and archaea.
Fungal community analysis by high-throughput sequencing of amplified markers – a user's guide
Novel high-throughput sequencing methods outperform earlier approaches in terms of resolution and magnitude. They enable identification and relative quantification of community members and offer new insights into fungal community ecology. These methods are currently taking over as the primary tool to assess fungal communities of plant-associated endophytes, pathogens, and mycorrhizal symbionts, as well as free-living saprotrophs. Taking advantage of the collective experience of six research groups, we here review the different stages involved in fungal community analysis, from field sampling via laboratory procedures to bioinformatics and data interpretation. We discuss potential pitfalls, alternatives, and solutions. Highlighted topics are challenges involved in: obtaining representative DNA/RNA samples and replicates that encompass the targeted variation in community composition, selection of marker regions and primers, options for amplification and multiplexing, handling of sequencing errors, and taxonomic identification. Without awareness of methodological biases, limitations of markers, and bioinformatics challenges, large-scale sequencing projects risk yielding artificial results and misleading conclusions.
Data integration in the era of omics: current and future challenges
Integrating heterogeneous and large omics data constitutes not only a conceptual challenge but also a practical hurdle in the daily analysis of omics data. With the rise of novel omics technologies and through large-scale consortium projects, biological systems are being investigated at an unprecedented scale, generating heterogeneous and often large datasets. These datasets encourage researchers to develop novel data integration methodologies. In this introduction we review the definition of data integration and characterize current efforts in the life sciences. We used a web survey to assess current research projects on data integration and to tap into the views, needs and challenges as currently perceived by parts of the research community.
Creation and analysis of biochemical constraint-based models using the COBRA Toolbox v.3.0
Constraint-based reconstruction and analysis (COBRA) provides a molecular mechanistic framework for integrative analysis of experimental molecular systems biology data and quantitative prediction of physicochemically and biochemically feasible phenotypic states. The COBRA Toolbox is a comprehensive desktop software suite of interoperable COBRA methods. It has found widespread application in biology, biomedicine, and biotechnology because its functions can be flexibly combined to implement tailored COBRA protocols for any biochemical network. This protocol is an update to the COBRA Toolbox v.1.0 and v.2.0. Version 3.0 includes new methods for quality-controlled reconstruction, modeling, topological analysis, strain and experimental design, and network visualization, as well as network integration of chemoinformatic, metabolomic, transcriptomic, proteomic, and thermochemical data. New multi-lingual code integration also enables an expansion in COBRA application scope via high-precision, high-performance, and nonlinear numerical optimization solvers for multi-scale, multi-cellular, and reaction kinetic modeling, respectively. This protocol provides an overview of all these new features and can be adapted to generate and analyze constraint-based models in a wide variety of scenarios. The COBRA Toolbox v.3.0 provides an unparalleled depth of COBRA methods.
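The core constraint-based calculation underlying COBRA methods is flux balance analysis: a linear program over steady-state mass balance and flux bounds. The Toolbox itself is MATLAB; as a language-agnostic illustration, here is FBA on a hypothetical three-reaction toy network (uptake → conversion → biomass) using SciPy's LP solver.

```python
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows: metabolites A, B; columns: reactions
# r1 uptake, r2 conversion, r3 biomass). Steady state requires S v = 0.
S = np.array([[1, -1,  0],    # A: produced by r1, consumed by r2
              [0,  1, -1]])   # B: produced by r2, consumed by r3
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake r1 capped at 10
c = [0, 0, -1]                             # linprog minimises, so negate r3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
optimal_biomass = -res.fun                  # maximal achievable biomass flux
```

Here the uptake bound propagates through the linear chain, so the optimal biomass flux equals the uptake cap; real COBRA models apply the same constraint set to networks with thousands of reactions.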
PTPD: predicting therapeutic peptides by deep learning and word2vec
Background In the search for therapeutic peptides for disease treatments, many efforts have been made to identify various functional peptides from large numbers of peptide sequence databases. In this paper, we propose an effective computational model that uses deep learning and word2vec to predict therapeutic peptides (PTPD). Results Representation vectors of all k-mers were obtained through word2vec based on k-mer co-existence information. The original peptide sequences were then divided into k-mers using the windowing method. The peptide sequences were mapped to the input layer by the embedding vector obtained by word2vec. Three types of filters in the convolutional layers, as well as dropout and max-pooling operations, were applied to construct feature maps. These feature maps were concatenated into a fully connected dense layer, and rectified linear units (ReLU) and dropout operations were included to avoid over-fitting of PTPD. The classification probabilities were generated by a sigmoid function. PTPD was then validated using two datasets: an independent anticancer peptide dataset and a virulent protein dataset, on which it achieved accuracies of 96% and 94%, respectively. Conclusions PTPD identified novel therapeutic peptides efficiently, and it is suitable for application as a useful tool in therapeutic peptide design.
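The k-mer windowing step that feeds PTPD's word2vec embedding can be sketched in a few lines; the `step` parameter is an illustrative generalisation, since the abstract does not state the stride used.

```python
def kmers(sequence, k, step=1):
    """Split a peptide sequence into overlapping k-mers with a sliding
    window; each k-mer would then be replaced by its word2vec embedding
    before entering the convolutional layers."""
    return [sequence[i:i + k] for i in range(0, len(sequence) - k + 1, step)]
```

For example, `kmers("ACDEF", 3)` yields `["ACD", "CDE", "DEF"]`, turning one peptide into a "sentence" of k-mer "words" suitable for word2vec-style training.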
cytoHubba: identifying hub objects and sub-networks from complex interactome
Background Networks are a useful way of presenting many types of biological data, including protein-protein interactions, gene regulation, cellular pathways, and signal transduction. Measuring nodes by their network features lets us infer their importance and helps identify the central elements of biological networks. Results We introduce a novel Cytoscape plugin, cytoHubba, for ranking the nodes of a network by their network features. CytoHubba provides 11 topological analysis methods, including Degree, Edge Percolated Component, Maximum Neighborhood Component, Density of Maximum Neighborhood Component, Maximal Clique Centrality (MCC), and six centralities based on shortest paths (BottleNeck, EcCentricity, Closeness, Radiality, Betweenness, and Stress). Among the eleven methods, the newly proposed method, MCC, achieves better precision in predicting essential proteins from the yeast PPI network. Conclusions CytoHubba provides a user-friendly interface for exploring important nodes in biological networks and computes all eleven methods in one stop. Researchers can also combine cytoHubba with other plugins into novel analysis schemes. The networks and sub-networks identified by this topological analysis strategy will lead to new insights into essential regulatory networks and protein drug targets for experimental biologists. According to Cytoscape plugin download statistics, cytoHubba has been downloaded around 6,700 times since 2010.
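All of cytoHubba's topological methods follow the same pattern: score every node, then rank. As a minimal sketch of that pattern, here is hub ranking by degree, the simplest of the listed measures; MCC, betweenness, and the others differ only in how the per-node score is computed.

```python
from collections import defaultdict

def rank_by_degree(edges, top=3):
    """Rank nodes of an interaction network by degree: count incident
    edges per node, then sort nodes by score in descending order."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return sorted(degree, key=degree.get, reverse=True)[:top]
```

On a toy network where node "A" touches three edges and every other node fewer, `rank_by_degree(edges, top=1)` returns `["A"]`, identifying the hub.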