Catalogue Search | MBRL
Explore the vast range of titles available.
156 results for "Bioinformatic and algorithmical studies"
Performance of in silico prediction tools for the classification of rare BRCA1/2 missense variants in clinical diagnostics
by Hauke, Jan; Weber, Jonas; Engel, Christoph
in Bioinformatic and algorithmical studies; Biomedical and Life Sciences; Biomedicine
2018
Background
The use of next-generation sequencing approaches in clinical diagnostics has led to a tremendous increase in data and a vast number of variants of uncertain significance that require interpretation. Therefore, prediction of the effects of missense mutations using in silico tools has become a frequently used approach. The aim of this study was to assess the reliability of in silico prediction as a basis for clinical decision making in the context of hereditary breast and/or ovarian cancer.
Methods
We tested the performance of four prediction tools (Align-GVGD, SIFT, PolyPhen-2, MutationTaster2) using a set of 236 BRCA1/2 missense variants that had previously been classified by expert committees. However, a major pitfall in creating a reliable evaluation set for our purpose is that the generally accepted classification of BRCA1/2 missense variants relies on the multifactorial likelihood model, which is partially based on Align-GVGD results. To overcome this drawback, we identified 161 variants whose classification is independent of any previous in silico prediction. In addition to their performance as stand-alone tools, we examined the sensitivity, specificity, accuracy and Matthews correlation coefficient (MCC) of combined approaches.
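For reference, the reported metrics can be computed from confusion-matrix counts as in the minimal Python sketch below; the counts in the example are hypothetical and not taken from the study.

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, accuracy and Matthews correlation
    coefficient computed from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sensitivity, specificity, accuracy, mcc

# Hypothetical counts for one tool on a variant evaluation set
print(classification_metrics(tp=45, fp=8, tn=92, fn=5))
```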
Results
PolyPhen-2 achieved the lowest sensitivity (0.67), specificity (0.67), accuracy (0.67) and MCC (0.39). Align-GVGD achieved the highest specificity (0.92), accuracy (0.92) and MCC (0.73), but was outperformed in sensitivity (0.90) by SIFT (1.00) and MutationTaster2 (1.00). All tools suffered from poor specificities, resulting in an unacceptable proportion of false positive results in a clinical setting. This shortcoming could not be overcome by combining the tools. In the best-case scenario, 138 families would be affected by the misclassification of neutral variants within the cohort of patients of the German Consortium for Hereditary Breast and Ovarian Cancer.
Conclusion
We show that, due to low specificities, state-of-the-art in silico prediction tools are not suitable for predicting the pathogenicity of variants of uncertain significance in BRCA1/2. Thus, clinical consequences should never be based solely on in silico forecasts. However, our data suggest that SIFT and MutationTaster2 could be suitable for predicting benignity, as neither tool produced false negative predictions in our analysis.
Journal Article
HLA and proteasome expression body map
by Löwer, Martin; Boegel, Sebastian; Sahin, Ugur
in Autoimmune; Bioinformatic and algorithmical studies; Bioinformatics
2018
Background
The presentation of HLA peptide complexes to T cells is a highly regulated and tissue-specific process involving multiple transcriptionally controlled cellular components. The extensive polymorphism of HLA genes and the complex composition of the proteasome make it difficult to map their expression profiles across tissues.
Methods
Here we applied a tailored gene quantification pipeline to 4323 publicly available RNA-Seq datasets representing 55 normal tissues and cell types to examine expression profiles of (classical and non-classical) HLA class I, class II and proteasomal genes.
Results
We generated the first comprehensive expression atlas of antigen presentation-related genes across 56 normal tissues and cell types, including immune cells, pancreatic islets, platelets and hematopoietic stem cells. We found a surprisingly heterogeneous HLA expression pattern with up to 100-fold differences in intra-tissue median HLA abundances. Cells of the immune system and lymphatic organs expressed the highest levels of classical HLA class I (HLA-A,-B,-C), class II (HLA-DQA1,-DQB1,-DPA1,-DPB1,-DRA,-DRB1) and non-classical HLA class I (HLA-E,-F) molecules, whereas retina, brain, muscle, megakaryocytes and erythroblasts showed the lowest abundance. In contrast, we identified a distinct and highly tissue-restricted expression pattern of the non-classical class I gene HLA-G in placenta, pancreatic islets, pituitary gland and testis. While the constitutive proteasome showed relatively constant expression across all tissues, we found the immunoproteasome to be enriched in lymphatic organs and almost absent in immune privileged tissues.
Conclusions
Here, we not only provide a reference catalog of tissue and cell type specific HLA expression, but also highlight extremely variable expression of the basic components of antigen processing and presentation in different cell types. Our findings indicate that low expression of classical HLA class I molecules together with lack of immunoproteasome components as well as upregulation of HLA-G may be of key relevance to maintain tolerance in immune privileged tissues.
Journal Article
High performance logistic regression for privacy-preserving genome analysis
by Nascimento, Anderson C. A.; Todoki, Ariel; Railsback, Davis
in Accuracy; Algorithms; Approximation
2021
Background
In biomedical applications, valuable data is often split between owners who cannot openly share it because of privacy regulations and concerns. Training machine learning models on the joint data without violating privacy is a major technological challenge that can be addressed by combining techniques from machine learning and cryptography. When machine learning models are trained collaboratively with the cryptographic technique known as secure multi-party computation, the price paid for keeping the owners' data private is an increase in computational cost and runtime. A careful choice of machine learning techniques, together with algorithmic and implementation optimizations, is necessary to enable practical secure machine learning over distributed data sets. Such optimizations can be tailored to the kind of data and machine learning problem at hand.
Methods
Our setup involves secure two-party computation protocols, along with a trusted initializer that distributes correlated randomness to the two computing parties. We use a gradient descent-based algorithm for training a logistic-regression-like model with a clipped ReLU activation function, and we break down the algorithm into the corresponding cryptographic protocols. Our main contributions are a new protocol for computing the activation function that requires neither secure comparison protocols nor Yao's garbled circuits, and a series of cryptographic engineering optimizations to improve performance.
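As an illustration of the model being trained (not of the secure protocol itself), here is a minimal plaintext Python sketch of gradient descent with a clipped ReLU activation; the hyperparameters, update rule and toy data are illustrative assumptions, not the paper's.

```python
import numpy as np

def clipped_relu(z):
    # Piecewise-linear surrogate for the sigmoid: 0 below 0, z on [0, 1], 1 above 1
    return np.clip(z, 0.0, 1.0)

def train_clipped_relu_model(X, y, lr=0.1, epochs=100):
    """Plaintext gradient-descent training of a logistic-regression-like
    model whose activation is a clipped ReLU (reference sketch only)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        pred = clipped_relu(X @ w + b)
        err = pred - y                    # logistic-regression-style update
        w -= lr * X.T @ err / len(y)
        b -= lr * err.mean()
    return w, b

# Toy data standing in for gene expression features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
w, b = train_clipped_relu_model(X, y)
print("training accuracy:", ((clipped_relu(X @ w + b) > 0.5) == y).mean())
```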
Results
For our largest gene expression data set, we train a model that requires over 7 billion secure multiplications; the training completes in about 26.90 s in a local area network. The implementation in this work is a further optimized version of the implementation with which we won first place in Track 4 of the iDASH 2019 secure genome analysis competition.
Conclusions
In this paper, we present a secure logistic regression training protocol and its implementation, with a new subprotocol to securely compute the activation function. To the best of our knowledge, we present the fastest existing secure multi-party computation implementation for training logistic regression models on high dimensional genome data distributed across a local area network.
Journal Article
Detecting copy number variation in next generation sequencing data from diagnostic gene panels
by Drabløs, Finn; Lavik, Liss Anne Solberg; Sjursen, Wenche
in Bioinformatic and algorithmical studies; Biomedical and Life Sciences; Biomedicine
2021
Background
Detection of copy number variation (CNV) in genes associated with disease is important in genetic diagnostics, and next generation sequencing (NGS) technology provides data that can be used for CNV detection. However, CNV detection based on NGS data is generally not used in diagnostic labs because the data analysis is challenging, especially with data from targeted gene panels. Wet lab methods like MLPA (MRC Holland) are widely used, but are expensive, time-consuming and have gene-specific limitations. Our aim has been to develop a bioinformatic tool for CNV detection from NGS data in medical genetic diagnostic samples.
Results
Our computational pipeline for detecting CNVs in NGS data from targeted gene panels utilizes the coverage depth of the captured regions and calculates a copy number ratio score for each region. This score is computed by comparing the mean coverage of the sample with the mean coverage of the same region in other samples, defined as a pool. The pipeline selects pools for comparison dynamically from previously sequenced samples, using the pool whose average coverage depth is nearest to that of the sample. A sliding window-based approach is used to analyze each region, where the window length and sliding distance can be chosen dynamically to increase or decrease the resolution. This helps in detecting CNVs in small or partial exons. With this pipeline we have correctly identified the CNVs in 36 positive control samples, with a sensitivity of 100% and a specificity of 91%. We have detected whole-gene deletions/duplications, single- and multi-exon deletions/duplications, partial exonic deletions and mosaic deletions. Since its implementation in mid-2018 it has proven its diagnostic value with more than 45 CNV findings in routine tests.
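A minimal Python sketch of the sliding-window coverage-ratio idea described above follows; the window size, step and the deletion/duplication interpretation in the comments are illustrative assumptions, not the pipeline's actual parameters.

```python
import numpy as np

def copy_number_ratios(sample_cov, pool_cov, window=50, step=25):
    """Sliding-window copy number ratio for one captured region.

    sample_cov, pool_cov: per-base coverage arrays for the sample and for
    the mean of the comparison pool over the same region. A ratio near 0.5
    suggests a heterozygous deletion, near 1.5 a duplication (illustrative
    thresholds only).
    """
    ratios = []
    for start in range(0, len(sample_cov) - window + 1, step):
        s = sample_cov[start:start + window].mean()
        p = pool_cov[start:start + window].mean()
        ratios.append(s / p if p > 0 else float("nan"))
    return np.array(ratios)

# Toy 300 bp region with a simulated heterozygous deletion in the middle
pool = np.full(300, 200.0)
sample = np.full(300, 200.0)
sample[100:200] = 100.0
print(copy_number_ratios(sample, pool).round(2))
```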
Conclusions
With this pipeline as part of our diagnostic practice it is now possible to detect partial, single- or multi-exonic, and intragenic CNVs in all genes in our target panel. This has helped our diagnostic lab to expand the portfolio of genes for which we offer CNV detection, which previously was limited by the availability of MLPA kits.
Journal Article
Application of Neural Networks for classification of Patau, Edwards, Down, Turner and Klinefelter Syndrome based on first trimester maternal serum screening data, ultrasonographic findings and patient demographics
by Badnjevic, Almir; Gurbeta, Lejla; Catic, Aida
in Artificial neural networks; Bioinformatic and algorithmical studies; Biomedical and Life Sciences
2018
Background
The use of Artificial Neural Networks (ANNs) for genome-enabled classification and for establishing genome-phenotype correlations has been investigated more extensively over the past few years. The reason for this is that ANNs are good approximators of complex functions, so classification can be performed without the need for an explicitly defined input-output model. This engineering tool can be applied to optimize existing methods for disease/syndrome classification. Cytogenetic and molecular analyses are the most frequent tests used in prenatal diagnostics for the early detection of Turner, Klinefelter, Patau, Edwards and Down syndrome. These procedures can be lengthy, repetitive and often invasive, so a robust automated method for classifying and reporting prenatal diagnostics would greatly help clinicians with their routine work.
Methods
The database consisted of data collected from 2500 pregnant women who came to the Institute of Gynecology, Infertility and Perinatology “Mehmedbasic” for routine antenatal care between January 2000 and December 2016. During the first trimester, all women underwent a screening test in which maternal serum pregnancy-associated plasma protein A (PAPP-A) and free beta human chorionic gonadotropin (β-hCG) were measured. In addition, fetal nuchal translucency thickness and the presence or absence of the nasal bone were assessed by ultrasound.
Results
The architectures of linear feedforward and feedback neural networks were investigated for various training data distributions and numbers of neurons in the hidden layer. The feedback neural network architecture outperformed the feedforward architecture in predictive ability for all five aneuploidy prenatal syndrome classes. A feedforward neural network with 15 neurons in the hidden layer achieved a classification sensitivity of 92.00%, while the classification sensitivity of the feedback (Elman) neural network was 99.00%. The average accuracy of the feedforward network was 89.6%, and that of the feedback network was 98.8%.
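As a rough illustration of the feedforward architecture size reported above, the following Python sketch trains a generic single-hidden-layer classifier with 15 neurons on synthetic stand-ins for the screening features; it is not the authors' implementation, and the data and labels are random placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-ins for the four screening features from Methods:
# PAPP-A, free beta-hCG, nuchal translucency, nasal bone observed (0/1).
rng = np.random.default_rng(1)
X = np.column_stack([
    rng.normal(1.0, 0.4, 500),   # PAPP-A (hypothetical multiples of median)
    rng.normal(1.0, 0.5, 500),   # free beta-hCG
    rng.normal(1.8, 0.6, 500),   # nuchal translucency in mm
    rng.integers(0, 2, 500),     # nasal bone observed
])
y = rng.integers(0, 5, 500)      # 5 syndrome classes (random labels here)

# Feedforward network with one 15-neuron hidden layer, mirroring the size
# reported in the abstract.
clf = MLPClassifier(hidden_layer_sizes=(15,), max_iter=2000, random_state=1)
clf.fit(X, y)
print("training accuracy on synthetic data:", round(clf.score(X, y), 2))
```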
Conclusion
The results presented in this paper demonstrate that an expert diagnostic system based on neural networks can be used efficiently to classify the five aneuploidy syndromes covered in this study, based on first trimester maternal serum screening data, ultrasonographic findings and patient demographics. The developed expert system proved to be simple, robust and powerful in properly classifying prenatal aneuploidy syndromes.
Journal Article
Machine learning based refined differential gene expression analysis of pediatric sepsis
by EL-Manzalawy, Yasser; Abbas, Mostafa
in Algorithms; Analysis; Bioinformatic and algorithmical studies
2020
Background
Differential expression (DE) analysis of transcriptomic data enables genome-wide analysis of gene expression changes associated with biological conditions of interest. Such analysis often provides a wide list of genes that are differentially expressed between two or more groups. In general, identified differentially expressed genes (DEGs) can be subject to further downstream analysis for obtaining more biological insights such as determining enriched functional pathways or gene ontologies. Furthermore, DEGs are treated as candidate biomarkers and a small set of DEGs might be identified as biomarkers using either biological knowledge or data-driven approaches.
Methods
In this work, we present a novel approach for identifying biomarkers from a list of DEGs by re-ranking them according to the Minimum Redundancy Maximum Relevance (MRMR) criterion using a repeated cross-validation feature selection procedure.
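A minimal Python sketch of greedy MRMR re-ranking is given below; it uses simple correlation-based relevance and redundancy scores and omits the repeated cross-validation wrapper, so it only approximates the approach described.

```python
import numpy as np

def mrmr_rank(X, y, n_select):
    """Greedy MRMR ranking: at each step pick the feature with the highest
    relevance to the label minus its mean redundancy with already selected
    features (correlation-based scores; illustrative only)."""
    relevance = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        remaining = [j for j in range(X.shape[1]) if j not in selected]
        scores = [relevance[j] - corr[j, selected].mean() for j in remaining]
        selected.append(remaining[int(np.argmax(scores))])
    return selected

# Toy data: 30 "DEGs" measured in 100 samples with a binary outcome
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 30))
y = (X[:, 0] + X[:, 3] + rng.normal(scale=0.5, size=100) > 0).astype(float)
print("top 10 re-ranked gene indices:", mrmr_rank(X, y, 10))
```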
Results
Using gene expression profiles for 199 children with sepsis and septic shock, we identify 108 DEGs and propose a 10-gene signature for reliably predicting pediatric sepsis mortality with an estimated Area Under ROC Curve (AUC) score of 0.89.
Conclusions
Machine learning based refinement of DE analysis is a promising tool for prioritizing DEGs and discovering biomarkers from gene expression profiles. Moreover, our reported 10-gene signature for pediatric sepsis mortality may facilitate the development of reliable diagnosis and prognosis biomarkers for sepsis.
Journal Article
Differential gene expression in disease: a comparison between high-throughput studies and the literature
by Rodriguez-Esteban, Raul; Jiang, Xiaoyu
in Analysis; Bioinformatic and algorithmical studies; Bioinformatics
2017
Background
Differential gene expression is important to understand the biological differences between healthy and diseased states. Two common sources of differential gene expression data are microarray studies and the biomedical literature.
Methods
With the aid of text mining and gene expression analysis we have examined the comparative properties of these two sources of differential gene expression data.
Results
The literature shows a preference for reporting genes associated with higher fold changes in microarray data, rather than genes that are simply significantly differentially expressed. Thus, the resemblance between the literature and microarray data increases when the fold-change threshold for microarray data is raised. Moreover, the literature preferentially reports differentially expressed genes that (1) are overexpressed rather than underexpressed; (2) are overexpressed in multiple diseases; and (3) are popular in the biomedical literature at large. Additionally, the degree to which diseases appear similar depends on whether microarray data or the literature is used to compare them. Finally, vaguely qualified reports of differential expression magnitudes in the literature correlate only weakly with microarray fold-change data.
Conclusions
Reporting biases of differential gene expression in the literature may be affecting our understanding of disease biology and of the degree of similarity that actually exists between different diseases.
Journal Article
Challenges and strategies for implementing genomic services in diverse settings: experiences from the Implementing GeNomics In pracTicE (IGNITE) network
by Rosenman, Marc; Ryanne Wu, R.; Levy, Mia A.
in Big Data; Bioinformatic and algorithmical studies; Biomedical and Life Sciences
2017
Background
To realize potential public health benefits from genetic and genomic innovations, understanding how best to implement the innovations into clinical care is important. The objective of this study was to synthesize data on challenges identified by six diverse projects that are part of a National Human Genome Research Institute (NHGRI)-funded network focused on implementing genomics into practice and strategies to overcome these challenges.
Methods
We used a multiple-case study approach with each project considered as a case and qualitative methods to elicit and describe themes related to implementation challenges and strategies. We describe challenges and strategies in an implementation framework and typology to enable consistent definitions and cross-case comparisons. Strategies were linked to challenges based on expert review and shared themes.
Results
Three challenges were identified by all six projects, and strategies to address these challenges varied across the projects. One common challenge was to increase the relative priority of integrating genomics within the health system electronic health record (EHR). Four projects used data warehousing techniques to accomplish the integration. The second common challenge was to strengthen clinicians’ knowledge and beliefs about genomic medicine. To overcome this challenge, all projects developed educational materials and conducted meetings and outreach focused on genomic education for clinicians. The third challenge was engaging patients in the genomic medicine projects. Strategies to overcome this challenge included use of mass media to spread the word, actively involving patients in implementation (e.g., a patient advisory board), and preparing patients to be active participants in their healthcare decisions.
Conclusions
This is the first collaborative evaluation focusing on the description of genomic medicine innovations implemented in multiple real-world clinical settings. Findings suggest that strategies to facilitate integration of genomic data within existing EHRs and educate stakeholders about the value of genomic services are considered important for effective implementation. Future work could build on these findings to evaluate which strategies are optimal under what conditions. This information will be useful for guiding translation of discoveries to clinical care, which, in turn, can provide data to inform continual improvement of genomic innovations and their applications.
Journal Article
Optimally splitting cases for training and testing high dimensional classifiers
by Simon, Richard M; Dobbin, Kevin K
in Algorithms; Analysis of Variance; Artificial Intelligence
2011
Background
We consider the problem of designing a study to develop a predictive classifier from high dimensional data. A common study design is to split the sample into a training set and an independent test set, where the former is used to develop the classifier and the latter to evaluate its performance. In this paper we address the question of what proportion of the samples should be devoted to the training set. How does this proportion impact the mean squared error (MSE) of the prediction accuracy estimate?
Results
We develop a non-parametric algorithm for determining an optimal splitting proportion that can be applied with a specific dataset and classifier algorithm. We also perform a broad simulation study to better understand the factors that determine the best split proportions and to evaluate commonly used splitting strategies (1/2 training or 2/3 training) under a wide variety of conditions. These methods are based on a decomposition of the MSE into three intuitive component parts.
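The following Python sketch illustrates the general idea of choosing a split proportion by repeated resampling; it measures only the variability of the accuracy estimate at each proportion and is not the authors' MSE decomposition or algorithm, and the classifier and data are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def split_variability(X, y, train_fracs, n_repeats=50, seed=0):
    """For each candidate training proportion, repeatedly resample a
    train/test split and report the variance of the test-set accuracy
    estimate (a rough stand-in for one component of the MSE)."""
    rng = np.random.default_rng(seed)
    results = {}
    for frac in train_fracs:
        accs = []
        for _ in range(n_repeats):
            Xtr, Xte, ytr, yte = train_test_split(
                X, y, train_size=frac,
                random_state=int(rng.integers(1_000_000)))
            accs.append(LogisticRegression(max_iter=1000)
                        .fit(Xtr, ytr).score(Xte, yte))
        accs = np.array(accs)
        results[frac] = ((accs - accs.mean()) ** 2).mean()
    return results

# Toy high-dimensional data standing in for a microarray study
rng = np.random.default_rng(3)
X = rng.normal(size=(120, 50))
y = (X[:, :5].sum(axis=1) > 0).astype(int)
print(split_variability(X, y, train_fracs=[0.5, 2 / 3, 0.8]))
```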
Conclusions
By applying these approaches to a number of synthetic and real microarray datasets we show that for linear classifiers the optimal proportion depends on the overall number of samples available and the degree of differential expression between the classes. The optimal proportion was found to depend on the full dataset size (n) and classification accuracy - with higher accuracy and smaller
n
resulting in more assigned to the training set. The commonly used strategy of allocating 2/3rd of cases for training was close to optimal for reasonable sized datasets (
n
≥ 100) with strong signals (i.e. 85% or greater full dataset accuracy). In general, we recommend use of our nonparametric resampling approach for determing the optimal split. This approach can be applied to any dataset, using any predictor development method, to determine the best split.
Journal Article
Using Ethereum blockchain to store and query pharmacogenomics data via smart contracts
by Brannon, Charlotte M.; Gürsoy, Gamze; Gerstein, Mark
in Algorithms; Bioinformatic and algorithmical studies; Biomedical and Life Sciences
2020
Background
As pharmacogenomics data becomes increasingly integral to clinical treatment decisions, appropriate data storage and sharing protocols need to be adopted. One promising option for secure, high-integrity storage and sharing is Ethereum smart contracts. Ethereum is a blockchain platform, and smart contracts are immutable pieces of code running on virtual machines in this platform that can be invoked by a user or another contract (in the blockchain network). The 2019 iDASH (Integrating Data for Analysis, Anonymization, and Sharing) competition for Secure Genome Analysis challenged participants to develop time- and space-efficient Ethereum smart contracts for gene-drug relationship data.
Methods
Here we design a specific smart contract to store and query gene-drug interactions in Ethereum using an index-based, multi-mapping approach. Our contract stores each pharmacogenomics observation, a gene-variant-drug triplet with outcome, in a mapping searchable by a unique identifier, allowing for time- and space-efficient storage and querying. This solution ranked in the top three at the 2019 iDASH competition. We further improve our "challenge solution" and develop an alternate "fastQuery" smart contract, which combines identical gene-variant-drug combinations into a single storage entry, leading to significantly better scalability and query efficiency.
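As a conceptual illustration of the combined-entry storage scheme (the actual work uses Ethereum smart contracts, not Python), a small in-memory analogue is sketched below; the class, key and field names are illustrative assumptions.

```python
from collections import defaultdict

class PharmacogenomicsStore:
    """In-memory analogue of a fastQuery-style layout: identical
    gene-variant-drug combinations share one entry that accumulates
    their outcome observations."""

    def __init__(self):
        self.entries = defaultdict(list)

    def insert(self, gene, variant, drug, outcome):
        self.entries[(gene, variant, drug)].append(outcome)

    def query(self, gene=None, variant=None, drug=None):
        # Query by any combination of fields (0-, 1- or 2-AND style).
        for (g, v, d), outcomes in self.entries.items():
            if ((gene is None or g == gene) and
                    (variant is None or v == variant) and
                    (drug is None or d == drug)):
                yield (g, v, d), outcomes

store = PharmacogenomicsStore()
store.insert("CYP2D6", "*4", "codeine", "reduced efficacy")
store.insert("CYP2D6", "*4", "codeine", "reduced efficacy")
store.insert("TPMT", "*3A", "azathioprine", "toxicity")
print(list(store.query(gene="CYP2D6", drug="codeine")))
```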
Results
On a private, proof-of-authority network, both our challenge and fastQuery solutions exhibit approximately linear memory and time usage for inserting into and querying small databases (<1,000 entries). For larger databases (1,000 to 10,000 entries), fastQuery maintains this scaling. Furthermore, both solutions can query by a single field ("0-AND") or by a combination of fields ("1- or 2-AND"). Specifically, the challenge solution can complete a 2-AND query on a small database (100 entries) in 35 ms using 0.1 MB of memory. For the same query, fastQuery achieves a 2-fold improvement in time and a 10-fold improvement in memory.
Conclusion
We show that pharmacogenomics data can be stored and queried efficiently using Ethereum blockchain. Our solutions could potentially be used to store a range of clinical data and extended to other fields requiring high-integrity data storage and efficient access.
Journal Article