Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
676 result(s) for "Auto Encoders"
DRMDA: deep representations‐based miRNA–disease association prediction
2018
MicroRNAs (miRNAs) have recently been confirmed to be important molecules in many crucial biological processes and are therefore related to various complex human diseases. However, previous methods for predicting miRNA–disease associations have their own deficiencies. Under this circumstance, we developed a prediction method called deep representations‐based miRNA–disease association (DRMDA) prediction. The original miRNA–disease association data were extracted from the HMDD database. A stacked auto‐encoder, a greedy layer‐wise unsupervised pre‐training algorithm and a support vector machine were implemented to predict potential associations. We compared DRMDA with five previous classical prediction models (HGIMDA, RLSMDA, HDMP, WBSMDA and RWRMDA) in global leave‐one‐out cross‐validation (LOOCV), local LOOCV and fivefold cross‐validation, respectively. The AUCs achieved by DRMDA were 0.9177, 0.8339 and 0.9156 ± 0.0006 in the three tests above, respectively. In further case studies, we predicted the top 50 potential miRNAs for colon neoplasms, lymphoma and prostate neoplasms; 88%, 90% and 86% of the predicted miRNAs, respectively, could be verified by experimental evidence. In conclusion, DRMDA is a promising prediction method that can identify potential and novel miRNA–disease associations.
Journal Article
A Novel Framework Using Deep Auto-Encoders Based Linear Model for Data Classification
by
Güzel, Mehmet Serdar
,
Çelebi, Fatih V.
,
Karim, Ahmad M.
in
Accuracy
,
Algorithms
,
Alzheimer's disease
2020
This paper proposes a novel data classification framework combining sparse auto-encoders (SAEs) with a post-processing system consisting of a linear model whose parameters are estimated by the Particle Swarm Optimization (PSO) algorithm. Sensitive, high-level features are extracted by the first auto-encoder, which is wired to the second auto-encoder, followed by a Softmax layer that classifies the features obtained from the second layer. The two auto-encoders and the Softmax classifier are stacked and trained in a supervised manner using the well-known backpropagation algorithm to enhance the performance of the neural network. Afterwards, the linear model transforms the calculated output of the deep stacked sparse auto-encoder to a value close to the anticipated output. This simple transformation increases the overall classification performance of the stacked sparse auto-encoder architecture. The PSO algorithm estimates the parameters of the linear model in a metaheuristic fashion. The proposed framework is validated on three public datasets, with promising results compared with the current literature. Furthermore, the framework can be applied to any data classification problem with minor updates, such as altering parameters including the input features, hidden neurons and output classes.
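As a rough illustration of the stacked architecture this abstract describes, the sketch below runs a forward pass through two encoder layers and a Softmax classification layer. The layer sizes and random weights are hypothetical; the actual system would learn them via unsupervised pre-training and backpropagation, and tune a final linear model with PSO.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    # Numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical layer sizes: 20 input features -> 12 -> 6 -> 3 classes
W1 = rng.normal(scale=0.1, size=(20, 12))  # first auto-encoder's encoder weights
W2 = rng.normal(scale=0.1, size=(12, 6))   # second auto-encoder's encoder weights
W3 = rng.normal(scale=0.1, size=(6, 3))    # Softmax classification layer

def stacked_forward(x):
    h1 = relu(x @ W1)        # features from the first auto-encoder
    h2 = relu(h1 @ W2)       # features from the second auto-encoder
    return softmax(h2 @ W3)  # class probabilities

x = rng.normal(size=(5, 20))
probs = stacked_forward(x)
# Each row of `probs` is a probability distribution over the 3 classes
```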
Journal Article
Dimensionality reduction methods for extracting functional networks from large‐scale CRISPR screens
by
Billmann, Maximilian
,
Myers, Chad L
,
Hassan, Arshia Zernab
in
auto‐encoder
,
Cancer
,
Cell culture
2023
CRISPR‐Cas9 screens facilitate the discovery of gene functional relationships and phenotype‐specific dependencies. The Cancer Dependency Map (DepMap) is the largest compendium of whole‐genome CRISPR screens aimed at identifying cancer‐specific genetic dependencies across human cell lines. A mitochondria‐associated bias has been previously reported to mask signals for genes involved in other functions, and thus, methods for normalizing this dominant signal to improve co‐essentiality networks are of interest. In this study, we explore three unsupervised dimensionality reduction methods—autoencoders, robust, and classical principal component analyses (PCA)—for normalizing the DepMap to improve functional networks extracted from these data. We propose a novel “onion” normalization technique to combine several normalized data layers into a single network. Benchmarking analyses reveal that robust PCA combined with onion normalization outperforms existing methods for normalizing the DepMap. Our work demonstrates the value of removing low‐dimensional signals from the DepMap before constructing functional gene networks and provides generalizable dimensionality reduction‐based normalization tools.
Synopsis
Dimensionality reduction‐based methods are proposed for normalizing Cancer Dependency Map (DepMap) genome‐wide CRISPR screen data to enhance the functional information in co‐essentiality networks extracted from DepMap.
Low‐dimensional patterns introduce dominant covariation in gene networks derived from DepMap data, obscuring more subtle functional relationships.
Applying dimensionality reduction approaches to remove low‐dimensional signal, including robust and classical principal component analysis or autoencoders, can increase functional information captured by similarity networks derived from DepMap data.
Onion normalization, which integrates several normalized data layers into a single network, outperforms existing methods for constructing co‐essentiality networks from the DepMap.
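A minimal NumPy sketch of the PCA-style normalization idea from the synopsis: project out the top principal component(s) of a data matrix so that a dominant low-dimensional signal no longer drives downstream similarity networks. The toy data and the choice of k=1 are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def remove_top_components(X, k):
    """Remove the top-k principal components from a data matrix.

    Sketch of PCA-based normalization: the dominant low-dimensional signal
    (e.g. a screen-wide bias) is projected out before building
    gene-gene similarity networks.
    """
    Xc = X - X.mean(axis=0, keepdims=True)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    top = Vt[:k]                      # top-k principal directions
    return Xc - Xc @ top.T @ top      # residual after projecting them out

# Toy data: independent noise plus a strong shared (rank-1) signal
X = rng.normal(size=(50, 10)) + 5.0 * rng.normal(size=(50, 1))
Xn = remove_top_components(X, k=1)   # dominant shared signal removed
```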
Journal Article
3D reconstruction of porous media using a batch normalized variational auto-encoder
by
Zhang, Ting
,
Yang, Yi
,
Zhang, Anqin
in
Coders
,
Deep learning
,
Earth and Environmental Science
2022
The 3D reconstruction of porous media plays a key role in many engineering applications. There are two main methods for the reconstruction of porous media: physical experimental methods and numerical reconstruction methods. The former are usually expensive and restricted by the limited size of experimental samples, while the latter are relatively cost-effective but still suffer from lengthy processing times and unsatisfactory performance. With the vigorous development of deep learning in recent years, applying deep learning methods to the 3D reconstruction of porous media has become an important direction. The variational auto-encoder (VAE) is a typical deep learning method with a strong ability to extract features from training images (TIs), but it suffers from posterior collapse, meaning the data generated by the decoder are not related to its input, i.e. the latent space Z. This paper proposes a VAE model (called SE-FBN-VAE) based on the squeeze-and-excitation network (SENet) and fixed batch normalization (FBN) for the reconstruction of porous media. SENet is a simple and efficient channel attention mechanism that improves the sensitivity of the model to channel characteristics. Applying SENet to the VAE further improves its ability to extract features from TIs. Batch normalization (BN) is a common data normalization method in neural networks that reduces the network's convergence time. In this paper, BN is slightly modified to solve the problem of posterior collapse in the VAE. Comparisons with other numerical methods demonstrate the effectiveness and practicability of the proposed method.
Journal Article
Understanding stock market instability via graph auto-encoders
2025
Understanding stock market instability is a key question in financial management as practitioners seek to forecast breakdowns in long-run asset co-movement patterns which expose portfolios to rapid and devastating collapses in value. These disruptions are linked to changes in the structure of market wide stock correlations which increase the risk of high volatility shocks. The structure of these co-movements can be described as a network where companies are represented by nodes while edges capture correlations between their price movements. Co-movement breakdowns then manifest as abrupt changes in the topological structure of this network. Measuring the scale of this change and learning a timely indicator of breakdowns is central in understanding both financial stability and volatility forecasting. We propose to use the edge reconstruction accuracy of a graph auto-encoder as an indicator for how homogeneous connections between assets are, which we use, based on the literature of financial network analysis, as a proxy to infer market volatility. We show, through our experiments on the Standard and Poor’s index over the 2015-2022 period, that the reconstruction errors from our model correlate with volatility spikes and can be used to improve out-of-sample autoregressive modeling of volatility. Our results demonstrate that market instability can be predicted by changes in the homogeneity in connections of the financial network which expands the understanding of instability in the stock market. We discuss the implications of this graph machine learning-based volatility estimation for policy targeted at ensuring financial market stability.
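To make the proposed indicator concrete, the sketch below scores how well an inner-product decoder, the usual graph auto-encoder decoding step, reconstructs a network's edges from node embeddings. The toy adjacency matrix and hand-set embeddings are hypothetical stand-ins for learned quantities; in the paper's setting, a drop in this accuracy would signal less homogeneous connections between assets.

```python
import numpy as np

# Hypothetical correlation network over 6 assets, two cliques of 3
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

# Stand-in for node embeddings a trained graph auto-encoder would produce;
# here the two cliques get well-separated 2-d embeddings.
Z = np.array([[1, 0], [1, 0], [1, 0],
              [0, 1], [0, 1], [0, 1]], dtype=float)

def edge_reconstruction_accuracy(A, Z, threshold=0.5):
    """Inner-product decoder: predict an edge where sigmoid(z_i . z_j) > threshold."""
    probs = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))
    pred = (probs > threshold).astype(float)
    mask = ~np.eye(len(A), dtype=bool)   # ignore self-loops
    return (pred[mask] == A[mask]).mean()

acc = edge_reconstruction_accuracy(A, Z)
# These embeddings reconstruct the toy network's edges perfectly
```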
Journal Article
Computer-aided diagnostic system kinds and pulmonary nodule detection efficacy
by
Abdalla, Kasim Karam
,
Kadhim, Omar Raad
,
Motlak, Hassan Jassim
in
Accuracy
,
Classification
,
Computed tomography
2022
This paper summarizes the literature on computer-aided detection (CAD) systems used to identify and diagnose lung nodules in images obtained with computed tomography (CT) scanners. The importance of developing such systems lies in the fact that manually detecting lung nodules is painstaking, sequential and time-consuming work for radiologists. Moreover, pulmonary nodules have many appearances and shapes, and the large number of slices generated by the scanner makes it difficult to locate the lung nodules accurately. Manual detection can also miss some nodules, especially when their diameter is less than 10 mm. CAD systems are therefore essential assistants to radiologists in nodule detection, reducing the time consumed and improving accuracy. The objective of this paper is to follow up on current and previous work on lung cancer detection and lung nodule diagnosis. The literature is surveyed briefly, covering a group of specialized systems in this field and the methods they use, with an emphasis on systems based on deep learning involving convolutional neural networks.
Journal Article
A semantic‐based method for analysing unknown malicious behaviours via hyper‐spherical variational auto‐encoders
2023
In User and Entity Behaviour Analytics (UEBA), unknown malicious behaviours are often difficult to detect automatically due to the lack of labelled data. Most existing methods also fail to take full advantage of threat intelligence and to incorporate the impact of the behaviour patterns of benign users. To address this issue, this paper proposes a Generalised Zero‐Shot Learning (GZSL) method based on hyper‐spherical Variational Auto‐Encoders (VAEs). Compared to ordinary VAEs, the authors’ proposed method is more robust and better suited to capturing data with richer and more nuanced structures. The method analyses unknown malicious behaviours by projecting them and their semantic attributes into a shared space, where they are matched by cosine similarity. The authors further use a Graph Convolutional Network (GCN) to reduce the impact of different user behaviour patterns before projection. Experimental results indicate that the proposed method is efficient in the analysis of unknown malicious behaviours.
Journal Article
Multi-layer maximum mean discrepancy in auto-encoders for cross-corpus speech emotion recognition
2023
Speech emotion recognition performance degrades due to the mismatch between the training (source) and test (target) corpora. Domain adaptation methods can be used to handle this problem. In this paper, we propose a deep domain adaptation method for ordinary and variational auto-encoders to extract domain-invariant features for cross-corpus speech emotion recognition. We consider an auto-encoder for each source and target domain dataset and propose to train the auto-encoders using a domain adaptation loss along with the conventional loss. The domain adaptation loss is based on the maximum mean discrepancy between layers of the source and target auto-encoders, bringing the distributions of target and source domain features closer to obtain a domain-invariant feature space. We report results on several emotional speech datasets as source and target, using an SVM classifier trained only on the extracted source features. Experimental results show that the proposed domain-adapted auto-encoder and variational auto-encoder improve cross-corpus speech emotion recognition accuracy compared with unadapted auto-encoders and other related methods.
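The maximum mean discrepancy underlying such an adaptation loss can be sketched in a few lines of NumPy with an RBF kernel. The feature arrays below are random stand-ins for layer activations, and the kernel bandwidth is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf_mmd2(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel (biased estimate)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

src = rng.normal(loc=0.0, size=(100, 4))       # e.g. source-corpus activations
tgt_near = rng.normal(loc=0.0, size=(100, 4))  # target drawn from same distribution
tgt_far = rng.normal(loc=3.0, size=(100, 4))   # target with a large domain shift

# Matched distributions give a near-zero MMD, mismatched ones a larger value;
# minimizing this quantity pulls the two feature distributions together.
mmd_near = rbf_mmd2(src, tgt_near)
mmd_far = rbf_mmd2(src, tgt_far)
```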
Journal Article
Auto encoder-guided Feature Extraction for Pneumonia Identification from Chest X-ray Images
2024
The World Health Organization recognizes pneumonia as a significant global health issue. Artificial intelligence, particularly machine learning and deep learning, has emerged as a valuable tool for improving pneumonia diagnosis. However, these techniques face a major challenge: the lack of labeled data. To tackle this, we propose using unsupervised learning models, which can produce comparable results even with limited training data. Our study presents an unsupervised learning approach utilizing autoencoders to detect pneumonia from chest X-ray images. Our method uses variational autoencoders for feature extraction; the extracted features are then classified with a Random Forest classifier. The model is trained on a dataset containing two classes of X-ray images: pneumonia and normal. Our approach demonstrates effectiveness comparable to existing supervised learning methods.
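The feature-extraction step can be sketched as a variational encoder that outputs a mean and log-variance and samples a latent code via the reparameterization trick. The weights and dimensions below are hypothetical; in the pipeline described above, the extracted features would feed a Random Forest classifier.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical pretrained encoder weights: 64-pixel patch -> 8-dim latent
W_mu = rng.normal(scale=0.1, size=(64, 8))
W_logvar = rng.normal(scale=0.1, size=(64, 8))

def encode(x):
    """Variational encoder: return mean, log-variance, and a sampled latent."""
    mu, logvar = x @ W_mu, x @ W_logvar
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps   # reparameterization trick
    return mu, logvar, z

x = rng.normal(size=(10, 64))             # stand-in for flattened image patches
mu, logvar, z = encode(x)
# `mu` (or `z`) is the extracted representation handed to a downstream classifier
```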
Journal Article
An Efficient Anomaly Detection System for Crowded Scenes Using Variational Autoencoders
by
Yu, Xiaosheng
,
Xu, Ming
,
Chen, Dongyue
in
anomaly detection
,
convolutional auto-encoder
,
Deep learning
2019
Anomaly detection in crowded scenes is an important and challenging part of intelligent video surveillance systems. As deep neural networks have succeeded at feature representation, the features they extract capture the appearance and motion patterns of different scenes more specifically than the hand-crafted features typically used in traditional anomaly detection approaches. In this paper, we propose a new baseline framework for anomaly detection in complex surveillance scenes based on a variational auto-encoder with convolution kernels to learn feature representations. First, the raw frame series is provided as input to our variational auto-encoder without any preprocessing to learn the appearance and motion features of the receptive fields. Then, multiple Gaussian models are used to predict the anomaly scores of the corresponding receptive fields. Our proposed two-stage anomaly detection system is evaluated on a large-scene video surveillance dataset, the UCSD pedestrian datasets, and yields competitive performance compared with state-of-the-art methods.
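The second stage, scoring receptive fields with a Gaussian model, can be sketched as a squared Mahalanobis distance under a Gaussian fitted to features of normal scenes. The random features below stand in for what a trained variational auto-encoder would produce.

```python
import numpy as np

rng = np.random.default_rng(5)

# Features from normal (non-anomalous) receptive fields; in the paper's
# pipeline a trained variational auto-encoder would supply these.
normal_feats = rng.normal(size=(500, 3))

# Fit a single Gaussian to the normal features
mu = normal_feats.mean(axis=0)
cov = np.cov(normal_feats, rowvar=False)
cov_inv = np.linalg.inv(cov)

def anomaly_score(x):
    """Squared Mahalanobis distance under the fitted Gaussian model."""
    d = x - mu
    return float(d @ cov_inv @ d)

typical = anomaly_score(np.zeros(3))                 # near the training mean
unusual = anomaly_score(np.array([6.0, -6.0, 6.0]))  # far outside the training mass
# Higher scores indicate receptive fields less consistent with normal activity
```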
Journal Article