110 result(s) for "variational auto encoder"
Knowledge Interpolated Conditional Variational Auto-Encoder for Knowledge Grounded Dialogues
In Knowledge Grounded Dialogue (KGD) generation, the explicit modeling of instance-variety of knowledge specificity and its seamless fusion with the dialogue context remain challenging. This paper presents an innovative approach, the Knowledge Interpolated conditional Variational auto-encoder (KIV), to address these issues. In particular, KIV introduces a novel interpolation mechanism to fuse two latent variables that independently encode the dialogue context and the grounded knowledge. This distinct fusion of context and knowledge in the semantic space enables the interpolated latent variable to guide the decoder toward generating more contextually rich and engaging responses. We further explore deterministic and probabilistic methodologies to ascertain the interpolation weight, capturing the level of knowledge specificity. Comprehensive empirical analysis conducted on the Wizard-of-Wikipedia and Holl-E datasets verifies that the responses generated by our model perform better than strong baselines, with notable performance improvements observed in both automatic metrics and manual evaluation.
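At its simplest, the interpolation mechanism described in the abstract is a weighted combination of the two latent vectors; the weight reflects knowledge specificity. A minimal sketch (function and variable names are hypothetical; the paper's actual fusion and weight estimation are more involved):

```python
import numpy as np

def interpolate_latents(z_context, z_knowledge, alpha):
    """Convex combination of the context and knowledge latents.
    alpha near 1 leans on the dialogue context; alpha near 0
    leans on the grounded knowledge."""
    return alpha * z_context + (1.0 - alpha) * z_knowledge

z_ctx = np.array([1.0, 0.0])
z_know = np.array([0.0, 1.0])
z = interpolate_latents(z_ctx, z_know, 0.25)  # mostly knowledge-driven
```

In the paper the weight itself is predicted (deterministically or probabilistically) rather than fixed as it is here.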
A semantic‐based method for analysing unknown malicious behaviours via hyper‐spherical variational auto‐encoders
In User and Entity Behaviour Analytics (UEBA), unknown malicious behaviours are often difficult to detect automatically due to the lack of labelled data. Most existing methods also fail to take full advantage of threat intelligence or to incorporate the impact of the behaviour patterns of benign users. To address these issues, this paper proposes a Generalised Zero‐Shot Learning (GZSL) method based on hyper‐spherical Variational Auto‐Encoders (VAEs). Compared to standard VAEs, the authors’ proposed method is more robust and better suited to capturing data with richer and more nuanced structures. The authors’ method analyses unknown malicious behaviours by projecting them and their semantic attributes into a shared space, where they are matched by cosine similarity. The authors further use a Graph Convolutional Network (GCN) to reduce the impact of differing user behaviour patterns before projection. The experimental results indicate that the proposed method is efficient in the analysis of unknown malicious behaviours.
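The matching step — projected behaviours compared against semantic attribute vectors by cosine similarity — can be sketched as follows. This is a toy illustration only: the 2-D vectors and `attribute_bank` stand in for the learned projections described in the abstract.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_behaviour(projected, attribute_bank):
    """Return the index of the semantic attribute vector closest
    (by cosine similarity) to the projected behaviour."""
    scores = [cosine_similarity(projected, attr) for attr in attribute_bank]
    return int(np.argmax(scores))

bank = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
idx = match_behaviour(np.array([0.9, 0.1]), bank)  # nearest to class 0
```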
Disentanglement in conceptual space during sensorimotor interaction
The disentanglement of different objective properties from the external world is the foundation of language development for agents. The basic target of this process is to summarise common natural properties and then to name them so that those properties can be described in the future. To this end, a new learning model is introduced for the disentanglement of several sensorimotor concepts (e.g. sizes, colours and shapes of objects) while the causal relationship is being learnt during interaction, without much a priori experience or external instruction. This learning model links predictive deep neural models with the variational auto-encoder (VAE), making it possible for independent concepts to be extracted and disentangled from both perception and action. Such extraction is further learnt by the VAE to memorise their common statistical features. The authors examine this model in the affordance learning setting, where the robot tries to learn to disentangle the shapes of tools and objects. The results show that such a process can be found in the neural activities of the β-VAE unit, which indicates that using similar VAE models is a promising way to learn the concepts, and thereby the causal relationship of the sensorimotor interaction.
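For context, the β-VAE objective referenced above is the standard VAE loss with the KL term up-weighted by a factor β, which encourages disentangled latent factors. A minimal NumPy sketch under the usual diagonal-Gaussian posterior assumption (not the authors' implementation):

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """beta-VAE objective: MSE reconstruction plus a beta-weighted
    KL divergence between the diagonal-Gaussian posterior
    N(mu, exp(log_var)) and the standard normal prior N(0, I)."""
    recon = np.mean((x - x_recon) ** 2)
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon + beta * kl

x = np.array([0.5, -0.5])
loss = beta_vae_loss(x, x, np.zeros(2), np.zeros(2))
# perfect reconstruction with posterior == prior gives zero loss
```

Setting β = 1 recovers the plain VAE; β > 1 trades reconstruction quality for disentanglement.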
Data Augmentation for Electricity Theft Detection Using Conditional Variational Auto-Encoder
Due to the strong concealment of electricity theft and the limitation of inspection resources, the number of power theft samples mastered by the power department is insufficient, which limits the accuracy of power theft detection. Therefore, a data augmentation method for electricity theft detection based on the conditional variational auto-encoder (CVAE) is proposed. Firstly, the stealing power curves are mapped into low-dimensional latent variables by an encoder composed of convolutional layers, and new stealing power curves are reconstructed by a decoder composed of deconvolutional layers. Then, five typical attack models are proposed, and a convolutional neural network is constructed as a classifier according to the data characteristics of the stealing power curves. Finally, the effectiveness and adaptability of the proposed method are verified on a smart-meter data set from London. The simulation results show that the CVAE can account for both the shapes and the distribution characteristics of the samples, and the generated stealing power curves improve classifier performance more than traditional augmentation methods such as random oversampling, the synthetic minority over-sampling technique, and the conditional generative adversarial network. Moreover, the method is suitable for different classifiers.
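Once a CVAE is trained, augmentation amounts to drawing a latent sample, attaching a one-hot attack-type label, and decoding. A hedged sketch of that generic sampling recipe, with a toy linear decoder standing in for the paper's trained deconvolutional one (all shapes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(label, n_classes):
    """One-hot encode an attack-type label."""
    v = np.zeros(n_classes)
    v[label] = 1.0
    return v

def sample_synthetic_curve(decoder, latent_dim, attack_label, n_classes):
    """Draw z ~ N(0, I), concatenate the class condition, and decode
    to produce one synthetic stealing power curve."""
    z = rng.standard_normal(latent_dim)
    return decoder(np.concatenate([z, one_hot(attack_label, n_classes)]))

# toy linear "decoder": 24-point daily curve from an 8-D latent + 5 classes
W = rng.standard_normal((24, 8 + 5))
curve = sample_synthetic_curve(lambda v: W @ v,
                               latent_dim=8, attack_label=2, n_classes=5)
```

Conditioning on the label is what lets each of the five attack models be oversampled independently.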
PFVAE: A Planar Flow-Based Variational Auto-Encoder Prediction Model for Time Series Data
Prediction based on time series has a wide range of applications. Due to the complex nonlinear and random distribution of time series data, the performance of learned prediction models can be reduced by modeling bias or overfitting. This paper proposes a novel planar flow-based variational auto-encoder prediction model (PFVAE), which uses a long short-term memory (LSTM) network as the auto-encoder and designs the variational auto-encoder (VAE) as a time series data predictor to overcome noise effects. In addition, the internal structure of the VAE is transformed using planar flow, which enables it to learn and fit the nonlinearity of time series data and improves the dynamic adaptability of the network. The prediction experiments verify that the proposed model is superior to other models regarding prediction accuracy and prove it is effective for predicting time series data.
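The planar flow used to transform the VAE's internal structure is a standard construction: each step applies f(z) = z + u·tanh(w·z + b) and tracks the log-determinant of its Jacobian so that densities remain computable. A textbook sketch (not the paper's code):

```python
import numpy as np

def planar_flow(z, u, w, b):
    """One planar-flow step: f(z) = z + u * tanh(w.z + b).
    Returns the transformed point and log|det Jacobian|,
    which equals log|1 + u . psi(z)| with psi = tanh'(w.z + b) * w."""
    a = np.tanh(np.dot(w, z) + b)
    psi = (1.0 - a ** 2) * w            # tanh' = 1 - tanh^2
    log_det = np.log(np.abs(1.0 + np.dot(u, psi)))
    return z + u * a, log_det

z_out, log_det = planar_flow(np.array([1.0, 2.0]), np.zeros(2),
                             np.array([1.0, 0.0]), 0.0)
# with u = 0 the flow is the identity: z_out == z and log_det == 0
```

Stacking several such steps lets the posterior deviate from a plain Gaussian, which is the mechanism PFVAE exploits.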
X-Net: a dual encoding–decoding method in medical image segmentation
Medical image segmentation provides a priori guidance for clinical diagnosis and treatment. In the past ten years, a large number of experimental results have demonstrated the great success of deep convolutional neural networks in various medical image segmentation tasks. However, convolutional networks seem to focus too much on local image details while ignoring long-range dependence. The Transformer structure can encode long-range dependencies in images and learn high-dimensional image information through the self-attention mechanism. However, this structure currently depends on a large database scale to realize its full performance, which limits its application to medical images with limited database sizes. In this paper, the characteristics of CNNs and Transformers are integrated into a dual encoding–decoding X-shaped network (X-Net). It can serve as a good alternative to traditional purely convolutional medical image segmentation networks. In the encoding phase, local and global features are simultaneously extracted by two types of encoders, convolutional downsampling and Transformer, and then merged through skip connections. In the decoding phase, a variational auto-encoder branch is added to reconstruct the input image itself in order to weaken the impact of insufficient data. Comparative experiments on three medical image datasets show that X-Net can realize an organic combination of Transformers and CNNs.
Quantum device fine-tuning using unsupervised embedding learning
Quantum devices with a large number of gate electrodes allow for precise control of device parameters. This capability is hard to fully exploit due to the complex dependence of these parameters on the applied gate voltages. We experimentally demonstrate an algorithm capable of fine-tuning several device parameters at once. The algorithm acquires a measurement and assigns it a score using a variational auto-encoder. Gate voltage settings are optimized against this score in real time in an unsupervised fashion. We report fine-tuning times of approximately 40 min for a double quantum dot device.
VAEEG: Variational auto-encoder for extracting EEG representation
Highlights:
• A VAE-based self-supervised learning model for EEG representation extraction.
• VAEEG achieved outstanding performance in the reconstruction of EEG signals.
• The latent representations from VAEEG perform well in several clinical tasks.
• The VAEEG model enhances the efficiency and accuracy of downstream tasks.
The electroencephalogram (EEG) exhibits characteristics of complexity and strong randomness. Existing deep learning models for EEG typically target specific objectives and datasets, with their scalability constrained by the size of the dataset, resulting in limited perceptual and generalization abilities. In order to obtain more intuitive, concise, and useful representations of brain activity, we constructed a reconstruction-based self-supervised learning model for EEG based on the Variational Autoencoder (VAE) with separate frequency bands, termed variational auto-encoder for EEG (VAEEG). VAEEG achieved outstanding reconstruction performance. Furthermore, we validated the efficacy of the latent representations in three clinical tasks concerning pediatric brain development, epileptic seizure, and sleep stage classification. We discovered that certain latent features: 1) correlate with adolescent brain developmental changes; 2) exhibit significant distinctions in the distribution between epileptic seizures and background activity; 3) show significant variations across different sleep cycles. In corresponding downstream fitting or classification tasks, models constructed based on the representations extracted by VAEEG demonstrated superior performance. Our model can extract effective features from complex EEG signals, serving as an early feature extractor for downstream classification tasks. This reduces the amount of data required for downstream tasks, simplifies the complexity of downstream models, and streamlines the training process.
t-SNE and variational auto-encoder with a bi-LSTM neural network-based model for prediction of gas concentration in a sealed-off area of underground coal mines
A deep learning network is introduced to predict concentrations of gases in an enclosed region of an underground coal mine using various IoT-enabled gas sensors installed in a metallic gas chamber. Air is sucked automatically at specific intervals from the sealed-off site using a solenoid valve, suction pump, and programmed microprocessor. The gas sensors monitor the gas content in the underground coal mine and communicate gas concentrations to the surface server room through a wireless network and cloud storage media. The t-SNE_VAE_bi-LSTM model is proposed in this study as a prediction model that combines t-SNE, VAE, and bi-LSTM networks. The proposed model’s t-SNE method reduces the dimensionality of the recorded gas concentrations; the VAE layer retrieves the inner characteristics of the low-dimensional gas concentrations; and the bi-LSTM layer forecasts the concentrations of CH₄, CO₂, CO, O₂, and H₂ gases. The proposed model’s prediction accuracy is compared with two existing models, the auto-regressive integrated moving average (ARIMA) and the chaos time series (CHAOS) model. The experimental findings demonstrate that the t-SNE_VAE_bi-LSTM model forecasts more accurately, with lower mean square error (MSE) values of 0.029 and 0.069 for CH₄; 0.037 and 0.019 for CO₂; 0.092 and 0.92 for CO; 1.881 and 1.892 for O₂; and 1.235 and 1.200 for H₂ than the ARIMA and CHAOS models, respectively.
Unsupervised Outlier Detection in IOT Using Deep VAE
The Internet of Things (IoT) refers to a system of interconnected, internet-connected devices and sensors that allows the collection and dissemination of data. The data provided by these sensors may include outliers or exhibit anomalous behavior as a result of, for example, attack activities or device failure. However, the majority of existing outlier detection algorithms rely on labeled data, which is frequently hard to obtain in the IoT domain. More crucially, the IoT’s data volume is continually increasing, necessitating the ability to predict and identify the classes of future data. In this study, we propose an unsupervised technique based on a deep Variational Auto-Encoder (VAE) to detect outliers in IoT data, leveraging the VAE’s reconstruction ability and the low-dimensional representation of the input data’s latent variables. First, the input data are standardized. Then, we employ the VAE to find a reconstructed output from the low-dimensional representation of the latent variables of the input data. Finally, the reconstruction error between the original observation and the reconstructed one is used as an outlier score. Our model was trained only on normal data with no labels, in an unsupervised manner, and evaluated on the Statlog (Landsat Satellite) dataset. The unsupervised model achieved promising results, comparable with state-of-the-art outlier detection schemes, with a precision of ≈90% and an F1 score of 79%.
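The scoring pipeline described above — standardize, reconstruct, then use per-sample reconstruction error as the outlier score — can be sketched as follows. The `reconstruct` callable stands in for the trained VAE's encode/decode round trip; the identity used below is only a sanity check.

```python
import numpy as np

def outlier_scores(x, reconstruct):
    """Per-sample reconstruction error as the outlier score:
    standardize each feature, reconstruct via the (trained) model,
    then take the mean squared error per row. High scores flag
    observations the model reconstructs poorly, i.e. outliers."""
    x_std = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
    x_hat = reconstruct(x_std)
    return np.mean((x_std - x_hat) ** 2, axis=1)

X = np.array([[0.0, 1.0], [1.0, 0.0], [10.0, 10.0]])
# an identity "reconstruction" yields zero error for every sample
scores = outlier_scores(X, lambda v: v)
```

In practice a threshold on these scores (e.g. a percentile of the training-set errors) separates normal points from outliers.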