588 results for "Encoding-Decoding"
Enhancing Distributed Machine Learning through Data Shuffling: Techniques, Challenges, and Implications
In distributed machine learning, data shuffling is a crucial data preprocessing technique that significantly impacts the efficiency and performance of model training. As distributed machine learning scales across multiple computing nodes, the ability to shuffle data effectively and efficiently has become essential for achieving high-quality model performance and minimizing communication costs. This paper systematically explores various data shuffling methods, including random shuffling, stratified shuffling, K-fold shuffling, and coded shuffling, each with distinct advantages, limitations, and application scenarios. Random shuffling is simple and fast but may lead to imbalanced class distributions, while stratified shuffling maintains class proportions at the cost of increased complexity. K-fold shuffling provides robust model evaluation through multiple training-validation splits, though it is computationally demanding. Coded shuffling, on the other hand, optimizes communication costs in distributed settings but requires sophisticated encoding-decoding techniques. The study also highlights the challenges associated with current shuffling techniques, such as handling class imbalance, high computational complexity, and adapting to dynamic, real-time data. This paper proposes potential solutions to enhance the efficacy of data shuffling, including hybrid methodologies, automated stratification processes, and optimized coding strategies. This work aims to guide future research on data shuffling in distributed machine learning environments, ultimately advancing model robustness and generalization across complex real-world applications.
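
As a concrete illustration of the first three strategies, the following minimal sketch (not taken from the paper) uses scikit-learn to contrast random shuffling, stratified shuffling, and stratified K-fold shuffling on a toy imbalanced dataset; all data and parameters are placeholders.

    # Minimal sketch: random vs. stratified vs. K-fold shuffling (scikit-learn).
    import numpy as np
    from sklearn.model_selection import train_test_split, StratifiedKFold

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 8))
    y = (rng.random(1000) < 0.1).astype(int)   # ~10% positive class (imbalanced)

    # 1) Random shuffling: simple and fast, but the split may distort class ratios.
    _, _, _, y_rand = train_test_split(X, y, test_size=0.2, shuffle=True, random_state=0)

    # 2) Stratified shuffling: preserves the class proportions in every split.
    _, _, _, y_strat = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

    print("positive rate  full:", y.mean(),
          " random split:", y_rand.mean(),
          " stratified split:", y_strat.mean())

    # 3) Stratified K-fold shuffling: repeated train/validation splits for robust evaluation.
    for fold, (train_idx, val_idx) in enumerate(
            StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y)):
        print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val samples")
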
X-Net: a dual encoding–decoding method in medical image segmentation
Medical image segmentation provides a priori guidance for clinical diagnosis and treatment. Over the past decade, a large body of experimental evidence has demonstrated the success of deep convolutional neural networks in a wide range of medical image segmentation tasks. However, convolutional networks tend to focus on local image details while ignoring long-range dependencies. The Transformer architecture can encode long-range dependencies in images and learn high-dimensional image information through its self-attention mechanism, but it currently requires large datasets to reach its full potential, which limits its application to medical imaging, where datasets are typically small. In this paper, the characteristics of CNNs and Transformers are integrated into a dual encoding–decoding, X-shaped network (X-Net), which can serve as a strong alternative to traditional purely convolutional medical image segmentation networks. In the encoding phase, local and global features are extracted simultaneously by two types of encoders, a convolutional downsampling branch and a Transformer branch, and then merged through skip connections. In the decoding phase, a variational auto-encoder branch is added to reconstruct the input image itself in order to mitigate the impact of limited data. Comparative experiments on three medical image datasets show that X-Net achieves an effective combination of Transformers and CNNs.
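
For orientation only, the sketch below shows one way a dual-encoder segmentation network of this flavour could be wired up in PyTorch; it is an illustrative layout, not the authors' X-Net, and every layer size is an arbitrary placeholder.

    # Illustrative dual-encoder segmentation layout (not the authors' X-Net).
    import torch
    import torch.nn as nn

    class DualEncoderSeg(nn.Module):
        def __init__(self, in_ch=1, dim=64, num_classes=2):
            super().__init__()
            # Convolutional encoder: local features via downsampling.
            self.cnn_enc = nn.Sequential(
                nn.Conv2d(in_ch, dim, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU())
            # Transformer encoder: global features via self-attention over patches.
            self.patch = nn.Conv2d(in_ch, dim, kernel_size=4, stride=4)
            self.tr_enc = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
                num_layers=2)
            # Decoder: fuse both feature maps and upsample back to input resolution.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(2 * dim, dim, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(dim, num_classes, 2, stride=2))

        def forward(self, x):
            local = self.cnn_enc(x)                              # (B, dim, H/4, W/4)
            p = self.patch(x)                                    # (B, dim, H/4, W/4)
            B, C, H, W = p.shape
            glob = self.tr_enc(p.flatten(2).transpose(1, 2))     # (B, H*W, dim)
            glob = glob.transpose(1, 2).reshape(B, C, H, W)
            return self.decoder(torch.cat([local, glob], dim=1)) # (B, classes, H, W)

    print(DualEncoderSeg()(torch.randn(1, 1, 64, 64)).shape)     # torch.Size([1, 2, 64, 64])
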
Least-order representation of control-oriented flow estimation exemplified for the fluidic pinball
Deep learning has been widely used to estimate the flow state accurately from sparse sensor measurements, yet the actual dimension of this type of regression problem remains poorly understood. In this study, we propose an autoencoder (AE) based estimation method for the control-oriented, sensor-based flow estimation problem. The method encodes the input information into a bottleneck of only a few nodes and then decodes the compressed information into the estimated flow state; the resulting network constitutes a least-order representation of the input-output relationship. We choose the fluidic pinball, a multiple-input, multiple-output configuration, as the benchmark problem. The rotations of its three cylinders generate rich flow physics, so strong nonlinearity arises between the input measurements and the output flow state. For a fair comparison, a deep neural network (DNN) is also employed to highlight the influence of the encoding-decoding process in the AE. The results show that when the flow is periodic, the input vector can be effectively encoded into a five-dimensional subspace without a significant loss of estimation accuracy. However, when the forced flow becomes chaotic, the five-dimensional encoding discards part of the information in the input vector, resulting in lower estimation accuracy than the DNN.
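
The following toy PyTorch sketch (not the paper's model) illustrates the core idea: a regression network whose input is squeezed through a five-node bottleneck before the flow state is decoded; the sensor and state dimensions are invented placeholders.

    # Toy sketch: sensor measurements -> 5-node bottleneck -> estimated flow state.
    import torch
    import torch.nn as nn

    n_sensors, n_state, bottleneck = 16, 1024, 5           # placeholder dimensions

    encoder = nn.Sequential(nn.Linear(n_sensors, 64), nn.Tanh(),
                            nn.Linear(64, bottleneck))      # compress the inputs
    decoder = nn.Sequential(nn.Linear(bottleneck, 256), nn.Tanh(),
                            nn.Linear(256, n_state))        # decode the flow state

    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    sensors = torch.randn(32, n_sensors)        # stand-in for measured signals
    flow_state = torch.randn(32, n_state)       # stand-in for the true flow field

    for _ in range(100):                        # toy training loop
        pred = decoder(encoder(sensors))
        loss = nn.functional.mse_loss(pred, flow_state)
        opt.zero_grad(); loss.backward(); opt.step()
    print("final training loss:", loss.item())
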
Chameleon-inspired tunable multi-layered infrared-modulating system via stretchable liquid metal microdroplets in elastomer film
This report presents liquid metal-based infrared-modulating materials and systems with multiple modes for regulating infrared reflection. Inspired by the brightness adjustment of chameleon skin, shape-morphing liquid metal droplets in a silicone elastomer (Ecoflex) matrix are used to mimic the dispersed “melanophores”, with the Ecoflex matrix playing the role of the hormone that drives the deformation of the liquid metal droplets. Both total and specular reflectance-based infrared camouflage are achieved: the total and specular reflectances typically change by ~44.8% and ~61.2%, respectively, which are among the highest values reported for infrared camouflage. Programmable infrared encoding/decoding is explored by adjusting the concentration of liquid metal and applying areal strains, and by introducing alloys with different melting points, temperature-dependent infrared painting/writing can be achieved. Furthermore, a multi-layered infrared-modulating system is designed in which the liquid metal-based infrared-modulating materials are integrated with an evaporated metallic film for enhanced performance. These liquid metal-based infrared-modulating materials and systems regulate infrared reflection in multiple modes and can be used for infrared camouflage, programmable infrared encryption, and infrared painting/writing.
Generative learning facilitated discovery of high-entropy ceramic dielectrics for capacitive energy storage
Dielectric capacitors offer great potential for advanced electronics owing to their high power densities, but their energy density still needs to be improved further. The high-entropy strategy has emerged as an effective route to better energy storage performance; however, discovering new high-entropy systems in a high-dimensional composition space is a daunting challenge for traditional trial-and-error experiments. Here, based on phase-field simulations and limited experimental data, we propose a generative learning approach to accelerate the discovery of high-entropy dielectrics in a practically infinite exploration space of over 10¹¹ combinations. By encoding and decoding latent-space regularities to facilitate data sampling and forward inference, we employ inverse design to screen out the most promising combinations via a ranking strategy. Through only 5 sets of targeted experiments, we obtain a Bi(Mg0.5Ti0.5)O3-based high-entropy dielectric film with a significantly improved energy density of 156 J cm⁻³ at an electric field of 5104 kV cm⁻¹, surpassing the pristine film by more than eight-fold. This work introduces an effective and innovative avenue for designing high-entropy dielectrics with drastically reduced experimental cycles, which could also be extended to expedite the design of other multicomponent material systems with desired properties. High-entropy ceramic dielectrics show promise for capacitive energy storage but are difficult to optimize because of the vast number of possible compositions. Here, the authors propose a generative learning approach for finding high-energy-density high-entropy dielectrics in a practically infinite exploration space of over 10¹¹ combinations.
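
As a rough conceptual sketch only (not the authors' workflow), the snippet below shows how a latent-space encode-sample-decode loop plus a surrogate ranking model might be combined in PyTorch; every model, dataset, and dimension here is a hypothetical stand-in.

    # Conceptual sketch: encode known compositions, sample the latent space,
    # decode new candidates, and rank them with a placeholder surrogate model.
    import torch
    import torch.nn as nn

    n_elements, latent_dim = 8, 3
    enc = nn.Sequential(nn.Linear(n_elements, 32), nn.ReLU(), nn.Linear(32, latent_dim))
    dec = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                        nn.Linear(32, n_elements), nn.Softmax(dim=-1))  # fractions sum to 1
    surrogate = nn.Sequential(nn.Linear(n_elements, 32), nn.ReLU(), nn.Linear(32, 1))

    # Fit the autoencoder on known compositions (random stand-ins here).
    known = torch.rand(200, n_elements)
    known = known / known.sum(dim=1, keepdim=True)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(200):
        loss = nn.functional.mse_loss(dec(enc(known)), known)
        opt.zero_grad(); loss.backward(); opt.step()

    # Sample the latent space, decode candidates, and rank by predicted property.
    candidates = dec(torch.randn(1000, latent_dim))
    scores = surrogate(candidates).squeeze(-1)             # untrained placeholder scorer
    top = candidates[scores.topk(5).indices]
    print("top-ranked candidate compositions:\n", top)
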
Utilising Deep Learning Techniques for Effective Zero-Day Attack Detection
Machine Learning (ML) and Deep Learning (DL) have been used for building Intrusion Detection Systems (IDS). The increase in both the number and the sheer variety of new cyber-attacks poses a tremendous challenge for IDS solutions that rely on a database of historical attack signatures, so the industrial pull for robust IDSs capable of flagging zero-day attacks is growing. Current outlier-based zero-day detection research suffers from high false-negative rates, limiting its practical use and performance. This paper proposes an autoencoder implementation for detecting zero-day attacks, with the aim of building an IDS model with high recall while keeping the miss rate (false negatives) to an acceptable minimum. Two well-known IDS datasets, CICIDS2017 and NSL-KDD, are used for evaluation. To demonstrate the efficacy of our model, we compare its results against a One-Class Support Vector Machine (SVM); the manuscript highlights the performance of the One-Class SVM when zero-day attacks are distinct from normal behaviour. The proposed model benefits greatly from the autoencoder's encoding-decoding capabilities, and the results show that autoencoders are well suited to detecting complex zero-day attacks. The results demonstrate a zero-day detection accuracy of 89–99% for the NSL-KDD dataset and 75–98% for the CICIDS2017 dataset. Finally, the paper outlines the observed trade-off between recall and fallout.
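
The reconstruction-error idea behind such detectors can be sketched in a few lines of PyTorch; the snippet below is a generic illustration (not the paper's architecture) that trains an autoencoder on synthetic "benign" records and flags high-error records as candidate zero-day attacks.

    # Generic sketch: autoencoder anomaly detection via reconstruction error.
    import torch
    import torch.nn as nn

    n_features = 20
    benign = torch.randn(2000, n_features)                  # stand-in for benign flows
    attacks = torch.randn(200, n_features) * 3 + 4          # stand-in for unseen attacks

    ae = nn.Sequential(nn.Linear(n_features, 8), nn.ReLU(), # encoder
                       nn.Linear(8, n_features))            # decoder
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(300):                                    # learn to reconstruct benign data
        loss = nn.functional.mse_loss(ae(benign), benign)
        opt.zero_grad(); loss.backward(); opt.step()

    def recon_error(x):
        with torch.no_grad():
            return ((ae(x) - x) ** 2).mean(dim=1)

    threshold = recon_error(benign).quantile(0.99)          # tolerate ~1% false positives
    print("benign flagged: ", (recon_error(benign) > threshold).float().mean().item())
    print("attacks flagged:", (recon_error(attacks) > threshold).float().mean().item())
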
A programmable chemical computer with memory and pattern recognition
Current computers are limited by the von Neumann bottleneck, which constrains the throughput between the processing unit and the memory. Chemical processes have the potential to scale beyond current computing architectures because the processing unit and the memory reside in the same space, performing computations through chemical reactions; their lack of programmability, however, has limited them. Herein, we present a programmable chemical processor comprising a 5 × 5 array of cells filled with a switchable oscillating chemical (Belousov–Zhabotinsky, BZ) reaction. Each cell can be individually addressed in the ‘on’ or ‘off’ state, yielding more than 2.9 × 10¹⁷ chemical states, which arise from the ability to detect distinct oscillation amplitudes via image processing. By programming the array of interconnected BZ reactions, we demonstrate chemically encoded and addressable memory, and we create a chemical autoencoder for pattern recognition able to perform the equivalent of one million operations per second. Unconventional computing architectures might outperform current ones, but their realization has so far been limited to solving simple, specific problems. Here, a network of interconnected Belousov–Zhabotinsky reactions, operated by independent magnetic stirrers, performs encoding/decoding operations and data storage.
Dynamic event-triggered state estimation for discrete-time delayed switched neural networks with constrained bit rate
In this paper, a class of discrete-time delayed switched neural networks with a dynamic event-triggered mechanism (DETM) and a constrained bit rate is considered. To reduce the transmission frequency and alleviate unnecessary resource consumption between the sensor and the estimator, a DETM is proposed, and the data are transmitted from the sensor to the estimator over a bit-rate-constrained channel. To reflect the bandwidth allocation rules for the accessible neuron nodes, a bit rate constraint model is introduced and an encoding-decoding mechanism is developed. Using the average dwell time (ADT) strategy and linear matrix inequality techniques, sufficient conditions for the exponential ultimate boundedness of the switched neural networks with the DETM and constrained bit rate are established. Finally, an example is given to demonstrate the effectiveness of the results.
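
To make the transmission-side machinery concrete, the toy NumPy sketch below pairs a uniform R-bit quantizer (as a stand-in encoding-decoding mechanism) with a simple event-triggered rule; the static threshold used here is only a placeholder for the paper's dynamic mechanism.

    # Toy sketch: R-bit encode/decode plus an event-triggered transmission rule.
    import numpy as np

    R, lo, hi = 4, -1.0, 1.0                      # bits per sample and signal range
    levels = 2 ** R

    def encode(x):                                # measurement -> integer codeword
        x = np.clip(x, lo, hi)
        return np.round((x - lo) / (hi - lo) * (levels - 1)).astype(int)

    def decode(c):                                # codeword -> reconstructed measurement
        return lo + c / (levels - 1) * (hi - lo)

    rng = np.random.default_rng(0)
    y = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.05 * rng.standard_normal(200)

    last_sent, sent, estimates = decode(encode(y[0])), 0, []
    for k, yk in enumerate(y):
        if k == 0 or abs(yk - last_sent) > 0.15:  # static threshold stands in for the DETM
            last_sent = decode(encode(yk))        # transmit codeword, decoder reconstructs
            sent += 1
        estimates.append(last_sent)               # estimator holds the last decoded value

    print(f"transmitted {sent}/{len(y)} samples, "
          f"mean abs error {np.mean(np.abs(np.array(estimates) - y)):.3f}")
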
GAU-Net: U-Net Based on Global Attention Mechanism for brain tumor segmentation
Deep learning has shown great advantages in biomedical image segmentation. The classic U-Net model uses a stacked encoding-decoding structure of convolution operations for feature extraction and pixel-level classification. Stacking convolutional layers expands the receptive field, but convolution remains a local operation and cannot capture long-range dependencies. Therefore, in this work, we propose a Global Attention Mechanism that combines a channel attention module and a spatial attention module and integrates different convolutions within it. In addition, we design a residual module for the traditional up- and down-sampling blocks. Finally, we combine these components with U-Net to propose a new global attention network, GAU-Net. Experiments on the BraTS2018 dataset show that our model increases the mIoU from 0.65 to 0.75 with only 5.4% of U-Net's parameters, while also significantly shortening inference time.
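
As a point of reference, the PyTorch sketch below implements a generic combined channel-and-spatial attention block in the style of CBAM; it is not the authors' Global Attention Mechanism, and all sizes are illustrative.

    # Generic combined channel + spatial attention block (CBAM-style illustration).
    import torch
    import torch.nn as nn

    class ChannelSpatialAttention(nn.Module):
        def __init__(self, channels, reduction=8):
            super().__init__()
            # Channel attention: squeeze spatial dims, re-weight feature channels.
            self.channel_mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid())
            # Spatial attention: squeeze channels, re-weight spatial positions.
            self.spatial_conv = nn.Sequential(
                nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

        def forward(self, x):                           # x: (B, C, H, W)
            b, c, _, _ = x.shape
            ch_w = self.channel_mlp(x.mean(dim=(2, 3))).view(b, c, 1, 1)
            x = x * ch_w                                # channel-refined features
            sp_in = torch.cat([x.mean(dim=1, keepdim=True),
                               x.amax(dim=1, keepdim=True)], dim=1)
            return x * self.spatial_conv(sp_in)         # spatially refined features

    feat = torch.randn(2, 32, 40, 40)
    print(ChannelSpatialAttention(32)(feat).shape)      # torch.Size([2, 32, 40, 40])
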
Linear-fitting-based recursive filtering for nonlinear systems under encoding-decoding mechanism
This paper deals with a recursive filtering problem for a class of discrete time-varying nonlinear networked systems with an encoding-decoding mechanism. A linear-fitting method is introduced to handle the nonlinearity, and an encoding-decoding mechanism is constructed to describe the data transmission process in wireless communication networks (WCNs). Specifically, the measurement outputs are mapped by a quantizer to unique codewords for transmission over the WCN; the codewords are then decoded to recover the measurement outputs, which are sent to the filter. Both the processing/encoding delay and the network delay are taken into account. First, the filtering gain that minimizes an upper bound on the filtering error covariance is derived. Then, the mean-square exponential boundedness of the filtering error is analyzed. Finally, two simulation examples are presented to verify the effectiveness of the proposed algorithm.
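
For illustration, the NumPy sketch below runs a scalar Kalman-style recursive filter on measurements that pass through a uniform quantization encode/decode step before reaching the filter; it is a toy stand-in for the paper's algorithm, with made-up model parameters.

    # Toy sketch: recursive (Kalman-style) filtering on decoded quantized measurements.
    import numpy as np

    rng = np.random.default_rng(1)
    a, q, r = 0.95, 0.01, 0.04          # state transition, process and measurement noise
    bits, lo, hi = 6, -3.0, 3.0
    levels = 2 ** bits

    def encode_decode(y):               # quantizer: measurement -> codeword -> decoded value
        c = np.round((np.clip(y, lo, hi) - lo) / (hi - lo) * (levels - 1))
        return lo + c / (levels - 1) * (hi - lo)

    x, x_hat, p = 0.0, 0.0, 1.0
    errors = []
    for _ in range(500):
        x = a * x + np.sqrt(q) * rng.standard_normal()      # true state evolves
        y = x + np.sqrt(r) * rng.standard_normal()          # noisy measurement
        y_dec = encode_decode(y)                            # value received after decoding
        # Recursive filter update using the decoded measurement.
        x_pred, p_pred = a * x_hat, a * a * p + q
        k = p_pred / (p_pred + r)
        x_hat = x_pred + k * (y_dec - x_pred)
        p = (1 - k) * p_pred
        errors.append((x - x_hat) ** 2)

    print("mean squared estimation error:", np.mean(errors))
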