Catalogue Search | MBRL
12,043 result(s) for "Compression algorithms"
An Evolving TinyML Compression Algorithm for IoT Environments Based on Data Eccentricity
by Sisinni, Emiliano; Ferrari, Paolo; Silva, Marianne
in algorithm, Algorithms, Battery powered devices
2021
Currently, the applications of the Internet of Things (IoT) generate a large amount of sensor data at a very high pace, making it a challenge to collect and store the data. This scenario brings about the need for effective data compression algorithms to make the data manageable among tiny and battery-powered devices and, more importantly, shareable across the network. Additionally, considering that, very often, wireless communications (e.g., low-power wide-area networks) are adopted to connect field devices, user payload compression can also provide benefits derived from better spectrum usage, which in turn can result in advantages for high-density application scenarios. As a result of this increase in the number of connected devices, a new concept has emerged, called TinyML. It enables the use of machine learning on tiny, computationally restrained devices. This allows intelligent devices to analyze and interpret data locally and in real time. Therefore, this work presents a new data compression solution (algorithm) for the IoT that leverages the TinyML perspective. The new approach is called the Tiny Anomaly Compressor (TAC) and is based on data eccentricity. TAC does not require previously established mathematical models or any assumptions about the underlying data distribution. In order to test the effectiveness of the proposed solution and validate it, a comparative analysis was performed on two real-world datasets with two other algorithms from the literature (namely Swing Door Trending (SDT) and the Discrete Cosine Transform (DCT)). It was found that the TAC algorithm showed promising results, achieving a maximum compression rate of 98.33%. Additionally, it also surpassed the two other models regarding the compression error and peak signal-to-noise ratio in all cases.
Journal Article
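The abstract above does not give the TAC implementation; as a hedged illustration of eccentricity-based compression in the same spirit, the sketch below keeps a sample only when a TEDA-style eccentricity test flags it as anomalous. The function name, the threshold parameter `m`, and the keep-(index, value)-pairs convention are assumptions of this sketch, not the authors' code.

```python
def eccentricity_compress(stream, m=3.0):
    """Keep (index, value) pairs only for eccentric (anomalous) samples."""
    kept = []
    k, mean, var = 0, 0.0, 0.0
    for i, x in enumerate(stream):
        k += 1
        # Recursive update of the running mean and variance (TEDA-style)
        mean = ((k - 1) / k) * mean + x / k
        if k == 1:
            kept.append((i, x))           # always keep the first sample
            continue
        var = ((k - 1) / k) * var + ((x - mean) ** 2) / (k - 1)
        if var == 0.0:
            continue                      # identical samples compress away
        # Eccentricity and its normalised form
        ecc = 1.0 / k + ((mean - x) ** 2) / (k * var)
        zeta = ecc / 2.0
        # Chebyshev-style outlier test: keep the sample only if eccentric
        if zeta > (m * m + 1) / (2 * k):
            kept.append((i, x))
    return kept

signal = [1.0] * 20 + [9.0] + [1.0] * 20   # flat signal with one spike
print(eccentricity_compress(signal))        # first sample and the spike survive
```

A decompressor would reconstruct the flat stretches by holding the last kept value, which is where the compression gain comes from on slowly varying sensor data.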
On the Utilization of Reversible Colour Transforms for Lossless 2-D Data Compression
by Khan, Ali; Um, Tai-Won; Khan, Aftab
in Algorithms, burrows–wheeler compression algorithm (bwca), colour filter array (cfa)
2020
This study demonstrates that Reversible Colour Transforms (RCTs), in conjunction with the Bi-level Burrows–Wheeler Compression Algorithm (BBWCA), allow high-level lossless image compression. The RCT transformation yields image data that is far more correlated among neighbouring pixels than the RGB colour space, which aids the Burrows–Wheeler Transform (BWT) based compression scheme and achieves high compression ratios in the subsequent steps of the pipeline. Validation against a range of benchmark schemes shows that the proposed scheme outperforms the others, including techniques developed exclusively for 2-D electrocardiogram (ECG), RASTER map, and Colour Filter Array (CFA) image compression. The proposed system shows no dependency on parameters such as image size, image type, or the medium in which the image was captured. A comprehensive analysis concludes that the scheme achieves a significant increase in compression at a complexity comparable to the various benchmark schemes.
Journal Article
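The Burrows–Wheeler Transform at the core of BWT-based coders such as BBWCA can be sketched minimally as follows. The naive rotation sort is for illustration only (production coders use suffix arrays), and the `$` sentinel is a convention of this sketch rather than anything specified in the paper.

```python
def bwt(s: str) -> str:
    """Forward Burrows-Wheeler Transform via sorted rotations, O(n^2 log n)."""
    s = s + "$"                           # unique end-of-string sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    # The transform is the last column of the sorted rotation matrix
    return "".join(rot[-1] for rot in rotations)

print(bwt("banana"))   # -> "annb$aa": like characters cluster together
```

The clustering of identical characters in the output is what makes the subsequent move-to-front and entropy-coding stages of a BWT pipeline effective.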
Image-Compression Techniques: Classical and “Region-of-Interest-Based” Approaches Presented in Recent Papers
2024
Image compression is a vital component in domains where computational resources are usually scarce, such as the automotive or telemedicine fields. In real-time systems, moreover, the large amount of data that must flow through the system can become a bottleneck, so the storage of images, alongside the compression, transmission, and decompression procedures, becomes vital. In recent years, many compression techniques have been developed that preserve only the quality of the region of interest of an image, the other parts being either discarded or compressed with major quality loss. This paper proposes a study of relevant papers from the last decade focused on selecting a region of interest of an image and on the compression techniques that can be applied to that area. To better highlight the novelty of the hybrid methods, classical state-of-the-art approaches are also analyzed. The current work provides an overview of classical and hybrid compression methods alongside a categorization based on compression ratio and other quality factors such as mean-square error, peak signal-to-noise ratio, and the structural similarity index measure. This overview can help researchers develop a better idea of which compression algorithms are used in certain domains and whether the reported performance parameters suit the intended purpose.
Journal Article
Development and performance evaluation of generalised Doppler compensated adaptive pulse compression algorithm
by Panda, Ganapati; Baghel, Vikas
in Adaptive algorithms, adaptive pulse compression algorithm, Algorithms
2014
The adaptive pulse compression (APC) algorithm is superior to the conventional normalised matched filter (NMF) and least-squares mismatched filter techniques. However, its performance degrades for non-stationary targets in noisy conditions because the algorithm does not take the Doppler shift effect into account. On the other hand, the recently reported Doppler-compensated APC (DC-APC) technique performs well only for non-stationary targets. Thus, there is a need for an algorithm that works efficiently in both stationary and non-stationary conditions. Keeping this in view, a generalised APC (G-APC) algorithm is developed in which the effect of target Doppler is incorporated into the received-signal model. The efficacy of the proposed algorithm is evaluated through five different cases. The results of the simulation study demonstrate that the proposed G-APC performs comparably or better than several pulse compression models based on NMF, APC and DC-APC in all cases.
Journal Article
Reduced memory, low complexity embedded image compression algorithm using hierarchical listless discrete Tchebichef transform
by Mahapatra, Kamala Kanta; Pati, Umesh C.; Senapati, Ranjan Kumar
in Algorithms, Applied sciences, Blocking
2014
The listless set-partitioning embedded block (LSK) and set-partitioning embedded block (SPECK) coders are known for their low complexity and simple implementation. Their drawback is that these block-based algorithms encode each insignificant subband with a zero, which generates many zeros in the early passes because a transformed image is likely to have very few significant coefficients at the higher bitplanes. An improved LSK (ILSK) algorithm is proposed that codes several insignificant subbands with a single zero, reducing the length of the output bit string, the encoding/decoding time, and the dynamic memory requirement in the early passes. Furthermore, the ILSK algorithm is coupled with the discrete Tchebichef transform (DTT), giving rise to a novel coder named hierarchical listless DTT (HLDTT). The proposed HLDTT has desirable attributes such as full embeddedness for progressive transmission, precise rate control for constant-bit-rate traffic, and low complexity for low-power applications. Its performance is assessed using the peak signal-to-noise ratio (PSNR) and the structural similarity index metric (SSIM). Extensive simulations on various standard test images show that HLDTT exhibits a significant improvement in PSNR from low to medium bit rates, and an improvement in SSIM at all bit rates.
Journal Article
A lossless reference-free sequence compression algorithm leveraging grammatical, statistical, and substitution rules
by Mukhopadhyay, Anirban; Roy, Subhankar; Kumar Maity, Dilip
in Algorithms, Compression, Compression Algorithms
2025
Deoxyribonucleic acid (DNA) or ribonucleic acid (RNA) sequence compressors for novel species frequently face challenges when processing large-scale raw, FASTA, or multi-FASTA structured data. For years, molecular sequence databases have favoured the widely used general-purpose Gzip and Zstd compressors, but the absence of sequence-specific modelling in these encoders results in subpar performance, and their use depends on time-consuming parameter adjustments. To address these limitations, this article proposes a reference-free, lossless sequence compressor called GraSS (Grammatical, Statistical, and Substitution Rule-Based). GraSS compresses sequences more effectively by exploiting characteristics specific to DNA and RNA sequences, and it supports the raw, FASTA, and multi-FASTA formats commonly found in GenBank DNA and RNA files. We evaluate GraSS's performance on ten benchmark DNA sequences with a reduced number of repeats, two highly repetitive RNA sequences, and fifteen raw DNA sequences. Test results indicate weighted average compression ratios (WACR) of 4.5 for the DNA sequences and 19.6 for the RNA sequences, while the entire DNA sequence corpus has a total compression time (TCT) of 246.8 seconds (s). These results demonstrate that the proposed method outperforms several advanced algorithms specifically designed to handle various levels of sequence redundancy, with very competitive decompression times, memory usage, and CPU usage. Contact: anirban@klyuniv.ac.in
Journal Article
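GraSS's grammatical, statistical, and substitution rules are not reproduced in the abstract; the sketch below only illustrates the baseline sequence-specific gain any DNA coder starts from, namely packing the four-letter alphabet into 2 bits per base instead of 8. All names here are illustrative assumptions, not part of GraSS.

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq: str) -> bytes:
    """Pack a DNA string into 2 bits per base."""
    bits = 0
    for ch in seq:
        bits = (bits << 2) | CODE[ch]
    # Prefix a 1-bit so leading 'A's (zero bits) are not lost
    bits |= 1 << (2 * len(seq))
    return bits.to_bytes((2 * len(seq)) // 8 + 1, "big")

def unpack(data: bytes) -> str:
    """Recover the DNA string losslessly from the packed bytes."""
    bits = int.from_bytes(data, "big")
    n = (bits.bit_length() - 1) // 2      # number of bases after the 1-bit
    return "".join(BASE[(bits >> (2 * i)) & 3] for i in range(n - 1, -1, -1))

seq = "ACGTACGTGGCA"
packed = pack(seq)
assert unpack(packed) == seq
print(len(seq), "bases ->", len(packed), "bytes")
```

Real sequence compressors such as GraSS go well beyond this 4x baseline by additionally modelling repeats and context, which is where ratios like the reported WACR of 19.6 on repetitive RNA come from.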
Compression of Text in Selected Languages—Efficiency, Volume, and Time Comparison
2022
The goal of the research was to study the possibility of using the planned language Esperanto for text compression, and to compare the results of text compression in Esperanto with compression in natural languages, represented by Polish and English. The authors compressed text in a program written in Python using four compression algorithms: zlib, lzma, bz2, and lz4, applied to four versions of the text: Polish, English, Esperanto, and Esperanto in x-notation (without characters outside ASCII encoding). After creating the compression program and compressing the texts, the authors compared the compression time and the volume of the text before and after compression. The results of the study confirmed the hypothesis that the planned language Esperanto gives better text compression results than the natural languages represented by Polish and English. Confirming by scientific methods that Esperanto is better suited to text compression is the scientific added value of the paper.
Journal Article
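Three of the four codecs the study used ship with Python's standard library (lz4 requires a third-party package), so a hedged re-creation of the measurement loop might look like the following; the sample text and the resulting ratios are illustrative, not the paper's corpus or data.

```python
import bz2
import lzma
import zlib

# Illustrative Esperanto-like sample, not the study's corpus
text = ("la lingvo Esperanto estas planlingvo " * 50).encode("utf-8")

for name, compress in (("zlib", zlib.compress),
                       ("lzma", lzma.compress),
                       ("bz2", bz2.compress)):
    out = compress(text)
    print(f"{name}: {len(text)} -> {len(out)} bytes "
          f"(ratio {len(text) / len(out):.1f}x)")
```

Timing each call (e.g. with `time.perf_counter`) and repeating the loop per language version reproduces the shape of the study's volume-and-time comparison.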
Image Compression Algorithm Based On Variational Autoencoder
by Sun, Ying; Xin, Xiangning; Ding, Yang
in Image Compression Algorithm, Traditional Autoencoder, Variational Autoencoder
2021
The Variational Autoencoder (VAE), a deep latent-space generative model, has achieved great success in recent years, especially in image generation. This paper studies image compression algorithms based on variational autoencoders. The experiment uses an image-quality evaluation model, since interpolation-based image super-resolution is the most direct and simple way to change image resolution. In the experiment, the image is first transformed by the variational autoencoder, and actual coding is then applied to the complete set of coefficients. The experimental data show that, after encoding with the improved variational-autoencoder method, the number of bits in the symbol stream required for transmission or storage is greatly reduced compared with the traditional encoding method, and symbol redundancy is effectively avoided. For images 1, 2, and 3, the variational-autoencoder algorithm reduces the code length by 3332, 2637, and 1470 bits, respectively, compared with the traditional autoencoder-based algorithm. In future work, deep convolutional neural networks will be introduced to optimize the generative adversarial network, giving it better convergence speed and model stability.
Journal Article
Evaluating the effect of compressing algorithms for trajectory similarity and classification problems
by Bogorny, Vania; Macedo, Jose Antonio; Tserpes, Konstantinos
in Algorithms, Classification, Compression
2021
During the last few years, the volume of the data that synthesize trajectories has expanded to unparalleled quantities. This growth challenges traditional trajectory analysis approaches, and solutions are sought in other domains. In this work, we focus on data compression techniques with the intention of minimizing the size of trajectory data while, at the same time, minimizing the impact on trajectory analysis methods. To this end, we evaluate five lossy compression algorithms: Douglas-Peucker (DP), Time Ratio (TR), Speed Based (SP), Time Ratio Speed Based (TR_SP) and Speed Based Time Ratio (SP_TR). The comparison is performed on four distinct real-world datasets against six different dynamically assigned thresholds. The effectiveness of the compression is evaluated using classification techniques and similarity measures. The results show that there is a trade-off between the compression rate and the achieved quality; there is no "best algorithm" for every case, and choosing the proper compression algorithm is an application-dependent process.
Journal Article
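Douglas-Peucker, the first of the five compressors the study evaluates, can be sketched as follows for 2-D points; the epsilon value, the recursive formulation, and the sample track are illustrative choices, not the authors' implementation.

```python
def douglas_peucker(points, epsilon):
    """Drop points closer than epsilon to the chord between the endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0   # guard zero-length chord
    # Find the interior point farthest (perpendicularly) from the chord
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > dmax:
            dmax, index = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]          # whole span is "straight"
    # Recurse on both halves around the farthest point
    left = douglas_peucker(points[: index + 1], epsilon)
    right = douglas_peucker(points[index:], epsilon)
    return left[:-1] + right                    # splice, dropping the duplicate

track = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6),
         (5, 7), (6, 8.1), (7, 9), (8, 9)]
print(douglas_peucker(track, epsilon=1.0))
```

Because DP keeps only geometrically salient points and ignores timestamps, it is exactly the kind of compressor whose effect on time-aware similarity measures the study sets out to quantify against TR, SP, TR_SP and SP_TR.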
Lightweight defect detection algorithm of tunnel lining based on knowledge distillation
2024
Due to the influence of construction quality, engineering geology, and the hydrological environment, defects such as dehollowing and insufficient compaction can occur in tunnels. Aiming at the complexity, poor real-time performance, and low accuracy of current tunnel lining defect detection methods, this study proposes a lightweight tunnel lining defect detection algorithm based on knowledge distillation. First, a high-precision teacher model based on YOLOv5s is built from a C3CSFM module that combines a residual structure with an attention mechanism, an MDFPN network structure with multi-scale feature fusion, and a reweighted RWNMS re-screening mechanism. Second, during distillation, the feature-level and output-level results are fused to improve detection accuracy, and the mask feature relationship is learned across the spatial and channel dimensions to improve real-time performance. Tests on a tunnel lining radar defect image dataset show that the number of parameters of the improved model is reduced from 16.03 MB to 3.20 MB, a reduction of 80%, while the average accuracy improves from 83.4% to 86.5%, an increase of 3.1 percentage points. While maintaining the structure and detection performance of the model, its degree of lightweighting is greatly improved, realizing high-precision, real-time detection of tunnel lining defects.
Journal Article