Catalogue Search | MBRL
Explore the vast range of titles available.
38,827 result(s) for "data compression"
Introduction to data compression
by Sayood, Khalid
in Coding theory, Data compression (Computer science), Data compression (Telecommunication)
2006, 2005
Introduction to Data Compression, Third Edition, is a concise and comprehensive guide to data compression. This book introduces the reader to the theory underlying today's compression techniques, with detailed instruction for their application and several examples to explain the concepts. Encompassing the entire field of data compression, it covers lossless and lossy compression, Huffman coding, arithmetic coding, dictionary techniques, context-based compression, and scalar and vector quantization. It includes all the cutting-edge updates the reader will need during the work day and in class. This edition adds new content on audio compression, including a description of the mp3 algorithm, along with coverage of a new video coding standard and a new facsimile standard. It explains established and emerging standards in depth, including JPEG 2000, JPEG-LS, MPEG-2, Group 3 and 4 faxes, JBIG 2, ADPCM, LPC, CELP, and MELP. Source code is provided via a companion web site, giving readers the opportunity to build their own algorithms and to choose and implement techniques in their own applications. This book will appeal to professionals, software and hardware engineers, students, and anyone interested in digital libraries and multimedia.
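Huffman coding, one of the lossless techniques the book covers, assigns shorter codewords to more frequent symbols. As a rough illustration of the idea (not the book's companion code), a minimal Python sketch:

```python
# Minimal Huffman coding sketch: build the code bottom-up with a min-heap
# keyed on symbol frequency. Illustrative only.
import heapq
from collections import Counter

def huffman_codes(text):
    # Each heap entry is [frequency, tiebreaker, {symbol: codeword}].
    heap = [[f, i, {c: ""}] for i, (c, f) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {c: "0" + code for c, code in c1.items()}
        merged.update({c: "1" + code for c, code in c2.items()})
        heapq.heappush(heap, [f1 + f2, i, merged])
        i += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
print(codes)                                       # frequent symbols get short codes
print("".join(codes[c] for c in "abracadabra"))    # the compressed bitstring
```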
Multilinear subspace learning : dimensionality reduction of multidimensional data
\"Due to advances in sensor, storage, and networking technologies, data is being generated on a daily basis at an ever-increasing pace in a wide range of applications, including cloud computing, mobile Internet, and medical imaging. This large multidimensional data requires more efficient dimensionality reduction schemes than the traditional techniques. Addressing this need, multilinear subspace learning (MSL) reduces the dimensionality of big data directly from its natural multidimensional representation, a tensor. Multilinear Subspace Learning: Dimensionality Reduction of Multidimensional Data gives a comprehensive introduction to both theoretical and practical aspects of MSL for the dimensionality reduction of multidimensional data based on tensors. It covers the fundamentals, algorithms, and applications of MSL. Emphasizing essential concepts and system-level perspectives, the authors provide a foundation for solving many of today's most interesting and challenging problems in big multidimensional data processing. They trace the history of MSL, detail recent advances, and explore future developments and emerging applications.The book follows a unifying MSL framework formulation to systematically derive representative MSL algorithms. It describes various applications of the algorithms, along with their pseudocode. Implementation tips help practitioners in further development, evaluation, and application. The book also provides researchers with useful theoretical information on big multidimensional data in machine learning and pattern recognition. MATLAB source code, data, and other materials are available at www.comp.hkbu.edu.hk/haiping/MSL.html\"-- Provided by publisher
An Evolving TinyML Compression Algorithm for IoT Environments Based on Data Eccentricity
by Sisinni, Emiliano; Ferrari, Paolo; Silva, Marianne
in algorithm, Algorithms, Battery powered devices
2021
Currently, Internet of Things (IoT) applications generate a large amount of sensor data at a very high pace, making the data a challenge to collect and store. This scenario creates a need for effective data compression algorithms to make the data manageable on tiny, battery-powered devices and, more importantly, shareable across the network. Additionally, considering that wireless communications (e.g., low-power wide-area networks) are very often adopted to connect field devices, user payload compression can also provide benefits derived from better spectrum usage, which in turn can yield advantages in high-density application scenarios. From this increase in the number of connected devices, a new concept has emerged, called TinyML. It enables the use of machine learning on tiny, computationally constrained devices, allowing intelligent devices to analyze and interpret data locally and in real time. This work therefore presents a new data compression algorithm for the IoT that leverages the TinyML perspective. The new approach, called the Tiny Anomaly Compressor (TAC), is based on data eccentricity. TAC requires no previously established mathematical models or assumptions about the underlying data distribution. To test and validate the proposed solution, a comparative analysis was performed on two real-world datasets against two other algorithms from the literature, namely Swing Door Trending (SDT) and the Discrete Cosine Transform (DCT). The TAC algorithm showed promising results, achieving a maximum compression rate of 98.33% and surpassing the other two models in compression error and peak signal-to-noise ratio in all cases.
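The abstract does not spell out TAC's equations, but eccentricity-based anomaly detection in the TEDA tradition can be sketched as follows: transmit a sample only when its eccentricity with respect to the running mean and variance exceeds a Chebyshev-style m-sigma threshold. Everything in this Python sketch (names, warm-up rule, reconstruction policy) is an assumption for illustration, not the paper's algorithm:

```python
# Keep only eccentric samples; a decoder would hold the last received
# value for the dropped (typical) ones, making the scheme lossy but
# highly compressive on quasi-stationary signals.
def compress(stream, m=3.0):
    kept, k, mean, var = [], 0, 0.0, 0.0
    for x in stream:
        k += 1
        prev_mean = mean
        mean += (x - mean) / k                     # Welford running mean
        var += (x - prev_mean) * (x - mean)        # running sum of squared deviations
        if k <= 2:
            kept.append((k - 1, x))                # warm-up: always keep
            continue
        sigma2 = var / (k - 1)
        if sigma2 > 0:
            ecc = 1.0 / k + (x - mean) ** 2 / (k * sigma2)   # TEDA-style eccentricity
        else:
            ecc = 1.0 / k                          # all samples identical so far
        if ecc > (m ** 2 + 1.0) / k:               # m-sigma test via Chebyshev bound
            kept.append((k - 1, x))                # transmit index + value
    return kept

data = [10.0] * 50 + [25.0] + [10.0] * 50
print(compress(data))                              # warm-up points plus the anomaly
```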
Journal Article
Stream-Based Visually Lossless Data Compression Applying Variable Bit-Length ADPCM Encoding
2021
Video applications have become one of the major services in the engineering field, implemented by server–client systems connected via the Internet, broadcasting services for mobile devices such as smartphones, and surveillance cameras for security. Most recent video encoding mechanisms reduce the data rate with lossy compression methods such as the MPEG formats. However, for applications with special needs for high-speed communication, such as display applications and high-accuracy object detection from the video stream, we need an encoding mechanism without any loss of pixel information, called visually lossless compression. This paper focuses on Adaptive Differential Pulse Code Modulation (ADPCM), which encodes a data stream at a constant bit length per data element. The conventional ADPCM, however, has no mechanism to dynamically control the encoding bit length. We propose a novel ADPCM with variable bit-length control, called ADPCM-VBL, for the encoding/decoding mechanism. Furthermore, since the encoded data from ADPCM is expected to maintain low entropy, we can reduce the amount of data further by applying lossless data compression. Combining ADPCM-VBL with a lossless data compressor, this paper proposes a video transfer system that autonomously controls throughput in the communication data path. Through evaluations focusing on encoding performance and image quality, we confirm that the proposed mechanisms work effectively for applications that need visually lossless compression, encoding the video stream with low latency.
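For readers unfamiliar with ADPCM, the baseline idea is to quantize the difference between each sample and a prediction, adapting the quantizer step to the signal. The sketch below is a generic fixed-bit ADPCM-style encoder for illustration; it is not the paper's ADPCM-VBL, whose bit length varies:

```python
# Generic ADPCM-style encoder: code each sample as a quantized residual
# from a prediction, with an adaptive step size.
def adpcm_encode(samples, bits=4):
    codes, pred, step = [], 0.0, 1.0
    qmax = 2 ** (bits - 1) - 1                    # symmetric code range
    for x in samples:
        diff = x - pred                           # prediction residual
        q = max(-qmax, min(qmax, round(diff / step)))
        codes.append(q)
        pred += q * step                          # decoder-reproducible update
        # Adapt the step: grow on saturated codes, shrink on small ones.
        step *= 1.5 if abs(q) >= qmax else 0.9
        step = max(step, 1e-3)
    return codes

print(adpcm_encode([0, 2, 5, 9, 14, 12, 8, 3]))
```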
Journal Article
Imaging biological tissue with high-throughput single-pixel compressive holography
by Zhang, Runsen; Shen, Yuecheng; Feng, Xiaohua
in 631/1647/245, 639/624/1107/328/1650, 639/624/1107/510
2021
Single-pixel holography (SPH) is capable of generating holographic images with rich spatial information by employing only a single-pixel detector. Thanks to the relatively low dark-noise production, high sensitivity, large bandwidth, and cheap price of single-pixel detectors in comparison to pixel-array detectors, SPH is becoming an attractive imaging modality at wavelengths where pixel-array detectors are not available or prohibitively expensive. In this work, we develop a high-throughput single-pixel compressive holography with a space-bandwidth-time product (SBP-T) of 41,667 pixels/s, realized by enabling phase stepping naturally in time and abandoning the need for phase-encoded illumination. This holographic system is scalable to provide either a large field of view (~83 mm²) or a high resolution (5.80 μm × 4.31 μm). In particular, high-resolution holographic images of biological tissues are presented, exhibiting rich contrast in both amplitude and phase. This work is an important step towards multi-spectrum imaging using a single-pixel detector in biophotonics.
Single-pixel holography generates holographic images with a single-pixel detector, making it relatively inexpensive. Here the authors report a high-throughput single-pixel compressive holography method for imaging biological tissue that can provide either a large field of view or high resolution.
Journal Article
Data compression of Bridge Resilience Control: Algorithm and case analysis
2026
Bridge inspection and structural health monitoring represent the primary approaches to managing bridge resilience. Data acquired through inspection and monitoring activities provides an effective technical basis for the systematic implementation of bridge resilience control strategies. Yet, uninterrupted monitoring and diverse inspection campaigns have yielded an enormous volume of data, which directly imposes comprehensive and stringent challenges on data storage, transmission, and processing. Consequently, data compression has become a research priority in the field of bridge resilience control. However, existing data compression algorithms are all general-purpose data processing techniques, which disregard the intrinsic physical relationship between monitoring data and bridge structural behavior. To tackle this limitation, this study integrates domain knowledge, the time-series characteristics of bridge monitoring data, and bridge deterioration models into the design of a novel data compression algorithm. This approach addresses the indiscriminate data compression inherent to conventional algorithms, thereby enabling efficient data compression while preserving critical bridge structural state information. By incorporating domain knowledge, the proposed method transforms raw monitoring data into data with engineering attributes. Based on these attributes, a set of interrelated monitoring data is further converted into a small subset of key data that is directly applicable to bridge resilience control practice. Leveraging the steady-state variation law of bridge operational performance, the dynamic structural characteristics of bridges are extracted from time-series monitoring data, which correspondingly reduces the storage demand of time-series datasets. For data sampling intervals interrupted by various types of system faults, a sparse data supplementation method is proposed. After data supplementation, the complete dataset is further refined by utilizing the inherent time-series characteristics of the monitoring data, which not only ensures data integrity but also further reduces the overall data volume. Simulation analyses demonstrate that the domain knowledge-based compression method achieves a data compression ratio of 75%. Moreover, the comprehensive compression ratio exceeds 92% after the synergistic processing of time-series feature extraction and sparse data supplementation, with a data fidelity rate of 95%. These performance metrics indicate that the proposed method can reduce the data storage costs and transmission bandwidth consumption associated with bridge resilience control by 75% to 92%. Meanwhile, the 95% feature retention accuracy satisfies the engineering precision requirements for bridge resilience control assessments, effectively reconciling the inherent contradiction between data compression efficiency and structural evaluation accuracy.
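As one small illustration of the gap-handling step described above, sparse supplementation of fault-interrupted samples could be as simple as interpolation; the snippet below is a hypothetical stand-in, since the paper's actual supplementation method is not detailed in this abstract:

```python
# Fill fault-interrupted gaps in a monitoring time series by linear
# interpolation over the valid samples (illustrative placeholder).
import numpy as np

t = np.arange(10, dtype=float)
y = np.array([1.0, 1.1, np.nan, np.nan, 1.5, 1.6, np.nan, 1.9, 2.0, 2.1])
gaps = np.isnan(y)
y[gaps] = np.interp(t[gaps], t[~gaps], y[~gaps])   # supplement missing samples
print(y)
```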
Journal Article
Effective image compression using transformer and residual network for balanced handling of high and low-frequency information
2025
Image compression has made significant progress through end-to-end deep-learning approaches in recent years. The Transformer network, coupled with self-attention mechanisms, efficiently captures high-frequency features during image compression; however, it does not capture the low-frequency information in the image well. To address this issue, this paper introduces a novel end-to-end autoencoder architecture for image compression based on a transformer and a residual network. The method, called Transformer and Residual Network (TRN), offers a comprehensive solution for efficient image compression, capturing essential image content while effectively reducing data size. TRN employs a dual network, comprising a self-attention pathway and a residual network, designed as a high-low-frequency mixer; this dual network preserves both high- and low-frequency features during image compression. The end-to-end training of the model employs rate-distortion optimization (RDO). Experimental results demonstrate that the proposed TRN method outperforms the latest deep learning-based image compression methods, achieving an 8.32% BD-rate (bit-rate distortion performance) improvement on the CLIC dataset. In comparison to traditional methods such as JPEG, the proposed method achieves a remarkable BD-rate improvement of 70.35% on the CLIC dataset.
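The rate-distortion optimization mentioned here is the standard training objective of learned image codecs: minimize L = R + λ·D, trading the bitrate of the latents against reconstruction error. A minimal, hypothetical Python (PyTorch) sketch of such an objective, not the authors' TRN model:

```python
# Toy learned codec trained with a rate-distortion loss. All names,
# shapes, and the rate proxy are illustrative assumptions.
import torch
import torch.nn as nn

class ToyCodec(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Conv2d(3, 8, 5, stride=4, padding=2)   # analysis transform
        self.decoder = nn.ConvTranspose2d(8, 3, 5, stride=4,
                                          padding=2, output_padding=3)

    def forward(self, x):
        y = self.encoder(x)
        # Additive uniform noise stands in for quantization during training.
        y_hat = y + torch.empty_like(y).uniform_(-0.5, 0.5)
        return self.decoder(y_hat), y_hat

def rd_loss(x, x_hat, y_hat, lam=0.01):
    distortion = nn.functional.mse_loss(x_hat, x)   # D: reconstruction error
    rate = 0.5 * (y_hat ** 2).mean()                # crude rate proxy on latents
    return rate + lam * distortion                  # L = R + lambda * D

x = torch.rand(1, 3, 64, 64)
model = ToyCodec()
x_hat, y_hat = model(x)
rd_loss(x, x_hat, y_hat).backward()                 # one training step's gradient
```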
Journal Article
Reducing bulky medical images via shape-texture decoupled deep neural networks
2026
The explosive growth of medical data poses significant challenges for storage and sharing. Current compression techniques utilizing Implicit Neural Representations (INRs) effectively strike a balance between encoding accuracy and compression ratio, yet they suffer from slow encoding speeds. By contrast, data-driven compressors encode fast but rely heavily on the training data and cannot generalize well. To develop a practical compression tool that overcomes all these limitations, we introduce Shape-Texture Decoupled Compression (DeepSTD), which focuses on datasets of the same modality and body part and proposes decoupling the variations into shape and texture components for separate encoding. Disentangling the two components facilitates designing encoding strategies suited to their respective characteristics: swift shape encoding based on INRs and effective data-driven texture encoding. The proposed approach combines the advantages of INR-based and data-driven models to achieve high fidelity, fast encoding speed, and good generalizability. Comprehensive evaluations on large-scale Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) datasets demonstrate superior performance across encoding quality, compression ratio, and speed. Moreover, with features such as parallel acceleration on multiple Graphics Processing Units (multi-GPU), flexible control of the compression ratio, and broad applicability, DeepSTD offers a robust and efficient solution for the pressing demands of modern medical data compression.
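INR-based encoding, the building block of the shape branch described above, amounts to overfitting a small network to one sample and storing the weights as the code. A toy Python (PyTorch) sketch of the general INR idea, with assumed sizes and no connection to DeepSTD's actual architecture:

```python
# Compress a volume with an Implicit Neural Representation: fit a small
# MLP from voxel coordinates to intensities, then store only the weights.
import torch
import torch.nn as nn

volume = torch.rand(16, 16, 16)                      # stand-in CT volume
coords = torch.stack(torch.meshgrid(
    *(torch.linspace(-1, 1, 16) for _ in range(3)), indexing="ij"), dim=-1)
coords = coords.reshape(-1, 3)                       # (N, 3) coordinates
targets = volume.reshape(-1, 1)                      # (N, 1) intensities

mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
for _ in range(200):                                 # fitting = "encoding"
    opt.zero_grad()
    loss = nn.functional.mse_loss(mlp(coords), targets)
    loss.backward()
    opt.step()
# Decoding is a forward pass; the stored weights are the compressed code.
```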
The authors introduce Shape-Texture Decoupled Compression (DeepSTD), a practical compression tool for medical imaging data. They evaluate it on imaging datasets of different modalities, providing a route towards a solution for the pressing demands of modern medical data compression.
Journal Article
zDUR: reference-free FASTQ compressor with high compression ratio and speed
2026
Background
High-throughput sequencing technologies generate massive amounts of FASTQ data comprising nucleotide sequences, quality scores, and read identifiers, necessitating efficient compression to alleviate storage and transmission burdens. Compared to general-purpose compressors, specialized FASTQ compressors achieve higher compression performance by exploiting the inherent redundancy in FASTQ files. However, existing FASTQ-specialized compressors often suffer from limited data applicability and tend to over-optimize either compression ratio or compression speed at the expense of the other.
Results
We present zDUR, a reference-free FASTQ compressor designed for efficient and scalable handling of next-generation sequencing data across diverse platforms and sequencing data types. Benchmarking against six reference-free compressors on 15 representative datasets spanning four sequencing data types demonstrates that zDUR achieves a favorable overall balance between compression ratio and speed, with broad applicability across data types. In particular, on single-cell RNA-seq and spatial transcriptomics datasets, zDUR achieves over a tenfold increase in runtime performance while maintaining higher compression ratios than SPRING, one of the state-of-the-art reference-free FASTQ compressors.
Conclusions
zDUR offers a scalable and efficient solution for reference-free FASTQ compression, balancing performance, speed, and usability across diverse datasets.
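A common trick behind FASTQ-specialized compressors (hinted at in the Background above) is to split each record into identifier, base, and quality streams and compress each with a model suited to its statistics. The Python sketch below illustrates the stream-splitting idea with zlib as a placeholder backend; it is not zDUR's algorithm:

```python
# Split FASTQ records into three homogeneous streams and compress each
# separately; each stream has very different statistics.
import zlib

def split_streams(fastq_text: str):
    ids, seqs, quals = [], [], []
    lines = fastq_text.strip().split("\n")
    for i in range(0, len(lines), 4):        # FASTQ records are 4 lines each
        ids.append(lines[i])                 # @identifier
        seqs.append(lines[i + 1])            # nucleotide sequence
        quals.append(lines[i + 3])           # quality scores
    return "\n".join(ids), "\n".join(seqs), "\n".join(quals)

record = "@read1\nACGTACGTAC\n+\nIIIIIHHHHH\n"
compressed = [zlib.compress(s.encode()) for s in split_streams(record * 100)]
print([len(c) for c in compressed])          # per-stream compressed sizes
```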
Journal Article