Catalogue Search | MBRL
Explore the vast range of titles available.
38,543 result(s) for "Data compression"
Introduction to data compression
by Sayood, Khalid
in Coding theory, Data compression (Computer science), Data compression (Telecommunication)
2006, 2005
Introduction to Data Compression, Third Edition, is a concise and comprehensive guide to data compression. The book introduces the reader to the theory underlying today's compression techniques, with detailed instruction for applying them and several examples to explain the concepts. Encompassing the entire field of data compression, it covers lossless and lossy compression, Huffman coding, arithmetic coding, dictionary techniques, context-based compression, and scalar and vector quantization. It includes the cutting-edge updates the reader will need during the work day and in class. This edition adds new content on audio compression, including a description of the mp3 algorithm, along with explanations of a new video coding standard and a new facsimile standard. It explains established and emerging standards in depth, including JPEG 2000, JPEG-LS, MPEG-2, Group 3 and 4 faxes, JBIG 2, ADPCM, LPC, CELP, and MELP. Source code is provided via a companion web site, giving readers the opportunity to build their own algorithms and to choose and implement techniques in their own applications. The book will appeal to professionals, software and hardware engineers, students, and anyone interested in digital libraries and multimedia.
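Huffman coding, one of the lossless techniques listed above, assigns shorter bit strings to more frequent symbols. A minimal sketch of the idea, illustrative only and not the book's companion code:

```python
import heapq
from collections import Counter

def huffman_codes(data: str) -> dict:
    """Build a Huffman code table for the symbols in `data`."""
    # Heap entries: (frequency, tiebreaker, symbol-or-subtree)
    heap = [(freq, i, sym) for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                    # degenerate case: one distinct symbol
        return {heap[0][2]: "0"}
    count = len(heap)
    while len(heap) > 1:                  # repeatedly merge the two rarest nodes
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):       # internal node: branch on 0/1
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                             # leaf: a symbol
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

table = huffman_codes("abracadabra")
encoded = "".join(table[s] for s in "abracadabra")
print(table, len(encoded), "bits vs", 8 * len("abracadabra"))
```

Frequent symbols such as `a` receive the shortest codes, so the encoded string comes out well under the 8 bits per symbol of plain ASCII.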
Multilinear subspace learning : dimensionality reduction of multidimensional data
\"Due to advances in sensor, storage, and networking technologies, data is being generated on a daily basis at an ever-increasing pace in a wide range of applications, including cloud computing, mobile Internet, and medical imaging. This large multidimensional data requires more efficient dimensionality reduction schemes than the traditional techniques. Addressing this need, multilinear subspace learning (MSL) reduces the dimensionality of big data directly from its natural multidimensional representation, a tensor. Multilinear Subspace Learning: Dimensionality Reduction of Multidimensional Data gives a comprehensive introduction to both theoretical and practical aspects of MSL for the dimensionality reduction of multidimensional data based on tensors. It covers the fundamentals, algorithms, and applications of MSL. Emphasizing essential concepts and system-level perspectives, the authors provide a foundation for solving many of today's most interesting and challenging problems in big multidimensional data processing. They trace the history of MSL, detail recent advances, and explore future developments and emerging applications.The book follows a unifying MSL framework formulation to systematically derive representative MSL algorithms. It describes various applications of the algorithms, along with their pseudocode. Implementation tips help practitioners in further development, evaluation, and application. The book also provides researchers with useful theoretical information on big multidimensional data in machine learning and pattern recognition. MATLAB source code, data, and other materials are available at www.comp.hkbu.edu.hk/haiping/MSL.html\"-- Provided by publisher
Stream-Based Visually Lossless Data Compression Applying Variable Bit-Length ADPCM Encoding
2021
Video applications have become one of the major services in the engineering field; they are implemented by server–client systems connected via the Internet, by broadcasting services for mobile devices such as smartphones, and by surveillance cameras for security. Most video encoding mechanisms used to reduce the data rate are lossy compression methods such as the MPEG format. However, for applications with special needs for high-speed communication, such as display applications and high-accuracy object detection from the video stream, an encoding mechanism that loses no pixel information is required; this is called visually lossless compression. This paper focuses on Adaptive Differential Pulse Code Modulation (ADPCM), which encodes a data stream into a constant bit length per data element. Conventional ADPCM, however, has no mechanism to dynamically control the encoding bit length. We propose a novel ADPCM, called ADPCM-VBL, that provides variable bit-length control in the encoding/decoding mechanism. Furthermore, since we expect the data encoded by ADPCM to maintain low entropy, the amount of data can be reduced further by applying lossless data compression. Combining ADPCM-VBL with lossless data compression, this paper proposes a video transfer system that autonomously controls throughput in the communication data path. Through evaluations focusing on encoding performance and image quality, we confirm that the proposed mechanisms work effectively for applications that need visually lossless compression, encoding the video stream at low latency.
Journal Article
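The ADPCM-VBL design itself is not reproduced here, but the baseline it extends, ADPCM with a fixed code width, is easy to sketch: each sample is sent as a small quantized difference from a running prediction. The fixed `step` and `bits` parameters below are illustrative assumptions; real ADPCM adapts the step size, and the paper's contribution is varying the bit length dynamically:

```python
def adpcm_encode(samples, bits=4, step=16):
    """Toy ADPCM encoder: quantize the difference between each sample
    and a running prediction into a signed `bits`-bit code.
    Real ADPCM adapts `step`; this sketch keeps it fixed."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    pred, codes = 0, []
    for s in samples:
        code = max(lo, min(hi, round((s - pred) / step)))
        codes.append(code)
        pred += code * step      # mirror what the decoder will compute
    return codes

def adpcm_decode(codes, step=16):
    pred, out = 0, []
    for code in codes:
        pred += code * step
        out.append(pred)
    return out

signal = [0, 30, 70, 120, 130, 90, 40, 10]
codes = adpcm_encode(signal)
print(codes)                 # 4-bit codes instead of full samples
print(adpcm_decode(codes))   # approximate reconstruction
```

Because successive differences cluster near zero, the code stream has low entropy, which is the property the paper exploits by following the encoder with a lossless compressor.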
Imaging biological tissue with high-throughput single-pixel compressive holography
by Zhang, Runsen; Shen, Yuecheng; Feng, Xiaohua
in 631/1647/245, 639/624/1107/328/1650, 639/624/1107/510
2021
Single-pixel holography (SPH) is capable of generating holographic images with rich spatial information by employing only a single-pixel detector. Thanks to the relatively low dark-noise production, high sensitivity, large bandwidth, and cheap price of single-pixel detectors in comparison to pixel-array detectors, SPH is becoming an attractive imaging modality at wavelengths where pixel-array detectors are not available or prohibitively expensive. In this work, we develop a high-throughput single-pixel compressive holography method with a space-bandwidth-time product (SBP-T) of 41,667 pixels/s, realized by enabling phase stepping naturally in time and abandoning the need for phase-encoded illumination. This holographic system is scalable to provide either a large field of view (~83 mm²) or a high resolution (5.80 μm × 4.31 μm). In particular, high-resolution holographic images of biological tissues are presented, exhibiting rich contrast in both amplitude and phase. This work is an important step towards multi-spectrum imaging using a single-pixel detector in biophotonics.
Single-pixel holography generates holographic images with a single-pixel detector, making it relatively inexpensive. Here the authors report a high-throughput single-pixel compressive holography method for imaging biological tissue that can provide either a large field of view or high resolution.
Journal Article
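The measurement model behind any single-pixel compressive camera is a sequence of structured illumination patterns, each producing one scalar detector reading. A generic NumPy sketch of that model with a minimum-norm reconstruction; this is not the authors' holographic pipeline, which also recovers phase:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth 16x16 scene, flattened to a vector of 256 pixels
scene = np.zeros((16, 16))
scene[4:12, 6:10] = 1.0
x = scene.ravel()

# M single-pixel measurements: each row of A is one random +/-1 pattern,
# and y[i] is the lone detector reading under pattern i
M = 128                                  # fewer measurements than pixels
A = rng.choice([-1.0, 1.0], size=(M, x.size))
y = A @ x

# Minimum-norm least-squares reconstruction from the compressed readings
x_hat = np.linalg.lstsq(A, y, rcond=None)[0]
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

A practical compressive system would replace the least-squares step with a sparsity-promoting solver, which is what allows it to use far fewer measurements than pixels at good quality.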
An Evolving TinyML Compression Algorithm for IoT Environments Based on Data Eccentricity
by Sisinni, Emiliano; Ferrari, Paolo; Silva, Marianne
in algorithm, Algorithms, Battery powered devices
2021
Currently, Internet of Things (IoT) applications generate a large amount of sensor data at a very high pace, making it a challenge to collect and store the data. This scenario brings about the need for effective data compression algorithms to make the data manageable among tiny, battery-powered devices and, more importantly, shareable across the network. Additionally, considering that wireless communications (e.g., low-power wide-area networks) are very often adopted to connect field devices, user payload compression can also provide benefits derived from better spectrum usage, which in turn can yield advantages in high-density application scenarios. As a result of this increase in the number of connected devices, a new concept has emerged, called TinyML: it enables the use of machine learning on tiny, computationally constrained devices, allowing intelligent devices to analyze and interpret data locally and in real time. This work therefore presents a new data compression solution (algorithm) for the IoT that leverages the TinyML perspective. The new approach, called the Tiny Anomaly Compressor (TAC), is based on data eccentricity and requires neither previously established mathematical models nor any assumptions about the underlying data distribution. To test and validate the proposed solution, a comparative analysis was performed on two real-world datasets against two other algorithms from the literature, namely Swing Door Trending (SDT) and the Discrete Cosine Transform (DCT). The TAC algorithm showed promising results, achieving a maximum compression rate of 98.33%; it also surpassed the other two models regarding compression error and peak signal-to-noise ratio in all cases.
Journal Article
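TAC's details are not given in the abstract, but eccentricity-based filtering in the style it references can be sketched: maintain a recursive mean and variance, and transmit a sample only when its normalized eccentricity crosses an m-sigma threshold. The warm-up policy and the interpolation comment below are illustrative assumptions, not the published algorithm:

```python
def eccentricity_compress(stream, m=3.0):
    """Streaming eccentricity filter: keep a sample only when its
    normalized eccentricity exceeds the m-sigma threshold.  Dropped
    samples could be reconstructed by interpolation at the decoder."""
    kept, mean, var = [], 0.0, 0.0
    for k, x in enumerate(stream, start=1):
        # Recursive updates: no stored window, no prior data model
        mean += (x - mean) / k
        var += ((x - mean) ** 2 - var) / k
        if k <= 2:
            kept.append((k, x))              # warm-up: always keep
            continue
        if var == 0.0:
            continue                         # identical to all past data
        ecc = 1.0 / k + (x - mean) ** 2 / (k * var)
        if ecc / 2.0 > (m * m + 1.0) / (2.0 * k):   # atypical -> keep
            kept.append((k, x))
    return kept

data = [10.0, 10.1] * 25 + [25.0] + [10.0, 10.1] * 25
print(eccentricity_compress(data))   # warm-up samples and the spike survive
```

Only atypical samples are stored, so a slowly varying signal with rare anomalies compresses drastically, which matches the high compression rates the paper reports.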
Effective image compression using transformer and residual network for balanced handling of high and low-frequency information
2025
Image compression has made significant progress through end-to-end deep-learning approaches in recent years. The Transformer network, coupled with self-attention mechanisms, efficiently captures high-frequency features during image compression; low-frequency information, however, is not captured well by the Transformer alone. To address this issue, the paper introduces a novel end-to-end autoencoder architecture for image compression based on a transformer and a residual network. This method, called Transformer and Residual Network (TRN), offers a comprehensive solution for efficient image compression, capturing essential image content while effectively reducing data size. TRN employs a dual network, comprising a self-attention pathway and a residual network, designed as a high/low-frequency mixer; this dual network preserves both high- and low-frequency features during compression. The model is trained end to end with rate-distortion optimization (RDO). Experimental results demonstrate that the proposed TRN method outperforms the latest deep-learning-based image compression methods, achieving an 8.32% BD-rate (bit-rate distortion performance) improvement on the CLIC dataset. In comparison to traditional methods such as JPEG, the proposed method achieves a BD-rate improvement of 70.35% on the CLIC dataset.
Journal Article
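Rate-distortion optimization trains such codecs by minimizing L = R + λ·D, trading estimated bit rate against reconstruction error. A minimal PyTorch-style sketch of that objective, with random tensors standing in for a real encoder/decoder and entropy model (all shapes and the λ value are assumptions):

```python
import torch

def rate_distortion_loss(x, x_hat, likelihoods, lam=0.01):
    """Generic L = R + lambda * D for a learned image codec:
    R is bits per pixel under the entropy model, D is MSE."""
    batch, _, h, w = x.shape
    rate = -torch.log2(likelihoods).sum() / (batch * h * w)  # bits/pixel
    distortion = torch.mean((x - x_hat) ** 2)                # MSE
    return rate + lam * distortion

# Stand-ins for encoder/decoder outputs on a batch of 64x64 RGB images
x = torch.rand(4, 3, 64, 64)
x_hat = x + 0.05 * torch.randn_like(x)                   # fake reconstruction
likelihoods = torch.rand(4, 192, 8, 8).clamp(0.01, 1.0)  # fake latent probs
print(rate_distortion_loss(x, x_hat, likelihoods))
```

Sweeping λ traces out the codec's rate-distortion curve, which is exactly what BD-rate comparisons like the paper's summarize.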
Reducing bulky medical images via shape-texture decoupled deep neural networks
2026
The explosive growth of medical data poses significant challenges for storage and sharing. Current compression techniques based on Implicit Neural Representations (INRs) strike an effective balance between encoding accuracy and compression ratio, yet they suffer from slow encoding speeds. By contrast, data-driven compressors encode fast but rely heavily on the training data and do not generalize well. To develop a practical compression tool that overcomes all these limitations, we introduce Shape-Texture Decoupled Compression (DeepSTD), which focuses on datasets of the same modality and body part and decouples the variations into shape and texture components for separate encoding. Disentangling the two components makes it possible to design encoding strategies suited to their respective characteristics: swift shape encoding based on INRs and effective data-driven texture encoding. The proposed approach combines the advantages of INR-based and data-driven models to achieve high fidelity, fast encoding speed, and good generalizability. Comprehensive evaluations on large-scale Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) datasets demonstrate superior performance across encoding quality, compression ratio, and speed. With features such as parallel acceleration on multiple Graphics Processing Units (multi-GPU), flexible control of the compression ratio, and broad applicability, DeepSTD offers a robust and efficient solution for the pressing demands of modern medical data compression.
The authors introduce Shape-Texture Decoupled Compression (DeepSTD), a practical compression tool for medical imaging data. They evaluate it on imaging datasets of different modalities, providing a route towards a solution for the pressing demands of modern medical data compression.
Journal Article
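The INR half of such a scheme fits a small coordinate network to one image or volume so that the network's weights become the compressed representation. A generic sketch of that idea; the architecture, sizes, and training loop are assumptions, not the paper's design:

```python
import torch
import torch.nn as nn

# Coordinate MLP f(x, y) -> intensity; after overfitting to one image,
# the MLP weights *are* the compressed representation of that image.
model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

# Toy "slice": a 32x32 diagonal gradient and its normalized coordinates
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, 32), torch.linspace(-1, 1, 32), indexing="ij"
)
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
target = ((xs + ys) / 2).reshape(-1, 1)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):                     # overfit on purpose
    loss = nn.functional.mse_loss(model(coords), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("fit MSE:", loss.item())
```

Decoding is a single forward pass over the coordinate grid, and the compression ratio is roughly the raw pixel bytes divided by the bytes needed for the network parameters; the slow part is the per-image fitting loop, which is the encoding-speed limitation the abstract describes.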
Reduction of procedure times in routine clinical practice with Compressed SENSE magnetic resonance imaging technique
by Sartoretti, Thomas; Binkert, Christoph; van Smoorenburg, Luuk
in Acceleration, Biology and Life Sciences, Brain
2019
Acceleration of MR sequences beyond current parallel imaging techniques is possible with the Compressed SENSE technique, which has recently become available for 1.5 and 3 Tesla scanners, for nearly all image contrasts, and for 2D and 3D sequences. This retrospective study investigated the impact of this technique on examination timing parameters and MR protocols in a clinical setting.
A numerical analysis of the examination timing parameters (scan time, exam time, procedure time, interscan delay time, changeover time, non-scan time) was performed using MR log files for the MR protocols of 6 different body regions (brain, knee, lumbar spine, breast, shoulder), and the total number of examinations acquired on a 1.5 T MR scanner from January to April in both 2017 and 2018 was registered. Percentages, box plots, and unpaired two-sided t tests were used for statistical evaluation.
All examination timing parameters of the six anatomical regions analysed were significantly shortened after implementation of Compressed SENSE. On average, scan times were accelerated by 20.2% (p < 0.0001), while procedure times were shortened by 16% (p < 0.0001). Considering all anatomical regions and all MR protocols, 27% more examinations were performed over the same 4-month period in 2018 compared to 2017.
Compressed SENSE allows for a significant acceleration of MR examinations and a considerable increase in the total number of MR examinations is possible.
Journal Article
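The study's statistical method, an unpaired two-sided t test on timing parameters before and after the Compressed SENSE rollout, is straightforward to reproduce. A sketch with hypothetical scan times; the numbers below are invented, not the study's data:

```python
from scipy import stats

# Hypothetical per-exam scan times in minutes (invented values)
before = [22.1, 24.5, 23.8, 25.0, 21.9, 24.2, 23.1, 22.7]
after = [18.0, 19.4, 18.8, 20.1, 17.6, 19.0, 18.3, 18.9]

t, p = stats.ttest_ind(before, after)   # unpaired, two-sided by default
reduction = 100 * (1 - sum(after) / sum(before))
print(f"mean scan-time reduction: {reduction:.1f}% (t = {t:.2f}, p = {p:.2g})")
```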
Compression of FASTQ and SAM Format Sequencing Data
2013
Storage and transmission of the data produced by modern DNA sequencing instruments has become a major concern, which prompted the Pistoia Alliance to pose the SequenceSqueeze contest for compression of FASTQ files. We present several compression entries from the competition, Fastqz and Samcomp/Fqzcomp, including the winning entry. These are compared against existing algorithms for both reference-based compression (CRAM, Goby) and non-reference-based compression (DSRC, BAM), as well as other recently published competition entries (Quip, SCALCE). The tools are shown to form the new Pareto frontier for FASTQ compression, offering state-of-the-art ratios at affordable CPU costs. All programs are freely available on SourceForge. Fastqz: https://sourceforge.net/projects/fastqz/, fqzcomp: https://sourceforge.net/projects/fqzcomp/, and samcomp: https://sourceforge.net/projects/samcomp/.
Journal Article
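Much of the ratio such tools achieve comes from splitting each FASTQ record into separate streams (identifiers, bases, qualities) and modeling each with statistics suited to it. A simplified sketch of the stream separation, with zlib standing in for the tools' context models; on an input this tiny the separated streams will not actually win, the gain appears on realistic files:

```python
import zlib

def split_fastq_streams(text: str):
    """Split FASTQ records (4 lines each) into name, sequence, and
    quality streams so each can be compressed independently."""
    lines = text.strip().split("\n")
    names = "\n".join(lines[0::4])   # @identifier lines
    seqs = "\n".join(lines[1::4])    # base calls (A/C/G/T/N)
    quals = "\n".join(lines[3::4])   # quality strings
    return names, seqs, quals

fastq = (
    "@read1\nACGTACGTAC\n+\nIIIIIHHHHG\n"
    "@read2\nACGTTTGTAC\n+\nIIIIIHHHHF\n"
)
packed = [zlib.compress(s.encode(), 9) for s in split_fastq_streams(fastq)]
whole = zlib.compress(fastq.encode(), 9)
print("per-stream bytes:", sum(map(len, packed)), "vs whole:", len(whole))
```

Identifiers are highly repetitive, bases draw from a four-letter alphabet, and quality strings have their own local correlations, so compressing the three streams with separate models beats compressing the interleaved file.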