22 result(s) for "JPEG entropy coding"
Improved JPEG Coding by Filtering 8 × 8 DCT Blocks
The JPEG format, consisting of a set of image compression techniques, is one of the most commonly used image coding standards for both lossy and lossless encoding. In this format, various techniques are used to improve image transmission and storage. In the final step of lossy image coding, JPEG uses either arithmetic or Huffman entropy coding to further compress the data produced by lossy compression. Both modes encode all of the 8 × 8 DCT blocks without filtering out empty ones: an end-of-block marker is coded for each empty block, and these markers cause an unnecessary increase in file size when stored with the rest of the data. In this paper, we propose a modified version of JPEG entropy coding. Instead of storing an end-of-block code for empty blocks with the rest of the data, we store their locations in a separate buffer and then compress that buffer with an efficient lossless method to achieve a higher compression ratio. The size of this additional buffer, which records the locations of the empty and non-empty blocks, is included in the bits-per-pixel calculation for the test images. In image compression, peak signal-to-noise ratio versus bits per pixel is the standard measure of coding performance. Experimental results indicate that the proposed modified algorithm achieves lower bits per pixel while retaining quality.
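The location-buffer idea this abstract describes can be sketched in a few lines. This is an illustrative Python sketch, not the authors' implementation; `zlib` stands in for whatever "efficient lossless method" the paper uses for the side buffer.

```python
import zlib

def split_empty_blocks(blocks):
    """Separate empty 8x8 DCT blocks (all zero after quantization)
    from non-empty ones. Instead of coding an end-of-block marker for
    every empty block, record a one-bit-per-block location map in a
    side buffer and compress that buffer losslessly."""
    location_map = bytearray()   # packed bits: 1 = non-empty block
    bits, nbits = 0, 0
    non_empty = []
    for block in blocks:
        occupied = any(c != 0 for c in block)
        bits = (bits << 1) | int(occupied)
        nbits += 1
        if nbits == 8:
            location_map.append(bits)
            bits, nbits = 0, 0
        if occupied:
            non_empty.append(block)
    if nbits:                    # flush a partial final byte
        location_map.append(bits << (8 - nbits))
    # The map is highly redundant for sparse images, so a generic
    # lossless coder shrinks it further.
    return non_empty, zlib.compress(bytes(location_map))

# Toy usage: four blocks, two of them empty after quantization.
blocks = [[0] * 64, [1] + [0] * 63, [0] * 64, [5, -2] + [0] * 62]
non_empty, packed_map = split_empty_blocks(blocks)
```

Only the non-empty blocks then go through the usual Huffman or arithmetic entropy coder; the decoder reads the decompressed map to know where to re-insert all-zero blocks.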
Hybrid Adaptive Lossless Image Compression Based on Discrete Wavelet Transform
A new hybrid transform for lossless image compression exploiting a discrete wavelet transform (DWT) and prediction is the main new contribution of this paper. Simple prediction is generally considered ineffective in conjunction with DWT, but we applied it to DWT subbands modified using reversible denoising and lifting steps (RDLSs) with step skipping. The new transform was constructed in an image-adaptive way using heuristics and entropy estimation. For a large and diverse test set consisting of 499 photographic and 247 non-photographic (screen content) images, we found that RDLS with step skipping allowed DWT to be combined effectively with prediction. Using prediction, we nearly doubled the JPEG 2000 compression ratio improvements that could be obtained using RDLS with step skipping alone. Because for some images it may be better to apply prediction instead of DWT, we propose compression schemes with various tradeoffs, which are practical contributions of this study. Compared with unmodified JPEG 2000, one scheme improved the compression ratios of photographic and non-photographic images, on average, by 1.2% and 30.9%, respectively, at the cost of increasing compression time by 2% and introducing only minimal modifications to JPEG 2000. Greater ratio improvements, exceeding 2% and 32%, respectively, are attainable at a greater cost.
FLoCIC: A Few Lines of Code for Raster Image Compression
A new approach is proposed for lossless raster image compression employing interpolative coding. A new multifunction prediction scheme is presented first. Then interpolative coding, which has rarely been applied to image compression, is explained briefly, and a simplification of the original approach is introduced. It is determined that the JPEG LS predictor reduces the information entropy slightly better than the multifunction approach, and that interpolative coding is moderately more efficient than the most frequently used arithmetic coding. Finally, our compression pipeline is compared against JPEG LS, JPEG 2000 in lossless mode, and PNG using 24 standard grayscale benchmark images. JPEG LS turned out to be the most efficient, followed by JPEG 2000, while our approach using simplified interpolative coding was moderately better than PNG. The implementation of the proposed encoder is extremely simple: fewer than 60 lines of programming code for the coder and 60 lines for the decoder, as demonstrated in the given pseudocodes.
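Binary interpolative coding, the technique at the heart of the pipeline above, is easy to sketch. The bit-count sketch below uses a flat binary code for each range; it illustrates the principle only and is not the paper's 60-line coder or its specific simplification.

```python
from math import ceil, log2

def interp_code_bits(seq, lo, hi):
    """Bits needed to code the sorted sequence `seq` (values in
    [lo, hi]) with binary interpolative coding: code the middle
    element within the interval its neighbours leave it, then recurse
    on both halves."""
    if not seq:
        return 0
    m = len(seq) // 2
    # The middle value must leave room for m values below it and
    # len(seq) - 1 - m values above it.
    span = (hi - (len(seq) - 1 - m)) - (lo + m) + 1
    bits = ceil(log2(span)) if span > 1 else 0
    return (bits
            + interp_code_bits(seq[:m], lo, seq[m] - 1)
            + interp_code_bits(seq[m + 1:], seq[m] + 1, hi))

# A clustered sorted list codes in far fewer bits than the
# 8 * 7 = 56 bits of plain 7-bit binary per value.
total = interp_code_bits([3, 4, 5, 6, 7, 8, 9, 10], 0, 127)
```

Clustered values shrink the admissible intervals quickly, which is why interpolative coding suits the residual sequences produced by a good predictor.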
Understanding, Optimising, and Extending Data Compression with Anisotropic Diffusion
Galić et al. (Journal of Mathematical Imaging and Vision 31:255–269, 2008) have shown that compression based on edge-enhancing anisotropic diffusion (EED) can outperform the quality of JPEG for medium to high compression ratios when the interpolation points are chosen as vertices of an adaptive triangulation. However, the reasons for the good performance of EED remained unclear, and they could not outperform the more advanced JPEG 2000. The goals of the present paper are threefold: Firstly, we investigate the compression qualities of various partial differential equations. This sheds light on the favourable properties of EED in the context of image compression. Secondly, we demonstrate that it is even possible to beat the quality of JPEG 2000 with EED if one uses specific subdivisions on rectangles and several important optimisations. These amendments include improved entropy coding, brightness and diffusivity optimisation, and interpolation swapping. Thirdly, we demonstrate how to extend our approach to 3-D and shape data. Experiments on classical test images and 3-D medical data illustrate the high potential of our approach.
A Low-Complexity Lossless Compression Method Based on a Code Table for Infrared Images
Traditional JPEG-series image compression algorithms are limited in speed. To improve the storage and transmission of 14-bit/pixel images acquired by infrared line-scan detectors, a novel method is introduced for high-speed, highly efficient compression of line-scan infrared images. The proposed method exploits the characteristics of infrared images to reduce redundancy and employs improved Huffman coding for the entropy-coding stage. The improved Huffman coding addresses the long, low-probability codes of 14-bit images by truncating them, which yields low complexity with minimal loss in compression ratio. Additionally, a method is proposed to obtain a Huffman code table that bypasses the pixel-counting pass normally required for entropy coding, thereby improving compression speed. The result is a low-complexity lossless image compression algorithm that achieves fast encoding through simple table-lookup rules. The proposed method loses only 10% in compression performance compared to JPEG 2000 while achieving a 20-fold speed improvement. Compared to dictionary-based methods, it achieves high-speed compression while maintaining high compression efficiency, making it particularly suitable for high-speed, high-efficiency lossless compression of line-scan panoramic infrared images. The compression achieved with the precomputed code table is only 5% below the theoretical value, and the algorithm can also be applied to images with higher bit depths.
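The truncation idea, short codes for frequent symbols and an escape followed by the raw 14-bit value for rare ones, can be sketched as follows. The code table here is hypothetical, chosen only for illustration; it is not the paper's table.

```python
def truncated_code(symbol, table, raw_bits=14, escape="1111"):
    """Code a non-negative `symbol` with a short prefix code if it is
    in `table`; otherwise emit an escape prefix followed by the raw
    14-bit value. This caps the maximum code length, avoiding the very
    long Huffman codes that low-probability 14-bit symbols would get."""
    if symbol in table:
        return table[symbol]
    return escape + format(symbol, f"0{raw_bits}b")

# Hypothetical table for frequent small residuals. The set
# {"0", "10", "110", "1110"} plus the escape "1111" is prefix-free.
table = {0: "0", 1: "10", 2: "110", 3: "1110"}
short = truncated_code(0, table)       # 1 bit
long_ = truncated_code(12345, table)   # 4 + 14 = 18 bits
```

The bounded code length is what keeps the table small and the lookup rules simple, which is where the speed advantage over a full Huffman pass comes from.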
Implementation of JPEG XS entropy encoding and decoding on FPGA
JPEG XS is the latest international standard for shallow (lightweight) compression published by the International Organization for Standardization (ISO); the standard was officially released in 2019. JPEG XS can be encoded and decoded on many kinds of devices, but there has been no published work on implementing the JPEG XS entropy codec on FPGAs. This paper briefly introduces JPEG XS coding, proposes a modular FPGA design for the entropy encoder and decoder, and parallelizes the algorithm according to the characteristics of FPGA processing, including low-latency and storage-space optimizations. The optimized scheme achieves encoding speeds of up to 4 coefficients per clock and decoding speeds of up to 2 coefficients per clock, a 75% reduction in encoding and decoding time. The maximum clock frequency is about 222.6 MHz for the entropy encoder and about 127 MHz for the entropy decoder. The FPGA-based design and implementation of JPEG XS entropy encoding and decoding provides a basis for subsequent implementation and optimization of the entire JPEG XS standard on FPGAs. To our knowledge, this is the first published design implementing the JPEG XS entropy encoding and decoding process on an FPGA, opening the way for effective application of the JPEG XS standard in more media.
Microarray Image Lossless Compression Using General Entropy Coders and Image Compression Standards
Lossless compression is still a challenging task for microarray images. This research proposes two algorithms that improve lossless compression efficiency for high-spatial-resolution microarray images using general entropy codecs, namely Huffman and arithmetic coders, and the JPEG 2000 image compression standard. Using the standards ensures that decoders will remain available for future applications of the images. Microarray images typically have a bit depth of 16. In proposed algorithm 1, each image's per-bit-plane entropy profile is calculated to automatically determine a threshold T that splits the bit-planes into foreground and background sub-images. T is initially set to 8 and is then updated to balance the average per-bit-plane entropies of the two segmented sub-images, improving the lossless compression results. The codecs are applied to the resulting sub-images individually. Proposed algorithm 2 is designed to increase the lossless compression efficiency of any unmodified JPEG 2000-compliant encoder while reducing side-information overhead: the pixel intensities of the sub-images produced by algorithm 1 are reindexed, reshaping their histograms, which is confirmed to give better lossless JPEG 2000 results than applying the encoder to the original image. Lossless JPEG 2000 performance on microarray images is also compared to JPEG-LS in particular. Experiments validate the methods on seven benchmark datasets, namely ApoA1, ISREC, Stanford, MicroZip, GEO, Arizona, and IBB. The average first-order entropy of these datasets is calculated and compared across the codecs, and the results are better than competing efforts in the literature.
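A toy version of the threshold-based bit-plane split in algorithm 1 might look like the following. The balancing criterion here is simplified to the first-order entropies of the two sub-images; the paper works with a per-bit-plane entropy profile and starts from T = 8.

```python
from collections import Counter
from math import log2

def entropy(values):
    """First-order (Shannon) entropy in bits per sample."""
    n = len(values)
    return -sum(c / n * log2(c / n) for c in Counter(values).values())

def balanced_split(pixels, bit_depth=16):
    """Split 16-bit pixels at bit position T into a foreground
    (high bit-planes) and background (low bit-planes) sub-image, and
    pick the T that best balances the two sub-images' entropies."""
    best = None
    for t in range(1, bit_depth):
        fg = [p >> t for p in pixels]           # high bit-planes
        bg = [p & ((1 << t) - 1) for p in pixels]  # low bit-planes
        gap = abs(entropy(fg) - entropy(bg))
        if best is None or gap < best[0]:
            best = (gap, t, fg, bg)
    return best[1], best[2], best[3]

# Toy 16-bit data: a few bright spots on a dark, noisy background.
pixels = [40000, 41000, 39500, 100, 120, 95, 110, 105]
t, fg, bg = balanced_split(pixels)
```

The split is lossless by construction: shifting the foreground back by T bits and OR-ing in the background reconstructs every pixel exactly.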
Privacy Protection in JPEG XS: A Lightweight Spatio-Color Scrambling Approach
This paper presents a lightweight JPEG XS coding scheme incorporating spatio-color scrambling for privacy protection. The proposed approach follows an Encryption-then-Compression (EtC) framework, maintaining compatibility with the JPEG XS standard. Prior to encoding, input images undergo scrambling operations, including line permutation, line reversal, and color permutation. Security analysis indicates that the scrambling technique provides a large key space, making brute-force attacks computationally challenging. Experimental results demonstrate that the proposed method achieves a rate-distortion (RD) performance nearly equivalent to conventional JPEG XS compression while enhancing visual security. Additionally, a rectangular block-based scrambling technique is explored, which offers a trade-off among low latency, reduced memory usage, and visual concealment performance. While real-time processing is possible with or without block-based scrambling, the block-based approach is particularly beneficial for applications that demand lower latency and reduced memory usage. The effectiveness of the proposed method is validated through simulations on 8K ultra-high-definition (UHD) images.
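The scrambling operations listed in this abstract (line permutation, line reversal, colour permutation) can be sketched as one key-driven transform. This is a toy illustration of the three operations, not the standardized, JPEG XS-compatible procedure of the paper.

```python
import random

def scramble(lines, key):
    """Spatio-colour scrambling of an image given as a list of lines,
    each line a list of (r, g, b) tuples: permute the lines, reverse
    a key-chosen subset of them, and permute the colour channels of
    each line, all driven by the private key."""
    rng = random.Random(key)            # private key = PRNG seed
    order = list(range(len(lines)))
    rng.shuffle(order)                  # line permutation
    out = []
    for i in order:
        line = lines[i]
        if rng.random() < 0.5:          # line reversal
            line = line[::-1]
        perm = [0, 1, 2]
        rng.shuffle(perm)               # colour permutation
        out.append([tuple(px[c] for c in perm) for px in line])
    return out
```

Because every operation is a permutation of existing samples, the image's value statistics are untouched, which is consistent with the abstract's claim of near-identical rate-distortion performance after compression.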
Image Statistics Preserving Encrypt-then-Compress Scheme Dedicated for JPEG Compression Standard
In this paper, the authors analyze in more detail an image encryption scheme, proposed in their earlier work, which preserves input image statistics and can be used with the JPEG compression standard. The encryption process exploits fast linear transforms parametrized with private keys and is carried out before the compression stage in a way that does not alter those statistical characteristics of the input image that are crucial for the subsequent compression. This makes the encryption transparent to the compression stage and lets the JPEG algorithm retain its full compression capabilities even though it operates on encrypted image data. The main advantage of the considered approach is that the JPEG algorithm can be used without any modifications as part of an encrypt-then-compress image processing framework. The paper includes a detailed mathematical model of the examined scheme, allowing theoretical analysis of the impact of the encryption step on the effectiveness of the compression process. A combinatorial and statistical analysis of the encryption process is also included, allowing its cryptographic strength to be evaluated. In addition, the paper considers several practical use-case scenarios with different characteristics of the compression and encryption stages. The final part of the paper presents additional experimental results on the general effectiveness of the scheme. They show that, for a wide range of compression ratios, the scheme performs comparably to the JPEG algorithm alone (that is, without the encryption stage) in terms of the quality measures of reconstructed images. Moreover, the statistical analysis, together with generally accepted quality measures for image cryptographic systems, confirms the high strength and efficiency of the scheme's encryption stage.
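The transparency property described in this abstract, encryption that the compressor cannot "see", can be illustrated with a much simpler encrypt-then-compress transform than the paper's key-parametrized linear transforms: a key-seeded permutation of 8 × 8 pixel blocks. Because JPEG codes 8 × 8 blocks independently, per-block statistics, and hence the compression ratio, are largely preserved. This is a related toy idea, not the authors' scheme.

```python
import random

def shuffle_blocks(img, w, h, key, bs=8):
    """Key-seeded permutation of bs x bs pixel blocks of a row-major
    grayscale image, applied before JPEG encoding. The same key
    regenerates the same permutation, so the receiver can invert it
    after decompression."""
    bw, bh = w // bs, h // bs
    order = list(range(bw * bh))
    random.Random(key).shuffle(order)   # private key = PRNG seed
    out = [0] * (w * h)
    for dst, src in enumerate(order):
        sx, sy = (src % bw) * bs, (src // bw) * bs
        dx, dy = (dst % bw) * bs, (dst // bw) * bs
        for r in range(bs):
            for c in range(bs):
                out[(dy + r) * w + dx + c] = img[(sy + r) * w + sx + c]
    return out
```

Block shuffling hides global structure but leaks intra-block content, which is exactly why the paper's richer, statistics-preserving linear transforms are needed for real cryptographic strength.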
Skipping Selected Steps of DWT Computation in Lossless JPEG 2000 for Improved Bitrates
In order to improve the bitrates of lossless JPEG 2000, we propose to modify the discrete wavelet transform (DWT) by skipping selected steps of its computation. We employ a heuristic to construct the skipped-steps DWT (SS-DWT) in an image-adaptive way and define fixed SS-DWT variants. For a large and diverse set of images, we find that SS-DWT significantly improves the bitrates of non-photographic images. From a practical standpoint, the most interesting results are obtained by applying entropy estimation of coding effects to select among the fixed SS-DWT variants. In this way we obtain a compression scheme that, as opposed to the general SS-DWT case, is compliant with the JPEG 2000 part 2 standard. It provides an average bitrate improvement of roughly 5% for the entire test set, while the overall compression time becomes only 3% greater than that of unmodified JPEG 2000. Bitrates of photographic and non-photographic images are improved by roughly 0.5% and 14%, respectively. At a significantly greater cost, by using a heuristic that selects the steps to be skipped based on the actual bitrate instead of an estimate, and by applying reversible denoising and lifting steps to SS-DWT, we attained bitrate improvements of up to about 17.5% for non-photographic images.
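The step-skipping and variant-selection ideas above can be sketched on a 1-D signal. The following is a toy analogue of SS-DWT using one level of the reversible 5/3 lifting transform, with each lifting step optionally skipped and the variant chosen by entropy estimation; it is not the paper's 2-D, multi-level construction.

```python
from collections import Counter
from math import log2

def entropy(vals):
    """First-order entropy in bits per sample."""
    n = len(vals)
    return -sum(c / n * log2(c / n) for c in Counter(vals).values())

def lifting_53(signal, skip_predict=False, skip_update=False):
    """One level of the reversible 5/3 lifting DWT on a 1-D signal of
    even length, with either lifting step optionally skipped."""
    even = signal[0::2]
    odd = signal[1::2]
    if not skip_predict:   # predict odds from neighbouring evens
        odd = [o - (even[i] + even[min(i + 1, len(even) - 1)]) // 2
               for i, o in enumerate(odd)]
    if not skip_update:    # update evens from neighbouring details
        even = [e + (odd[max(i - 1, 0)] + odd[min(i, len(odd) - 1)] + 2) // 4
                for i, e in enumerate(even)]
    return even, odd

def pick_variant(signal):
    """Entropy-estimate each skip variant and keep the cheapest,
    mirroring the selection among fixed SS-DWT variants."""
    variants = [(sp, su) for sp in (False, True) for su in (False, True)]
    return min(variants,
               key=lambda v: entropy(sum(lifting_53(signal, *v), [])))
```

On a smooth ramp the predict step zeroes nearly all detail coefficients, so variants that keep it win the entropy comparison, which is the mechanism the paper exploits image-adaptively.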