10,162 result(s) for "Image compression."
Image and video compression: fundamentals, techniques, and applications
Preface: This book is intended primarily for courses in image compression techniques for undergraduate through postgraduate students, research scholars, and engineers working in the field. It presents the basic concepts and technologies in a student-friendly manner. The major techniques in image compression are explained with informative illustrations, and the concepts are developed from the basics. Practical implementation is demonstrated with MATLAB.
Image compression and encryption algorithm based on uniform non-degeneracy chaotic system and fractal coding
This paper focuses on the design of chaotic image compression encryption algorithms. Firstly, we design a uniform non-degenerate chaotic system based on nonlinear filters and the feed-forward and feed-back structure. Theoretical and experimental analyses indicate that the system can avoid the drawbacks of the existing chaotic systems, such as chaos degradation, uneven trajectory distribution, and weak chaotic behavior. In addition, our chaotic system can produce chaotic sequences with good pseudo-random characteristics. Then, we propose a fractal image compression algorithm based on adaptive horizontal or vertical (HV) partition by improving the baseline HV partition and the time-consuming global matching algorithm. The algorithm does not need to implement time-consuming global matching operations. In addition, analysis results demonstrate that our fractal image compression algorithm can reconstruct the original image with high quality under ultra-high compression ratios. Finally, to protect the confidentiality of images, we propose a chaotic fractal image compression and encryption algorithm by using our chaotic system and fractal image compression algorithm. The algorithm achieves excellent diffusion and confusion abilities without using the hash value of plain images. Therefore, it avoids the failure of decryption caused by the tampering of hash value during the transmission process, and can well resist differential attacks and chosen-ciphertext attacks. In addition, simulation results show the algorithm is efficient and robust.
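The abstract does not give the exact form of the uniform non-degenerate chaotic system, so as a rough illustration of the diffusion step only, the sketch below uses the classic logistic map as a stand-in keystream generator; the map, its parameters, and the XOR diffusion are assumptions, not the paper's construction.

```python
import numpy as np

def logistic_keystream(n, x0=0.61, r=3.99):
    """Generate n pseudo-random bytes from the logistic map x -> r*x*(1-x).
    A stand-in for the paper's uniform non-degenerate chaotic system."""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_encrypt(img, x0=0.61):
    """Diffuse pixel values by XOR with a chaotic keystream.
    Applying the same function again decrypts (XOR is its own inverse)."""
    flat = img.ravel()
    ks = logistic_keystream(flat.size, x0)
    return (flat ^ ks).reshape(img.shape)
```

Because XOR with the same keystream is symmetric, `xor_encrypt(xor_encrypt(img))` recovers the original image; the key is the initial condition `x0`.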
Remote Sensing Image Compression Based on the Multiple Prior Information
Learned image compression has achieved a series of breakthroughs for natural images, but little of the literature focuses on high-resolution remote sensing image (HRRSI) datasets. This paper focuses on designing a learned lossy image compression framework for compressing HRRSIs. Considering the local and non-local redundancy contained in HRRSIs, a mixed hyperprior network is designed to explore both kinds of redundancy in order to improve the accuracy of entropy estimation. In detail, a transformer-based hyperprior and a CNN-based hyperprior are fused for entropy estimation. Furthermore, to reduce the mismatch between training and testing, a three-stage training strategy is introduced to refine the network: the entire network is first trained, and then some sub-networks are fixed while the others are trained. To evaluate the effectiveness of the proposed compression algorithm, experiments are conducted on an HRRSI dataset. The results show that the proposed algorithm achieves comparable or better compression performance than traditional and learned image compression algorithms such as Joint Photographic Experts Group (JPEG) and JPEG2000. At a similar or lower bitrate, the proposed algorithm achieves a PSNR about 2 dB higher than that of JPEG2000.
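Hyperprior-style learned codecs of this kind are typically trained on a rate-distortion objective L = R + λD, where the rate R is estimated in bits per pixel from the entropy model's likelihoods and D is the reconstruction error. A minimal numpy sketch of that objective; the function name and interface are hypothetical, not the authors' code.

```python
import numpy as np

def rd_loss(likelihoods, x, x_hat, lam, num_pixels):
    """Rate-distortion objective L = R + lambda * D used to train
    hyperprior-style learned codecs. `likelihoods` holds the entropy
    model's probability for each quantized latent symbol."""
    # Rate: estimated bits per pixel from the entropy model
    bpp = -np.sum(np.log2(likelihoods)) / num_pixels
    # Distortion: mean squared error between input and reconstruction
    mse = np.mean((x - x_hat) ** 2)
    return bpp + lam * mse, bpp, mse
```

Sweeping λ trades bitrate against PSNR and traces out the rate-distortion curve on which codecs are compared.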
Towards an Efficient Remote Sensing Image Compression Network with Visual State Space Model
In the past few years, deep learning has achieved remarkable advancements in the area of image compression. Remote sensing image compression networks focus on enhancing the similarity between the input and reconstructed images, effectively reducing the storage and bandwidth requirements for high-resolution remote sensing images. As the network's effective receptive field (ERF) expands, it can capture more feature information across the remote sensing image, thereby reducing spatial redundancy and improving compression efficiency. However, most of these learned image compression (LIC) techniques are CNN-based or transformer-based and often fail to balance the global ERF and computational complexity optimally. To alleviate this issue, we propose a learned remote sensing image compression network with a visual state space model, named VMIC, to achieve a better trade-off between computational complexity and performance. Specifically, instead of stacking small convolution kernels or heavy self-attention mechanisms, we employ a 2D bidirectional selective scan mechanism: every element within the feature map aggregates data from multiple spatial positions, establishing a globally effective receptive field with linear computational complexity. We extend it to an omni-selective scan for the global-spatial correlations within our Channel and Global Context Entropy Model (CGCM), enabling the integration of spatial and channel priors to minimize redundancy across slices. Experimental results demonstrate that the proposed method achieves a superior trade-off between rate-distortion performance and complexity. Furthermore, in comparison to traditional codecs and learned image compression algorithms, our model achieves BD-rate reductions of −4.48% and −9.80% over the state-of-the-art VTM on the AID and NWPU VHR-10 datasets, respectively, as well as −6.73% and −7.93% on the panchromatic and multispectral images of the WorldView-3 remote sensing dataset.
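BD-rate figures like the −4.48% above come from the Bjøntegaard delta metric: fit log-rate as a cubic polynomial of quality (PSNR), integrate both fits over the overlapping quality range, and convert the average log-rate gap into a percentage bitrate difference. A minimal sketch of the standard computation, not the authors' code:

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta rate: average bitrate difference (%) of the
    test codec vs. the anchor over the overlapping PSNR range.
    Negative values mean the test codec needs fewer bits."""
    lr_a, lr_t = np.log(rate_anchor), np.log(rate_test)
    # Cubic fit of log-rate as a function of PSNR
    p_a = np.polyfit(psnr_anchor, lr_a, 3)
    p_t = np.polyfit(psnr_test, lr_t, 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    # Integrate both fits over the common PSNR interval
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_diff) - 1.0) * 100.0
```

For example, a codec that matches the anchor's PSNR at exactly half the bitrate everywhere yields a BD-rate of −50%.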
Combining Image Space and q-Space PDEs for Lossless Compression of Diffusion MR Images
Diffusion MRI is a modern neuroimaging modality with a unique ability to acquire microstructural information by measuring water self-diffusion at the voxel level. However, it generates huge amounts of data, resulting from a large number of repeated 3D scans. Each volume samples a location in q-space, indicating the direction and strength of a diffusion sensitizing gradient during the measurement. This captures detailed information about the self-diffusion and the tissue microstructure that restricts it. Lossless compression with GZIP is widely used to reduce the memory requirements. We introduce a novel lossless codec for diffusion MRI data. It reduces file sizes by more than 30% compared to GZIP and also beats lossless codecs from the JPEG family. Our codec builds on recent work on lossless PDE-based compression of 3D medical images, but additionally exploits smoothness in q-space. We demonstrate that, compared to using only image space PDEs, q-space PDEs further improve compression rates. Moreover, implementing them with finite element methods and a custom acceleration significantly reduces computational expense. Finally, we show that our codec clearly benefits from integrating subject motion correction and slightly from optimizing the order in which the 3D volumes are coded.
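PDE-based codecs of this family store a sparse set of known pixels and let a diffusion process fill in the rest at decompression time. Below is a minimal sketch of the image-space idea using homogeneous (Laplace) diffusion with Jacobi relaxation; the paper's codec uses more sophisticated PDEs, finite element methods, and q-space smoothness, none of which appear here.

```python
import numpy as np

def diffusion_reconstruct(known, mask, iters=2000):
    """Homogeneous-diffusion inpainting: the decompression step of a
    PDE-based codec. Pixels where mask is True stay fixed at their
    stored values; all others are filled by iterating the discrete
    Laplace equation (each pixel -> average of its 4 neighbors)."""
    u = np.where(mask, known, known[mask].mean())  # flat initial guess
    for _ in range(iters):
        # 4-neighborhood average with replicated borders
        p = np.pad(u, 1, mode='edge')
        avg = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
        u = np.where(mask, known, avg)  # re-impose the known pixels
    return u
```

The encoder's job is then to choose which pixels to store so that this reconstruction is as faithful (or, with residual coding, exactly lossless) as possible.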
Scalable image compression algorithms with small and fixed-size memory
The SPIHT image compression algorithm is characterized by low computational complexity, good performance, and the production of a quality-scalable bitstream that can be decoded at several bit-rates, with image quality improving as more bits are received. However, it suffers from enormous memory consumption, since it maintains linked lists of size about 2–3 times the image size. In addition, it does not exploit the multi-resolution feature of the wavelet transform to produce a resolution-scalable bitstream from which the image can be decoded at numerous resolutions (sizes). The Single List SPIHT (SLS) algorithm resolved the high memory problem of SPIHT by using only one list of fixed size, equal to just 1/4 of the image size, plus state marker bits averaging 2.25 bits/pixel. This paper introduces two new algorithms based on SLS. Like SLS, the first algorithm produces a quality-scalable bitstream, but with lower time complexity and better performance than SLS. The second algorithm, which is the major contribution of this work, upgrades the first to produce a bitstream that is both quality- and resolution-scalable. As such, it is well suited to the heterogeneous nature of modern internet users, whose capabilities and preferences differ in terms of image quality and resolution.
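The resolution scalability exploited by the second algorithm comes from the multi-resolution structure of the wavelet transform: decoding only the coarsest (LL) subband already yields a reduced-size image, and the detail subbands refine it to full size. A toy one-level 2D Haar transform illustrating this; SPIHT/SLS additionally code the coefficients bitplane by bitplane, which is omitted here.

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar wavelet transform (even-sized input).
    Returns the LL (approximation), LH, HL, HH (detail) subbands,
    each a quarter of the image area."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh
```

A resolution-scalable decoder that stops after the LL coefficients obtains a half-resolution preview; repeating the transform on LL gives coarser levels.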
Multispectral Transforms Using Convolution Neural Networks for Remote Sensing Multispectral Image Compression
A multispectral image is a third-order tensor: a three-dimensional array with one spectral dimension and two spatial dimensions. Multispectral image compression can therefore exploit tensor decomposition (TD) methods such as Nonnegative Tucker Decomposition (NTD). Unfortunately, TD suffers from high computational complexity and cannot be used in on-board, low-complexity settings (e.g., multispectral cameras) where hardware resources and power are limited. Here, we propose a low-complexity compression approach for multispectral images based on convolutional neural networks (CNNs) combined with NTD. We construct a new spectral transform using CNNs, which reduce the three-dimensional spectral tensor from a large-scale to a small-scale version; the NTD is then applied only to the small-scale tensor to improve computational efficiency. We obtain the optimized small-scale spectral tensor by minimizing the difference between the original and reconstructed spectral tensors in self-learning CNNs. Then, NTD is applied to the optimized spectral tensor in the DCT domain to obtain high compression performance. We experimentally validated the proposed method on multispectral images. Compared to applying NTD to the original spectral tensor without the CNN-based spectral transform at the same compression bit-rates, the reconstructed image quality is improved. Compared with the full NTD-based method, computational efficiency is markedly improved with only a small sacrifice in PSNR, without visibly affecting image quality.
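As a rough illustration of Tucker-style tensor compression, the sketch below computes a truncated higher-order SVD with plain numpy: the tensor is stored as a small core plus one factor matrix per mode. It drops the nonnegativity constraint and the CNN spectral transform of the actual method, so it is an unconstrained stand-in, not the paper's NTD.

```python
import numpy as np

def unfold(t, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def hosvd_compress(t, ranks):
    """Truncated higher-order SVD (Tucker): per-mode orthogonal
    factors from the leading left singular vectors, then project
    the tensor onto them to get a small core."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(t, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = t
    for mode, u in enumerate(factors):
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def hosvd_reconstruct(core, factors):
    """Expand the core back through the factor matrices."""
    t = core
    for mode, u in enumerate(factors):
        t = np.moveaxis(np.tensordot(u, np.moveaxis(t, mode, 0), axes=1), 0, mode)
    return t
```

Compression comes from the core and factors being much smaller than the original tensor when the multilinear ranks are low.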
An optimal adaptive reweighted sampling-based adaptive block compressed sensing for underwater image compression
The use of Block Compressed Sensing (BCS) as an alternative to conventional Compressed Sensing (CS) in image sampling and acquisition has gained attention due to its potential benefits. However, BCS can suffer from blocking artifacts and blur in the reconstructed images, which degrade overall image quality. To address these issues and improve reconstruction performance, Adaptive Block Compressed Sensing (ABCS) techniques can be used: ABCS minimizes the blur and artifacts that occur during reconstruction by adaptively selecting samples from different image blocks. To further enhance sampling efficiency and overall performance in underwater image compression, this paper proposes adaptive reweighted sampling-based ABCS (ARS-ABCS) in the Fast Haar Wavelet Transform (FHWT) domain. The reweighting process allocates more samples to areas where reconstruction quality is low or artifacts are prevalent, improving image reconstruction in a targeted manner. Performance is measured in terms of Peak Signal-to-Noise Ratio (PSNR), Structural SIMilarity index (SSIM), Normalized Cross-Correlation (NCC), and Normalized Absolute Error (NAE). The results demonstrate that the proposed ARS-ABCS achieves a 1.5 to 5 dB increase in PSNR over other non-weighted adaptive schemes. It produces space savings of 60 to 70% while using only around 30% of the total samples in the image. The SSIM and NCC values obtained are close to 1, with low NAE values.
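The core adaptive idea can be illustrated by allocating each block's measurement rate in proportion to its variance, so detailed blocks get more samples at a fixed average rate. The paper's reweighting additionally uses reconstruction quality across iterations and works in the FHWT domain, which this simplified sketch (hypothetical function and parameters) does not model.

```python
import numpy as np

def adaptive_rates(img, block=8, base_rate=0.3):
    """Variance-based measurement allocation for adaptive block
    compressed sensing: blocks with more detail (higher variance)
    receive proportionally more measurements, while the average
    sampling rate stays at base_rate (before clipping)."""
    h, w = img.shape
    bh, bw = h // block, w // block
    # Split the image into a (bh, bw) grid of block x block tiles
    blocks = img[:bh * block, :bw * block].reshape(bh, block, bw, block)
    var = blocks.var(axis=(1, 3)) + 1e-12
    rates = base_rate * var * var.size / var.sum()  # mean == base_rate
    return np.clip(rates, 0.01, 1.0)
```

Each block would then be sampled with `round(rate * block * block)` random measurements, concentrating the budget where reconstruction is hardest.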
Deep learning-assisted medical image compression challenges and opportunities: systematic review
Over the preceding decade, there has been a discernible surge in the prominence of artificial intelligence, marked by the development of various methodologies, among which deep learning emerges as particularly promising. Its capacity to learn intricate feature representations from data has catalyzed new approaches across many domains. In the face of the exponential growth of digital medical image data, the need for effective image compression methods has become increasingly pronounced: such methods preserve bandwidth and storage resources, ensuring the efficient transmission of data within medical applications. The critical nature of medical image compression underscores the need to confront the challenges posed by this escalating volume of data. This review paper undertakes a comprehensive examination of medical image compression, with a predominant focus on research-driven deep learning techniques. It covers a spectrum of approaches, encompassing the combination of deep learning with conventional compression algorithms and the application of deep learning to enhance compression quality. The review also explicates these fundamental concepts, elucidating their characteristics, merits, and limitations.
A joint color image encryption and compression scheme based on hyper-chaotic system
To address the low security and limited compression performance of existing joint image encryption and compression techniques, an improved joint compression-and-encryption algorithm is proposed. The algorithm employs a discrete cosine transform dictionary to sparsely represent the color image, and combines this with an encryption algorithm based on a hyper-chaotic system to achieve compression and encryption simultaneously. Experimental analysis shows that the proposed algorithm performs well in both security and compression.
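The compression half of such schemes rests on sparse representation in a DCT dictionary: transform the image, keep only the largest coefficients, and reconstruct from them. A minimal sketch for a square grayscale image; the thresholding rule and the `keep` parameter are illustrative assumptions, and the hyper-chaotic encryption stage is not shown.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0] *= np.sqrt(1.0 / n)
    c[1:] *= np.sqrt(2.0 / n)
    return c

def sparsify(img, keep=0.1):
    """Sparse representation in the 2D DCT dictionary: transform a
    square image, zero all but the largest `keep` fraction of
    coefficients, and reconstruct from the sparse representation."""
    d = dct_matrix(img.shape[0])
    coeffs = d @ img @ d.T                        # forward 2D DCT
    thresh = np.quantile(np.abs(coeffs), 1 - keep)
    coeffs[np.abs(coeffs) < thresh] = 0.0         # keep largest coeffs
    return d.T @ coeffs @ d                        # inverse 2D DCT
```

In a joint scheme, the surviving sparse coefficients (rather than the pixels) would then be permuted and diffused by the chaotic encryption stage before transmission.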