3,280 result(s) for "image filtering"
Quantum implementation of the classical guided image filtering algorithm
Image filtering involves the application of window operations that perform valuable functions, such as noise removal, image enhancement, and high dynamic range (HDR) compression. Guided image filtering is a new type of explicit image filter with multiple advantages: it can effectively remove noise while preserving edge details, and it can be used in a variety of scenarios. Here, we report a quantum implementation of the guided image filtering algorithm based on the novel enhanced quantum representation (NEQR) model and design the corresponding quantum circuit. We find that the speed and quality of filtering improve significantly due to the quantization, and that the time complexity is reduced exponentially from O(2^(2q)) to O(q^2).
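Since the quantum circuit builds on the classical guided image filter, a minimal NumPy/SciPy sketch of that classical algorithm may be useful context; the window radius and regularization eps below are illustrative choices, not parameters from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=4, eps=1e-3):
    """Classical guided image filter: smooth p using the structure of the guide I."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x.astype(np.float64), size)
    mean_I, mean_p = mean(I), mean(p)
    # Fit a per-window linear model q = a * I + b (ridge-regularized by eps).
    cov_Ip = mean(I * p) - mean_I * mean_p
    var_I = mean(I * I) - mean_I ** 2
    a = cov_Ip / (var_I + eps)   # eps trades edge preservation against smoothing
    b = mean_p - a * mean_I
    # Average the per-window coefficients, then evaluate the linear model.
    return mean(a) * I + mean(b)

# Example: denoise an image using itself as the guide.
noisy = np.random.rand(128, 128)
smoothed = guided_filter(noisy, noisy)
```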
Image reconstruction method for incomplete CT projection based on self-guided image filtering
In some fields of medical diagnosis or industrial nondestructive testing, it is difficult to obtain complete computed tomography (CT) data due to limitations on radiation dose or other factors, so image reconstruction from incomplete projection data is the focus of this paper. A new image reconstruction model based on a self-guided image filtering (SGIF) term is proposed for few-view and segmental limited-angle (SLA) CT reconstruction, and the alternating direction method (ADM) is used to solve this model; for simplicity, we call it the ADM-SGIF method. The key idea of the ADM-SGIF method is to use the reconstructed image itself as a reference and exploit its structural features to guide CT reconstruction. This method can effectively preserve image structures and remove shading artifacts. To validate the effectiveness of the proposed reconstruction method, we conduct digital phantom and real CT data experiments. The results indicate that the ADM-SGIF method outperforms competing methods, including total variation (TV), relative total variation (RTV), and L0-norm minimization solved by ADM (ADM-L0), in both subjective and objective evaluations.
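The abstract does not spell out the ADM formulation, so the following is only a hedged toy sketch of the key idea, interleaving a data-fidelity gradient step for a hypothetical system matrix A with a filtering step guided by the current reconstruction itself; it is not the paper's ADM-SGIF algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def self_guided_smooth(x, radius=3, eps=1e-2):
    """One pass of guided filtering with x as its own guide (edge-preserving)."""
    size = 2 * radius + 1
    m = lambda z: uniform_filter(z, size)
    mx = m(x)
    var = np.maximum(m(x * x) - mx ** 2, 0)
    a = var / (var + eps)
    b = (1 - a) * mx
    return m(a) * x + m(b)

# Toy stand-in for incomplete-data reconstruction: A is a hypothetical,
# underdetermined system matrix and b the measured projections.
rng = np.random.default_rng(0)
x_true = rng.random((32, 32))
A = rng.random((400, 32 * 32)) / 32
b = A @ x_true.ravel()
x = np.zeros((32, 32))
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(50):
    # Data-fidelity gradient step toward A x ≈ b ...
    x = x + step * (A.T @ (b - A @ x.ravel())).reshape(32, 32)
    # ... interleaved with filtering guided by the current estimate itself.
    x = self_guided_smooth(x)
```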
Despeckling Algorithm for Removing Speckle Noise from Ultrasound Images
Ultrasound (US) imaging can be used to examine patients of various ages; however, speckle noise is generated in the process of acquiring a US image. The speckle noise inhibits physicians from accurately examining lesions, so a speckle noise removal method is an essential technology. To enhance speckle noise elimination, we propose a novel algorithm that exploits the characteristics of speckle noise together with filtering methods based on speckle reducing anisotropic diffusion (SRAD) filtering, the discrete wavelet transform (DWT) using symmetry characteristics, weighted guided image filtering (WGIF), and gradient domain guided image filtering (GDGIF). The SRAD filter is used for preprocessing because it can be applied directly to a medical US image containing speckle noise without log-compression. The wavelet domain has the advantage of suppressing additive noise, so a homomorphic transformation is utilized to convert the multiplicative noise into additive noise. After a two-level DWT decomposition, GDGIF and WGIF are applied to suppress the residual noise of the SRAD-filtered image, reducing noise in the seven high-frequency sub-band images and the one low-frequency sub-band image, respectively. Finally, a noise-free image is obtained through the inverse DWT and an exponential transform. The proposed algorithm exhibits excellent speckle noise elimination and edge preservation compared with conventional denoising methods.
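A rough skeleton of such a homomorphic DWT pipeline is sketched below; the paper's SRAD, GDGIF, and WGIF stages are replaced with simple stand-ins (a median pre-filter and wavelet soft-thresholding), so only the overall log -> DWT -> sub-band filtering -> inverse DWT -> exponential structure is illustrated.

```python
import numpy as np
import pywt
from scipy.ndimage import median_filter

def despeckle(us_image, wavelet="db4", thresh=0.05):
    """Skeleton of a homomorphic wavelet despeckling pipeline (simplified stand-ins)."""
    pre = median_filter(us_image.astype(np.float64), size=3)  # stand-in for the SRAD pre-filter
    log_img = np.log1p(pre)                                   # multiplicative -> additive noise
    coeffs = pywt.wavedec2(log_img, wavelet, level=2)         # two-level DWT
    # Soft-threshold the high-frequency sub-bands (stand-in for GDGIF/WGIF);
    # the approximation sub-band is left untouched in this sketch.
    cleaned = [coeffs[0]]
    for detail in coeffs[1:]:
        cleaned.append(tuple(pywt.threshold(d, thresh, mode="soft") for d in detail))
    return np.expm1(pywt.waverec2(cleaned, wavelet))          # inverse DWT + exponential transform

speckled = np.random.rand(256, 256)
clean = despeckle(speckled)
```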
Guided filter-based multi-scale super-resolution reconstruction
Learning-based super-resolution reconstruction methods input a low-resolution image into a network and learn a non-linear mapping between low resolution and high resolution through the network. In this study, a multi-scale super-resolution reconstruction network is used to fuse the effective features of images at different scales, and the non-linear mapping between low resolution and high resolution is learned from coarse to fine to realise the end-to-end super-resolution reconstruction task. The loss of some features of the low-resolution image negatively affects the quality of the reconstructed image. To address the problem of incomplete image features in the low-resolution input, this study adopts a multi-scale super-resolution reconstruction method based on guided image filtering. The high-resolution image reconstructed by the multi-scale super-resolution network and the real high-resolution image are merged by the guided image filter to generate a new image, and this newly generated image is used for secondary training of the multi-scale super-resolution reconstruction network. The newly generated image effectively compensates for the details and texture information lost in the low-resolution image, thereby improving the quality of the super-resolution reconstruction. Compared with existing super-resolution reconstruction schemes, both the accuracy and speed of super-resolution reconstruction are improved.
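The merging step can be pictured with the same minimal guided filter as above; note that which of the two images acts as the guide and which as the filtering input is an assumption here, as the abstract does not say.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=8, eps=1e-4):
    """Minimal guided filter (same formulation as sketched earlier)."""
    size = 2 * radius + 1
    m = lambda x: uniform_filter(x.astype(np.float64), size)
    a = (m(I * p) - m(I) * m(p)) / (m(I * I) - m(I) ** 2 + eps)
    b = m(p) - a * m(I)
    return m(a) * I + m(b)

# sr_out: HR image predicted by the multi-scale network; hr_true: real HR image.
sr_out, hr_true = np.random.rand(128, 128), np.random.rand(128, 128)
# Merge the two images; the merged result would then serve as the target for a
# second round of training (guide/input roles are assumed, not stated).
merged = guided_filter(sr_out, hr_true)
```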
Dual Convolutional Neural Networks for Low-Level Vision
We propose a general dual convolutional neural network (DualCNN) for low-level vision problems, e.g., super-resolution, edge-preserving filtering, deraining, and dehazing. These problems usually involve estimating two components of the target signals: structures and details. Motivated by this, we design the proposed DualCNN to have two parallel branches, which respectively recover the structures and the details in an end-to-end manner. The recovered structures and details can generate the desired signals according to the formation model for each particular application. The DualCNN is a flexible framework for low-level vision tasks and can be easily incorporated into existing CNNs. Experimental results show that the DualCNN can be effectively applied to numerous low-level vision tasks with favorable performance against the state-of-the-art methods that have been specially designed for each individual task.
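A minimal PyTorch sketch of such a two-branch layout is shown below; the branch depths, widths, and the simple additive formation model are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class DualCNN(nn.Module):
    """Two parallel branches: a shallow one for structures, a deeper one for details."""
    def __init__(self, channels=3, width=64):
        super().__init__()
        def branch(depth):
            layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
            layers += [nn.Conv2d(width, channels, 3, padding=1)]
            return nn.Sequential(*layers)
        self.structure_net = branch(4)  # recovers the smooth structure component
        self.detail_net = branch(8)     # recovers the residual detail component

    def forward(self, x):
        structure = self.structure_net(x)
        detail = self.detail_net(x)
        # Illustrative formation model: output = structure + detail.
        return structure + detail, structure, detail

model = DualCNN()
out, structure, detail = model(torch.randn(1, 3, 64, 64))
```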
Image Encryption Algorithm Based on Plane-Level Image Filtering and Discrete Logarithmic Transform
Image encryption is an effective way to protect image data. However, existing image encryption algorithms are still unable to strike a good balance between security and efficiency. To overcome the shortcomings of these algorithms, an image encryption algorithm based on plane-level image filtering and discrete logarithmic transformation (IEA-IF-DLT) is proposed. By utilizing the hash value more rationally, the proposed IEA-IF-DLT avoids the overhead caused by repeated generation of chaotic sequences and further improves encryption efficiency through plane-level and three-dimensional (3D) encryption operations. Because common modular addition and XOR operations are vulnerable to differential attacks, IEA-IF-DLT additionally includes a discrete logarithmic transformation to boost security. In IEA-IF-DLT, the plain image is first transformed into a 3D image, and then three rounds of plane-level permutation, plane-level pixel filtering, and 3D chaotic image superposition are performed. Next, after a discrete logarithmic transformation, a random pixel swapping is conducted to obtain the cipher image. To demonstrate the superiority of IEA-IF-DLT, we compared it with several state-of-the-art algorithms. The test and analysis results show that IEA-IF-DLT not only has better security performance but also exhibits significant efficiency advantages.
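The abstract does not give the parameters of the discrete logarithmic transformation, so the toy sketch below only illustrates what a discrete-log substitution over byte values can look like; the prime 257 and primitive root 3 are arbitrary illustrative choices, and this is not the paper's IEA-IF-DLT.

```python
import numpy as np

# Toy discrete-logarithm substitution over GF(257): 3 is a primitive root mod 257,
# so v -> dlog_3(v + 1) is a bijection on the byte values 0..255.
P, G = 257, 3
dlog = np.zeros(P, dtype=np.int64)
x = 1
for k in range(1, P):
    x = (x * G) % P
    dlog[x] = k  # dlog[G**k mod P] = k

def discrete_log_transform(pixels):
    """Apply the discrete-log substitution to an array of 8-bit pixel values."""
    return (dlog[pixels.astype(np.int64) + 1] % 256).astype(np.uint8)

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
cipher_like = discrete_log_transform(img)
```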
Uncooled Thermal Camera Calibration and Optimization of the Photogrammetry Process for UAV Applications in Agriculture
The acquisition, processing, and interpretation of thermal images from unmanned aerial vehicles (UAVs) is becoming a useful source of information for agronomic applications because of the higher temporal and spatial resolution of these products compared with those obtained from satellites. However, due to their low payload capacity, UAVs need to mount light, uncooled thermal cameras in which the microbolometer is not stabilized at a constant temperature, which makes the camera precision low for many applications. Additionally, the low contrast of the thermal images makes the photogrammetry process inaccurate, which results in large errors in the generation of orthoimages. In this research, we propose the use of new calibration algorithms, based on neural networks, which take the sensor temperature and the digital response of the microbolometer as input data. In addition, we evaluate the use of the Wallis filter for improving the quality of the photogrammetry process using structure-from-motion software. With the proposed calibration algorithm, the measurement accuracy improved from 3.55 °C with the original camera configuration to 1.37 °C. The implementation of the Wallis filter increases the number of tie-points from 58,000 to 110,000 and decreases the total positioning error from 7.1 m to 1.3 m.
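For context, a simplified Wallis-style local contrast adjustment can be sketched as below; the window size, target statistics, and weighting factors are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis(img, win=31, target_mean=127.0, target_std=50.0, b=0.6, c=0.8):
    """Simplified Wallis filter: push local mean/std toward target values."""
    f = img.astype(np.float64)
    m = uniform_filter(f, win)                                        # local mean
    s = np.sqrt(np.maximum(uniform_filter(f * f, win) - m * m, 0.0))  # local std
    gain = c * target_std / (c * s + (1.0 - c) * target_std + 1e-6)
    return (f - m) * gain + b * target_mean + (1.0 - b) * m

# Enhance the contrast of a flat thermal frame before tie-point matching.
frame = np.random.rand(512, 640) * 30 + 100
enhanced = wallis(frame)
```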
A Fast Approximation of the Bilateral Filter Using a Signal Processing Approach
The bilateral filter is a nonlinear filter that smoothes a signal while preserving strong edges. It has demonstrated great effectiveness for a variety of problems in computer vision and computer graphics, and fast versions have been proposed. Unfortunately, little is known about the accuracy of such accelerations. In this paper, we propose a new signal-processing analysis of the bilateral filter which complements the recent studies that analyzed it as a PDE or as a robust statistical estimator. The key to our analysis is to express the filter in a higher-dimensional space where the signal intensity is added to the original domain dimensions. Importantly, this signal-processing perspective allows us to develop a novel bilateral filtering acceleration using downsampling in space and intensity. This affords a principled expression of accuracy in terms of bandwidth and sampling. The bilateral filter can be expressed as linear convolutions in this augmented space followed by two simple nonlinearities. This allows us to derive criteria for downsampling the key operations and achieving important acceleration of the bilateral filter. We show that, for the same running time, our method is more accurate than previous acceleration techniques. Typically, we are able to process a 2 megapixel image using our acceleration technique in less than a second, and have the result be visually similar to the exact computation that takes several tens of minutes. The acceleration is most effective with large spatial kernels. Furthermore, this approach extends naturally to color images and cross bilateral filtering.
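A compact sketch of the underlying idea, splatting the image into a coarse (space x intensity) grid, blurring it with a Gaussian, and slicing back at full resolution, is given below; the grid resolution and padding choices are illustrative, and this is a simplified rendition rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def fast_bilateral(img, sigma_s=16.0, sigma_r=0.1, pad=2):
    """Approximate bilateral filter via a downsampled (y, x, intensity) grid."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    h, w = img.shape
    gh = int((h - 1) / sigma_s) + 1 + 2 * pad
    gw = int((w - 1) / sigma_s) + 1 + 2 * pad
    gd = int((hi - lo) / sigma_r) + 1 + 2 * pad
    data = np.zeros((gh, gw, gd))
    weight = np.zeros((gh, gw, gd))
    # Splat: accumulate intensities and weights into the nearest grid cells.
    yy, xx = np.mgrid[0:h, 0:w]
    gy = (yy / sigma_s + pad + 0.5).astype(int)
    gx = (xx / sigma_s + pad + 0.5).astype(int)
    gz = ((img - lo) / sigma_r + pad + 0.5).astype(int)
    np.add.at(data, (gy, gx, gz), img)
    np.add.at(weight, (gy, gx, gz), 1.0)
    # Blur: a small Gaussian in the coarse space replaces the full-resolution kernel.
    data = gaussian_filter(data, 1.0)
    weight = gaussian_filter(weight, 1.0)
    # Slice: trilinear interpolation back at full resolution, then normalize.
    coords = np.stack([yy / sigma_s + pad, xx / sigma_s + pad, (img - lo) / sigma_r + pad])
    return map_coordinates(data, coords, order=1) / np.maximum(
        map_coordinates(weight, coords, order=1), 1e-8)

result = fast_bilateral(np.random.rand(256, 256))
```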
High-Noise Grayscale Image Denoising Using an Improved Median Filter for the Adaptive Selection of a Threshold
Grayscale image processing is a key research area in computer vision and image analysis, where image quality and visualization can be seriously damaged by high-density salt-and-pepper noise. A traditional median filter may preserve detail poorly under strong noise, and its handling of different noise characteristics depends strongly on fixed settings and offers rather weak robustness. In order to reduce the effect of high-density salt-and-pepper noise on image quality when processing high-noise grayscale images, an improved two-dimensional maximum Shannon entropy median filter (TSETMF) is proposed for the adaptive selection of a threshold, enhancing filter performance while stably and effectively retaining image details. The framework of the proposed improved TSETMF algorithm is designed in detail. Noise is filtered by automatically partitioning the window size, with the threshold value calculated adaptively using two-dimensional maximum Shannon entropy. The theoretical model is verified and analyzed through comparative experiments on three classical grayscale images. The experimental results demonstrate that the proposed improved TSETMF algorithm exhibits better processing performance than the traditional filter, with stronger suppression of high-density noise and better denoising stability. The stronger ability to process high-density noise is demonstrated by a higher peak signal-to-noise ratio (PSNR) of 24.97 dB at a 95% noise density on the classical Lena grayscale image. The better denoising stability, for noise densities from 5% to 95%, is demonstrated by a minor decline in PSNR of approximately 10.78% relative to a PSNR of 23.10 dB on the classical Cameraman grayscale image. Furthermore, the approach can be extended to achieve stronger noise filtering and stability when processing high-density salt-and-pepper noise in grayscale images.
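The paper's entropy-based threshold selection is its core contribution and is not reproduced here; the sketch below only illustrates the common underlying pattern of detecting suspected impulse pixels and replacing just those with a local median.

```python
import numpy as np
from scipy.ndimage import median_filter

def selective_median(img, low=0, high=255, size=3):
    """Replace only suspected salt-and-pepper pixels with the local median."""
    noisy_mask = (img == low) | (img == high)  # crude impulse detector
    med = median_filter(img, size=size)
    out = img.copy()
    out[noisy_mask] = med[noisy_mask]
    return out

# Corrupt 30% of the pixels with salt-and-pepper noise, then restore.
img = np.random.randint(1, 255, (256, 256), dtype=np.uint8)
m = np.random.rand(256, 256) < 0.3
img[m] = np.random.choice([0, 255], size=int(m.sum())).astype(np.uint8)
restored = selective_median(img)
```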
Cut2Self: A single image based self‐supervised denoiser
Despite the recent upsurge of self-supervised methods for single-image denoising, achieving robust and efficient performance is still challenging due to prevalent issues such as identity mapping, overfitting, and increased variance of network predictions. Recent self-supervised approaches prescribe a dropout-based single-pixel masking strategy in this regard. However, real camera noise is signal-dependent and typically introduces only subtle changes to the images; hence, such a strategy still preserves contextual information about the target location even after dropping those pixels out, leading to identity mapping and overfitting problems in practice. Cut2Self, a new denoising method proposed here to address this issue, cuts out random block regions instead of singleton pixels, making it more likely that contextual information from the neighbouring pixels is removed, thus reducing the chance of identity mapping while remaining resilient against overfitting. Cut2Self creates distinct training pairs for each training iteration by randomly cutting out square regions of the input and feeding them to the denoising network. The iteration-wise network predictions are then assembled to generate the final denoised output. Cut2Self is evaluated on synthetic and real-world noise, demonstrating consistent denoising performance compared with other supervised, unsupervised, and self-supervised methods.
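A minimal PyTorch sketch of the block-cutout masking and masked loss is given below; the block size, the placeholder network, and the single-block-per-image choice are illustrative assumptions, and the assembly of multiple iteration-wise predictions is omitted.

```python
import torch
import torch.nn.functional as F

def cutout_pair(img, block=16):
    """Cut a random square block out of the input; the loss is computed only there."""
    _, _, h, w = img.shape
    y = torch.randint(0, h - block + 1, (1,)).item()
    x = torch.randint(0, w - block + 1, (1,)).item()
    mask = torch.zeros_like(img)
    mask[..., y:y + block, x:x + block] = 1.0
    return img * (1 - mask), mask  # masked input and the loss mask

# One self-supervised step: the denoiser only sees the noisy image with a block
# cut out, and is supervised on the noisy values inside that block.
denoiser = torch.nn.Conv2d(1, 1, 3, padding=1)  # placeholder for a real network
noisy = torch.rand(4, 1, 64, 64)
masked, mask = cutout_pair(noisy)
pred = denoiser(masked)
loss = F.mse_loss(pred * mask, noisy * mask)
loss.backward()
```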