2,999 result(s) for "Atmospheric scattering"
Atmospheric Light Estimation Based Remote Sensing Image Dehazing
Remote sensing images are widely used in object detection and tracking, military security, and other computer vision tasks. However, remote sensing images are often degraded by aerosols suspended in the air, especially under poor weather conditions such as fog, haze, and mist. The quality of remote sensing images directly affects the normal operation of computer vision systems, so haze removal is a crucial and indispensable pre-processing step in remote sensing image processing. Additionally, most existing image dehazing methods are not applicable to all scenes, so the corresponding dehazed images may exhibit varying degrees of color distortion. This paper proposes a novel atmospheric-light-estimation-based dehazing algorithm to obtain high-visual-quality remote sensing images. First, a differentiable function is used to train the parameters of a linear scene depth model that generates scene depth maps for remote sensing images. Second, the atmospheric light of each hazy remote sensing image is estimated from the corresponding scene depth map. Then, the corresponding transmission map is estimated from the estimated atmospheric light using a haze-lines model. Finally, based on the estimated atmospheric light and transmission map, an atmospheric scattering model is applied to remove haze from the remote sensing images. Across different scenes, the colors of images dehazed by the proposed method are consistent with human visual perception. A dataset of 100 remote sensing images of hazy scenes was built for testing. The performance of the proposed image dehazing method is confirmed by theoretical analysis and comparative experiments.
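The final step above inverts the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)). A minimal sketch of that inversion (the function name and the t_min floor are illustrative assumptions, not from the paper):

```python
import numpy as np

def dehaze_asm(I, A, t, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    I: hazy image, float array in [0, 1], shape (H, W, 3)
    A: estimated atmospheric light, shape (3,)
    t: estimated transmission map, shape (H, W)
    t_min: lower bound on transmission to avoid amplifying noise
    """
    t = np.clip(t, t_min, 1.0)[..., None]  # broadcast over color channels
    J = (I - A) / t + A                    # scene radiance estimate
    return np.clip(J, 0.0, 1.0)
```

Clamping the transmission from below is a common safeguard: where t(x) is near zero, the division would otherwise amplify sensor noise.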
Influence of Atmospheric Scattering on the Accuracy of Laser Altimetry of the GF-7 Satellite and Corrections
Satellite laser altimetry can obtain sub-meter or even centimeter-scale surface elevation data over large areas, but it is inevitably affected by scattering caused by clouds, aerosols, and other atmospheric particles, and the resulting laser ranging error cannot be ignored. In this study, we systematically examined existing atmospheric scattering identification techniques used in satellite laser altimetry and observed that the traditional algorithms cannot effectively estimate the laser multiple scattering of the GaoFen-7 (GF-7) satellite. To solve this problem, we used data from the GF-7 satellite to analyze the impact of atmospheric scattering and propose an identification scheme for atmospheric scattering data over land and water areas. We also used a look-up table and a multi-layer perceptron (MLP) model to identify and correct atmospheric scattering, with which the availability of land and water data reached 16.67% and 26.09%, respectively. After correction with the MLP model, the availability of land and water data increased to 21% and 30%, respectively. These corrections mitigate the loss of identification accuracy due to atmospheric scattering, which is significant for satellite laser altimetry data processing.
Single image dehazing via an improved atmospheric scattering model
Under foggy or hazy weather conditions, the visibility and color fidelity of outdoor images are prone to degradation, and hazy images can cause serious errors in many computer vision systems. Consequently, image haze removal has practical significance for real-world applications. In this study, we first analyze the inherent weaknesses of the atmospheric scattering model and propose an improvement to address them. We then present a fast image haze removal algorithm based on the improved model. In our proposed method, the input image is partitioned into several scenes based on haze thickness. Next, averaging and erosion operations are used to calculate a rough scene luminance map in a scene-wise manner. We obtain the rough scene transmission map by maximizing the contrast in each scene, and then gently remove the haze using an adaptive method that adjusts scene transmission based on scene features. In addition, we propose a guided total variation model for edge optimization, which prevents blocking artifacts and eliminates the negative effects of incorrect scene segmentation. The experimental results demonstrate that our method is effective in solving a series of common problems, including uneven illuminance and over-enhanced or over-saturated images. Moreover, our method outperforms most current dehazing algorithms in terms of visual effect, universality, and processing speed.
Enhanced CycleGAN Network with Adaptive Dark Channel Prior for Unpaired Single-Image Dehazing
Unpaired single-image dehazing has become a challenging research hotspot due to its wide application in modern transportation, remote sensing, and intelligent surveillance, among other fields. Recently, CycleGAN-based approaches have been widely adopted in single-image dehazing as the foundation of unpaired, unsupervised training. However, these approaches still have deficiencies, such as obvious artificial recovery traces and distortion in the processed images. This paper proposes a novel enhanced CycleGAN network with an adaptive dark channel prior for unpaired single-image dehazing. First, a Wave-Vit semantic segmentation model is utilized to adapt the dark channel prior (DCP) so that the transmittance and atmospheric light can be accurately recovered. Then, the scattering coefficient, derived from both physical calculation and random sampling, is used to optimize the rehazing process. Bridged by the atmospheric scattering model, the dehazing/rehazing cycle branches are combined to form an enhanced CycleGAN framework. Finally, experiments are conducted on reference and no-reference datasets. The proposed model achieved an SSIM of 94.9% and a PSNR of 26.95 on the SOTS-outdoor dataset, and an SSIM of 84.71% and a PSNR of 22.72 on the O-HAZE dataset, significantly outperforming typical existing algorithms in both objective quantitative evaluation and subjective visual effect.
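The dark channel prior that this approach adapts is simple to state: in haze-free outdoor images, most non-sky patches contain some pixel that is dark in at least one color channel. A minimal sketch of computing the dark channel (the function name and patch size are illustrative, not from the paper):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(I, patch=15):
    """Dark channel of an RGB image I (float array in [0, 1], shape (H, W, 3)):
    per-pixel minimum over the color channels, then a local minimum filter."""
    per_pixel_min = I.min(axis=2)                     # min over R, G, B
    return minimum_filter(per_pixel_min, size=patch)  # min over patch x patch window
```

In DCP-based dehazing, the transmission is then commonly estimated as t(x) ≈ 1 − ω · dark_channel(I / A), with ω ≈ 0.95 retaining a trace of haze for depth perception.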
An image quality-aware approach with adaptive scattering coefficients for single image dehazing
Most conventional dehazing methods obtain quality results by solving the atmospheric scattering model (ASM) with acquired variables (i.e., the global atmospheric light and the transmission map). Prior-based strategies have made significant achievements in this task; nonetheless, they often produce unrealistic dehazed images because strong assumptions can hardly suit all circumstances. In this paper, we propose a novel image dehazing method with adaptive scattering coefficients to achieve visually friendly, quality-oriented restoration. Specifically, a regional rank-based technique is applied to find the most likely atmospheric light candidate. Then, unlike previous image dehazing methods that rely on haze-relevant priors to estimate a transmission map, we develop an image quality-aware approach together with a dynamic scattering coefficient. In this phase, an optimization function constrained by image quality-aware indicators is designed to compute the scattering coefficient or transmission, and the Fibonacci algorithm is employed to solve this optimization problem. The proposed method produces high-quality results and exhibits favorable quantitative and qualitative performance compared to related methods.
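The one-dimensional search referred to above can be sketched with golden-section search, the limiting case of the Fibonacci search the abstract mentions (the quality objective f stands in for whatever indicator the method actually optimizes; everything here is illustrative):

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by shrinking the bracket
    with the golden ratio; one new evaluation of f per iteration."""
    inv_phi = (math.sqrt(5) - 1) / 2        # ~0.618
    x1 = b - inv_phi * (b - a)
    x2 = a + inv_phi * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                          # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - inv_phi * (b - a)
            f1 = f(x1)
        else:                                # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + inv_phi * (b - a)
            f2 = f(x2)
    return (a + b) / 2
```

For example, `golden_section_min(lambda beta: (beta - 1.2) ** 2, 0.0, 3.0)` converges to β ≈ 1.2; a scattering coefficient would be searched over a physically plausible range in the same way.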
The Solar Radiation Climate of Saudi Arabia
In the present work, we investigate the solar radiation climate of Saudi Arabia, using solar radiation data from 43 sites in the country covering the period 2013–2021. These data include hourly values of global, G, and diffuse, Gd, horizontal irradiances from which the direct, Gb, horizontal irradiance is estimated. The diffuse fraction, kd; the direct-beam fraction, kb; and the ratio ke = Gd/Gb, are used in the analysis. Solar maps of the annual mean G, Gd, kd, kb, and ke are prepared for Saudi Arabia under all- and clear-sky conditions, which show interesting but explainable patterns. Additionally, the intra-annual and seasonal variabilities of these parameters are presented, and regression equations are provided. We find that Gb has a negative linear relationship with kd; the same applies to G with respect to kd or the latitude, φ, of the site. It is shown that kd and kb can reflect the scattering and absorption effects of the atmosphere on solar radiation, respectively; therefore, they can be used as atmospheric scattering and absorption indices. Part of the analysis considers the defined solar energy zones in Saudi Arabia.
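The quantities used in this analysis are simple ratios of the measured irradiances. A minimal sketch (the function name is illustrative):

```python
def radiation_indices(G, Gd):
    """Derive the direct horizontal irradiance and the indices used in the
    analysis from the global (G) and diffuse (Gd) horizontal irradiances."""
    Gb = G - Gd      # direct (beam) horizontal irradiance
    kd = Gd / G      # diffuse fraction -> atmospheric scattering index
    kb = Gb / G      # direct-beam fraction -> atmospheric absorption index
    ke = Gd / Gb     # ratio of diffuse to direct irradiance
    return Gb, kd, kb, ke
```

Note that kd + kb = 1 by construction, so the scattering and absorption indices are complementary.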
Cycle-Iterative Image Dehazing Based on Noise Evolution
Benefiting from the development of machine learning, deep learning-based image dehazing algorithms have made remarkable progress. However, limited by (1) the incompleteness of symmetric datasets, (2) insufficient extraction of deep priors, and (3) the excessive scale of the networks, such algorithms generalize poorly to real-world datasets and lack real-time performance. To address these issues, this paper proposes a noise evolution-based cycle-iterative dehazing algorithm. In our method, the noise evolution in each iteration includes an atmospheric scattering model (ASM)-based dehazing module, a random haze addition module, and a Retinex-based inverse enhancement module. More specifically, the ASM-based dehazing module initially clarifies hazy images by extracting haze-related features according to the ASM. The random haze addition module combines the depth-related parameters extracted by the previous module with a random adjustment or assignment mechanism to expand the samples, thereby addressing the problem of data shortage. The Retinex-based inverse enhancement module is introduced to mine “depth” features related to illumination, ensuring that richer priors are extracted from the Retinex model. It is worth noting that both the dehazing module and the inverse enhancement module use the low-complexity U-Net as the main backbone, and the random haze addition module involves only simple operations, which effectively limits the deployment scale and computational complexity of our algorithm. Experiments reveal that the proposed algorithm not only robustly restores hazy images but also exhibits promising advantages in terms of running time and the scale of network parameters.
Single image dehazing based on multi-scale segmentation and deep learning
Existing image dehazing methods suffer from insufficient dehazing, distortion, and low color contrast. To address these problems, this paper proposes a deep learning single-image dehazing method based on multi-scale segmentation. The study found that the haze information in a hazy image decreases as frequency increases. Therefore, the hazy image is first decomposed through image segmentation into four sub-images in different frequency bands. A dehazing network model composed of four sub-network channels of different complexity is then constructed to extract the haze information contained in each sub-image. After the transmission sub-images are generated, image fusion is used to obtain the final transmittance map. Finally, the haze-free image is obtained from the physical model of atmospheric scattering. Experimental results on synthetic and real image datasets show that the proposed method achieves a significant dehazing effect and high color contrast without distortion, outperforming other dehazing methods.
Single image defogging with a dual multiscale neural network model
Images captured by acquisition systems in foggy or hazy scenes suffer from missing details, dull colors, and reduced brightness. To address this problem, a dual multiscale neural network model based on the AOD theory is proposed in this paper. First, two parameters of the atmospheric scattering model, namely the transmittance and the atmospheric light coefficient, are combined into a single parameter, and the proposed neural network model is trained to estimate it. The network consists of two multiscale modules and a mapping module; to extract richer image features, the two multiscale modules are designed for feature extraction. The convolution parameters of Multiscale Module 1 are designed to maintain the size of the original images during feature extraction, with pooling, sampling, and similar operations added. After each convolution operation, Multiscale Module 2 uses multiple small convolution kernels, with concat operations added to better connect the individual kernels. The mapping module maps the foggy images onto the extracted feature map and extracts more detail from the original image, yielding better defogging results. Training derives a unified parameter-estimation model for image defogging, and the defogged image is finally obtained using this model. The experimental results show that the model proposed in this paper not only outperforms the AOD network in terms of peak signal-to-noise ratio, structural similarity, and subjective vision, but also outperforms mainstream deep learning and traditional methods in image defogging; moreover, the defogged images are improved in terms of detail, color, and brightness. In addition, ablation experiments demonstrate that all of the structures proposed in this paper are necessary.
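The single-parameter trick described above comes from AOD-Net: fold the transmission t(x) and atmospheric light A into one quantity K(x) so the clean image is a single affine function of the hazy input, J(x) = K(x)·I(x) − K(x) + b. A sketch of the algebra (the constant bias b defaults to 1 in AOD-Net; function names are illustrative):

```python
import numpy as np

def k_from_asm(I, A, t, b=1.0):
    """K(x) chosen so that K(x)*I(x) - K(x) + b equals the ASM dehazing
    result (I - A)/t + A; in AOD-Net, K(x) is what the network learns."""
    return ((I - A) / t + (A - b)) / (I - 1.0)

def dehaze_aod(I, K, b=1.0):
    """Recover the clean image from the hazy input with the single parameter K."""
    return K * I - K + b
```

Because K absorbs both unknowns, estimation errors in t and A no longer accumulate separately, which is the motivation for learning the joint parameter.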
A Joint De-Rain and De-Mist Network Based on the Atmospheric Scattering Model
Rain can have a detrimental effect on optical components, and images captured in rainy conditions exhibit streaks and halos. These visual distortions caused by rain and mist contribute significant noise that can compromise image quality. In this paper, we propose a novel approach for simultaneously removing both streaks and halos to produce clear results. First, based on the principle of atmospheric scattering, a rain and mist model is proposed to initially remove streaks and halos by reconstructing the image. A Deep Memory Block (DMB) selectively extracts the rain-layer and mist-layer transfer spectra from the rainy image to separate these layers. Then, a Multi-scale Convolution Block (MCB) receives the reconstructed images and extracts both structural and detailed features to enhance the overall accuracy and robustness of the model. Finally, extensive experiments demonstrate that our proposed model, JDDN (Joint De-rain and De-mist Network), outperforms current state-of-the-art deep learning methods on both synthetic and real-world datasets, with an average improvement of 0.29 dB on the heavy-rain image dataset.
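A common way to write a joint rain-and-mist degradation model based on atmospheric scattering is to add a rain-streak layer to the clean background and then veil the sum with the scattering model. This sketch is an assumption for illustration, not necessarily the exact JDDN formulation:

```python
import numpy as np

def synthesize_rain_mist(B, R, t, A):
    """Joint degradation: the rain layer R is added to the clean background B,
    then the atmospheric scattering model applies the mist veil.
    B, R: (H, W, 3) floats in [0, 1]; t: (H, W) transmission; A: scalar or (3,)."""
    t3 = t[..., None]                      # broadcast transmission over channels
    return (B + R) * t3 + A * (1.0 - t3)   # observed rainy, misty image
```

Removing the degradation then amounts to jointly estimating R, t, and A and inverting this equation, which is what separating the rain-layer and mist-layer transfer spectra targets.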