Search Results

1,149 result(s) for "atmospheric scattering model"
Atmospheric Light Estimation Based Remote Sensing Image Dehazing
Remote sensing images are widely used in object detection and tracking, military security, and other computer vision tasks. However, they are often degraded by suspended aerosols in the air, especially under poor weather conditions such as fog, haze, and mist. Because the quality of remote sensing images directly affects the normal operation of computer vision systems, haze removal is a crucial and indispensable pre-processing step in remote sensing image processing. Additionally, most existing image dehazing methods are not applicable to all scenes, so the dehazed images may show varying degrees of color distortion. This paper proposes a novel atmospheric-light-estimation-based dehazing algorithm to obtain remote sensing images of high visual quality. First, a differentiable function is used to train the parameters of a linear scene depth model, which generates a scene depth map for each remote sensing image. Second, the atmospheric light of each hazy image is estimated from its scene depth map. Then, the corresponding transmission map is estimated from the atmospheric light using a haze-lines model. Finally, given the estimated atmospheric light and transmission map, an atmospheric scattering model is applied to remove the haze. The colors of the images dehazed by the proposed method agree with human visual perception across different scenes. A dataset of 100 hazy remote sensing images was built for testing, and the performance of the proposed method is confirmed by theoretical analysis and comparative experiments.
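The recovery step shared by this and several of the papers below can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the depth-model coefficients `theta` are placeholder values rather than the trained parameters, and the haze-lines transmission estimation is assumed to have already produced a transmission map `t`.

```python
import numpy as np

def estimate_depth(image, theta=(0.1, 1.0, -1.0)):
    """Linear scene-depth model d = t0 + t1*brightness + t2*saturation.
    The coefficients are illustrative placeholders, not trained values."""
    v = image.max(axis=2)                                   # brightness
    s = 1.0 - image.min(axis=2) / np.maximum(v, 1e-6)       # saturation
    t0, t1, t2 = theta
    return t0 + t1 * v + t2 * s

def dehaze(image, A, t, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t),
    clamping t to avoid amplifying noise where haze is dense."""
    t = np.clip(t, t_min, 1.0)[..., None]
    return np.clip((image - A) / t + A, 0.0, 1.0)
```

Given an RGB image in [0, 1], `estimate_depth` yields a per-pixel depth proxy from which the atmospheric light and transmission would be derived before calling `dehaze`.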
Single Image Haze Removal via Multiple Variational Constraints for Vision Sensor Enhancement
Images captured by vision sensors in outdoor environments often suffer from haze-induced degradations, including blurred details, faded colors, and reduced visibility, which severely impair the performance of sensing and perception systems. To address this issue, we propose a haze-removal algorithm for hazy images using multiple variational constraints. Based on the classic atmospheric scattering model, a mixed variational framework is presented that incorporates three regularization terms for the transmission map and scene radiance. Concretely, an ℓp norm and an ℓ2 norm are constructed to jointly constrain the transmission, smoothing details while preserving structures, and a weighted ℓ1 norm is devised to constrain the scene radiance for suppressing noise. Furthermore, the devised weight function takes into account both the local variances and the gradients of the scene radiance, adaptively perceiving textures and structures and controlling the smoothness of the restoration. To solve the mixed variational model, a re-weighted least-squares strategy is employed to iteratively solve two separate subproblems. Finally, a gamma correction is applied to adjust the overall brightness, yielding the final recovered result. Extensive comparisons with state-of-the-art methods demonstrate that the proposed algorithm produces visually satisfactory results with superior clarity and vibrant colors. In addition, the algorithm generalizes well to diverse degradation scenarios, including low-light and remote sensing hazy images, and effectively improves the performance of high-level vision tasks.
Single image dehazing via an improved atmospheric scattering model
Under foggy or hazy weather conditions, the visibility and color fidelity of outdoor images are prone to degradation. Hazy images can cause serious errors in many computer vision systems, so image haze removal has practical significance for real-world applications. In this study, we first analyze the inherent weaknesses of the atmospheric scattering model and propose an improvement that addresses them. We then present a fast image haze removal algorithm based on the improved model. In the proposed method, the input image is partitioned into several scenes according to haze thickness. Next, averaging and erosion operations are used to calculate a rough scene luminance map scene by scene. We obtain a rough scene transmission map by maximizing the contrast within each scene, and then gently remove the haze with an adaptive method that adjusts the scene transmission according to scene features. In addition, we propose a guided total variation model for edge optimization, which prevents blocking artifacts and eliminates the negative effects of incorrect scene segmentation. The experimental results demonstrate that our method is effective against a series of common problems, including uneven illuminance and over-enhanced or over-saturated images. Moreover, it outperforms most current dehazing algorithms in terms of visual effect, universality, and processing speed.
ScaleViM-PDD: Multi-Scale EfficientViM with Physical Decoupling and Dual-Domain Fusion for Remote Sensing Image Dehazing
Remote sensing images are often degraded by atmospheric haze, which not only reduces image quality but also complicates information extraction, particularly in high-level visual analysis tasks such as object detection and scene classification. State-space models (SSMs) have recently emerged as a powerful paradigm for vision tasks, showing great promise due to their computational efficiency and robust capacity to model global dependencies. However, most existing learning-based dehazing methods lack physical interpretability, leading to weak generalization. Furthermore, they typically rely on spatial features while neglecting crucial frequency domain information, resulting in incomplete feature representation. To address these challenges, we propose ScaleViM-PDD, a novel network that enhances an SSM backbone with two key innovations: a Multi-scale EfficientViM with Physical Decoupling (ScaleViM-P) module and a Dual-Domain Fusion (DD Fusion) module. The ScaleViM-P module synergistically integrates a Physical Decoupling block within a Multi-scale EfficientViM architecture. This design enables the network to mitigate haze interference in a physically grounded manner at each representational scale while simultaneously capturing global contextual information to adaptively handle complex haze distributions. To further address detail loss, the DD Fusion module replaces conventional skip connections by incorporating a novel Frequency Domain Module (FDM) alongside channel and position attention. This allows for a more effective fusion of spatial and frequency features, significantly improving the recovery of fine-grained details, including color and texture information. Extensive experiments on nine publicly available remote sensing datasets demonstrate that ScaleViM-PDD consistently surpasses state-of-the-art baselines in both qualitative and quantitative evaluations, highlighting its strong generalization ability.
Enhanced CycleGAN Network with Adaptive Dark Channel Prior for Unpaired Single-Image Dehazing
Unpaired single-image dehazing has become a challenging research hotspot due to its wide use in modern transportation, remote sensing, and intelligent surveillance, among other fields. Recently, CycleGAN-based approaches have been widely adopted for single-image dehazing as the foundation of unpaired, unsupervised training. However, these approaches still have deficiencies, such as obvious artificial recovery traces and distortion in the processed images. This paper proposes a novel enhanced CycleGAN network with an adaptive dark channel prior for unpaired single-image dehazing. First, a Wave-Vit semantic segmentation model is utilized to adapt the dark channel prior (DCP) so as to accurately recover the transmittance and atmospheric light. Then, the scattering coefficient, derived from both physical calculation and random sampling, is utilized to optimize the rehazing process. Bridged by the atmospheric scattering model, the dehazing and rehazing cycle branches are combined to form an enhanced CycleGAN framework. Finally, experiments are conducted on reference and no-reference datasets. The proposed model achieved an SSIM of 94.9% and a PSNR of 26.95 dB on the SOTS-outdoor dataset and an SSIM of 84.71% and a PSNR of 22.72 dB on the O-HAZE dataset, significantly outperforming typical existing algorithms in both objective quantitative evaluation and subjective visual effect.
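For reference, the (non-adaptive) dark channel prior that the method builds on can be computed as below. The patch size and the 0.1% selection ratio are conventional choices from the DCP literature, not values taken from this paper, and the sliding-window minimum is written in plain NumPy where production code would use an erosion filter.

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel prior: per-pixel minimum over RGB, followed by a
    minimum filter over a local patch."""
    mins = image.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_atmospheric_light(image, dark, top=0.001):
    """Take the brightest color among the top 0.1% of dark-channel pixels."""
    n = max(1, int(dark.size * top))
    idx = np.argpartition(dark.ravel(), -n)[-n:]
    return image.reshape(-1, 3)[idx].max(axis=0)
```

The adaptive scheme in the paper replaces the fixed prior with semantic-segmentation-guided estimates; this sketch shows only the classical baseline being adapted.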
A Foggy Weather Simulation Algorithm for Traffic Image Synthesis Based on Monocular Depth Estimation
This study addresses the ongoing challenge for learning-based methods to achieve accurate object detection in foggy conditions. In response to the scarcity of foggy traffic image datasets, we propose a foggy weather simulation algorithm based on monocular depth estimation. The algorithm involves a multi-step process: a self-supervised monocular depth estimation network generates a relative depth map, and dense geometric constraints are then applied for scale recovery to derive an absolute depth map. Subsequently, the visibility of the simulated image is specified to generate a transmittance map. The dark channel map is then used to distinguish sky regions and estimate atmospheric light values. Finally, the atmospheric scattering model is used to generate fog-simulation images under the specified visibility conditions. Experimental results show that more than 90% of the fog images have AuthESI values of less than 2, indicating that their natural scene statistics (NSS) are very close to those of natural fog. The proposed method can thus convert clear images captured in natural environments into foggy ones, offering a solution to the lack of foggy image datasets and incomplete visibility data.
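The final synthesis step can be sketched as follows, assuming a depth map in the same length unit as the visibility. The conversion from visibility to the scattering coefficient uses Koschmieder's rule (β ≈ 3.912/V, the 2% contrast threshold), a standard choice that the paper may parameterize differently, and the atmospheric light `A` is taken as a given constant rather than estimated from the dark channel.

```python
import numpy as np

def add_fog(clear, depth, visibility, A=0.9):
    """Synthesize fog on a clear image via the atmospheric scattering
    model I = J*t + A*(1 - t), with t = exp(-beta * depth) and beta
    derived from meteorological visibility by Koschmieder's rule."""
    beta = 3.912 / visibility
    t = np.exp(-beta * depth)[..., None]   # per-pixel transmittance
    return clear * t + A * (1.0 - t)
```

Pixels at depth 0 are unchanged, while pixels far beyond the visibility distance fade toward the atmospheric light, as expected of natural fog.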
An image quality-aware approach with adaptive scattering coefficients for single image dehazing
Most conventional dehazing methods obtain quality results by solving the atmospheric scattering model (ASM) with acquired variables (i.e., the global atmospheric light and the transmission map). Prior-based strategies have made significant achievements in this task; nonetheless, they often produce unrealistic dehazed images because their strong assumptions rarely suit all circumstances. In this paper, we propose a novel image dehazing method with adaptive scattering coefficients to realize visually friendly, quality-oriented restoration. Specifically, a regional rank-based technique is applied to find the most likely atmospheric light candidate. Then, unlike previous image dehazing methods that rely on haze-relevant priors to estimate the transmission map, we develop an image quality-aware approach with a dynamic scattering coefficient. In this phase, an optimization function constrained by image quality-aware indicators is designed to compute the scattering coefficient, and hence the transmission. The Fibonacci algorithm is employed to solve this optimization problem. The proposed method produces high-quality results and exhibits favorable quantitative and qualitative performance compared with related methods.
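The one-dimensional search over the scattering coefficient can be illustrated with a golden-section search, a close relative of the Fibonacci algorithm the paper employs; the quality-aware objective itself is not reproduced here, so any unimodal function stands in.

```python
def golden_section_min(f, a, b, tol=1e-6):
    """Golden-section search for the minimizer of a unimodal f on [a, b].
    (The paper uses the closely related Fibonacci algorithm; this
    fixed-ratio variant is shown for brevity.)"""
    gr = (5 ** 0.5 - 1) / 2          # 1/phi, approximately 0.618
    x1 = b - gr * (b - a)
    x2 = a + gr * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                  # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - gr * (b - a)
            f1 = f(x1)
        else:                        # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + gr * (b - a)
            f2 = f(x2)
    return (a + b) / 2
```

In the paper's setting, `f` would score the dehazed result with the quality-aware indicators as a function of the candidate scattering coefficient.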
Cycle-Iterative Image Dehazing Based on Noise Evolution
Benefiting from advances in machine learning, deep learning-based image dehazing algorithms have made remarkable progress. However, limited by (1) the incompleteness of symmetric datasets, (2) insufficient extraction of deep priors, and (3) the excessive scale of the networks, such algorithms generalize poorly to real-world datasets and lack real-time performance. To address these issues, this paper proposes a noise evolution-based cycle-iterative dehazing algorithm. In our method, the noise evolution in each iteration includes an atmospheric scattering model (ASM)-based dehazing module, a random haze addition module, and a Retinex-based inverse enhancement module. More specifically, the ASM-based dehazing module initially clarifies hazy images by extracting haze-related features according to the ASM. The random haze addition module combines the depth-related parameters extracted by the previous module with a random adjustment or assignment mechanism to expand the samples, thereby addressing the problem of data shortage. The Retinex-based inverse enhancement module is introduced to mine “depth” features related to illumination, ensuring that richer priors are extracted from the Retinex model. Notably, both the dehazing module and the inverse enhancement module use the low-complexity U-Net as the main backbone, and the random haze addition module involves only simple operations, which effectively limits the deployment scale and computational complexity of our algorithm. Experiments reveal that the proposed algorithm not only robustly restores hazy images but also exhibits promising advantages in running time and the scale of network parameters.
Dehaze-UNet: A Lightweight Network Based on UNet for Single-Image Dehazing
Many existing learning-based image dehazing methods improve performance by increasing network depth or width, enlarging convolution kernels, or adopting Transformer structures. However, this inevitably introduces many parameters and increases the computational overhead. We therefore propose a lightweight dehazing framework, Dehaze-UNet, which achieves excellent dehazing performance with very low computational overhead, making it suitable for terminal deployment. To allow Dehaze-UNet to aggregate haze features, we design a LAYER module, which mainly aggregates the haze features of different hazy images through its batch normalization layer so that Dehaze-UNet can pay more attention to haze. Furthermore, we revisit the use of the physical model in the network and design an ASMFUN module that operates on the network's feature maps, allowing the network to better understand the generation and removal of haze and to learn prior knowledge that improves generalization to real hazy scenes. Extensive experimental results indicate that the lightweight Dehaze-UNet outperforms state-of-the-art methods, especially on hazy images of real scenes.
Single image dehazing based on multi-scale segmentation and deep learning
Existing image dehazing methods suffer from insufficient dehazing, distortion, and low color contrast. To address these problems, this paper proposes a deep learning single-image dehazing method based on multi-scale segmentation. Our study found that the haze information in a hazy image decreases as frequency increases. Therefore, the hazy image is first decomposed into four sub-images in different frequency bands through image segmentation. A dehazing network composed of four sub-network channels of different complexities is then constructed to extract the haze information contained in each sub-image. After the transmission sub-images are generated, image fusion is used to obtain the final transmittance map. Finally, the haze-free image is recovered using the physical atmospheric scattering model. Experimental results on synthetic and real image datasets show that the proposed method achieves a significant dehazing effect and high color contrast without distortion, outperforming other dehazing methods.
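The frequency-band decomposition can be sketched with a radial FFT band split. This is an illustrative stand-in under an assumption: the abstract does not specify the segmentation scheme, so four equal-width radial bands in the Fourier domain are used here, chosen so that the bands sum back to the original image.

```python
import numpy as np

def frequency_bands(gray, n_bands=4):
    """Split a grayscale image into n_bands sub-images by radial
    frequency; the bands partition the spectrum, so they sum back
    to the original image."""
    F = np.fft.fftshift(np.fft.fft2(gray))
    h, w = gray.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)   # radial frequency per bin
    r_max = r.max()
    bands = []
    for k in range(n_bands):
        mask = (r >= r_max * k / n_bands) & (r < r_max * (k + 1) / n_bands)
        if k == n_bands - 1:
            mask |= (r == r_max)           # include the outermost bin
        bands.append(np.fft.ifft2(np.fft.ifftshift(F * mask)).real)
    return bands
```

The lowest band carries most of the haze energy, consistent with the observation above that haze information decreases with frequency; each band would then feed a sub-network of matching complexity.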