45 result(s) for "Han, Chengshan"
Atmospheric Scattering Model and Non-Uniform Illumination Compensation for Low-Light Remote Sensing Image Enhancement
Enhancing low-light remote sensing images is crucial for preserving the accuracy and reliability of downstream analyses in a wide range of applications. Although numerous enhancement algorithms have been developed, many fail to effectively address the challenges posed by non-uniform illumination in low-light scenes. These images often exhibit significant brightness inconsistencies, leading to two primary problems: insufficient enhancement in darker regions and over-enhancement in brighter areas, frequently accompanied by color distortion and visual artifacts. These issues largely stem from the limitations of existing methods, which insufficiently account for non-uniform atmospheric attenuation and local brightness variations in reflectance estimation. To overcome these challenges, we propose a robust enhancement method based on non-uniform illumination compensation and the Atmospheric Scattering Model (ASM). Unlike conventional approaches, our method utilizes ASM to initialize reflectance estimation by adaptively adjusting atmospheric light and transmittance. A weighted graph is then employed to effectively handle local brightness variation. Additionally, a regularization term is introduced to suppress noise, refine reflectance estimation, and maintain balanced brightness enhancement. Extensive experiments on multiple benchmark remote sensing datasets demonstrate that our approach outperforms state-of-the-art methods, delivering superior enhancement performance and visual quality, even under complex non-uniform low-light conditions.
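The atmospheric scattering model the abstract refers to is commonly written as I = J·t + A·(1 − t). A minimal sketch of using it to initialize a reflectance estimate, with simple dark-channel-style estimates for the atmospheric light A and transmittance t (these estimation choices are illustrative assumptions, not the paper's adaptive procedure):

```python
import numpy as np

def asm_init_reflectance(img, omega=0.8, t_min=0.1):
    """Initialize reflectance via the atmospheric scattering model
    I = J * t + A * (1 - t), inverted as J = (I - A) / t + A.

    `img` is a float image in [0, 1] with shape (H, W, 3). A is taken
    from the brightest pixels and t from a dark-channel-style estimate;
    both are simplifying assumptions for illustration.
    """
    # Atmospheric light: mean colour of the brightest ~0.1% of pixels.
    flat = img.reshape(-1, 3)
    brightness = flat.mean(axis=1)
    idx = np.argsort(brightness)[-max(1, len(brightness) // 1000):]
    A = flat[idx].mean(axis=0)

    # Transmittance from the normalized dark channel (per-pixel min over RGB).
    dark = (img / np.maximum(A, 1e-6)).min(axis=2)
    t = np.clip(1.0 - omega * dark, t_min, 1.0)

    # Invert the model per channel to recover the scene radiance J.
    J = (img - A) / t[..., None] + A
    return np.clip(J, 0.0, 1.0)
```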
Event Density Based Denoising Method for Dynamic Vision Sensor
The dynamic vision sensor (DVS) is a new type of image sensor with promising applications in fields such as automobiles and robotics. Dynamic vision sensors differ greatly from traditional image sensors in both pixel principle and output data. Background activity (BA) in the data degrades image quality, but there is currently no unified indicator for evaluating the image quality of event streams. This paper proposes a method to eliminate background activity, along with an evaluation method and two performance indices for filters: noise in real (NIR) and real in noise (RIN); the lower the values, the better the filter. This evaluation method does not require fixed-pattern generation equipment and can also assess filter performance on natural scenes. Comparative experiments with three filters show that the proposed method achieves the best overall performance. The method reduces the bandwidth required for DVS data transmission, lowers the computational cost of target extraction, and opens the possibility of applying DVS in more fields.
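A plausible formalization of density-based event filtering and the NIR/RIN indices. The exact definitions below, NIR as the fraction of retained events that are noise and RIN as the fraction of discarded events that are real, are one reading of the abstract, not the paper's formulas:

```python
import numpy as np

def density_filter(events, r=1, dt=1000, k=2):
    """Keep an event if at least `k` other events fall within spatial
    radius `r` (pixels) and time window `dt` (microseconds).
    `events` is an (N, 3) array of (x, y, t) rows."""
    xy, t = events[:, :2], events[:, 2]
    keep = np.zeros(len(events), dtype=bool)
    for i in range(len(events)):
        near = (np.abs(xy - xy[i]).max(axis=1) <= r) & (np.abs(t - t[i]) <= dt)
        keep[i] = near.sum() - 1 >= k  # exclude the event itself
    return keep

def nir_rin(keep, is_noise):
    """NIR: share of retained events that are noise (lower is better).
       RIN: share of discarded events that are real (lower is better)."""
    nir = is_noise[keep].mean() if keep.any() else 0.0
    rin = (~is_noise[~keep]).mean() if (~keep).any() else 0.0
    return nir, rin
```

Given ground-truth noise labels (or natural images with a reference filter), both indices can be computed without any fixed-pattern generation equipment.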
Vibration Detection and Degraded Image Restoration of Space Camera Based on Correlation Imaging of Rolling-Shutter CMOS
To mitigate the influence of satellite platform vibrations on space camera imaging quality, a novel approach is proposed to detect vibration parameters based on correlation imaging with rolling-shutter CMOS, together with a restoration method that addresses the image degradation of rolling-shutter CMOS caused by such vibrations. The vibration parameter detection method exploits the time-shared, row-by-row imaging principle of rolling-shutter CMOS to obtain relative offsets by comparing two frames of correlation images from continuous imaging. The space camera's vibration parameters are then derived from the parameters of a curve fitted to the relative offsets. From the detected vibration parameters, the discrete point spread function is obtained, and the vibration-induced degradation of the rolling-shutter CMOS image is restored row by row. Verification experiments demonstrate that the proposed detection method for two-dimensional vibration achieves a relative accuracy better than 1% in period detection and better than 2% in amplitude detection, and the proposed restoration method improves the MTF index by over 20%. The experimental results also show that the detection method can detect high-frequency vibrations from low-frame-rate image sequences and is applicable to both push-broom and staring cameras. The restoration method effectively improves image quality metrics and yields a remarkable restorative effect on degraded images.
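The curve-fitting step can be sketched for a sinusoidal vibration: given the row-wise relative offsets (here supplied directly; in the paper they come from correlating two rolling-shutter frames), the frequency is read off the FFT peak and the amplitude/phase follow from linear least squares. This is a generic fitting sketch, not the paper's exact algorithm:

```python
import numpy as np

def vibration_model(t, amp, freq, phase):
    """Sinusoidal platform vibration sampled at each row's exposure time."""
    return amp * np.sin(2 * np.pi * freq * t + phase)

def fit_vibration(row_times, offsets):
    """Estimate (amplitude, frequency, phase) from row-wise offsets.
    Frequency: FFT peak. Amplitude/phase: least squares on
    offsets ~ a*sin(2*pi*f*t) + b*cos(2*pi*f*t)."""
    dt = row_times[1] - row_times[0]
    spec = np.abs(np.fft.rfft(offsets - offsets.mean()))
    freqs = np.fft.rfftfreq(len(offsets), d=dt)
    f = freqs[spec[1:].argmax() + 1]  # skip the DC bin
    # Linear least squares for the in-phase/quadrature components.
    X = np.column_stack([np.sin(2 * np.pi * f * row_times),
                         np.cos(2 * np.pi * f * row_times)])
    (a, b), *_ = np.linalg.lstsq(X, offsets - offsets.mean(), rcond=None)
    return float(np.hypot(a, b)), float(f), float(np.arctan2(b, a))
```

Because each row is exposed at a different instant, a single rolling-shutter frame pair samples the vibration far faster than the frame rate, which is why low-frame-rate sequences can reveal high-frequency vibrations.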
An Innovative Approach for Removing Stripe Noise in Infrared Images
The non-uniformity of infrared detectors' readout circuits can lead to stripe noise in infrared images, which corrupts their effective information and poses challenges for subsequent applications. Traditional denoising algorithms are of limited effectiveness in preserving that information. This paper proposes a multi-level image decomposition method based on an improved LatLRR (MIDILatLRR). By exploiting the global low-rank structure of stripe noise, the noise and smooth information are decomposed into low-rank part images, while texture information is adaptively decomposed into several salient part images, thereby better preserving texture and edge information. Sparse terms are constructed according to the smoothness of the effective information in the final low-rank part image and the directional sparsity of the stripe noise. Stripe noise is modeled using a multi-sparse constraint representation (MSCR), and the Alternating Direction Method of Multipliers (ADMM) is used for computation. Extensive experiments compare the proposed algorithm with state-of-the-art methods in terms of subjective judgment and objective indicators; the results demonstrate its effectiveness and superiority.
A multi-feature spatial–temporal fusion network for traffic flow prediction
Traffic flow prediction is key to alleviating traffic congestion, yet it remains very challenging due to complex influencing factors. Most current deep learning models are designed to mine intricate dependencies in continuous, standardized sequences, which imposes strict requirements on data continuity and regularity of distribution. However, discontinuous and irregularly distributed data are inevitable in real-world applications, so we need a way to exploit the power of multi-feature fusion rather than continuous relations in standardized sequences. To this end, we conduct prediction based on multiple traffic features that reflect the complex influencing factors. First, we propose ATFEM, an adaptive traffic feature extraction mechanism that selects important influencing factors to construct a joint temporal feature matrix and a global spatial feature matrix according to traffic conditions, improving the features' representation ability. Second, we propose MFSTN, a multi-feature spatial–temporal fusion network that combines a temporal transformer encoder with a graph attention network to obtain latent representations of spatial–temporal features. In particular, we design a scaled spatial–temporal fusion module that automatically learns optimal fusion weights and adapts to inconsistent spatial–temporal dimensions. Finally, a multi-layer perceptron learns the mapping between these comprehensive features and traffic flow, which helps improve the interpretability of the prediction. Experimental results show that the proposed model outperforms a variety of baselines and accurately predicts traffic flow even when the data missing rate is high.
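One hypothetical reading of the scaled fusion idea, projecting temporal and spatial features of different widths into a shared dimension and combining them with softmax-normalized learnable weights, can be sketched as follows (the layer names and shapes are assumptions, not the paper's exact module):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D weight vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def scaled_fusion(h_t, h_s, W_t, W_s, w):
    """Project temporal features h_t (n, d_t) and spatial features
    h_s (n, d_s), which may have inconsistent widths, into a shared
    dimension d, then blend them with softmax-normalized learnable
    scalar weights w = (w_t, w_s)."""
    z_t = h_t @ W_t          # (n, d)
    z_s = h_s @ W_s          # (n, d)
    a = softmax(np.asarray(w, dtype=float))
    return a[0] * z_t + a[1] * z_s
```

In the full network these fused features would feed the multi-layer perceptron that maps them to the predicted flow.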
A Novel Stripe Noise Removal Model for Infrared Images
Infrared images often exhibit obvious stripe noise due to non-uniformity of the infrared detector and the readout circuit. This stripe noise greatly degrades image quality and complicates subsequent image processing. Compared with current elimination algorithms for infrared stripe noise, our approach fully utilizes the difference between the stripe noise components and the actual information components: it takes the gradient sparsity along the stripe direction and the global sparsity of the stripe noise as regularization terms, and treats the sparsity of the components across the stripe direction as a fidelity term. On this basis, an adaptive edge-preserving operator (AEPO) based on edge contrast is proposed to protect image edges and thus prevent the loss of edge details. The final solution is obtained by the alternating direction method of multipliers (ADMM). To verify the effectiveness of our approach, extensive real-data experiments compare it with state-of-the-art methods in two aspects: subjective judgment and objective indices. Experimental results demonstrate the superiority of our approach.
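For context on what these sparsity-based models improve upon, the classical moment-matching destriping baseline (not the method of either stripe-noise paper above) simply forces every column to share the global mean and standard deviation, which works only when column statistics are naturally similar:

```python
import numpy as np

def moment_match_destripe(img):
    """Classical moment-matching baseline for vertical stripe noise:
    normalize each column to the global mean and standard deviation.
    Assumes stripes act as a per-column gain/offset."""
    col_mean = img.mean(axis=0)
    col_std = img.std(axis=0)
    g_mean, g_std = img.mean(), img.std()
    return (img - col_mean) / np.maximum(col_std, 1e-6) * g_std + g_mean
```

Unlike the regularized models above, this baseline also flattens genuine column-wise scene variation, which is precisely the loss of effective information the sparsity-based formulations are designed to avoid.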
Irradiance Restoration Based Shadow Compensation Approach for High Resolution Multispectral Satellite Remote Sensing Images
Shadows in high-resolution satellite remote sensing images hinder numerous applications, such as image classification, target recognition, and change detection. To improve the utilization of remote sensing imagery, it is important to restore surface feature information in shadowed regions. Current shadow compensation methods inevitably run into problems when processing high-resolution multispectral satellite remote sensing images, such as color distortion in the compensated shadows and interference with non-shadow areas. In this study, to address these problems, we analyzed the surface irradiance of both shadow and non-shadow areas based on the satellite sensor imaging mechanism and radiative transfer theory, and developed an irradiance restoration based (IRB) shadow compensation approach under the assumption that a shadow area receives the same irradiance as a nearby non-shadow area containing the same type of features. To validate the performance of the proposed IRB approach, we tested numerous WorldView-2 and WorldView-3 images acquired at different sites and times, and in particular evaluated the compensation performance through qualitative visual comparison and quantitative assessment on two WorldView-3 test images of Tripoli, Libya. The images produced automatically by our IRB method deliver a good visual impression and relatively low relative root mean square error (rRMSE) values. Experimental results show that the proposed IRB approach not only compensates surface feature information in shadow areas effectively and automatically, but also preserves object information in non-shadow regions of high-resolution multispectral satellite remote sensing images.
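A much-simplified per-band surrogate for the irradiance-matching idea, rescaling shadow pixels so their statistics match a nearby non-shadow reference of the same surface type, can be sketched as follows. This is a generic linear-correction stand-in, not the IRB radiative-transfer model itself:

```python
import numpy as np

def compensate_shadow(img, shadow_mask, ref_mask):
    """Per-band linear shadow compensation: shift/scale shadow pixels so
    their mean and standard deviation match a nearby non-shadow
    reference region assumed to contain the same feature types.
    `img` is (H, W, B); the masks are boolean (H, W)."""
    out = img.astype(float).copy()
    for b in range(img.shape[2]):
        band = out[..., b]
        s_mean, s_std = band[shadow_mask].mean(), band[shadow_mask].std()
        r_mean, r_std = band[ref_mask].mean(), band[ref_mask].std()
        # Match shadow statistics to the reference; non-shadow pixels
        # are left untouched, avoiding interference outside the shadow.
        band[shadow_mask] = (band[shadow_mask] - s_mean) / max(s_std, 1e-6) * r_std + r_mean
    return out
```

Operating only inside the shadow mask is what keeps the non-shadow regions unchanged, mirroring the preservation property the abstract emphasizes.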
Fast, Zero-Reference Low-Light Image Enhancement with Camera Response Model
Low-light images are prevalent in intelligent monitoring and many other applications, where low brightness hinders further processing. Although low-light image enhancement can mitigate these problems, current methods often involve complex network structures or many iterations, which limits their efficiency. This paper proposes a Zero-Reference Camera Response Network that uses a camera response model to achieve efficient enhancement of arbitrary low-light images. A double-layer parameter-generating network with a streamlined structure is established to extract the exposure ratio K from the radiation map, which is obtained by inverting the input through a camera response function. K is then used as the parameter of a brightness transformation function that enhances the low-light image in a single transformation. In addition, a contrast-preserving brightness loss and an edge-preserving smoothness loss are designed that require no references from the dataset; both help retain key information in the inputs and improve precision. The enhancement pipeline is simplified and runs at more than twice the speed of comparable methods. Extensive experiments on several LLIE datasets and the DARK FACE face detection dataset demonstrate our method's advantages, both subjectively and objectively.
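The brightness transformation function in a camera response model is often taken as f(P, k) = e^{b(1−k^a)} · P^{k^a}, where k is the exposure ratio (here the K a network would predict) and a, b are fitted camera constants. The constants below are the commonly reported values for that model and are an assumption, not values from this paper:

```python
import numpy as np

def btf(img, k, a=-0.3293, b=1.1258):
    """Brightness transformation function from a common camera response
    model: f(P, k) = exp(b * (1 - k**a)) * P ** (k**a).
    `img` is in [0, 1]; an exposure ratio k > 1 brightens the image in
    a single transformation."""
    gamma = k ** a
    beta = np.exp(b * (1 - gamma))
    return np.clip(beta * np.power(np.maximum(img, 1e-8), gamma), 0.0, 1.0)
```

With k predicted per image, one application of this function performs the entire enhancement, which is where the method's speed advantage comes from.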
Blind Infrared Remote-Sensing Image Deblurring Algorithm via Edge Composite-Gradient Feature Prior and Detail Maintenance
The problem of blind image deblurring remains a challenging inverse problem, due to the ill-posed nature of estimating unknown blur kernels and latent images within the Maximum A Posteriori (MAP) framework. To address this challenge, traditional methods often rely on sparse regularization priors to mitigate the uncertainty inherent in the problem. In this paper, we propose a novel blind deblurring model based on the MAP framework that leverages Composite-Gradient Feature (CGF) variations in edge regions after image blurring. This prior term is specifically designed to exploit the high sparsity of sharp edge regions in clear images, thereby effectively alleviating the ill-posedness of the problem. Unlike existing methods that focus on local gradient information, our approach focuses on the aggregation of edge regions, enabling better detection of both sharp and smoothed edges in blurred images. In the blur kernel estimation process, we enhance the accuracy of the kernel by assigning effective edge information from the blurred image to the smoothed intermediate latent image, preserving critical structural details lost during the blurring process. To further improve the edge-preserving restoration, we introduce an adaptive regularizer that outperforms traditional total variation regularization by better maintaining edge integrity in both clear and blurred images. The proposed variational model is efficiently implemented using alternating iterative techniques. Extensive numerical experiments and comparisons with state-of-the-art methods demonstrate the superior performance of our approach, highlighting its effectiveness and real-world applicability in diverse image-restoration tasks.
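The MAP framework mentioned above is typically posed as a joint minimization over the latent image and the blur kernel; in generic notation (standard symbols, not necessarily the paper's):

```latex
\min_{x,\,k}\; \underbrace{\lVert x \otimes k - y \rVert_2^2}_{\text{data fidelity}}
\;+\; \lambda\, \rho(x) \;+\; \gamma\, \lVert k \rVert_2^2
```

where $y$ is the blurred observation, $x \otimes k$ the convolution of the latent image with the kernel, $\rho(x)$ the sparse image prior (the role played by the Composite-Gradient Feature edge prior in this work), and $\lambda, \gamma$ regularization weights. The alternating scheme the abstract describes updates $x$ with $k$ fixed and then $k$ with $x$ fixed until convergence.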
AG-Yolo: Attention-Guided Yolo for Efficient Remote Sensing Oriented Object Detection
Remote sensing can efficiently acquire information and is widely used in many areas. Object detection is a key component in most applications. But complex backgrounds in remote sensing images severely degrade detection performance. Current methods fail to effectively suppress background interference while maintaining fast detection speeds. This paper proposes Attention-Guided Yolo (AG-Yolo), an efficient oriented object detection (OOD) method tailored for remote sensing. AG-Yolo incorporates an additional rotation parameter into the head of Yolo-v10 and extends its dual label assignment strategy to maintain high efficiency in OOD. An attention branch is further introduced to generate attention maps from shallow input features, guiding feature aggregation to focus on foreground objects and suppress complex background interference. Additionally, derived from the background complexity, a three-stage curriculum learning strategy is designed to train the model from some much easier samples generated from the labeled data. This approach can give the model a better starting point, improving its ability to handle complicated datasets and increasing detection precision. On the DOTA-v1.0 and DOTA-v1.5 datasets, compared with other advanced methods, our algorithm reduces the processing latency from 33.8 ms to 19.7 ms (a roughly 40% decrease) and produces a certain degree of improvement in the mAP metric.