145 result(s) for "Detail enhancement"
Single Image Haze Removal from Image Enhancement Perspective for Real-Time Vision-Based Systems
Vision-based systems operating outdoors are significantly affected by weather conditions, notably those related to atmospheric turbidity. Accordingly, haze removal algorithms, actively being researched over the last decade, have come into use as a pre-processing step. Although numerous approaches have existed previously, an efficient method coupled with fast implementation is still in great demand. This paper proposes a single image haze removal algorithm with a corresponding hardware implementation for facilitating real-time processing. Contrary to methods that invert the physical model describing the formation of hazy images, the proposed approach mainly exploits computationally efficient image processing techniques such as detail enhancement, multiple-exposure image fusion, and adaptive tone remapping. Therefore, it possesses low computational complexity while achieving good performance compared to other state-of-the-art methods. Moreover, the low computational cost also brings about a compact hardware implementation capable of handling high-quality videos at an acceptable rate, that is, greater than 25 frames per second, as verified with a Field Programmable Gate Array chip. The software source code and datasets are available online for public use.
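The multiple-exposure fusion step this abstract mentions can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name `fuse_exposures`, the gamma values used to simulate exposures, and the Gaussian "well-exposedness" weighting are all assumptions for demonstration.

```python
import numpy as np

def fuse_exposures(img, gammas=(0.5, 1.0, 2.0), sigma=0.2):
    """Fuse gamma-simulated exposures of an image, weighting each pixel
    by its 'well-exposedness' (closeness to mid-gray 0.5).
    `img` is a float array scaled to [0, 1]."""
    exposures = [img ** g for g in gammas]
    weights = [np.exp(-((e - 0.5) ** 2) / (2 * sigma ** 2)) for e in exposures]
    total = np.sum(weights, axis=0) + 1e-8          # avoid divide-by-zero
    fused = np.sum([w * e for w, e in zip(weights, exposures)], axis=0) / total
    return np.clip(fused, 0.0, 1.0)
```

Because every fused pixel is a convex combination of exposures that each lie in [0, 1], the output stays in range without any global normalization pass, which is one reason exposure fusion maps well to streaming hardware.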
Infrared image enhancement algorithm based on detail enhancement guided image filtering
Because of the unique imaging mechanism of infrared (IR) sensors, IR images commonly suffer from blurred edge details, low contrast, and a poor signal-to-noise ratio. This paper proposes a new method to enhance IR image details that suppresses image noise and improves contrast while enhancing detail. First, because the traditional guided image filter (GIF) is prone to halo artifacts when applied to IR image enhancement, a detail enhancement guided filter (DGIF) is proposed, which adds constructed edge-perception and detail-regulation factors to the cost function of the GIF. Then, following the visual characteristics of the human eye, the detail-regulation factor is applied to enhancement of the detail layer, which avoids the noise amplification caused by a fixed gain coefficient. Finally, the enhanced detail layer is fused directly with the base layer so that the enhanced image retains rich detail information. We first compare the DGIF with four guided image filters, and then compare our algorithm with three traditional IR image enhancement algorithms and two GIF-based IR image enhancement algorithms on 20 IR images. The experimental results show that the DGIF has better edge-preserving and smoothing characteristics than the four guided image filters. The mean values of information entropy, average gradient, edge intensity, figure definition, and root-mean-square contrast of the enhanced images improved by about 0.23%, 3.4%, 4.3%, 2.1%, and 0.17%, respectively, over the best-performing comparison method. These results show that the algorithm effectively suppresses noise in the detail layer while enhancing detail information, improving image contrast, and producing a better visual effect.
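The base/detail decomposition and gain-based fusion that this abstract describes can be sketched as follows. This is a simplified illustration, assuming a plain box-filter base layer and a fixed gain in place of the paper's guided filter and adaptive detail-regulation factor; `box_filter` and `enhance_detail` are hypothetical names.

```python
import numpy as np

def box_filter(img, r):
    """Mean filter with window radius r, via 2D cumulative sums
    (edge-padded so the output keeps the input shape)."""
    pad = np.pad(img, r, mode='edge')
    k = 2 * r + 1
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))                 # zero row/col for prefix sums
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def enhance_detail(img, r=2, gain=2.0):
    """Split into base + detail, boost the detail layer, and re-fuse.
    The paper replaces this fixed gain with an adaptive, edge-aware
    factor; only the decomposition/fusion skeleton is shown here."""
    base = box_filter(img, r)
    detail = img - base
    return np.clip(base + gain * detail, 0.0, 1.0)
```

A fixed `gain` amplifies noise in flat regions along with genuine edges, which is exactly the failure mode the paper's detail-regulation factor is designed to avoid.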
Research on Real‐Time Detail Enhancement Algorithm for Endoscopic Video Images and Hardware Implementation
To solve the problems of low contrast and weak detail information in endoscope images, an adaptive-histogram detail enhancement algorithm is presented. Although the adaptive histogram equalization (AHE) algorithm has been studied in some depth, detail enhancement algorithms are relatively complicated and difficult to implement in endoscope hardware. To realize real-time, adaptive enhancement of endoscope image details on a hardware system, the AHE algorithm is improved to reduce hardware resource consumption and time complexity. The improved algorithm selects a segmentation condition suited to real-time images and uses threshold interception and a pipelined structure to process low-contrast endoscopic images. Xilinx's Artix-7 chip is used to implement the hardware circuit, which processes images with a resolution of 640 × 480 in real time at up to 160 frames per second. The design uses 25K look-up tables (LUTs), 6K flip-flops, and 33 block RAMs. The experimental results show that the improved algorithm offers fast processing speed, good detail enhancement, and strong portability, meeting the requirements of real-time video processing in endoscopy.
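The "threshold interception" step, i.e. clipping histogram counts and redistributing the excess before equalization (as in contrast-limited AHE), might look like the following sketch. The function name and clip limit are illustrative assumptions, not the paper's hardware design.

```python
import numpy as np

def clipped_equalize(img, clip_limit=4.0, bins=256):
    """Histogram equalization with clipping: counts above
    clip_limit * mean are intercepted and redistributed uniformly,
    limiting noise amplification in flat regions. `img` is uint8."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    limit = clip_limit * hist.mean()
    excess = np.sum(np.maximum(hist - limit, 0))
    hist = np.minimum(hist, limit) + excess / bins   # redistribute excess
    cdf = np.cumsum(hist)
    lut = np.round((bins - 1) * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]
```

The mapping is a single 256-entry lookup table, which is why this family of algorithms pipelines so naturally on an FPGA: the LUT for one frame (or tile) can be built while the previous one is being applied.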
Fast Global Image Smoothing via Quasi Weighted Least Squares
Image smoothing is a long-studied research area with numerous approaches proposed. However, performing high-quality image smoothing at low computational cost remains a challenging problem. In this paper, we address it with a newly proposed global-optimization-based method named quasi weighted least squares. In our method, the 2D image is first re-ordered into a 1D vector via a newly proposed 2D-to-1D transformation, and some of the original 2D neighborhood connections are removed. The remaining neighboring pixels form simple 1D neighborhood connections in the transformed vector while still encoding the 2D neighborhood information of the original image space. Together, these steps yield a compact linear system that can be solved easily and efficiently, making our method a fast global image smoothing approach. Our method is on par with the fastest approaches in processing speed, yet achieves smoothing quality comparable to the state of the art. It can also serve as a solver to approximate the weighted least squares problem in complex systems, achieving similar results while running much faster. The efficiency and effectiveness of our method are validated through comprehensive experiments on several tasks. Our code is publicly available at: https://github.com/wliusjtu/Q-WLS.
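The 1D subproblem at the heart of such a scheme, an edge-aware weighted least squares smooth along a 1D ordering, can be sketched with a dense solve. This illustrates the general WLS formulation, not the paper's quasi-WLS solver; `wls_smooth_1d`, the gradient-based weight function, and the dense `np.linalg.solve` are assumptions (a real implementation would exploit the tridiagonal structure of the system).

```python
import numpy as np

def wls_smooth_1d(f, lam=5.0, eps=1e-4):
    """Minimize sum_i (u_i - f_i)^2 + lam * sum_i w_i (u_{i+1} - u_i)^2,
    i.e. solve (I + lam * L_w) u = f, where L_w is the weighted chain
    Laplacian. Weights are small across strong gradients, so edges are
    preserved while flat regions are smoothed heavily."""
    n = len(f)
    w = 1.0 / (np.abs(np.diff(f)) + eps)    # edge-aware smoothness weights
    A = np.eye(n)
    for i in range(n - 1):                  # assemble the tridiagonal Laplacian
        A[i, i] += lam * w[i]
        A[i + 1, i + 1] += lam * w[i]
        A[i, i + 1] -= lam * w[i]
        A[i + 1, i] -= lam * w[i]
    return np.linalg.solve(A, f)
```

Because the system is tridiagonal, it can be solved in O(n) with the Thomas algorithm; chaining such 1D solves over a 2D-to-1D ordering is what makes this family of global smoothers fast.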
AIDEDNet: anti-interference and detail enhancement dehazing network for real-world scenes
The haze phenomenon seriously interferes with image acquisition and reduces image quality. Due to many uncertain factors, dehazing remains a challenge in image processing. Most existing deep learning-based dehazing approaches apply the atmospheric scattering model (ASM) or a similar physical model, which originally comes from traditional dehazing methods. However, the data sets used for deep learning do not match this model well, for three reasons. First, the atmospheric illumination in the ASM is obtained from prior experience, which is not accurate for dehazing real scenes. Second, it is difficult to obtain the scene depth of outdoor scenes that the ASM requires. Third, haze is a complex natural phenomenon, and it is difficult to find an accurate physical model and related parameters to describe it. In this paper, we propose a black-box method in which haze is treated as an image quality problem, without using any physical model such as the ASM. Analytically, we propose a novel dehazing equation that combines two mechanisms: an interference term and a detail enhancement term. The interference term estimates the haze information for dehazing the image, and the detail enhancement term then repairs and enhances the details of the dehazed image. Based on the new equation, we design an anti-interference and detail enhancement dehazing network (AIDEDNet), which differs dramatically from existing dehazing networks in that it is fed haze-free images for training. Specifically, we propose a new way to construct haze patches on the fly during network training: each patch is randomly selected from the input images, and the thickness of its haze is also set randomly. Numerous experimental results show that AIDEDNet outperforms state-of-the-art methods on both synthetic and real-world haze scenes.
Liver segmentation network based on detail enhancement and multi-scale feature fusion
Due to the low contrast of abdominal CT (Computed Tomography) images and the similarity in color and shape between the liver and other organs such as the spleen, stomach, and kidneys, liver segmentation presents significant challenges. Additionally, 2D CT images obtained from different planes (sagittal, coronal, and transverse) increase the diversity of liver morphology and the complexity of segmentation. To address these issues, this paper proposes a Detail Enhanced Convolution (DE Conv) to improve liver feature learning and thereby enhance segmentation performance. Furthermore, to enable the model to better learn liver features at different scales, a Multi-Scale Feature Fusion module (MSFF) is added to the skip connections of the model. The MSFF module improves the capture of global features and thus the accuracy of the segmentation model. Building on these components, this paper proposes a liver segmentation network based on detail enhancement and multi-scale feature fusion (DEMF-Net). We conducted extensive experiments on the LiTS17 dataset, and the results demonstrate that DEMF-Net achieves significant improvements across various evaluation metrics and can deliver precise liver segmentation.
Tone Mapping Operator for High Dynamic Range Images Based on Modified iCAM06
This study addresses the difficulty that conventional standard display devices encounter in displaying high dynamic range (HDR) images by proposing a modified tone-mapping operator (TMO) based on the image color appearance model iCAM06. The proposed model, called iCAM06-m, combines iCAM06 with a multi-scale enhancement algorithm to correct the chroma of images by compensating for saturation and hue drift. A subjective evaluation experiment was then conducted to assess iCAM06-m against three other TMOs by rating the tone-mapped images, and the objective and subjective evaluation results were compared and analyzed. The results confirm the better performance of the proposed iCAM06-m: the chroma compensation effectively alleviates the saturation reduction and hue drift of iCAM06 in HDR image tone-mapping, and the multi-scale decomposition enhances image detail and sharpness. The proposed algorithm thus overcomes the shortcomings of the other algorithms and is a good candidate for a general-purpose TMO.
MDEM: A Multi-Scale Damage Enhancement MambaOut for Pavement Damage Classification
Pavement damage classification is crucial for road maintenance and driving safety. However, owing to varying scales, irregular shapes, small area ratios, and frequent overlap with background noise, traditional methods struggle to achieve accurate recognition. To address these challenges, a novel pavement damage classification model based on MambaOut, named Multi-scale Damage Enhancement MambaOut (MDEM), is designed. The model incorporates two key modules to improve classification performance. The Multi-scale Dynamic Feature Fusion Block (MDFF) adaptively integrates multi-scale information to enhance feature extraction, effectively distinguishing visually similar cracks at different scales. The Damage Detail Enhancement Block (DDE) emphasizes fine structural details while suppressing background interference, improving the representation of small-scale damage regions. Experiments were conducted on multiple datasets, including CQU-BPMDD, CQU-BPDD, and Crack500-PDD. On the CQU-BPMDD dataset, MDEM outperformed the baseline model by 2.01% in accuracy, 2.64% in precision, 2.7% in F1-score, and 4.2% in AUC. These extensive results demonstrate that MDEM significantly surpasses MambaOut and other comparable methods in pavement damage classification, effectively addressing varying scales, irregular shapes, small damage areas, and background noise, and enhancing inspection accuracy in real-world road maintenance.
3D animation design image detail enhancement based on intelligent fuzzy algorithm
When zooming in on low-resolution images, the Lanczos interpolation method is prone to producing ringing effects at edges and in high-contrast areas, and when processing high-texture 3D animations it cannot be optimized effectively for different areas, significantly affecting image quality and detail representation. This study used SRGAN (Super-Resolution Generative Adversarial Network) to enhance image resolution and detail, combined with fuzzy logic and an attention mechanism, to adaptively focus on different regions of the image, enhance key details, and suppress noise. The image was divided into superpixel regions using the SLIC (Simple Linear Iterative Clustering) algorithm, and local features such as texture, contrast, and edge intensity were extracted. In the SRGAN model, the generator improved image resolution through deep residual blocks and a Convolutional Neural Network (CNN), while the discriminator optimized the quality of the generated images through adversarial training. At the same time, a Fuzzy Logic System (FLS) was constructed to dynamically adjust the degree of image blur, and channel and spatial attention modules were integrated into the generator to enhance details in key areas. The results indicated that Fuzzy Algorithm-SRGAN (FA-SRGAN) achieved an average PSNR (Peak Signal-to-Noise Ratio) exceeding 32.8 dB across four test scenes; in architectural design scenes, the algorithm improved image contrast by 18% and increased energy and uniformity by 14% and 11%, respectively. The proposed approach can significantly enhance the details of different regions in high-texture 3D animation design images.
Two-Layer Attention Feature Pyramid Network for Small Object Detection
Effective small object detection is crucial in various applications, including urban intelligent transportation and pedestrian detection. However, small objects are difficult to detect accurately because they contain little information. Many current methods, particularly those based on the Feature Pyramid Network (FPN), address this challenge by leveraging multi-scale feature fusion. However, existing FPN-based methods often suffer from inadequate feature fusion due to varying resolutions across layers, leading to suboptimal small object detection. To address this problem, we propose the Two-layer Attention Feature Pyramid Network (TA-FPN), featuring two key modules: the Two-layer Attention Module (TAM) and the Small Object Detail Enhancement Module (SODEM). TAM uses an attention module to focus the network on an object's semantic information and fuses that information into the lower layers so that each layer carries similar semantics, alleviating the problem of small object information being submerged by the semantic gaps between layers. SODEM, in turn, strengthens the object's local features, suppresses background noise, and enhances the detail information of small objects, fusing the enhanced features into the other feature layers so that every layer is rich in small object information, thereby improving detection accuracy. Our extensive experiments on challenging datasets such as Microsoft Common Objects in Context (MS COCO) and Pattern Analysis, Statistical Modelling and Computational Learning Visual Object Classes (PASCAL VOC) demonstrate the validity of the proposed method. Experimental results show a significant improvement in small object detection accuracy compared with state-of-the-art detectors.