33,690 result(s) for "Image enhancement"
Beyond Brightening Low-light Images
Images captured under low-light conditions often suffer from (partially) poor visibility. Besides unsatisfactory lighting, multiple types of degradation, such as noise and color distortion due to the limited quality of cameras, hide in the dark. In other words, solely turning up the brightness of dark regions will inevitably amplify the pollution. Thus, low-light image enhancement should not only brighten dark regions, but also remove hidden artifacts. To achieve this goal, this work builds a simple yet effective network which, inspired by Retinex theory, decomposes images into two components. Following a divide-and-conquer principle, one component (illumination) is responsible for light adjustment, while the other (reflectance) handles degradation removal. In this way, the original space is decoupled into two smaller subspaces, with the expectation of better regularization/learning. It is worth noting that our network is trained with paired images shot under different exposure conditions, instead of using any ground-truth reflectance and illumination information. Extensive experiments are conducted to demonstrate the efficacy of our design and its superiority over state-of-the-art alternatives, especially in terms of robustness against severe visual defects and flexibility in adjusting light levels. Our code is made publicly available at https://github.com/zhangyhuaee/KinD_plus.
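The Retinex decomposition behind this line of work can be sketched per pixel. The max-channel illumination estimate below is a common hand-crafted heuristic standing in for the learned decomposition, and `gamma` is an illustrative adjustment parameter, not a value from the paper:

```python
def retinex_decompose(pixel, eps=1e-6):
    """Split an RGB pixel (channels in [0, 1]) into illumination and
    reflectance. Illumination is approximated by the max channel, a
    classic Retinex heuristic; reflectance is the pixel divided by it."""
    illumination = max(pixel)
    reflectance = tuple(c / (illumination + eps) for c in pixel)
    return illumination, reflectance

def brighten(pixel, gamma=0.4):
    """Adjust only the illumination component, then recompose, so the
    reflectance (scene content) is left untouched."""
    illumination, reflectance = retinex_decompose(pixel)
    adjusted = illumination ** gamma  # gamma < 1 lifts dark regions
    return tuple(r * adjusted for r in reflectance)
```

Methods such as KinD++ learn both components from paired exposures and additionally restore the reflectance, rather than leaving it untouched as this sketch does.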
Low-light Image Enhancement via Breaking Down the Darkness
Images captured in low-light environments often suffer from complex degradation. Simply adjusting the light would inevitably result in a burst of hidden noise and color distortion. To obtain results with satisfactory lighting, cleanliness, and realism from degraded inputs, this paper presents a novel framework inspired by the divide-and-rule principle, greatly alleviating the degradation entanglement. Assuming that an image can be decomposed into texture (with possible noise) and color components, one can specifically execute noise removal and color correction along with light adjustment. For this purpose, we propose converting an image from the RGB colorspace into a luminance-chrominance one. An adjustable noise suppression network is designed to eliminate noise in the brightened luminance, with the estimated illumination map indicating noise amplification levels. The enhanced luminance further serves as guidance for the chrominance mapper to generate realistic colors. Extensive experiments are conducted to reveal the effectiveness of our design and demonstrate its superiority over state-of-the-art alternatives, both quantitatively and qualitatively, on several benchmark datasets. Our code has been made publicly available at https://github.com/mingcv/Bread.
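The RGB-to-luminance-chrominance step can be illustrated with the standard BT.601 transform; whether this paper uses exactly these coefficients is an assumption, but any such transform isolates brightness (Y) from color (Cb, Cr):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert an RGB pixel (channels in [0, 1]) into luminance Y and
    chrominance Cb/Cr using BT.601 coefficients. Light adjustment and
    denoising can then act on Y while color correction acts on Cb/Cr."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

Note that a neutral gray pixel maps to zero chrominance, which is what lets a color-correction branch focus purely on the Cb/Cr channels.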
Underwater image enhancement: a comprehensive review, recent trends, challenges and applications
The mysteries of deep-sea ecosystems can be unlocked to reveal new sources for developing medical drugs, food and energy resources, and renewable-energy products. Research in the area of underwater image processing has increased significantly in the last decade, primarily due to humanity's dependence on the valuable resources that exist underwater. Effective exploration of the underwater environment requires excellent methods for underwater image enhancement. The work presented in this article surveys underwater image enhancement algorithms. It presents an overview of various underwater image enhancement techniques and their broad classifications, and the methods under each classification are briefly discussed. Underwater datasets required for performing experiments are summarized from the available literature. Attention is also drawn to the evaluation metrics required for the quantitative assessment of underwater images and to recent areas of application in the domain.
Attention Guided Low-Light Image Enhancement with a Large Scale Low-Light Simulation Dataset
Low-light image enhancement is challenging in that it needs to consider not only brightness recovery but also complex issues like color distortion and noise, which usually hide in the dark. Simply adjusting the brightness of a low-light image will inevitably amplify those artifacts. To address this difficult problem, this paper proposes a novel end-to-end attention-guided method based on a multi-branch convolutional neural network. To this end, we first construct a synthetic dataset with carefully designed low-light simulation strategies; the dataset is much larger and more diverse than existing ones. With the new dataset for training, our method learns two attention maps to guide the brightness enhancement and denoising tasks respectively. The first attention map distinguishes underexposed regions from well-lit regions, and the second distinguishes noise from real textures. With their guidance, the proposed multi-branch decomposition-and-fusion enhancement network works in an input-adaptive way. Moreover, a reinforcement net further enhances the color and contrast of the output image. Extensive experiments on multiple datasets demonstrate that our method produces high-fidelity enhancement results for low-light images and outperforms the current state-of-the-art methods by a large margin, both quantitatively and visually.
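The role of an attention map can be shown with a toy per-pixel fusion; the hand-crafted brightness-based weight below is only a stand-in for the paper's learned attention maps:

```python
def brightness_attention(luminance):
    """Toy stand-in for a learned attention map: dark pixels (luminance
    near 0) get weights near 1, i.e. 'enhance me'; bright pixels get ~0."""
    clamped = min(max(luminance, 0.0), 1.0)
    return 1.0 - clamped

def attention_blend(original, enhanced, attention):
    """Per-pixel fusion: attention selects the enhanced value in
    underexposed regions and preserves the input in well-lit ones."""
    return attention * enhanced + (1.0 - attention) * original
```

The same blending pattern applies to the denoising branch, with the second attention map gating how aggressively noise suppression is applied.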
The UK Biobank imaging enhancement of 100,000 participants: rationale, data collection, management and future directions
UK Biobank is a population-based cohort of half a million participants aged 40–69 years recruited between 2006 and 2010. In 2014, UK Biobank started the world's largest multi-modal imaging study, with the aim of re-inviting 100,000 participants to undergo brain, cardiac and abdominal magnetic resonance imaging, dual-energy X-ray absorptiometry and carotid ultrasound. The combination of large-scale multi-modal imaging with extensive phenotypic and genetic data offers an unprecedented resource for scientists to conduct health-related research. This article provides an in-depth overview of the imaging enhancement, including the data collected, how it is managed and processed, and future directions. Between 2014 and 2023, 100,000 UK Biobank participants are undergoing brain, heart and abdominal MRI, as well as DXA and carotid ultrasound scans. In this review, the authors provide a detailed overview of the rationale for the collection of these imaging data, the procedures of data collection and management, and the future directions of the UK Biobank imaging enhancement.
Underwater vision enhancement technologies: a comprehensive review, challenges, and recent trends
Cameras are integrated with various underwater vision systems for underwater object detection and marine biological monitoring. However, underwater images captured by cameras rarely achieve the desired visual quality, which may limit their further applications. Various underwater vision enhancement technologies have been proposed to improve the visual quality of underwater images over the past few decades, and they are the focus of this paper. Specifically, we review the theory of underwater image degradation and the underwater image formation models. This review also summarizes various underwater vision enhancement technologies and reports the existing underwater image datasets. Further, we conduct extensive and systematic experiments to explore the limitations and relative strengths of various underwater vision enhancement methods. Finally, the recent trends and challenges of underwater vision enhancement are discussed. We hope this paper can serve as a reference for future studies and promote the development of this research field.
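A simplified version of the underwater image formation model that such reviews cover treats each channel as a blend of scene radiance and backscattered light; `transmission` and `backscatter` here are assumed scalar parameters for illustration (in practice they vary per pixel and per channel):

```python
def underwater_observe(scene, transmission, backscatter):
    """Simplified formation model: the camera sees the scene attenuated
    by the water (transmission in [0, 1]) plus backscattered light."""
    return scene * transmission + backscatter * (1.0 - transmission)

def underwater_restore(observed, transmission, backscatter, eps=1e-6):
    """Invert the model to recover the scene, given estimates of
    transmission and backscatter (estimating them is the hard part)."""
    return (observed - backscatter * (1.0 - transmission)) / (transmission + eps)
```

Model-based enhancement methods differ mainly in how they estimate the transmission and backscatter before performing this inversion.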
Learning to Adapt to Light
Light adaptation or brightness correction is a key step in improving the contrast and visual appeal of an image. There are multiple light-related tasks (for example, low-light enhancement and exposure correction), and previous studies have mainly investigated these tasks individually. It is interesting to consider whether the common light adaptation sub-problem in these light-related tasks can be executed by a unified model, especially considering that our visual system adapts to external light in just this way. In this study, we propose a biologically inspired method to handle light-related image enhancement tasks with a unified network (called LA-Net). First, we propose a new goal-oriented task decomposition perspective to solve general image enhancement problems, specifically decoupling light adaptation from multiple light-related tasks with frequency-based decomposition. Then, a unified module inspired by biological visual adaptation is built to achieve light adaptation in the low-frequency pathway. Combined with proper noise suppression and detail enhancement along the high-frequency pathway, the proposed network performs unified light adaptation across various scenes. Extensive experiments on three tasks—low-light enhancement, exposure correction, and tone mapping—demonstrate that the proposed method achieves competitive performance on all three tasks simultaneously compared with recent methods designed for the individual tasks. Our code is made publicly available at https://github.com/kaifuyang/LA-Net.
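The frequency-based decomposition can be sketched on a 1-D signal, with a moving average as an assumed stand-in for the low-frequency pathway:

```python
def split_frequencies(signal, radius=2):
    """Split a 1-D signal into a low-frequency base (moving average over
    a window of up to 2*radius+1 samples) and a high-frequency residual."""
    n = len(signal)
    low = []
    for i in range(n):
        window = signal[max(0, i - radius):min(n, i + radius + 1)]
        low.append(sum(window) / len(window))
    high = [s - l for s, l in zip(signal, low)]
    return low, high
```

Light adaptation then operates on `low` (e.g. a gain or tone curve), while noise suppression and detail enhancement operate on `high`; adding the two components back recomposes the signal exactly.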
Fiji: an open-source platform for biological-image analysis
Presented is an overview of the image-analysis software platform Fiji, a distribution of ImageJ that updates the underlying ImageJ architecture and adds modern software design elements to expand the capabilities of the platform and facilitate collaboration between biologists and computer scientists. Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.
Uncertainty modelling in deep learning for safer neuroimage enhancement: Demonstration in diffusion MRI
•Proposes methods for modelling the different types of uncertainty that arise in deep learning (DL) applications for image enhancement problems.
•Demonstrates in dMRI super-resolution tasks that modelling uncertainty enhances the safety of DL-based enhancement systems by bringing two categories of practical benefits: (1) "performance improvement", e.g., generalisation to out-of-distribution data and robustness to noise and outliers (Section 4.3); (2) "reliability assessment of prediction", e.g., certification of performance based on uncertainty thresholding (Section 4.4.1), and detection of unfamiliar structures and understanding of the sources of uncertainty (Section 4.4.2).
•Provides a comprehensive set of experiments on a diverse set of datasets, which vary in demographics, scanner types, acquisition protocols and pathology.
•The methods are in theory applicable to many other imaging modalities and data enhancement applications.
•Code will be available on GitHub.

Deep learning (DL) has shown great potential in medical image enhancement problems, such as super-resolution or image synthesis. However, to date, most existing approaches are based on deterministic models, neglecting the presence of different sources of uncertainty in such problems. Here we introduce methods to characterise different components of uncertainty, and demonstrate the ideas using diffusion MRI super-resolution. Specifically, we propose to account for intrinsic uncertainty through a heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference, and integrate the two to quantify predictive uncertainty over the output image. Moreover, we introduce a method to propagate the predictive uncertainty on a multi-channelled image to derived scalar parameters, and separately quantify the effects of intrinsic and parameter uncertainty therein.
The methods are evaluated for super-resolution of two different signal representations of diffusion MR images—Diffusion Tensor images and Mean Apparent Propagator MRI—and their derived quantities, such as mean diffusivity and fractional anisotropy, on multiple datasets of both healthy and pathological human brains. Results highlight three key potential benefits of modelling uncertainty for improving the safety of DL-based image enhancement systems. Firstly, modelling uncertainty improves predictive performance even when test data depart from training data ("out-of-distribution" datasets). Secondly, the predictive uncertainty correlates highly with reconstruction errors and is therefore capable of detecting predictive "failures". Results on both healthy subjects and patients with brain glioma or multiple sclerosis demonstrate that such an uncertainty measure enables subject-specific and voxel-wise risk assessment of the super-resolved images, which can be accounted for in subsequent analysis. Thirdly, we show that the method for decomposing predictive uncertainty into its independent sources provides high-level "explanations" for the model's performance by separately quantifying how much uncertainty arises from the inherent difficulty of the task and how much from the limited training examples. The introduced concepts of uncertainty modelling extend naturally to many other imaging modalities and data enhancement applications.
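The split into intrinsic and parameter uncertainty follows the law of total variance; below is a minimal scalar sketch over Monte-Carlo samples (the paper's estimator operates on images, and the sample format here is an assumption):

```python
def decompose_uncertainty(samples):
    """Given Monte-Carlo samples [(mean_i, variance_i)] from an
    approximate Bayesian model, split predictive variance into an
    intrinsic part (the average heteroscedastic variance) and a
    parameter part (the variance of the predicted means)."""
    n = len(samples)
    means = [m for m, _ in samples]
    grand_mean = sum(means) / n
    intrinsic = sum(v for _, v in samples) / n
    parameter = sum((m - grand_mean) ** 2 for m in means) / n
    return intrinsic, parameter, intrinsic + parameter
```

A high intrinsic term points at inherent task difficulty (irreducible noise), while a high parameter term points at limited training data, which is what enables the "explanations" described above.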
CRetinex: A Progressive Color-Shift Aware Retinex Model for Low-Light Image Enhancement
Low-light environments introduce various complex degradations into captured images. Retinex-based methods have demonstrated effective enhancement performance by decomposing an image into illumination and reflectance, allowing for selective adjustment and removal of degradations. However, different types of pollution in the reflectance are often treated together; the absence of an explicit distinction and definition of the various pollution types leaves residual pollution in the results. Typically, the color shift, which is generally spatially invariant, differs from other, spatially variant, pollution and proves challenging to eliminate with denoising methods. The remaining color shift compromises color constancy both in theory and in practice. In this paper, we consider the different manifestations of degradations and decompose them further. We propose a color-shift aware Retinex model, termed CRetinex, which decomposes an image into reflectance, color shift, and illumination. Specific networks are designed to remove spatially variant pollution, correct color shift, and adjust illumination separately. Comparative experiments with the state-of-the-art demonstrate the qualitative and quantitative superiority of our approach. Furthermore, extensive experiments on multiple datasets, including real and synthetic images, along with extended validation, confirm the effectiveness of color-shift aware decomposition and the generalization of CRetinex over a wide range of low-light levels.
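A spatially invariant color shift can be illustrated with a gray-world style correction; this hand-crafted heuristic is only a sketch of what CRetinex's dedicated network learns:

```python
def remove_color_shift(pixels):
    """Estimate a global per-channel shift as each channel's deviation
    from the overall mean intensity (gray-world assumption) and subtract
    it from every pixel, leaving spatially variant content untouched."""
    n = len(pixels)
    channel_means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(channel_means) / 3
    shift = [m - gray for m in channel_means]
    return [tuple(p[c] - shift[c] for c in range(3)) for p in pixels]
```

Because the estimated shift is a single vector applied everywhere, it captures exactly the spatially invariant component that per-pixel denoising cannot remove.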