1,989 result(s) for "image super-resolution"
Memory-Augmented Deep Unfolding Network for Guided Image Super-resolution
Guided image super-resolution (GISR) aims to obtain a high-resolution (HR) target image by enhancing the spatial resolution of a low-resolution (LR) target image under the guidance of an HR image. However, previous model-based methods mainly treat the entire image as a whole and assume a prior distribution between the HR target image and the HR guidance image, ignoring the many non-local characteristics the two images share. To alleviate this issue, we first propose a maximum a posteriori (MAP) estimation model for GISR with two types of priors on the HR target image: a local implicit prior and a global implicit prior. The local implicit prior models the complex relationship between the HR target image and the HR guidance image from a local perspective, while the global implicit prior considers the non-local auto-regression property between the two images from a global perspective. Second, we design a novel alternating optimization algorithm to solve this model for GISR. The algorithm takes a concise form that is easily replicated with commonly used deep network structures. Third, to reduce information loss across iterative stages, a persistent memory mechanism is introduced that augments the information representation by exploiting long short-term memory (LSTM) units in the image and feature spaces. In this way, a deep network with a degree of interpretability and high representation ability is built. Extensive experimental results validate the superiority of our method on a variety of GISR tasks, including pan-sharpening, depth image super-resolution, and MR image super-resolution. Code will be released at https://github.com/manman1995/pansharpening.
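The abstract's core loop, alternating a data-fidelity update with an LSTM-carried memory across unfolding stages, can be caricatured in a few lines. This is a toy numpy sketch under our own assumptions (shared unit gate weights, a scalar `mem_weight` mixing term), not the authors' network:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_memory(x, h, c):
    """Toy LSTM-style persistent memory: all gates share unit weights."""
    gate = sigmoid(x + h)          # forget = input = output gate (toy)
    g = np.tanh(x + h)             # candidate state
    c = gate * c + gate * g        # cell state carried across stages
    h = gate * np.tanh(c)          # hidden state carried across stages
    return h, c

def unfold_sr(y, stages=3, step=0.5, mem_weight=0.1):
    """Toy deep-unfolding loop: each stage takes a gradient step on the
    data term ||x - y||^2, then mixes in the LSTM memory's hidden state."""
    x = np.zeros_like(y, dtype=float)
    h = np.zeros_like(x)
    c = np.zeros_like(x)
    for _ in range(stages):
        x = x - step * (x - y)        # data-fidelity update
        h, c = lstm_memory(x, h, c)   # persistent memory across stages
        x = x + mem_weight * h        # memory-augmented refinement
    return x
```

In the paper, each stage's update and the memory mixing are learned network modules; here they are fixed closed forms purely to show the control flow.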
A comprehensive review of deep learning-based single image super-resolution
Image super-resolution (SR) is one of the vital image processing methods for improving the resolution of an image in the field of computer vision. In the last two decades, significant progress has been made in super-resolution, especially through deep learning methods. This survey provides a detailed account of recent progress in single-image super-resolution from the perspective of deep learning, while also covering the earlier classical methods used for image super-resolution. The survey classifies image SR methods into four categories, i.e., classical methods, supervised learning-based methods, unsupervised learning-based methods, and domain-specific SR methods. We also introduce the problem of SR to provide intuition about image quality metrics, available reference datasets, and SR challenges. Deep learning-based approaches to SR are evaluated using a reference dataset. Some of the reviewed state-of-the-art image SR methods include the enhanced deep SR network (EDSR), cycle-in-cycle GAN (CinCGAN), multiscale residual network (MSRN), meta residual dense network (Meta-RDN), recurrent back-projection network (RBPN), second-order attention network (SAN), SR feedback network (SRFBN) and the wavelet-based residual attention network (WRAN). Finally, the survey concludes with future directions and trends in SR and open problems to be addressed by researchers.
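For context on the classical-methods category the survey starts from, the simplest baseline is plain interpolation, e.g. nearest-neighbor upscaling, against which learned methods such as EDSR are measured. A minimal sketch:

```python
import numpy as np

def nearest_upscale(img, scale):
    """Classical nearest-neighbor upscaling: each LR pixel is replicated
    into a scale-by-scale block of the HR grid."""
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

lr = np.array([[1, 2],
               [3, 4]])
hr = nearest_upscale(lr, 2)   # a 4x4 image of 2x2 constant blocks
```

Bicubic interpolation, the usual reference baseline in SR benchmarks, follows the same idea with a smoother kernel.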
Lightweight Single Image Super-Resolution with Selective Channel Processing Network
With the development of deep learning, considerable progress has been made in image restoration. Notably, many state-of-the-art single image super-resolution (SR) methods have been proposed. However, most of them contain many parameters, which leads to significant computation in the inference phase. To make current SR networks more lightweight and resource-friendly, we present a convolutional neural network with the proposed selective channel processing strategy (SCPN). Specifically, the selective channel processing module (SCPM) is first designed to dynamically learn the significance of each channel in the feature map using a channel selection matrix in the training phase. Correspondingly, in the inference phase, only the essential channels indicated by the channel selection matrices need to be further processed. By doing so, we can significantly reduce the parameters and the computational cost. Moreover, the differential channel attention (DCA) block is proposed, which takes into consideration the data distribution of the channels in feature maps to restore more high-frequency information. Extensive experiments are performed on natural image super-resolution benchmarks (i.e., Set5, Set14, B100, Urban100, Manga109) and remote-sensing benchmarks (i.e., UCTest and RESISCTest), and our method achieves superior results to other state-of-the-art methods. Furthermore, our method keeps a slim size with fewer than 1 M parameters, which demonstrates its efficiency. Owing to the proposed SCPM and DCA, our SCPN model achieves a better trade-off between computational cost and performance in both general and remote-sensing SR applications, and the proposed method can be extended to other computer vision tasks for further research.
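The inference-time idea described here, processing only the channels that a learned selection matrix marks as essential, can be sketched as follows. The mask, the stand-in per-channel op, and all names are illustrative, not the authors' SCPM:

```python
import numpy as np

def selective_channel_process(feat, mask, op=lambda ch: ch * 2.0):
    """Apply a (notionally expensive) per-channel op only where mask == 1;
    unselected channels pass through untouched."""
    out = feat.copy()
    for i in np.flatnonzero(mask):   # essential channels learned in training
        out[i] = op(feat[i])
    return out

feat = np.ones((4, 2, 2))            # feature map: (channels, H, W)
mask = np.array([1, 0, 1, 0])        # learned channel-selection vector
out = selective_channel_process(feat, mask)
```

The saving comes from skipping the op on masked-out channels entirely, rather than computing and discarding their outputs.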
Image super‐resolution via dynamic network
Convolutional neural networks depend on deep architectures to extract accurate information for image super‐resolution. However, the information these networks obtain cannot completely express predicted high‐quality images for complex scenes. A dynamic network for image super‐resolution (DSRNet) is presented, which contains a residual enhancement block, a wide enhancement block, a feature refinement block and a construction block. The residual enhancement block is composed of a residual enhanced architecture to facilitate hierarchical features for image super‐resolution. To enhance the robustness of the super‐resolution model for complex scenes, the wide enhancement block uses a dynamic architecture to learn more robust information, improving the model's applicability to varying scenes. To prevent interference between components in the wide enhancement block, the refinement block utilises a stacked architecture to accurately learn the obtained features. Also, a residual learning operation is embedded in the refinement block to prevent the long‐term dependency problem. Finally, the construction block is responsible for reconstructing high‐quality images. The designed heterogeneous architecture not only facilitates richer structural information but is also lightweight, making it suitable for mobile digital devices. Experimental results show that our method is more competitive in terms of performance, recovery time and complexity. The code of DSRNet can be obtained at https://github.com/hellloxiaotian/DSRNet.
PixelCraftSR: Efficient Super-Resolution with Multi-Agent Reinforcement for Edge Devices
Single-image super-resolution imaging methods are increasingly being employed owing to their immense applicability in numerous domains, such as medical imaging, display manufacturing, and digital zooming. Despite their widespread usability, the existing learning-based super-resolution (SR) methods are computationally expensive and inefficient for resource-constrained IoT devices. In this study, we propose a lightweight model based on a multi-agent reinforcement-learning approach that employs multiple agents at the pixel level to construct super-resolution images by following the asynchronous actor–critic policy. The agents iteratively select a predefined set of actions to be executed within five time steps based on the new image state, followed by the action that maximizes the cumulative reward. We thoroughly evaluate and compare our proposed method with existing super-resolution methods. Experimental results illustrate that the proposed method can outperform the existing models in both qualitative and quantitative scores despite having significantly less computational complexity. The practicability of the proposed method is confirmed further by evaluating it on numerous IoT platforms, including edge devices.
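The per-pixel agent scheme can be illustrated with a toy version in which each pixel agent greedily picks, at every time step, the action from a small discrete set that moves it closest to a target; the real method replaces this greedy rule with a learned asynchronous actor-critic policy. The action set and all names here are our assumptions:

```python
import numpy as np

ACTIONS = np.array([-1.0, 0.0, 1.0])     # hypothetical per-pixel action set

def greedy_pixel_step(state, target):
    """One time step: every pixel agent picks the action whose result is
    closest to the target (a stand-in for the learned policy's choice)."""
    # candidates[a, i, j] = state[i, j] + ACTIONS[a]
    candidates = state[None, :, :] + ACTIONS[:, None, None]
    cost = np.abs(candidates - target[None, :, :])
    best = np.argmin(cost, axis=0)        # per-pixel action index
    return state + np.take(ACTIONS, best)

def run_episode(state, target, steps=5):
    """Iterate the per-pixel agents for a fixed number of time steps,
    mirroring the five-step episodes described in the abstract."""
    for _ in range(steps):
        state = greedy_pixel_step(state, target)
    return state
```

In the actual method the agents cannot see the target; they act on image features and are trained to maximize a cumulative reward instead.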
SECANet: A structure‐enhanced attention network with dual‐domain contrastive learning for scene text image super‐resolution
In this letter, we develop a novel Structure-Enhanced Channel Attention Network (SECANet) for scene text image super‐resolution (STISR). The proposed SECANet integrates a group of Structure‐Enhanced Attention Modules to focus on both local and global structural features in the character regions of text images. Moreover, we formulate a Dual‐Domain Contrastive Learning framework that combines a pixel‐level contrastive loss and a semantic‐level contrastive loss to jointly optimize the SECANet, generating more visually pleasing yet better recognizable high‐quality SR images without introducing any additional prior generators in either the training or testing stage, showing promising computational efficiency. Experimental results on the TextZoom dataset indicate that our method achieves both decent performance in super‐resolving impressive scene text images from low‐resolution ones and better recognition accuracy than other competitors. The proposed network assembles a group of structure‐enhanced attention blocks to learn both global and local structural features for the detailed recovery of scene text images, and the joint dual‐domain contrastive loss is used to optimize the model parameters, benefiting the synthesis of more recognizable text images.
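The dual-domain loss described here combines two contrastive terms. A generic sketch follows, using a minimal InfoNCE-style term as a stand-in for each of the pixel-level and semantic-level losses; the temperature, weighting, and feature shapes are our assumptions, not the paper's:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.5):
    """Minimal InfoNCE-style contrastive loss on 1-D feature vectors:
    pull the positive toward the anchor, push the negatives away."""
    def sim(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    pos = np.exp(sim(anchor, positive) / tau)
    neg = sum(np.exp(sim(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))

def dual_domain_loss(pix_feats, sem_feats, w=0.5):
    """Weighted sum of a pixel-level and a semantic-level contrastive term;
    each *_feats is an (anchor, positive, negatives) triple."""
    return info_nce(*pix_feats) + w * info_nce(*sem_feats)
```

A matched anchor/positive pair yields a smaller loss than a mismatched one, which is the gradient signal both domains contribute during training.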
Direct localization and delineation of human pedunculopontine nucleus based on a self‐supervised magnetic resonance image super‐resolution method
The pedunculopontine nucleus (PPN) is a small brainstem structure that has attracted attention as a potentially effective deep brain stimulation (DBS) target for the treatment of Parkinson's disease (PD). However, the in vivo location of the PPN remains poorly described, and the nucleus is barely visible on conventional structural magnetic resonance (MR) images due to a lack of high spatial resolution and tissue contrast. This study aims to delineate the PPN on a high‐resolution (HR) atlas and investigate the visibility of the PPN in individual quantitative susceptibility mapping (QSM) images. We combine a recently constructed Montreal Neurological Institute (MNI) space unbiased QSM atlas (MuSus‐100) with an implicit representation‐based self‐supervised image super‐resolution (SR) technique to achieve an atlas with improved spatial resolution. Then, guided by a myelin staining histology human brain atlas, we localize and delineate the PPN on the atlas with improved resolution. Furthermore, we examine the feasibility of directly identifying the approximate PPN location on 3.0‐T individual QSM MR images. The proposed SR network produces atlas images with four times higher spatial resolution (from 1 to 0.25 mm isotropic) without a training dataset. The SR process also reduces artifacts and keeps superb image contrast for further delineating small deep brain nuclei, such as the PPN. Using the myelin staining histological atlas as guidance, we first identify and annotate the location of the PPN on the T1‐weighted (T1w)‐QSM hybrid MR atlas with improved resolution in the MNI space. Then, we relocate and validate that the optimal targeting site for PPN‐DBS is at the middle‐to‐caudal part of the PPN on our atlas. Furthermore, we confirm that the PPN region can be identified in a set of individual QSM images of 10 patients with PD and 10 healthy young adults.
The contrast ratios of the PPN to its adjacent structure, the medial lemniscus, on images of different modalities indicate that QSM substantially improves the visibility of the PPN in both the atlas and individual images. Our findings indicate that the proposed SR network is an efficient tool for identifying small brain nuclei. HR QSM is promising for improving the visibility of the PPN, and the PPN can be directly identified on individual QSM images acquired at 3.0‐T MR scanners, facilitating direct targeting of the PPN for DBS surgery. In summary, the pedunculopontine nucleus is a potentially effective deep‐brain‐stimulation target for the treatment of Parkinson's disease; however, its in vivo location remains poorly described and barely visible on conventional structural MR images. We provide a high‐resolution (HR) T1w‐QSM hybrid MR atlas on which we localize and delineate the PPN, and we validate that the PPN region can be identified in a set of individual QSM images.
Single-Image Super-Resolution Challenges: A Brief Review
Single-image super-resolution (SISR) is an important task in image processing, aiming to achieve enhanced image resolution. With the development of deep learning, SISR based on convolutional neural networks has also gained great progress, but as the network deepens and the task of SISR becomes more complex, SISR networks become difficult to train, which hinders SISR from achieving greater success. Therefore, to further promote SISR, many challenges have emerged in recent years. In this review, we briefly review the SISR challenges organized from 2017 to 2022 and focus on the in-depth classification of these challenges, the datasets employed, the evaluation methods used, and the powerful network architectures proposed or accepted by the winners. First, depending on the tasks of the challenges, the SISR challenges can be broadly classified into four categories: classic SISR, efficient SISR, perceptual extreme SISR, and real-world SISR. Second, we introduce the datasets commonly used in the challenges in recent years and describe their characteristics. Third, we present the image evaluation methods commonly used in SISR challenges in recent years. Fourth, we introduce the network architectures used by the winners, mainly to explore in depth where the advantages of their network architectures lie and to compare the results of previous years’ winners. Finally, we summarize the methods that have been widely used in SISR in recent years and suggest several possible promising directions for future SISR.
A Review of Single Image Super Resolution Techniques using Convolutional Neural Networks
Single Image Super-Resolution (SISR) is a complex restoration method to recover a high-resolution (HR) image from its degraded low-resolution (LR) form. SISR is used in many applications, such as microscopic image analysis, medical imaging, security and surveillance, astronomical observation, hyperspectral imaging, and text image super-resolution. Convolutional Neural Networks (CNNs) are the most widely used technique to solve Super-Resolution (SR) problems. This paper presents a review of CNN-based SISR methods. The SISR CNN models are analyzed based on their design and their performance on benchmark datasets: Set5, Set14, BSD100, and Urban100. Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) are used for quantitative analysis. The ESRGAN model shows the best results on all benchmark datasets and reconstructs images with good visual quality at large upscaling factors; it achieves a PSNR of 27.03 dB and an SSIM of 0.8153 on the Urban100 dataset for the ×4 upscaling factor. The models are further analyzed on the basis of loss function, scalability, processing time, and number of parameters. The framework and implementation setup of the SISR CNN models are also discussed. A perceptual loss function can help to boost network performance by increasing the visual quality of the reconstructed images, and hence it has emerged as a new research trend in recent years. There is also tremendous growth in the field of blind or unsupervised SISR, and research has shifted to developing reference-free performance evaluation metrics for unsupervised SISR.
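The PSNR figures quoted throughout these comparisons follow a standard definition; a minimal implementation for 8-bit images:

```python
import numpy as np

def psnr(ref, recon, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference HR image
    and a reconstruction: 10 * log10(max_val^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - recon.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')   # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

SSIM is more involved (local means, variances, and covariances over sliding windows), which is why benchmark papers typically report both: PSNR tracks pixel fidelity, SSIM tracks perceived structure.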
URNet: A U-Shaped Residual Network for Lightweight Image Super-Resolution
Designing more lightweight algorithms for image super-resolution (SR) is extremely important for portable devices and devices with low computing power. Recently, most SR methods have achieved outstanding performance at the expense of computational cost and memory storage, or vice versa. To address this problem, we introduce a lightweight U-shaped residual network (URNet) for fast and accurate image SR. Specifically, we propose a more effective feature distillation pyramid residual group (FDPRG) to extract features from low-resolution images. The FDPRG can effectively reuse the learned features with dense shortcuts and capture multi-scale information with a cascaded feature pyramid block. Based on the U-shaped structure, we utilize a step-by-step fusion strategy to improve the fusion of features from different blocks. This strategy differs from general SR methods, which use only a single Concat operation to fuse the features of all basic blocks. Moreover, a lightweight asymmetric residual non-local block is proposed to model global context information and further improve SR performance. Finally, a high-frequency loss function is designed to alleviate the smoothing of image details caused by pixel-wise loss. The proposed modules and high-frequency loss function can also be easily plugged into multiple mature architectures to improve SR performance. Extensive experiments on multiple natural image datasets and remote sensing image datasets show that URNet achieves a better trade-off between image SR performance and model complexity than other state-of-the-art SR methods.
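The high-frequency loss idea, penalising mismatch only in the detail band that plain pixel-wise losses tend to smooth away, can be sketched with a crude low-pass filter; the 3x3 box blur here stands in for whatever filter the paper actually uses:

```python
import numpy as np

def box_blur(img):
    """3x3 box blur with edge padding: a crude low-pass filter."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

def high_freq_loss(sr, hr):
    """L1 loss on the high-frequency residual (image minus its blurred
    version), so only detail mismatch is penalised."""
    hf_sr = sr - box_blur(sr)
    hf_hr = hr - box_blur(hr)
    return float(np.mean(np.abs(hf_sr - hf_hr)))
```

A useful sanity check: a constant brightness offset between `sr` and `hr` contributes (almost) nothing, since it lives entirely in the low-frequency band, whereas a missing edge or texture shows up fully.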