18 results for "multiscale super-resolution network"
Guided filter-based multi-scale super-resolution reconstruction
The learning-based super-resolution reconstruction method inputs a low-resolution image into a network and learns a non-linear mapping between low resolution and high resolution through the network. In this study, a multi-scale super-resolution reconstruction network is used to fuse the effective features of images at different scales, and the non-linear mapping between low resolution and high resolution is learned from coarse to fine to realise the end-to-end super-resolution reconstruction task. The loss of some features of the low-resolution image negatively affects the quality of the reconstructed image. To solve the problem of incomplete image features in low-resolution images, this study adopts a multi-scale super-resolution reconstruction method based on guided image filtering. The high-resolution image reconstructed by the multi-scale super-resolution network and the real high-resolution image are merged by the guided image filter to generate a new image, and the newly generated image is used for secondary training of the multi-scale super-resolution reconstruction network. The newly generated image effectively compensates for the details and texture information lost in the low-resolution image, thereby improving the quality of the super-resolution reconstructed image. Compared with existing super-resolution reconstruction schemes, both the accuracy and the speed of super-resolution reconstruction are improved.
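The guided-filter fusion step this abstract describes can be illustrated with a standard box-filter guided filter. The sketch below is a minimal reading assuming single-channel tensors; names such as guided_filter, radius and eps are illustrative, not taken from the paper.

```python
# Minimal sketch of a guided-filter fusion step, assuming (N, 1, H, W) tensors.
import torch
import torch.nn.functional as F

def box_filter(x, radius):
    # Mean over a (2r+1)x(2r+1) window; zero-padded borders are a simplification.
    k = 2 * radius + 1
    return F.avg_pool2d(x, kernel_size=k, stride=1, padding=radius)

def guided_filter(guide, src, radius=8, eps=1e-3):
    mean_i = box_filter(guide, radius)
    mean_p = box_filter(src, radius)
    cov_ip = box_filter(guide * src, radius) - mean_i * mean_p
    var_i = box_filter(guide * guide, radius) - mean_i * mean_i
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    # Smooth the linear coefficients, then apply them to the guide image.
    return box_filter(a, radius) * guide + box_filter(b, radius)

# Fuse the network's SR output with the real HR image to form a new target
# for the second training pass (the "newly generated image" above).
sr_out = torch.rand(1, 1, 128, 128)   # output of the multi-scale SR network
hr_real = torch.rand(1, 1, 128, 128)  # ground-truth high-resolution image
new_target = guided_filter(hr_real, sr_out)
```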
Wavelet Domain Generative Adversarial Network for Multi-scale Face Hallucination
Most modern face hallucination methods resort to convolutional neural networks (CNN) to infer high-resolution (HR) face images. However, when dealing with very low-resolution (LR) images, these CNN based methods tend to produce over-smoothed outputs. To address this challenge, this paper proposes a wavelet-domain generative adversarial method that can ultra-resolve a very low-resolution (such as 16×16 or even 8×8) face image to larger versions at multiple upscaling factors (2× to 16×) in a unified framework. Different from most existing studies that hallucinate faces in the image pixel domain, our method first learns to predict the wavelet information of HR face images from the corresponding LR inputs before image-level super-resolution. To capture both global topology information and local texture details of human faces, a flexible and extensible generative adversarial network is designed with three types of losses: (1) a wavelet reconstruction loss that pushes the predicted wavelets closer to the ground truth; (2) a wavelet adversarial loss that encourages realistic wavelets; (3) an identity-preserving loss that helps recover identity information. Extensive experiments demonstrate that the presented approach not only achieves more appealing results, both quantitatively and qualitatively, than state-of-the-art face hallucination methods, but also significantly improves identification accuracy for low-resolution face images captured in the wild.
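To illustrate the kind of wavelet targets such a generator is trained to predict, a minimal one-level Haar analysis/synthesis pair is sketched below; the Haar basis is an assumption for illustration, not necessarily the wavelet used in the paper.

```python
# One-level Haar decomposition/reconstruction for (N, C, H, W) images with even H, W.
import torch

def haar_dwt(x):
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2   # low-frequency approximation
    lh = (a + b - c - d) / 2   # horizontal detail
    hl = (a - b + c - d) / 2   # vertical detail
    hh = (a - b - c + d) / 2   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt(ll, lh, hl, hh):
    # Rebuild the image from the four sub-bands (exact inverse of haar_dwt).
    a = (ll + lh + hl + hh) / 2
    b = (ll + lh - hl - hh) / 2
    c = (ll - lh + hl - hh) / 2
    d = (ll - lh - hl + hh) / 2
    n, ch, h, w = ll.shape
    out = torch.zeros(n, ch, h * 2, w * 2, dtype=ll.dtype, device=ll.device)
    out[:, :, 0::2, 0::2] = a
    out[:, :, 0::2, 1::2] = b
    out[:, :, 1::2, 0::2] = c
    out[:, :, 1::2, 1::2] = d
    return out
```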
Hallucinating Unaligned Face Images by Multiscale Transformative Discriminative Networks
Conventional face hallucination methods heavily rely on accurate alignment of low-resolution (LR) faces before upsampling them. Misalignment often leads to deficient results and unnatural artifacts for large upscaling factors. However, due to the diverse range of poses and facial expressions, aligning an LR input image, in particular when it is tiny, is severely difficult. In addition, when the resolutions of LR input images vary, previous deep neural network based face hallucination methods require the interocular distances of input face images to be similar to those in the training datasets. Downsampling LR input faces to a required resolution loses high-frequency information of the original input images, which may lead to suboptimal super-resolution performance for state-of-the-art face hallucination networks. To overcome these challenges, we present an end-to-end multiscale transformative discriminative neural network devised for super-resolving unaligned and very small face images of different resolutions, ranging from 16 × 16 to 32 × 32 pixels, in a unified framework. Our proposed network embeds spatial transformation layers to allow local receptive fields to line up with similar spatial supports, thus obtaining a better mapping between LR and HR facial patterns. Furthermore, we incorporate a class-specific loss, designed to classify upright realistic faces, into our objective through a successive discriminative network to improve the alignment and upsampling performance with semantic information. Extensive experiments on a large face dataset show that the proposed method significantly outperforms the state-of-the-art.
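The spatial transformation layers mentioned here are commonly built from a small localisation network plus affine_grid/grid_sample warping; the sketch below shows that generic construction with assumed layer sizes, not the paper's architecture.

```python
# Generic spatial transformer layer: predict an affine warp, then resample features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.fc = nn.Linear(16, 6)  # predicts a 2x3 affine matrix
        # Start from the identity transform so early training does not warp features.
        self.fc.weight.data.zero_()
        self.fc.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.fc(self.features(x)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

feat = torch.rand(2, 32, 16, 16)
aligned = SpatialTransformer(32)(feat)  # same shape, spatially re-aligned
```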
AMSFANet: attention-based multiscale small face aware restoration method
Deep learning has achieved remarkable performance in various fields, including face recognition. However, recognizing small-sized face images remains a challenge in this domain. The limited number of pixels in small face images makes it difficult to extract facial features, leading to decreased accuracy of face recognition systems. Furthermore, small face images often suffer from low resolution and poor image quality, which further complicates the recognition process. To address this issue, this paper proposes a novel method for low-resolution face restoration by transforming it into a mapping problem from low-resolution small face images to high-resolution face images. We introduce an attention-based multiscale small face aware network (AMSFANet) for low-resolution face restoration. The proposed method is based on a super-resolution generative adversarial network (SRGAN) with improved loss constraints using the Wasserstein distance and a gradient penalty strategy to enhance the model’s robustness during training. We also propose an attention-based multiscale residual module to replace the traditional residual structure, which strengthens the generator’s focus on faces, reduces the impact of complex backgrounds on face restoration, and improves the facial clarity of the final image, making it effective for subsequent face recognition. Experimental results demonstrate that the proposed method effectively improves the quality of low-resolution face images and enhances subsequent face recognition accuracy.
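The Wasserstein-distance-with-gradient-penalty constraint referred to above is the standard WGAN-GP term; a generic sketch follows, where critic is assumed to be any discriminator that returns one score per image.

```python
# Standard WGAN-GP gradient penalty on samples interpolated between real and fake.
import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    n = real.size(0)
    alpha = torch.rand(n, 1, 1, 1, device=device)
    mixed = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = critic(mixed)
    grads = torch.autograd.grad(
        outputs=scores, inputs=mixed,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True)[0]
    grads = grads.view(n, -1)
    # Penalise deviation of the gradient norm from 1 (Lipschitz constraint).
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()
```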
Geometry-aware light field angular super-resolution using multiple representations
Light Field Angular Super-Resolution (LFASR) is a critical task that enables applications such as depth estimation, refocusing, and 3D scene reconstruction. Light fields acquired from plenoptic cameras have an inherent trade-off between angular and spatial resolution due to sensor limitations. To address this challenge, many learning-based LFASR methods have been proposed; however, reconstructing light fields with a wide baseline remains a significant challenge. In this study, we propose an end-to-end learning-based geometry-aware network using multiple representations. A multi-scale residual network with varying receptive fields is employed to effectively extract spatial and angular features, enabling angular resolution enhancement without compromising spatial fidelity. Extensive experiments demonstrate that the proposed method effectively recovers fine details with high angular resolution while preserving the intricate parallax structure of the light field. Quantitative and qualitative evaluations on both synthetic and real-world datasets further confirm that the proposed approach outperforms existing state-of-the-art methods. This research improves the angular resolution of the light field without reducing spatial sharpness, supporting applications such as depth estimation and 3D reconstruction, and preserves parallax details and structure better than current methods.
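The abstract mentions multiple light-field representations; one common pair is the sub-aperture-image (SAI) stack and the macro-pixel image. The reshape below illustrates switching between them for an assumed 4-D light field L(u, v, h, w); this is a generic illustration, not the paper's exact representation set.

```python
# Convert between a sub-aperture-image stack (U, V, H, W) and a macro-pixel image.
import torch

def sai_to_macropixel(lf):
    u, v, h, w = lf.shape
    # Each spatial position of the macro-pixel image holds a UxV block of angular samples.
    return lf.permute(2, 0, 3, 1).reshape(h * u, w * v)

def macropixel_to_sai(mp, u, v):
    h, w = mp.shape[0] // u, mp.shape[1] // v
    return mp.reshape(h, u, w, v).permute(1, 3, 0, 2)

lf = torch.rand(5, 5, 64, 64)              # 5x5 angular views, 64x64 pixels each
mp = sai_to_macropixel(lf)                 # (320, 320) macro-pixel image
assert torch.equal(macropixel_to_sai(mp, 5, 5), lf)
```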
Super resolution reconstruction of CT images based on multi-scale attention mechanism
CT diagnosis has been widely used in the clinic because of its special diagnostic value. The image resolution of a CT imaging system is constrained by the X-ray focus size, detector element spacing, the reconstruction algorithm and other factors, which leaves the generated CT images with problems such as low contrast, insufficient high-frequency information and poor perceptual quality. To solve these problems, a super-resolution reconstruction method for CT images based on a multi-scale attention mechanism is proposed. First, a 3 × 3 and a 1 × 1 convolution layer are used to extract shallow features. To better extract the high-frequency features of CT images and improve image contrast, a multi-scale attention module is designed that adaptively detects information at different scales, improves the expressive ability of the features, integrates the channel and spatial attention mechanisms, and attends to important information so that more valuable information is retained. Finally, sub-pixel convolution is used to increase the resolution and reconstruct a high-resolution CT image. The experimental results show that this method can effectively improve CT image contrast and suppress noise. The peak signal-to-noise ratio and structural similarity of the reconstructed CT images are better than those of the comparison methods, and the results have good subjective visual quality.
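A generic version of the ingredients listed here, channel plus spatial attention followed by sub-pixel (PixelShuffle) upsampling, is sketched below; the layer sizes and reduction ratio are assumptions, not the paper's exact design.

```python
# Channel + spatial attention followed by sub-pixel convolution upsampling.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)     # re-weight channels
        return x * self.spatial(x)  # re-weight spatial positions

class SubPixelUpsample(nn.Module):
    def __init__(self, channels, scale=2):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels * scale ** 2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.conv(x))

feat = torch.rand(1, 64, 64, 64)
up = SubPixelUpsample(64)(ChannelSpatialAttention(64)(feat))  # -> (1, 64, 128, 128)
```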
Improving Image Super-Resolution Based on Multiscale Generative Adversarial Networks
Convolutional neural networks have greatly improved the performance of image super-resolution. However, perceptual networks have problems such as blurred line structures and a lack of high-frequency information when reconstructing image textures. To mitigate these issues, a generative adversarial network based on multiscale asynchronous learning is proposed in this paper, whereby a pyramid structure is employed in the network model to integrate high-frequency information at different scales. Our scheme employs a U-net as a discriminator to focus on the consistency of adjacent pixels in the input image and uses the LPIPS loss for perceptual extreme super-resolution with stronger supervision. Experiments on benchmark datasets and independent datasets Set5, Set14, BSD100, and SunHays80 show that our approach is effective in restoring detailed texture information from low-resolution images.
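A hedged sketch of a generator loss combining pixel, LPIPS-perceptual and adversarial terms with a per-pixel (U-Net style) discriminator output is given below; the lpips package, the BCE adversarial form and the weights are assumptions about a typical setup, not the paper's configuration.

```python
# Combined generator loss: L1 + LPIPS perceptual + adversarial (per-pixel logits).
import torch
import torch.nn.functional as F
import lpips  # pip install lpips

perceptual = lpips.LPIPS(net="vgg")  # expects inputs scaled to [-1, 1]

def generator_loss(sr, hr, disc_fake_logits, w_pix=1.0, w_lpips=1.0, w_adv=0.005):
    pix = F.l1_loss(sr, hr)
    perc = perceptual(sr, hr).mean()
    # With a U-Net discriminator the logits are per-pixel, so the adversarial
    # term also supervises the consistency of adjacent pixels.
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    return w_pix * pix + w_lpips * perc + w_adv * adv
```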
Multiscale Attention Fusion for Depth Map Super-Resolution Generative Adversarial Networks
Color images have long been used as an important source of supplementary information to guide the super-resolution of depth maps. However, how to quantitatively measure the guiding effect of color images on depth maps has always been a neglected issue. To solve this problem, inspired by the recent excellent results achieved in color image super-resolution by generative adversarial networks, we propose a depth map super-resolution framework with generative adversarial networks using multiscale attention fusion. Fusing the color features and depth features at the same scale under a hierarchical fusion attention module effectively measures the guiding effect of the color image on the depth map, while fusing joint color–depth features at different scales balances the impact of different scale features on the super-resolution of the depth map. The generator's loss function, composed of content loss, adversarial loss, and edge loss, helps restore clearer edges in the depth map. Experimental results on different types of benchmark depth map datasets show that the proposed multiscale attention fusion based depth map super-resolution framework achieves significant subjective and objective improvements over the latest algorithms, verifying the validity and generalization ability of the model.
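The edge loss named in the generator objective can be illustrated with a standard Sobel-gradient L1 term; the kernel choice below is an assumption, not necessarily the paper's definition.

```python
# Sobel-gradient edge loss between predicted and ground-truth depth maps (N, 1, H, W).
import torch
import torch.nn.functional as F

_SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
_SOBEL_Y = _SOBEL_X.transpose(2, 3)

def edge_loss(pred_depth, gt_depth):
    gx_p = F.conv2d(pred_depth, _SOBEL_X, padding=1)
    gy_p = F.conv2d(pred_depth, _SOBEL_Y, padding=1)
    gx_g = F.conv2d(gt_depth, _SOBEL_X, padding=1)
    gy_g = F.conv2d(gt_depth, _SOBEL_Y, padding=1)
    # Match horizontal and vertical gradients so edges stay sharp.
    return F.l1_loss(gx_p, gx_g) + F.l1_loss(gy_p, gy_g)
```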
Fusing multi-scale information in convolution network for MR image super-resolution reconstruction
Background: Magnetic resonance (MR) images are usually limited by low spatial resolution, which leads to errors in post-processing procedures. Recently, learning-based super-resolution methods, such as sparse coding and the super-resolution convolutional neural network, have achieved promising reconstruction results on scene images. However, these methods remain insufficient for recovering detailed information from low-resolution MR images due to the limited size of the training dataset. Methods: To investigate the different edge responses obtained with different convolution kernel sizes, this study employs a multi-scale fusion convolution network (MFCN) to perform super-resolution for MR images. Unlike traditional convolution networks that simply stack several convolution layers, the proposed network is stacked from multi-scale fusion units (MFUs). Each MFU consists of a main path and several sub-paths, and finally fuses all paths within a fusion layer. Results: We discussed the setting of the experimental network parameters using simulated data to achieve a trade-off between reconstruction performance and computational efficiency. We also conducted super-resolution reconstruction experiments using real datasets of MR brain images and demonstrated that the proposed MFCN achieves a remarkable improvement in recovering detailed information from MR images and outperforms state-of-the-art methods. Conclusions: We have proposed a multi-scale fusion convolution network based on MFUs which extracts features at different scales to restore detail information. The structure of the MFU is helpful for extracting multi-scale information and making full use of prior knowledge from a few training samples to enhance the spatial resolution.
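One plausible reading of a multi-scale fusion unit (MFU), a main 3×3 path plus larger-kernel sub-paths concatenated and fused by a 1×1 convolution, is sketched below; the exact path layout is not given in the abstract, so treat this as an assumption.

```python
# Sketch of a multi-scale fusion unit: parallel kernel sizes fused by 1x1 convolution.
import torch
import torch.nn as nn

class MFU(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.main = nn.Conv2d(channels, channels, 3, padding=1)
        self.sub = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in (5, 7)])
        self.fusion = nn.Conv2d(3 * channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        paths = [self.act(self.main(x))] + [self.act(c(x)) for c in self.sub]
        return self.fusion(torch.cat(paths, dim=1))

x = torch.rand(1, 64, 48, 48)
print(MFU(64)(x).shape)  # torch.Size([1, 64, 48, 48])
```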
A multi-scale mixed convolutional network for infrared image super-resolution reconstruction
Infrared imaging is widely used in military, medical, monitoring and security fields. Due to the limitations of hardware devices, infrared images suffer from low signal-to-noise ratio, blurred edges and low contrast. In view of these problems, this paper proposes a super-resolution reconstruction method for infrared images based on a mixed-convolution multi-scale residual network. The multi-scale residual network improves the utilization of features, and mixed convolution is introduced into it to increase the receptive field without changing the size of the feature map and to eliminate blind spots. The extracted features are then combined by recursive fusion to further improve feature utilization. Tests on multiple infrared image datasets show that the proposed method can enhance infrared edge information, fully extract texture details from the infrared image, and suppress noise. The objective metrics of the reconstructed infrared images are generally better than those of the comparison methods, and the method still achieves good reconstruction in real scenes.
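The "mixed convolution" idea, enlarging the receptive field with a dilated branch while a dense branch fills the dilation blind spots, can be sketched as below; the concrete combination used in the paper is not specified, so this is only illustrative.

```python
# Mixed-convolution residual block: standard 3x3 branch + dilated 3x3 branch.
import torch
import torch.nn as nn

class MixedConvBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.dense = nn.Conv2d(channels, channels, 3, padding=1)
        self.dilated = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        d1 = self.act(self.dense(x))    # dense branch covers the dilation gaps
        d2 = self.act(self.dilated(x))  # dilated branch enlarges the receptive field
        return x + self.fuse(torch.cat([d1, d2], dim=1))

x = torch.rand(1, 64, 32, 32)
print(MixedConvBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```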