11 result(s) for "multiscale super-resolution reconstruction network"
Guided filter-based multi-scale super-resolution reconstruction
The learning-based super-resolution reconstruction method feeds a low-resolution image into a network, which learns a non-linear mapping between low-resolution and high-resolution images. In this study, a multi-scale super-resolution reconstruction network fuses the effective features of images at different scales and learns the non-linear mapping from low resolution to high resolution from coarse to fine, realising an end-to-end super-resolution reconstruction task. The loss of some features of the low-resolution image negatively affects the quality of the reconstructed image. To solve the problem of incomplete image features at low resolution, this study adopts a multi-scale super-resolution reconstruction method based on guided image filtering. The high-resolution image reconstructed by the multi-scale super-resolution network and the real high-resolution image are merged by the guided image filter to generate a new image, which is then used for secondary training of the multi-scale super-resolution reconstruction network. The newly generated image effectively compensates for the detail and texture information lost in the low-resolution image, thereby improving the quality of the super-resolution reconstructed image. Compared with existing super-resolution reconstruction schemes, both the accuracy and the speed of reconstruction are improved.
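The merging step described above relies on the guided image filter. The following is a minimal numpy sketch of that filter for single-channel float images, assuming the standard local-linear-model formulation; the function names, window radius, and regularisation value are illustrative, not taken from the paper.

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1) x (2r+1) window, edge-padded, via an integral image."""
    h, w = x.shape
    k = 2 * r + 1
    pad = np.pad(x, r, mode='edge')
    c = pad.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))            # zero row/col so window sums index cleanly
    s = c[k:k+h, k:k+w] - c[:h, k:k+w] - c[k:k+h, :w] + c[:h, :w]
    return s / (k * k)

def guided_filter(I, p, r=2, eps=1e-3):
    """Edge-preserving filtering of p using guidance image I (local linear model)."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mI * mp
    var_I = box_mean(I * I, r) - mI * mI
    a = cov_Ip / (var_I + eps)                 # per-window linear coefficient
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)
```

In the scheme above, the network's reconstruction and the real high-resolution image would play the roles of filter input and guidance when generating the image used for secondary training.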
Wavelet Domain Generative Adversarial Network for Multi-scale Face Hallucination
Most modern face hallucination methods resort to convolutional neural networks (CNNs) to infer high-resolution (HR) face images. However, when dealing with very low-resolution (LR) images, these CNN-based methods tend to produce over-smoothed outputs. To address this challenge, this paper proposes a wavelet-domain generative adversarial method that can ultra-resolve a very low-resolution (such as 16×16 or even 8×8) face image to larger versions at multiple upscaling factors (2× to 16×) in a unified framework. Unlike most existing studies that hallucinate faces in the image pixel domain, our method first learns to predict the wavelet information of HR face images from the corresponding LR inputs before performing image-level super-resolution. To capture both the global topology and the local texture details of human faces, a flexible and extensible generative adversarial network is designed with three types of losses: (1) a wavelet reconstruction loss that pushes the predicted wavelets closer to the ground truth; (2) a wavelet adversarial loss that encourages realistic wavelets; and (3) an identity-preserving loss that aids identity recovery. Extensive experiments demonstrate that the presented approach not only achieves more appealing results, both quantitatively and qualitatively, than state-of-the-art face hallucination methods, but also significantly improves identification accuracy for low-resolution face images captured in the wild.
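The method predicts HR wavelet coefficients rather than pixels. As a rough illustration of the wavelet side of such a pipeline, here is a one-level 2-D Haar analysis/synthesis pair in numpy; the normalisation convention is an assumption, and the paper's actual wavelet choice and network are not reproduced.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform: returns LL, LH, HL, HH subbands (half size)."""
    a = (x[0::2, :] + x[1::2, :]) / 2          # row-pair average
    d = (x[0::2, :] - x[1::2, :]) / 2          # row-pair detail
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2: reassembles the image from its four subbands."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * h, 2 * w))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x
```

A generator in this setting would output the detail subbands (LH, HL, HH) that the synthesis step combines with the low-frequency content.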
Geometry-aware light field angular super-resolution using multiple representations
Light Field Angular Super-Resolution (LFASR) is a critical task that enables applications such as depth estimation, refocusing, and 3D scene reconstruction. Acquiring LFASR from plenoptic cameras involves an inherent trade-off between angular and spatial resolution due to sensor limitations. To address this challenge, many learning-based LFASR methods have been proposed; however, the reconstruction of light fields with a wide baseline remains a significant challenge. In this study, we propose an end-to-end learning-based geometry-aware network using multiple representations. A multi-scale residual network with varying receptive fields is employed to effectively extract spatial and angular features, enabling angular resolution enhancement without compromising spatial fidelity. Extensive experiments demonstrate that the proposed method effectively recovers fine details at high angular resolution while preserving the intricate parallax structure of the light field. Quantitative and qualitative evaluations on both synthetic and real-world datasets further confirm that the proposed approach outperforms existing state-of-the-art methods. By improving the angular resolution of the light field without reducing spatial sharpness and by preserving parallax details and structure, the method supports applications such as depth estimation and 3D reconstruction.
Super resolution reconstruction of CT images based on multi-scale attention mechanism
CT diagnosis is widely used in clinical practice because of its special diagnostic value. The image resolution of a CT imaging system is constrained by the X-ray focal-spot size, detector element spacing, reconstruction algorithm, and other factors, so the generated CT images suffer from low contrast, insufficient high-frequency information, and poor perceptual quality. To solve these problems, a super-resolution reconstruction method for CT images based on a multi-scale attention mechanism is proposed. First, a 3 × 3 and a 1 × 1 convolution layer extract shallow features. To better extract the high-frequency features of CT images and improve image contrast, a multi-scale attention module is designed that adaptively detects information at different scales and improves the expressive power of the features; it integrates channel attention and spatial attention mechanisms so that the network pays more attention to important information and retains more valuable information. Finally, sub-pixel convolution increases the resolution of the CT image and reconstructs a high-resolution CT image. The experimental results show that this method effectively improves CT image contrast and suppresses noise. The peak signal-to-noise ratio and structural similarity of the reconstructed CT images are better than those of the comparison methods, and the reconstructions have good subjective visual quality.
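The final sub-pixel convolution step rearranges feature channels into spatial positions. A minimal numpy sketch of that rearrangement follows; the convolution itself and the attention module are omitted, and the channel ordering assumed here is the common pixel-shuffle convention, not necessarily the paper's exact implementation.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel rearrangement: (C*r*r, H, W) -> (C, H*r, W*r).
    Input channel c*r*r + i*r + j fills offset (i, j) of each r x r output cell."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)             # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```

Because the upscaling is deferred to this cheap rearrangement, all convolutions before it run at the low input resolution.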
Fusing multi-scale information in convolution network for MR image super-resolution reconstruction
Background: Magnetic resonance (MR) images are usually limited by low spatial resolution, which leads to errors in post-processing procedures. Recently, learning-based super-resolution methods, such as sparse coding and the super-resolution convolutional neural network, have achieved promising reconstruction results on scene images. However, these methods remain insufficient for recovering detailed information from low-resolution MR images because of the limited size of the training dataset. Methods: To investigate the different edge responses obtained with different convolution kernel sizes, this study employs a multi-scale fusion convolution network (MFCN) to perform super-resolution on MR images. Unlike traditional convolution networks that simply stack several convolution layers, the proposed network is built from multi-scale fusion units (MFUs). Each MFU consists of a main path and several sub-paths, and finally fuses all paths within a fusion layer. Results: We discuss the experimental network parameter settings, using simulated data to achieve a trade-off between reconstruction performance and computational efficiency. We also conducted super-resolution reconstruction experiments on real datasets of MR brain images and demonstrated that the proposed MFCN achieves a remarkable improvement in recovering detailed information from MR images and outperforms state-of-the-art methods. Conclusions: We have proposed a multi-scale fusion convolution network based on MFUs that extracts features at different scales to restore detailed information. The structure of the MFU helps extract multi-scale information and make full use of prior knowledge from a few training samples to enhance spatial resolution.
A multi-scale mixed convolutional network for infrared image super-resolution reconstruction
Infrared images are widely used in military, medical, security monitoring, and other fields. Owing to the limitations of hardware devices, infrared images suffer from low signal-to-noise ratio, blurred edges, and low contrast. In view of these problems, this paper proposes a super-resolution reconstruction method for infrared images based on a mixed-convolution multi-scale residual network. The multi-scale residual network improves the utilisation of features, and mixed convolution is introduced into it to enlarge the receptive field without changing the size of the feature map and to eliminate blind spots. The extracted features are combined by recursive fusion to further improve feature utilisation. Tests on multiple infrared image datasets show that the proposed method enhances edge information, fully extracts texture details from infrared images, and suppresses noise. The objective indices of the reconstructed infrared images are generally better than those of the comparison methods, and the method still achieves good reconstruction results in real scenes.
Improving Image Super-Resolution Based on Multiscale Generative Adversarial Networks
Convolutional neural networks have greatly improved the performance of image super-resolution. However, perceptual networks have problems such as blurred line structures and a lack of high-frequency information when reconstructing image textures. To mitigate these issues, a generative adversarial network based on multiscale asynchronous learning is proposed in this paper, whereby a pyramid structure is employed in the network model to integrate high-frequency information at different scales. Our scheme employs a U-net as a discriminator to focus on the consistency of adjacent pixels in the input image and uses the LPIPS loss for perceptual extreme super-resolution with stronger supervision. Experiments on benchmark datasets and independent datasets Set5, Set14, BSD100, and SunHays80 show that our approach is effective in restoring detailed texture information from low-resolution images.
A weighted least squares optimisation strategy for medical image super resolution via multiscale convolutional neural networks for healthcare applications
Medical imaging is an essential medical diagnosis system that has subsequently been integrated with artificial intelligence to assist clinical diagnosis. Medical images acquired during image capture are often of poor quality as a result of numerous physical restrictions of the imaging equipment and time constraints. Recently, medical image super-resolution (SR) has emerged as an indispensable research subject in the image processing community to address such limitations. SR is a classical computer vision operation that attempts to restore a visually sharp high-resolution image from a degraded low-resolution image. In this study, an effective medical super-resolution approach based on weighted least squares optimisation via multiscale convolutional neural networks (CNNs) is proposed for lesion localisation. A weighted least squares optimisation strategy, which is particularly well suited to progressively coarsening the original images while simultaneously extracting multiscale information, is applied first. Subsequently, an SR model is designed by training CNNs on wavelet analysis: wavelet decomposition of the optimised images yields multiscale representations, and multiple CNNs are trained separately to approximate these wavelet multiscale representations. The trained networks characterise medical images in many directions and multiscale frequency bands, and thus facilitate image restoration under the increased number of variations depicted in different dimensions and orientations. Finally, the trained CNNs regress the wavelet multiscale representations from an LR medical image, followed by wavelet synthesis to form the reconstructed HR medical image. The experimental results indicate that the proposed SR restoration approach achieves superior performance over existing comparative methods.
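To make the least-squares smoothing idea concrete, here is a deliberately simplified 1-D sketch: it minimises a data-fidelity term plus a smoothness penalty on differences, which is the core of weighted-least-squares coarsening. The paper's actual 2-D, edge-weighted formulation and parameter choices are not reproduced; `lam` and the difference operator are illustrative assumptions.

```python
import numpy as np

def wls_smooth_1d(p, lam):
    """Least-squares smoothing: minimise |u - p|^2 + lam * |D u|^2,
    where D is the forward-difference operator. Larger lam -> smoother output."""
    n = len(p)
    D = np.diff(np.eye(n), axis=0)             # (n-1, n) forward-difference matrix
    A = np.eye(n) + lam * (D.T @ D)            # normal equations of the quadratic
    return np.linalg.solve(A, np.asarray(p, dtype=float))
```

Solving the same problem at increasing `lam` yields the progressively coarsened versions from which multiscale information would be extracted.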
Microscopic image super resolution using deep convolutional neural networks
Recently, deep convolutional neural networks (CNNs) have achieved excellent results in single image super-resolution (SISR). Owing to the strength of deep CNNs, they give promising results compared with state-of-the-art learning-based models on natural images. Deep CNN techniques have therefore also been successfully applied to medical images to obtain better-quality images. In this study, we present the first multi-scale deep CNN capable of SISR for low-resolution (LR) microscopic images. To address the difficulty of training deep CNNs, a residual learning scheme is adopted in which the residuals are explicitly supervised by the difference between the high-resolution (HR) and LR images, and the HR image is reconstructed by adding the recovered details to the LR image. Furthermore, gradient clipping is used to avoid gradient explosions at high learning rates. Unlike deep-CNN-based SISR on natural images, where the corresponding LR images are obtained by blurring and subsampling HR images, the proposed approach is tested on thin-smear blood samples imaged at lower objective lenses, and the performance is compared with HR images taken at higher objective lenses. Extensive evaluations show that the proposed approach achieves superior SISR performance for microscopic images.
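The two training devices mentioned above, residual supervision and gradient clipping, can be sketched framework-independently in numpy; the function names and the global-norm clipping variant are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their combined L2 norm is <= max_norm."""
    total = np.sqrt(sum(float((g ** 2).sum()) for g in grads))
    scale = min(1.0, max_norm / (total + 1e-12))
    return [g * scale for g in grads], total

def residual_reconstruct(lr_up, residual):
    """Residual learning: HR estimate = upsampled LR image + predicted residual."""
    return lr_up + residual
```

Clipping caps the update magnitude so a high learning rate does not blow up early training, while the residual target leaves the network to model only the missing details.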
Multi-sensor image super-resolution with fuzzy cluster by using multi-scale and multi-view sparse coding for infrared image
Super-resolution (SR) methods are effective for generating a high-resolution image from a single low-resolution image. However, four problems are observed in existing SR methods. (1) They cannot reconstruct many details from a low-resolution infrared image because infrared images always lack detailed information. (2) They cannot extract the desired information from images because they do not consider that images naturally come at different scales in many cases. (3) They fail to reveal the different physical structures of a low-resolution patch because they extract features from a single view. (4) They fail to extract all the different patterns because they use only one dictionary to represent all patterns. To overcome these problems, we propose a novel SR method for infrared images. First, we combine the information of high-resolution visible-light images and low-resolution infrared images to improve the resolution of the infrared images. Second, we use multiscale patches instead of fixed-size patches to represent infrared images more accurately. Third, we use different feature vectors rather than a single feature to represent infrared images. Finally, we divide the training patches into several clusters, and multiple dictionaries are learned, one per cluster, to provide each patch with a more accurate dictionary. In the proposed method, the clustering information for low-resolution patches is learnt using fuzzy clustering theory. Experiments validate that the proposed method yields better results in terms of quantitative metrics and visual perception than state-of-the-art algorithms.
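The fuzzy clustering step assigns each patch a soft membership in every cluster, so a patch near a cluster boundary can draw on more than one dictionary. A minimal numpy sketch of the standard fuzzy c-means membership update follows; the fuzzifier value and feature representation are illustrative assumptions.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Fuzzy c-means membership update: soft assignment of each patch feature
    in X (n, d) to each cluster centre in centers (c, d); each row sums to 1."""
    dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    inv = dist ** (-2.0 / (m - 1.0))           # closer centres get larger weight
    return inv / inv.sum(axis=1, keepdims=True)
```

At reconstruction time these memberships would select, or weight, the per-cluster dictionaries used to code each low-resolution patch.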