Catalogue Search | MBRL
Explore the vast range of titles available.
10 result(s) for "multiscale super-resolution reconstruction method"
Guided filter-based multi-scale super-resolution reconstruction
by Li, Jinjiang; Hua, Zhen; Feng, Xiaomei
in Algorithms, B0290F Interpolation and function approximation (numerical analysis), B6135 Optical, image and video signal processing
2020
The learning-based super-resolution reconstruction method feeds a low-resolution image into a network, which learns a non-linear mapping between low-resolution and high-resolution images. In this study, a multi-scale super-resolution reconstruction network fuses the effective features of images at different scales and learns the non-linear mapping from low resolution to high resolution from coarse to fine, realising end-to-end super-resolution reconstruction. The loss of some features in the low-resolution image degrades the quality of the reconstructed image. To address this incompleteness of low-resolution image features, this study adopts a multi-scale super-resolution reconstruction method based on guided image filtering. The high-resolution image reconstructed by the multi-scale network and the real high-resolution image are merged by the guided image filter to generate a new image, which is then used for secondary training of the multi-scale super-resolution reconstruction network. The newly generated image effectively compensates for the detail and texture information lost in the low-resolution image, thereby improving the quality of the reconstructed result. Compared with existing super-resolution reconstruction schemes, both the accuracy and the speed of reconstruction are improved.
Journal Article
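The fusion step described in the abstract above relies on guided image filtering. The snippet below is a minimal NumPy sketch of the classic guided filter, not the authors' exact network pipeline; the `r` (radius) and `eps` values are illustrative assumptions.

```python
import numpy as np

def box_filter(x, r):
    """Mean over a (2r+1) x (2r+1) window, with edge padding."""
    k = 2 * r + 1
    xp = np.pad(x, r, mode="edge")
    out = np.zeros_like(x, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)

def guided_filter(guide, src, r=2, eps=1e-3):
    """Smooth `src` while transferring edge structure from `guide`."""
    mean_I = box_filter(guide, r)
    mean_p = box_filter(src, r)
    cov_Ip = box_filter(guide * src, r) - mean_I * mean_p
    var_I = box_filter(guide * guide, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)       # local linear coefficient
    b = mean_p - a * mean_I
    # averaging a and b gives the final locally-linear output
    return box_filter(a, r) * guide + box_filter(b, r)
```

A useful sanity check on the local linear model: filtering a constant image against any guide returns the same constant.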
Wavelet Domain Generative Adversarial Network for Multi-scale Face Hallucination
by Tan, Tieniu; Sun, Zhenan; He, Ran
in Artificial neural networks, Generative adversarial networks, Hallucinations
2019
Most modern face hallucination methods resort to convolutional neural networks (CNN) to infer high-resolution (HR) face images. However, when dealing with very low-resolution (LR) images, these CNN-based methods tend to produce over-smoothed outputs. To address this challenge, this paper proposes a wavelet-domain generative adversarial method that can ultra-resolve a very low-resolution (such as 16×16 or even 8×8) face image to its larger version at multiple upscaling factors (2× to 16×) in a unified framework. Unlike most existing studies that hallucinate faces in the image pixel domain, our method first learns to predict the wavelet information of HR face images from the corresponding LR inputs before image-level super-resolution. To capture both the global topology information and the local texture details of human faces, a flexible and extensible generative adversarial network is designed with three types of losses: (1) a wavelet reconstruction loss that pushes the predicted wavelets closer to the ground truth; (2) a wavelet adversarial loss that encourages realistic wavelets; (3) an identity-preserving loss that helps recover identity information. Extensive experiments demonstrate that the presented approach not only achieves more appealing results, both quantitatively and qualitatively, than state-of-the-art face hallucination methods, but can also significantly improve identification accuracy for low-resolution face images captured in the wild.
Journal Article
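The wavelet-domain idea above hinges on decomposing a face image into subbands before super-resolving it. As an illustrative sketch (a one-level 2D Haar transform in NumPy, not the paper's learned predictor), the four subbands and their exact inverse look like this:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar transform: returns LL, LH, HL, HH subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2      # low-low: coarse approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2      # high-high: diagonal detail
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2] = LL + LH
    a[:, 1::2] = LL - LH
    d = np.empty_like(a)
    d[:, 0::2] = HL + HH
    d[:, 1::2] = HL - HH
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img
```

Because the transform is invertible, predicting HR subbands fully determines the HR image.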
Super resolution reconstruction of CT images based on multi-scale attention mechanism
2023
CT diagnosis is widely used in the clinic because of its special diagnostic value. The image resolution of a CT imaging system is constrained by the X-ray focus size, detector element spacing, reconstruction algorithm and other factors, so the generated CT images suffer from problems such as low contrast, insufficient high-frequency information and poor perceptual quality. To solve these problems, a super-resolution reconstruction method for CT images based on a multi-scale attention mechanism is proposed. First, a 3 × 3 and a 1 × 1 convolution layer extract shallow features. Then, to better extract the high-frequency features of CT images and improve image contrast, a multi-scale attention module is designed to adaptively detect information at different scales and improve the expressive ability of features; it integrates channel and spatial attention mechanisms to focus on important information and retain more valuable information. Finally, sub-pixel convolution raises the resolution and reconstructs the high-resolution CT image. Experimental results show that this method effectively improves CT image contrast and suppresses noise. The peak signal-to-noise ratio and structural similarity of the reconstructed CT images are better than those of the comparison methods, with a good subjective visual effect.
Journal Article
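The final upsampling step mentioned above, sub-pixel convolution, works by rearranging channels into spatial positions. A minimal NumPy version of the pixel-shuffle rearrangement (the convolution itself is omitted; shapes follow the usual (C·r², H, W) → (C, H·r, W·r) convention):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) array into (C, H*r, W*r)."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)
```

With 4 channels and a single 1×1 pixel, the four channel values become the four pixels of a 2×2 output block.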
PartSeg: a tool for quantitative feature extraction from 3D microscopy images for dummies
2021
Background
Bioimaging techniques offer a robust tool for studying molecular pathways and morphological phenotypes of cell populations subjected to various conditions. As modern high-resolution 3D microscopy provides access to an ever-increasing amount of high-quality images, there arises a need for their analysis in an automated, unbiased, and simple way. Segmentation of structures within the cell nucleus, which is the focus of this paper, presents a new layer of complexity in the form of dense packing and significant signal overlap. At the same time, the available segmentation tools present a steep learning curve for new users with a limited technical background. This is especially apparent in the bulk processing of image sets, which requires the use of some form of programming notation.
Results
In this paper, we present PartSeg, a tool for segmentation and reconstruction of 3D microscopy images, optimised for the study of the cell nucleus. PartSeg integrates refined versions of several state-of-the-art algorithms, including a new multi-scale approach for segmentation and quantitative analysis of 3D microscopy images. The features and user-friendly interface of PartSeg were carefully planned with biologists in mind, based on analysis of multiple use cases and difficulties encountered with other tools, to offer an ergonomic interface with a minimal entry barrier. Bulk processing in an ad-hoc manner is possible without the need for programmer support. As the size of datasets of interest grows, such bulk processing solutions become essential for proper statistical analysis of results. Advanced users can use PartSeg components as a library within Python data processing and visualisation pipelines, for example within Jupyter notebooks. The tool is extensible so that new functionality and algorithms can be added by the use of plugins. For biologists, the utility of PartSeg is presented in several scenarios, showing the quantitative analysis of nuclear structures.
Conclusions
In this paper, we have presented PartSeg, a tool for precise and verifiable segmentation and reconstruction of 3D microscopy images. PartSeg is optimised for cell nucleus analysis and offers multi-scale segmentation algorithms best-suited for this task. PartSeg can also be used for the bulk processing of multiple images, and its components can be reused in other systems or computational experiments.
Journal Article
Geometry-aware light field angular super-resolution using multiple representations
2025
Light Field Angular Super-Resolution (LFASR) is a critical task that enables applications such as depth estimation, refocusing, and 3D scene reconstruction. Light fields acquired with plenoptic cameras exhibit an inherent trade-off between angular and spatial resolution due to sensor limitations. To address this challenge, many learning-based LFASR methods have been proposed; however, reconstructing light fields with a wide baseline remains a significant challenge. In this study, we propose an end-to-end learning-based geometry-aware network using multiple representations. A multi-scale residual network with varying receptive fields is employed to effectively extract spatial and angular features, enabling angular resolution enhancement without compromising spatial fidelity. Extensive experiments demonstrate that the proposed method effectively recovers fine details at high angular resolution while preserving the intricate parallax structure of the light field, and quantitative and qualitative evaluations on both synthetic and real-world datasets confirm that it outperforms existing state-of-the-art methods.
Journal Article
Fusing multi-scale information in convolution network for MR image super-resolution reconstruction
2018
Background
Magnetic resonance (MR) images are usually limited by low spatial resolution, which leads to errors in post-processing procedures. Recently, learning-based super-resolution methods, such as sparse coding and super-resolution convolution neural networks, have achieved promising reconstruction results on scene images. However, these methods remain insufficient for recovering detailed information from low-resolution MR images due to the limited size of the training dataset.
Methods
To investigate the different edge responses obtained with different convolution kernel sizes, this study employs a multi-scale fusion convolution network (MFCN) to perform super-resolution for MR images. Unlike traditional convolution networks that simply stack several convolution layers, the proposed network is stacked from multi-scale fusion units (MFUs). Each MFU consists of a main path and several sub-paths and finally fuses all paths within the fusion layer.
Results
We discussed the experimental network parameter settings using simulated data to achieve a trade-off between reconstruction performance and computational efficiency. We also conducted super-resolution reconstruction experiments using real datasets of MR brain images and demonstrated that the proposed MFCN achieves a remarkable improvement in recovering detailed information from MR images and outperforms state-of-the-art methods.
Conclusions
We have proposed a multi-scale fusion convolution network based on MFUs, which extracts features at different scales to restore detail information. The structure of the MFU is helpful for extracting multi-scale information and making full use of the prior knowledge in a few training samples to enhance the spatial resolution.
Journal Article
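The multi-scale fusion unit described above runs parallel paths with different kernel sizes and fuses their outputs. A toy NumPy sketch of that pattern, using fixed mean kernels as stand-ins for the learned convolutions (the kernel sizes and path weights are illustrative assumptions, not the paper's values):

```python
import numpy as np

def conv2d_same(x, kernel):
    """'Same'-size 2D convolution with edge padding."""
    kh, kw = kernel.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def multi_scale_fusion_unit(x, weights=(0.5, 0.3, 0.2)):
    """Main 3x3 path plus 5x5 and 7x7 sub-paths, fused by a weighted sum."""
    paths = [conv2d_same(x, np.full((k, k), 1.0 / (k * k))) for k in (3, 5, 7)]
    return sum(w * p for w, p in zip(weights, paths))
```

Each path responds to edges at a different scale; the fusion layer lets the network weight those responses jointly.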
A multi-scale mixed convolutional network for infrared image super-resolution reconstruction
2023
Infrared images are widely used in military, medical, security monitoring and other fields. Owing to the limitations of hardware devices, infrared images suffer from a low signal-to-noise ratio, blurred edges and low contrast. To address these problems, this paper proposes a super-resolution reconstruction method for infrared images based on a mixed-convolution multi-scale residual network. The multi-scale residual network improves the utilization of features, and mixed convolution is introduced into it to enlarge the receptive field without changing the size of the feature map and to eliminate blind spots. The extracted features are then combined by recursive fusion to further improve feature utilization. Tests on several infrared image datasets show that the proposed method can enhance edge information, fully extract texture details from infrared images, and suppress noise. The objective metrics of the reconstructed infrared images are generally better than those of the comparison methods, and a good reconstruction effect is still achieved in real scenes.
Journal Article
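The claim above, enlarging the receptive field without changing the feature-map size, is commonly achieved with dilated kernels; that the paper's mixed convolution works exactly this way is an assumption here. A short NumPy sketch of the span arithmetic and a same-size dilated convolution:

```python
import numpy as np

def effective_span(kernel, dilation):
    """Spatial span covered by a dilated kernel: d*(k-1) + 1."""
    return dilation * (kernel - 1) + 1

def dilated_conv2d_same(x, kernel, d):
    """'Same'-size 2D convolution with dilation d and edge padding."""
    kh, kw = kernel.shape
    ph, pw = d * (kh // 2), d * (kw // 2)
    xp = np.pad(x, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * xp[i * d:i * d + x.shape[0],
                                     j * d:j * d + x.shape[1]]
    return out
```

A 3×3 kernel with dilation 2 covers the same 5×5 span as a dense 5×5 kernel while keeping the 3×3 parameter count and the output size.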
Improving Image Super-Resolution Based on Multiscale Generative Adversarial Networks
2022
Convolutional neural networks have greatly improved the performance of image super-resolution. However, perceptual networks have problems such as blurred line structures and a lack of high-frequency information when reconstructing image textures. To mitigate these issues, a generative adversarial network based on multiscale asynchronous learning is proposed in this paper, whereby a pyramid structure is employed in the network model to integrate high-frequency information at different scales. Our scheme employs a U-net as a discriminator to focus on the consistency of adjacent pixels in the input image and uses the LPIPS loss for perceptual extreme super-resolution with stronger supervision. Experiments on benchmark datasets and independent datasets Set5, Set14, BSD100, and SunHays80 show that our approach is effective in restoring detailed texture information from low-resolution images.
Journal Article
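The pyramid structure mentioned above separates an image into per-scale high-frequency bands. A minimal NumPy Laplacian-pyramid sketch with exact reconstruction (2×2-average downsampling and nearest-neighbour upsampling are illustrative choices, not the paper's network):

```python
import numpy as np

def downsample(x):
    """Halve resolution by averaging 2x2 blocks (assumes even dimensions)."""
    return (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4

def upsample(x):
    """Double resolution by nearest-neighbour repetition."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels=2):
    """Split an image into per-scale high-frequency bands plus a coarse base."""
    bands, cur = [], img
    for _ in range(levels):
        small = downsample(cur)
        bands.append(cur - upsample(small))  # detail lost at this scale
        cur = small
    bands.append(cur)                        # low-frequency base
    return bands

def reconstruct(bands):
    """Invert the pyramid: upsample the base and add each detail band back."""
    cur = bands[-1]
    for band in reversed(bands[:-1]):
        cur = upsample(cur) + band
    return cur
```

The detail bands isolate the high-frequency content that the abstract says the pyramid integrates scale by scale.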
Microscopic image super resolution using deep convolutional neural networks
2020
Recently, deep convolutional neural networks (CNNs) have achieved excellent results in single image super resolution (SISR). Owing to the strength of deep CNNs, they give promising results compared with state-of-the-art learning-based models on natural images. Therefore, deep CNN techniques have also been successfully applied to medical images to obtain better-quality images. In this study, we present the first multi-scale deep CNN capable of SISR for low-resolution (LR) microscopic images. To address the difficulty of training deep CNNs, a residual learning scheme is adopted in which the residuals are explicitly supervised by the difference between the high-resolution (HR) and LR images, and the HR image is reconstructed by adding the lost details back into the LR image. Furthermore, gradient clipping is used to avoid gradient explosions at high learning rates. Unlike deep-CNN-based SISR on natural images, where the corresponding LR images are obtained by blurring and subsampling HR images, the proposed approach is tested on thin-smear blood samples imaged at lower objective lenses, and the performance is compared with HR images taken at higher objective lenses. Extensive evaluations show that the proposed approach obtains superior SISR performance on microscopic images.
Journal Article
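Two mechanics from the abstract above, residual supervision (the network predicts the lost details, which are added back to the LR input) and gradient clipping, can be sketched in NumPy; `max_norm` is an illustrative hyper-parameter, not a value from the paper:

```python
import numpy as np

def reconstruct_hr(lr_upscaled, residual):
    """Residual learning: add the predicted lost details back to the LR input."""
    return lr_upscaled + residual

def clip_by_global_norm(grads, max_norm):
    """Rescale gradients so their joint L2 norm does not exceed max_norm."""
    total = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, max_norm / (total + 1e-12))
    return [g * scale for g in grads]
```

Supervising the residual (HR minus LR) rather than the HR image itself is what makes the identity part of the mapping free for the network.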
Multi-sensor image super-resolution with fuzzy cluster by using multi-scale and multi-view sparse coding for infrared image
2017
Super-resolution (SR) methods are effective for generating a high-resolution image from a single low-resolution image. However, four problems are observed in existing SR methods. (1) They cannot reconstruct many details from a low-resolution infrared image because infrared images always lack detailed information. (2) They cannot extract the desired information from images because they do not consider that images naturally come at different scales in many cases. (3) They fail to reveal the different physical structures of low-resolution patches because they extract features from a single view. (4) They fail to extract all the different patterns because they use only one dictionary to represent all patterns. To overcome these problems, we propose a novel SR method for infrared images. First, we combine the information of high-resolution visible-light images and low-resolution infrared images to improve the resolution of infrared images. Second, we use multiscale patches instead of fixed-size patches to represent infrared images more accurately. Third, we use different feature vectors rather than a single feature to represent infrared images. Finally, we divide the training patches into several clusters and learn multiple dictionaries for each cluster to provide each patch with a more accurate dictionary. In the proposed method, the clustering information for low-resolution patches is learnt using fuzzy clustering theory. Experiments validate that the proposed method yields better results in terms of quantitative evaluation and visual perception than state-of-the-art algorithms.
Journal Article
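The multiscale-patch idea above replaces one fixed patch size with several. A minimal NumPy extraction sketch (patch sizes and stride are illustrative assumptions; the fuzzy-clustered dictionary learning itself is omitted):

```python
import numpy as np

def extract_patches(img, size, stride):
    """All size x size patches of `img` on a regular stride grid."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size]
                     for i in range(0, h - size + 1, stride)
                     for j in range(0, w - size + 1, stride)])

def multiscale_patches(img, sizes=(4, 6, 8), stride=2):
    """One patch set per scale, so each pattern is seen at several scales."""
    return {s: extract_patches(img, s, stride) for s in sizes}
```

Each scale's patch set would then feed its own clustered dictionary, matching the paper's one-dictionary-per-cluster design.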