171 result(s) for "Ren Wenqi"
Single Image Dehazing via Multi-scale Convolutional Neural Networks with Holistic Edges
Single image dehazing is a challenging problem that aims to recover clear images from hazy ones. The performance of existing image dehazing methods is limited by hand-designed features and priors. In this paper, we propose a multi-scale deep neural network for single image dehazing that learns the mapping between hazy images and their transmission maps. The proposed algorithm consists of a coarse-scale net, which predicts a holistic transmission map based on the entire image, and a fine-scale net, which refines dehazed results locally. To train the multi-scale deep network, we synthesize a dataset of hazy images and corresponding transmission maps based on the NYU Depth dataset. In addition, we propose a holistic edge-guided network to refine the edges of the estimated transmission map. Extensive experiments demonstrate that the proposed algorithm performs favorably against state-of-the-art methods on both synthetic and real-world images in terms of quality and speed.
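The transmission-map formulation in this abstract rests on the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)). The toy sketch below illustrates that model only; it is not the paper's network, and all function names, image sizes, and values are hypothetical:

```python
import numpy as np

def synthesize_haze(clear, transmission, airlight):
    """Apply the atmospheric scattering model:
    I(x) = J(x) * t(x) + A * (1 - t(x))."""
    return clear * transmission + airlight * (1.0 - transmission)

def dehaze(hazy, transmission, airlight, t_min=0.1):
    """Invert the scattering model to recover the clear scene J,
    clamping t to avoid division by near-zero values."""
    t = np.maximum(transmission, t_min)
    return (hazy - airlight) / t + airlight

# Round-trip check on a toy image: hazing and then dehazing
# with the true transmission recovers the original scene.
clear = np.full((4, 4), 0.5)          # toy grayscale "scene"
t = np.full((4, 4), 0.6)              # known transmission map
A = 0.9                               # atmospheric light
hazy = synthesize_haze(clear, t, A)
recovered = dehaze(hazy, t, A)
print(np.allclose(recovered, clear))  # True
```

Learning-based methods such as the one described above estimate t(x) from the hazy image itself; once t(x) and A are known, recovery reduces to this per-pixel inversion.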
A survey on computational spectral reconstruction methods from RGB to hyperspectral imaging
Hyperspectral imaging enables many versatile applications owing to its ability to capture abundant spatial and spectral information, which is crucial for identifying substances. However, the devices for acquiring hyperspectral images are typically expensive and complicated, hindering their adoption in consumer applications such as daily food inspection and point-of-care medical screening. Recently, many computational spectral imaging methods have been proposed that directly reconstruct hyperspectral information from widely available RGB images. These reconstruction methods avoid the need for burdensome spectral camera hardware while maintaining high spectral resolution and imaging performance. We present a thorough investigation of more than 25 state-of-the-art spectral reconstruction methods, categorized as prior-based and data-driven methods. Simulations on open-source datasets show that prior-based methods are more suitable for data-scarce situations, while data-driven methods can unleash the full potential of deep learning when large datasets are available. We identify the current challenges faced by these methods (e.g., loss function, spectral accuracy, data generalization) and summarize a few trends for future work. With the rapid expansion of datasets and the advent of more advanced neural networks, learnable methods with fine feature representation abilities are very promising. This comprehensive review can serve as a fruitful reference for peer researchers, paving the way for the development of computational hyperspectral imaging.
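As a minimal illustration of the reconstruction task surveyed here, a linear least-squares mapping from RGB to spectra can be fit on training pairs. This is a toy sketch with random data, not any surveyed method; the camera response, band count, and sample count are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_bands = 200, 31               # e.g., 31 bands over 400-700 nm
spectra = rng.random((n_train, n_bands)) # hypothetical training spectra
response = rng.random((n_bands, 3))      # hypothetical camera response
rgb = spectra @ response                 # simulate the RGB measurements

# Baseline: fit a linear RGB -> spectrum mapping by least squares.
M, *_ = np.linalg.lstsq(rgb, spectra, rcond=None)
recon = rgb @ M
print(M.shape, recon.shape)              # (3, 31) (200, 31)
```

Because the mapping goes from 3 channels to 31 bands, a purely linear model is rank-limited; the prior-based and data-driven methods reviewed above add sparse priors or deep networks to recover the detail a linear fit cannot.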
Discovery of a novel ferroptosis inducer-talaroconvolutin A—killing colorectal cancer cells in vitro and in vivo
Ferroptosis is among the most important mechanisms of cancer suppression and can be harnessed for cancer therapy. However, no natural small-molecule ferroptosis inducers with cancer inhibitory activity have been identified to date. In the present study, we report the discovery of a novel ferroptosis inducer, talaroconvolutin A (TalaA), and its underlying molecular mechanism. We discovered that TalaA killed colorectal cancer cells in a dose- and time-dependent manner. Interestingly, TalaA did not induce apoptosis but strongly triggered ferroptosis. Notably, TalaA was significantly more effective than erastin (a well-known ferroptosis inducer) in suppressing colorectal cancer cells via ferroptosis. We revealed a dual mechanism of TalaA's action against cancer. On the one hand, TalaA considerably increased reactive oxygen species levels beyond the threshold above which ferroptosis is induced. On the other hand, the compound downregulated the expression of the channel protein solute carrier family 7 member 11 (SLC7A11) and upregulated arachidonate lipoxygenase 3 (ALOXE3), promoting ferroptosis. Furthermore, in vivo experiments in mice showed that TalaA effectively suppressed the growth of xenografted colorectal cancer cells without obvious liver or kidney toxicity. The findings of this study indicate that TalaA could be a powerful new drug candidate for colorectal cancer therapy, owing to its outstanding ability to kill colorectal cancer cells via ferroptosis induction.
A Comprehensive Benchmark Analysis of Single Image Deraining: Current Challenges and Future Perspectives
The capability of image deraining is a highly desirable component of intelligent decision-making in autonomous driving and outdoor surveillance systems. Image deraining aims to restore the clean scene from a degraded image captured on a rainy day. Although numerous single image deraining algorithms have been proposed recently, they are mainly evaluated on certain types of synthetic images, assuming a specific rain model, plus a few real images. It remains unclear how these algorithms would perform on rainy images acquired "in the wild" and how we could gauge progress in the field. This paper aims to bridge this gap. We present a comprehensive study and evaluation of existing single image deraining algorithms, using a new large-scale benchmark consisting of both synthetic and real-world rainy images of various rain types. The dataset covers diverse rain models (rain streak, rain drop, rain and mist). We further provide a comprehensive suite of criteria for deraining algorithm evaluation, including full- and no-reference objective metrics, subjective evaluation, and a novel task-driven evaluation. The proposed benchmark is accompanied by extensive experimental results that facilitate quantitative assessment of the state of the art. Our evaluation and analysis indicate a gap between the achievable performance on synthetic rainy images and the practical demands of real-world images. We show that, despite many advances, image deraining remains a largely open problem. We conclude by summarizing our general observations, identifying open research challenges, and pointing out future directions. Our code and dataset are publicly available at http://uee.me/ddQsw.
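Among the full-reference objective criteria that restoration benchmarks like this one rely on, PSNR is the most common. A minimal sketch (the images and values are illustrative, not from the benchmark):

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio, a standard full-reference metric:
    10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

clean = np.zeros((8, 8))
noisy = clean + 0.1                   # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(clean, noisy), 1))   # 20.0
```

No-reference and task-driven criteria, as used in the benchmark, matter precisely because a clean reference image is unavailable for real-world rainy photos, so PSNR-style metrics cannot be computed there.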
Photo-realistic dehazing via contextual generative adversarial networks
Single image dehazing is a challenging task due to its ambiguous nature. In this paper, we present a new model for single image dehazing based on generative adversarial networks (GANs), called dehazing GAN. In contrast to most existing deep learning methods, which estimate the transmission map and the atmospheric light separately, dehazing GAN restores the corresponding haze-free image directly from a hazy image via a generative adversarial network. Extensive experimental results on both synthetic datasets and real-world images show that our model outperforms state-of-the-art algorithms.
Single Image Dehazing Using Sparse Contextual Representation
In this paper, we propose a novel method to remove haze from a single hazy input image based on sparse representation. In our method, sparse representation serves as a contextual regularization tool that reduces the block artifacts and halos produced when the dark channel prior is used alone without soft matting, since the transmission is not always constant within a local patch. We also propose a novel way of using a dictionary to smooth the image and generate a sharp dehazed result. Experimental results demonstrate that the proposed method performs favorably against state-of-the-art dehazing methods and produces high-quality dehazed results with vivid colors.
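The dark channel prior referenced above assumes that in haze-free patches at least one color channel has near-zero intensity. A toy computation of the dark channel (the patch size and image are illustrative, not the paper's settings):

```python
import numpy as np

def dark_channel(image, patch=3):
    """Dark channel: per-pixel minimum over color channels,
    followed by a local minimum filter over a patch x patch window."""
    h, w, _ = image.shape
    per_pixel_min = image.min(axis=2)
    pad = patch // 2
    padded = np.pad(per_pixel_min, pad, mode="edge")
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

# A haze-free region tends to have a dark channel near zero;
# haze lifts it toward the atmospheric light.
img = np.stack([np.full((5, 5), 0.8),
                np.full((5, 5), 0.3),
                np.full((5, 5), 0.05)], axis=2)
print(dark_channel(img).max())   # 0.05
```

The block artifacts the abstract mentions arise from this patch-wise minimum being constant within each window; the proposed sparse contextual regularization smooths the resulting transmission estimate in place of soft matting.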
Memory-Augmented Deep Unfolding Network for Guided Image Super-resolution
Guided image super-resolution (GISR) aims to obtain a high-resolution (HR) target image by enhancing the spatial resolution of a low-resolution (LR) target image under the guidance of an HR image. However, previous model-based methods mainly take the entire image as a whole and assume a prior distribution between the HR target image and the HR guidance image, ignoring the many non-local characteristics the two share. To alleviate this issue, we first propose a maximum a posteriori (MAP) estimation model for GISR with two types of priors on the HR target image: a local implicit prior and a global implicit prior. The local implicit prior models the complex relationship between the HR target image and the HR guidance image from a local perspective, while the global implicit prior captures the non-local auto-regression property between the two images from a global perspective. Second, we design a novel alternating optimization algorithm to solve this model for GISR. The algorithm is formulated in a concise framework that can readily be unrolled into commonly used deep network structures. Third, to reduce information loss across iterative stages, we introduce a persistent memory mechanism that augments the information representation by exploiting long short-term memory (LSTM) units in both the image and feature spaces. In this way, we build a deep network with a degree of interpretability and high representation ability. Extensive experimental results validate the superiority of our method on a variety of GISR tasks, including pan-sharpening, depth image super-resolution, and MR image super-resolution. Code will be released at https://github.com/manman1995/pansharpening.
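The optimize-then-unroll idea can be illustrated with a toy unfolded gradient scheme on a simplified guided-SR objective, ||Dx − y||² + λ||x − g||², where D is 2× average-pooling downsampling and g is the guide. This is a hypothetical sketch, not the paper's MAP model; the operator, step size, and λ are assumptions:

```python
import numpy as np

def down(x):
    """2x downsampling by average pooling (the degradation operator D)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def down_adjoint(r):
    """Adjoint of D: spread each LR value equally over its 2x2 block."""
    return np.repeat(np.repeat(r, 2, axis=0), 2, axis=1) / 4.0

def unfolded_gisr(lr, guide, stages=50, step=1.0, lam=0.1):
    """Unroll gradient descent on ||D x - lr||^2 + lam * ||x - guide||^2.
    Each loop iteration corresponds to one 'stage' of an unfolded network."""
    x = np.repeat(np.repeat(lr, 2, axis=0), 2, axis=1)  # naive upsampling init
    for _ in range(stages):
        grad = down_adjoint(down(x) - lr) + lam * (x - guide)
        x = x - step * grad
    return x

rng = np.random.default_rng(1)
hr = rng.random((8, 8))                                 # ground-truth target
lr = down(hr)                                           # observed LR input
x0 = np.repeat(np.repeat(lr, 2, axis=0), 2, axis=1)     # initialization
x = unfolded_gisr(lr, hr, stages=50)                    # guide = ground truth
print(np.mean((x - hr) ** 2) < np.mean((x0 - hr) ** 2)) # True
```

Deep unfolding replaces the fixed gradient step and the quadratic guide term with learned modules per stage; the persistent LSTM memory in the paper then carries features across these stages instead of discarding them.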
Deep Image Deblurring: A Survey
Image deblurring is a classic problem in low-level computer vision, with the aim of recovering a sharp image from a blurred input. Advances in deep learning have led to significant progress on this problem, and a large number of deblurring networks have been proposed. This paper presents a comprehensive and timely survey of recently published deep learning-based image deblurring approaches, aiming to serve the community as a useful literature review. We start by discussing common causes of image blur, introducing benchmark datasets and performance metrics, and summarizing different problem formulations. Next, we present a taxonomy of convolutional neural network (CNN)-based methods organized by architecture, loss function, and application, offering a detailed review and comparison. In addition, we discuss some domain-specific deblurring applications, including face images, text, and stereo image pairs. We conclude by discussing key challenges and future research directions.
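A classical non-deep baseline behind the problem formulations such surveys discuss is the uniform blur model b = k * x followed by Wiener deconvolution. The sketch below is purely illustrative (circular convolution via FFT; the kernel and regularization eps are assumptions):

```python
import numpy as np

def blur(image, kernel):
    """Circular convolution in the frequency domain: b = k * x."""
    K = np.fft.fft2(kernel, s=image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * K))

def wiener_deblur(blurred, kernel, eps=1e-3):
    """Classical Wiener-style filter: X = conj(K) * B / (|K|^2 + eps),
    where eps regularizes frequencies the kernel nearly destroys."""
    K = np.fft.fft2(kernel, s=blurred.shape)
    B = np.fft.fft2(blurred)
    X = np.conj(K) * B / (np.abs(K) ** 2 + eps)
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(2)
sharp = rng.random((16, 16))
kernel = np.ones((3, 3)) / 9.0        # uniform box blur
blurred = blur(sharp, kernel)
restored = wiener_deblur(blurred, kernel)
print(np.mean((restored - sharp) ** 2) < np.mean((blurred - sharp) ** 2))
```

Deep deblurring networks dispense with the known-kernel assumption made here; handling unknown, spatially varying blur is one of the challenges the survey covers.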
Joint learning of image detail and transmission map for single image dehazing
Single image haze removal is an important task in computer vision. However, it is an extremely challenging problem because it is massively ill-posed: at each pixel, we must estimate the transmission and the global atmospheric light from a single color measurement. In this paper, we propose a new deep learning-based method for removing haze from a single input image. First, we estimate a transmission map via joint estimation of the clear image detail and the transmission map, in contrast to traditional methods that estimate only a transmission map from the hazy image. Second, we use a global regularization method to eliminate halos and artifacts. Experimental results on a synthetic dataset and real-world images show that our method outperforms other state-of-the-art methods.