Search Results

16 results for "guide image filter"
Depth Map Super-Resolution Based on Semi-Couple Deformable Convolution Networks
Depth images obtained from lightweight, real-time depth estimation models and consumer-oriented sensors typically suffer from low resolution. Traditional interpolation methods for depth image up-sampling cause significant information loss, especially at edges with discontinuous depth variations (depth discontinuities). To address this issue, this paper proposes a semi-coupled deformable convolution network (SCD-Net) based on the idea of guided depth map super-resolution (GDSR). The method employs a semi-coupled feature extraction scheme to learn both distinct and shared features between RGB images and depth images. We utilize a Coordinate Attention (CA) mechanism to suppress redundant information in the RGB features. Finally, a deformable convolution module is employed to restore the original resolution of the depth image. The model is tested on NYUv2, Middlebury, Lu, and a real-world RealSense dataset created with an Intel RealSense D455 structured-light camera. The super-resolution accuracy of SCD-Net at multiple scales is much higher than that of traditional methods and superior to recent state-of-the-art (SOTA) models, which demonstrates the effectiveness and flexibility of our model on GDSR tasks. In particular, our method further mitigates the problem of RGB texture being over-transferred in GDSR tasks.
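
The learned SCD-Net itself cannot be reconstructed from an abstract, but the classical baseline that GDSR methods build on, refining an upsampled depth map with a plain guided filter, is easy to sketch. A minimal version, assuming opencv-contrib-python (for cv2.ximgproc) and placeholder file names:

```python
# Classical guided-filter baseline for guided depth super-resolution (GDSR).
# Assumes opencv-contrib-python; "rgb.png" and "depth_lr.png" are placeholders.
import cv2
import numpy as np

rgb = cv2.imread("rgb.png")                      # high-resolution RGB guide
depth_lr = cv2.imread("depth_lr.png", cv2.IMREAD_UNCHANGED).astype(np.float32)

h, w = rgb.shape[:2]
# Bicubic upsampling alone blurs depth discontinuities...
depth_up = cv2.resize(depth_lr, (w, h), interpolation=cv2.INTER_CUBIC)

# ...so refine it with the RGB image as guidance: edges in the guide
# steer the filter and restore sharp object boundaries in the depth map.
guide = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
depth_sr = cv2.ximgproc.guidedFilter(guide, depth_up, radius=8, eps=1e-4)
```

Because the guide's texture leaks into the output wherever its edges do not correspond to depth edges, this baseline exhibits exactly the RGB over-transfer problem the abstract says SCD-Net mitigates.
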
Guided filter-based multi-scale super-resolution reconstruction
Learning-based super-resolution reconstruction feeds a low-resolution image into a network, which learns a non-linear mapping between low resolution and high resolution. In this study, a multi-scale super-resolution reconstruction network fuses the effective features of images at different scales, and the non-linear mapping between low and high resolution is learned from coarse to fine to realise end-to-end super-resolution reconstruction. The loss of some features in the low-resolution image degrades the quality of the reconstructed image. To address this problem of incomplete image features at low resolution, this study adopts a multi-scale super-resolution reconstruction method based on guided image filtering. The high-resolution image reconstructed by the multi-scale super-resolution network and the real high-resolution image are merged by the guided image filter to generate a new image, which is then used for secondary training of the multi-scale super-resolution reconstruction network. The newly generated image effectively compensates for the details and texture information lost in the low-resolution image, thereby improving the quality of the super-resolution reconstructed image. Compared with existing super-resolution reconstruction schemes, both the accuracy and the speed of reconstruction are improved.
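
For reference, the guided image filter at the heart of this pipeline is a local linear model (He et al.): within each window the output is an affine function of the guide. A minimal single-channel NumPy/SciPy sketch, not the paper's multi-scale network or its secondary-training loop:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-4):
    """Single-channel guided filter: q = a*I + b within each (2r+1) window."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1)  # box-filter mean
    mean_I, mean_p = mean(I), mean(p)
    var_I = mean(I * I) - mean_I ** 2
    cov_Ip = mean(I * p) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)          # local linear coefficients
    b = mean_p - a * mean_I
    return mean(a) * I + mean(b)        # average over overlapping windows
```

Here I is the guide and p the image being filtered; eps controls how strongly edges in the guide are preserved.
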
Automatic fetal ultrasound image segmentation of first trimester for measuring biometric parameters based on deep learning
Transvaginal ultrasonography (TVS) is a common method used by doctors to monitor embryonic development. In the early stage of pregnancy, doctors assess the growth and development of the embryo by measuring biological indicators such as gestational sac area (GSA), yolk sac diameter (YSD), and crown-rump length (CRL) in TVS images. Although these indicators can be obtained manually by experienced physicians, the manual measurement process is time-consuming, inefficient, and heavily dependent on the sonographer's expertise. To improve this situation, we aimed to establish a modified Unet model, AFG-net, capable of automatically obtaining the clinical values required for measuring embryonic development. Using this method, the essential structures, including the gestational sac (GS), yolk sac (YS), and embryo region in the TVS image, were easily and accurately identified and located, then fully separated by image segmentation to obtain the corresponding measurements. Notably, by applying methods such as attention fusion and a guide filter, the model achieves superior segmentation even when the input image has poor quality, low contrast, fuzzy region boundaries, or complex anatomical shapes. Our results showed that, compared with a standard Unet, our model achieved higher average precision, Intersection over Union (IoU), and Dice coefficient (Dice) for GS, YS, and embryo: 94.75%, 86.15%, and 92.11% versus 92.01%, 83.00%, and 90.00%, respectively. The absolute error between the biological indicators (GSA, YSD, and CRL) automatically extracted from the segmentation results and the manual measurements is 0.66 mm. The automatic segmentation and measurement process significantly reduces the subjectivity of manual measurement and the clinician workload. It also helps improve diagnostic accuracy, enables repeatability and standardization in clinical practice, and provides a valuable tool for prenatal care and monitoring.
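
The AFG-net architecture is not specified beyond the abstract, but the reported metrics are standard. A small sketch of IoU and Dice for binary segmentation masks, with hypothetical inputs:

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, gt: np.ndarray):
    """Intersection over Union and Dice coefficient for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    iou = inter / union if union else 1.0     # two empty masks count as a match
    dice = 2 * inter / total if total else 1.0
    return iou, dice
```
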
Clarity Method of Low-illumination and Dusty Coal Mine Images Based on Improved AMEF
Existing image processing methods based on physical models can suffer significant losses in defogging performance due to inaccurate estimation of depth-of-field information. These methods often encounter problems such as low brightness, color distortion, and loss of detail when processing images taken under poor lighting conditions, such as those captured in coal mines. To address these issues, this paper proposes a new algorithm based on artificial multi-exposure image fusion. The proposed method performs global exposure adjustment on images with uneven illumination by combining S-type functions with the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm in the Hue-Saturation-Value (HSV) color space. This reduces the spatial dependence of brightness during processing and avoids the color distortion that may arise in the Red-Green-Blue (RGB) color space. To mitigate detail loss, a gradient-domain guided filter is used to preserve fine structures, while an improved homomorphic filtering algorithm is introduced during Laplacian pyramid decomposition to reduce the loss of image content in large dark areas. Subjective, objective, and computation-time comparisons are also conducted to evaluate performance, providing reliable results regarding the speed, quality, and reliability of processing hazy images.
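
The full fusion pipeline is not given in the abstract, but its color-space idea, equalizing contrast in HSV so that only brightness changes, is straightforward. A minimal OpenCV sketch with a placeholder file name (the S-type exposure functions, guided filtering, and pyramid fusion are omitted):

```python
import cv2

img = cv2.imread("mine_frame.png")               # placeholder path
h, s, v = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV))

# Equalize only the value channel so hue and saturation are untouched,
# avoiding the color distortion that per-channel RGB equalization causes.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
out = cv2.cvtColor(cv2.merge((h, s, clahe.apply(v))), cv2.COLOR_HSV2BGR)
```
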
Detail enhancement decolorization algorithm based on rolling guided filtering
An important goal of color-to-grayscale conversion is to preserve the edge details of the original color image as much as possible. In many cases the degree of feature discrimination is maintained, but edge details can still be lost or blurred. Therefore, this paper first converts the color image to grayscale using an improved non-linear global mapping method, and then proposes a grayscale detail enhancement algorithm based on rolling guided filtering. Our method enhances the edge details of the grayscale image by applying rolling guided filter processing on top of the grayscale conversion. Moreover, the rolling guided filter is a local linear model with good edge-preservation characteristics, which overcomes the defect that other filters are prone to gradient reversals at edges where the gray level changes sharply, causing "false edges" to appear in the image. Experimental results show that where the traditional method loses or blurs detailed features, our method preserves them better.
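
A rolling guidance-style base/detail decomposition can be sketched directly: an initial Gaussian blur removes small structures, iterated joint filtering of the input guided by the previous result recovers large edges, and the removed detail layer is then amplified. A minimal version using opencv-contrib's guided filter as the joint filter (the paper's exact filter and mapping steps may differ):

```python
import cv2
import numpy as np

def rolling_guidance_enhance(gray_u8, iters=4, sigma=3.0, radius=8,
                             eps=1e-3, boost=1.5):
    """Rolling-guidance base/detail decomposition with detail amplification."""
    I = gray_u8.astype(np.float32) / 255.0
    J = cv2.GaussianBlur(I, (0, 0), sigma)       # step 1: remove small structures
    for _ in range(iters):                       # step 2: recover large edges
        J = cv2.ximgproc.guidedFilter(J, I, radius, eps)
    detail = I - J                               # structures the filter removed
    return np.clip(J + boost * detail, 0.0, 1.0)
```
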
Hue-saturation-depth Guided Image-based Lidar Upsampling Technique for Ultra-high-resolution and Omnidirectional 3D Scanning
This paper proposes a lidar upsampling technique to obtain ultra-high-resolution and omnidirectional 3D data. Obtaining a large amount of omnidirectional 3D data is expensive because of the cost of sensing and scanning systems. Instead of using expensive commercial scanning systems, we introduce a new type of low-cost 360-degree 3D scanning system to obtain 3D lidar data. The original lidar data from the system is upsampled using a weighted median filter with a novel Hue-Saturation-Depth (HSD) guide image. The proposed upsampling technique consists of two steps. The first-step upsampling is performed using linear interpolation based on pixel distance, together with edge-area refinement in the interpolated depth map. Then, to reduce the saturation effect of a high-contrast RGB color guide image, we add depth information to the guide image: a novel HSD guide image is generated by replacing the intensity of the RGB image with the scaled depth from the first-step upsampling. Finally, the second-step upsampling is performed by applying the weighted median filter to the result of the first step. In the experiments, we present ultra-high-resolution 3D scanning results and error analysis in complex indoor environments.
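
The HSD guide construction described here is concrete enough to sketch: convert the RGB guide to HSV and replace the value channel with the scaled depth from the first-step upsampling. A minimal version (both weighted-median upsampling passes are omitted):

```python
import cv2
import numpy as np

def make_hsd_guide(rgb_bgr, depth):
    """HSV color with V replaced by scaled depth, so a high-contrast color
    image does not dominate the guidance signal."""
    hsv = cv2.cvtColor(rgb_bgr, cv2.COLOR_BGR2HSV)
    d = cv2.normalize(depth.astype(np.float32), None, 0, 255,
                      cv2.NORM_MINMAX).astype(np.uint8)
    hsv[..., 2] = d                              # intensity -> scaled depth
    return hsv
```
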
Robust multi-lane detection and tracking using adaptive threshold and lane classification
Many global automotive companies have been working to reduce traffic accidents by developing advanced driver assistance systems (ADAS) as well as autonomous vehicles. Lane detection is essential for both autonomous driving and ADAS because the vehicle must follow the lane. However, existing lane detection algorithms struggle to achieve robust performance under real-world road conditions, where poor road markings, surrounding obstacles, and guardrails are present. Therefore, in this paper we propose a multi-lane detection algorithm that is robust to such challenging road conditions, built on three key technologies. First, an adaptive threshold is applied to extract strong lane features from images with obstacles and barely visible lanes. Next, since erroneous lane features can still be extracted, an improved RANdom SAmple Consensus (RANSAC) algorithm is introduced, using feedback from lane edge angles and the curvature of the lane history to prevent false lane detections. Finally, detection performance is greatly improved by keeping only the lanes that are verified by a lane classification algorithm. The proposed algorithm is evaluated on our dataset, which captures challenging road conditions, and performs better than the state-of-the-art method, showing a 3% higher true positive rate and a 2% lower false positive rate.
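
The paper's improved RANSAC adds feedback from lane edge angles and lane-history curvature; the vanilla algorithm it extends fits a line to noisy lane-feature points by repeated random sampling. A minimal NumPy sketch, parameterizing lanes as x = m*y + c since they are near-vertical in road images:

```python
import numpy as np

def ransac_line(points, iters=200, thresh=2.0, seed=0):
    """Fit x = m*y + c to (x, y) lane feature points, tolerating outliers."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = 0, None
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if y1 == y2:                              # degenerate sample
            continue
        m = (x2 - x1) / (y2 - y1)
        c = x1 - m * y1
        err = np.abs(points[:, 0] - (m * points[:, 1] + c))
        inliers = int((err < thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (m, c)
    return best_model                             # None if all samples degenerate
```
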
Ensemble Kalman inversion for image guided guide wire navigation in vascular systems
This paper addresses the challenging task of guide wire navigation in cardiovascular interventions, focusing on parameter estimation for a guide wire system using Ensemble Kalman Inversion (EKI) with a subsampling technique. EKI uses an ensemble of particles to estimate the unknown quantities. However, since the data misfit has to be computed for each particle in each iteration, EKI may become computationally infeasible for high-dimensional data, e.g. high-resolution images. This issue can be addressed by randomised algorithms that use only a random subset of the data in each iteration. We introduce and analyse such a subsampling technique for EKI, based on a continuous-time representation of stochastic gradient methods, and apply it to the parameter estimation of our guide wire system. Numerical experiments with real data from a simplified test setting demonstrate the potential of the method.
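
The basic EKI update behind this work is compact: each iteration nudges every ensemble member toward the data using cross-covariances estimated from the ensemble itself. A minimal NumPy sketch with perturbed observations; the paper's subsampling variant would evaluate the misfit on only a random subset of the data components each step:

```python
import numpy as np

def eki_step(U, G, y, Gamma, rng=np.random.default_rng(0)):
    """One Ensemble Kalman Inversion update.
    U: (J, d) parameter ensemble; G: forward map R^d -> R^k;
    y: (k,) observed data; Gamma: (k, k) observation-noise covariance."""
    J, k = U.shape[0], len(y)
    Gs = np.array([G(u) for u in U])      # forward evaluations, (J, k)
    dU = U - U.mean(axis=0)
    dG = Gs - Gs.mean(axis=0)
    Cug = dU.T @ dG / J                   # (d, k) cross-covariance
    Cgg = dG.T @ dG / J                   # (k, k) output covariance
    gain = Cug @ np.linalg.inv(Cgg + Gamma)
    noise = rng.multivariate_normal(np.zeros(k), Gamma, size=J)
    return U + (y + noise - Gs) @ gain.T  # move ensemble toward the data
```
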
Nanoribbon Waveguides for Subwavelength Photonics Integration
Although the electrical integration of chemically synthesized nanowires has been achieved with lithography, optical integration, which promises high speeds and greater device versatility, remains unexplored. We describe the properties and functions of individual crystalline oxide nanoribbons that act as subwavelength optical waveguides and assess their applicability as nanoscale photonic elements. The length, flexibility, and strength of these structures enable their manipulation on surfaces, including the optical linking of nanoribbon waveguides and other nanowire elements to form networks and device components. We demonstrate the assembly of ribbon waveguides with nanowire light sources and detectors as a first step toward building nanowire photonic circuitry.
Stereo Matching Method with Cost Volume Collaborative Filtering
To address matching ambiguity and low disparity accuracy at object boundaries in stereo matching, a novel stereo matching algorithm with cost volume collaborative filtering is proposed. Firstly, two support windows are built for each pixel: a local cross-support window and a global support window spanning the whole image. Secondly, a new adaptive weighted guided filter with the cross-support window as its kernel window is derived and used to filter the cost volume locally. In addition, a minimum spanning tree is constructed over the whole-image window, and the minimum spanning tree filter is then used to filter the cost volume globally. Collaborative filtering of the cost volume is realized by fusing the results of the local and global filters, so that each pixel receives not only the support of neighboring pixels in its local adaptive window but also the effective support of other pixels across the whole image, thereby eliminating matching ambiguity in differently textured regions while preserving disparity edges. Experimental results show that the average matching error rate of our method on the Middlebury stereo images is 3.17%. Compared with other state-of-the-art methods, our method has higher robustness and matching accuracy, produces smoother disparity maps, and better preserves disparity edges.
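
The local branch of this scheme follows the classic cost-volume-filtering recipe: build a per-disparity matching cost, smooth each slice with an edge-aware filter, then take the winner-take-all disparity. A minimal sketch using opencv-contrib's plain guided filter in place of the paper's adaptive weighted variant (the global minimum-spanning-tree branch and the fusion step are omitted):

```python
import cv2
import numpy as np

def stereo_costfilter(left, right, max_disp=64, radius=9, eps=1e-3):
    """Winner-take-all stereo on a guided-filtered cost volume.
    left/right: rectified grayscale float32 images scaled to [0, 1]."""
    h, w = left.shape
    volume = np.empty((max_disp, h, w), dtype=np.float32)
    for d in range(max_disp):
        cost = np.full((h, w), 1.0, dtype=np.float32)   # max cost off-image
        cost[:, d:] = np.abs(left[:, d:] - right[:, :w - d])
        # edge-aware aggregation: support comes from the local window
        volume[d] = cv2.ximgproc.guidedFilter(left, cost, radius, eps)
    return volume.argmin(axis=0).astype(np.float32)     # winner-take-all
```
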