2,330 result(s) for "Geometric transformation"
Learning Geometric Transformation for Point Cloud Completion
Point cloud completion aims to estimate the missing shape from a partial point cloud. Existing encoder-decoder based generative models usually reconstruct the complete point cloud from the learned distribution of the shape prior, which may lead to distortion of geometric details (such as sharp structures and structures without smooth surfaces) due to the information loss of the latent-space embedding. To address this problem, we formulate point cloud completion as a geometric transformation problem and propose a simple yet effective geometric transformation network (GTNet). GTNet exploits the repetitive geometric structures in common 3D objects to recover complete shapes and consists of three sub-networks: a geometric patch network, a structure transformation network, and a detail refinement network. Specifically, the geometric patch network iteratively discovers repetitive geometric structures that are related or similar to the missing parts. The structure transformation network then uses the discovered structures to complete the corresponding missing parts by learning their spatial transformations, such as symmetry, rotation, translation, and uniform scaling. Finally, the detail refinement network performs global optimization to eliminate unnatural structures. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art methods on the ShapeNet55-34, MVP, PCN, and KITTI datasets. Models and code will be available at https://github.com/ivislabhit/GTNet.
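The spatial transformations named in the abstract (symmetry, rotation, translation, uniform scaling) compose into a single similarity transform of a point-cloud patch. A minimal NumPy sketch of that composition, with a hypothetical `apply_similarity_transform` helper that is not from the GTNet code:

```python
import numpy as np

def apply_similarity_transform(points, scale=1.0, angle=0.0,
                               translation=(0.0, 0.0, 0.0), mirror_x=False):
    """Apply uniform scaling, a rotation about the z-axis, an optional
    x-axis mirror (symmetry), and a translation to an (N, 3) point cloud."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    M = np.diag([-1.0, 1.0, 1.0]) if mirror_x else np.eye(3)
    return scale * points @ (R @ M).T + np.asarray(translation)

# Mirror a discovered patch across the yz-plane, the kind of symmetry
# a completion network could exploit for a bilaterally symmetric object
patch = np.array([[1.0, 2.0, 0.5],
                  [2.0, 0.0, 1.0]])
mirrored = apply_similarity_transform(patch, mirror_x=True)
```

In GTNet these transformation parameters are learned per structure rather than given, but the geometry of the mapping is the same.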
A general geometric transformation model for line-scan image registration
A reasonable geometric transformation model is the key to image registration. When the relative motion direction between the line-scan camera and the planar object is strictly parallel to the object, the image can be aligned using the eight-parameter geometric transformation model of the line-scan image; however, this model becomes invalid when the relative motion direction is arbitrary. This paper therefore proposes a new general geometric transformation model for line-scan image registration. Considering the different initial poses and motion directions of the line-scan camera, the proposed model is derived from the imaging model of the line-scan camera. To acquire line-scan images for verification, a line-scan image acquisition system was built, and a feature-point-based method was used to register the images. The experimental results show that the proposed model can align line-scan images collected under an arbitrary relative motion direction, not just the parallel case. Moreover, the statistical errors of the feature-point coordinates after registration are the smallest, and the registration accuracy is better than that of other existing geometric transformation models, which verifies the correctness and generality of the proposed model.
Improving Co-Registration for Sentinel-1 SAR and Sentinel-2 Optical Images
Co-registering the Sentinel-1 Synthetic Aperture Radar (SAR) and Sentinel-2 optical data of the European Space Agency (ESA) is of great importance for many remote sensing applications. However, we find that there are evident misregistration shifts between the Sentinel-1 SAR and Sentinel-2 optical images that are directly downloaded from the official website. To address that, this paper presents a fast and effective registration method for the two types of images. In the proposed method, a block-based scheme is first designed to extract evenly distributed interest points. Then, the correspondences are detected by using the similarity of structural features between the SAR and optical images, where the three-dimensional (3D) phase correlation (PC) is used as the similarity measure for accelerating image matching. Lastly, the obtained correspondences are employed to measure the misregistration shifts between the images. Moreover, to eliminate the misregistration, we use some representative geometric transformation models such as polynomial models, projective models, and rational function models for the co-registration of the two types of images, and we compare and analyze their registration accuracy under different numbers of control points and different terrains. Six pairs of the Sentinel-1 SAR L1 and Sentinel-2 optical L1C images covering three different terrains are tested in our experiments. Experimental results show that the proposed method can achieve precise correspondences between the images, and the third-order polynomial achieves the most satisfactory registration results. Its registration accuracy is less than 1.0 pixel for flat areas, about 1.5 pixels for hilly areas, and between 1.7 and 2.3 pixels for mountainous areas (relative to the 10 m pixel size of the images), which significantly improves the co-registration accuracy of the Sentinel-1 SAR and Sentinel-2 optical images.
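The third-order polynomial model that performed best here maps source to destination coordinates through a 10-term bivariate polynomial per axis, fit by least squares on the matched control points. A generic NumPy sketch with synthetic control points (not the authors' implementation; the helper names are illustrative):

```python
import numpy as np

def poly3_design(x, y):
    """Design matrix for a full third-order bivariate polynomial (10 terms)."""
    return np.stack([np.ones_like(x), x, y, x * x, x * y, y * y,
                     x**3, x * x * y, x * y * y, y**3], axis=1)

def fit_poly3(src_xy, dst_xy):
    """Least-squares fit of third-order polynomials mapping source
    control points to destination control points (one column per axis)."""
    A = poly3_design(src_xy[:, 0], src_xy[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, dst_xy, rcond=None)
    return coeffs

def apply_poly3(coeffs, xy):
    return poly3_design(xy[:, 0], xy[:, 1]) @ coeffs

# Synthetic control points related by a pure shift of (2, -1) pixels;
# real correspondences would come from the phase-correlation matching
rng = np.random.default_rng(0)
src = rng.uniform(0, 10, size=(30, 2))
dst = src + np.array([2.0, -1.0])
coeffs = fit_poly3(src, dst)
pred = apply_poly3(coeffs, src)
```

With real terrain-dependent distortions the residual after the fit is what the reported sub-pixel to few-pixel accuracies measure.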
Dirichlet and Liouville-based normality scores for deep anomaly detection using transformations: applications to images and beyond images
We address the problem of anomaly detection in data by learning a normality score function through the use of data transformations. Applying transformations to a dataset is essential for enhancing its representation and revealing underlying patterns. First, we propose geometric transformations for image data. The core idea of our approach is to train a multi-class deep classifier to distinguish between various geometric transformations. At test time, we construct the normality score function by approximating the softmax output predictions vector using generalized forms of Dirichlet distributions, including the generalized Dirichlet (GD), scaled Dirichlet (SD), shifted scaled Dirichlet (SSD), and Beta-Liouville (BL) distributions. These generalized forms of the Dirichlet distribution are more robust in real-world applications compared to the standard Dirichlet distribution. They offer a more flexible covariance structure, making them suitable for approximating both symmetric and asymmetric distributions. For parameter estimation, we use the maximum likelihood method based on the transformed forms of the original data. In the second step, we extend our approach to non-image data by selecting appropriate transformations. This transformation procedure involves building several neural networks, training them on the original data to obtain its transformed form, and then passing the transformed data through an auto-encoder. Experiments conducted on both image and non-image data demonstrate the effectiveness of our proposed strategy. The results show that our anomaly detection models, based on generalized Dirichlet distributions, outperform baseline techniques and achieve high Area Under the Receiver Operating Characteristic (AUROC) scores.
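The normality score described above evaluates a softmax output vector under a fitted Dirichlet-family density: in-distribution inputs yield confident, familiar prediction vectors that score high, while anomalies yield diffuse vectors that score low. A stdlib-only sketch using the standard Dirichlet log-density (the paper's generalized Dirichlet, scaled Dirichlet, SSD and Beta-Liouville variants add flexibility on top of this; the concentration values below are made up for illustration):

```python
import math

def dirichlet_log_density(p, alpha):
    """Log-density of a probability vector p under Dirichlet(alpha),
    used here as a normality score for a classifier's softmax output."""
    assert abs(sum(p) - 1.0) < 1e-9
    return (math.lgamma(sum(alpha))
            - sum(math.lgamma(a) for a in alpha)
            + sum((a - 1.0) * math.log(x) for a, x in zip(alpha, p)))

# Parameters would be fit by maximum likelihood on normal data; this
# hypothetical alpha peaks near confident predictions of class 0
alpha = [8.0, 1.5, 1.5]
confident = [0.9, 0.05, 0.05]      # transformation recognised: normal
diffuse = [1 / 3, 1 / 3, 1 / 3]    # transformation not recognised: anomalous
score_normal = dirichlet_log_density(confident, alpha)
score_anom = dirichlet_log_density(diffuse, alpha)
```

Thresholding this score (or ranking by it, as in the AUROC evaluation) separates normal from anomalous samples.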
Enhancing Ulcerative Colitis Diagnosis: A Multi-Level Classification Approach with Deep Learning
The evaluation of disease severity through endoscopy is pivotal in managing patients with ulcerative colitis, a condition with significant clinical implications. However, endoscopic assessment is subject to inherent intra- and inter-observer variation, compromising the reliability of individual evaluations. This study addresses this challenge by harnessing deep learning to develop a robust model capable of discerning discrete levels of endoscopic disease severity. To this end, a multi-faceted approach is adopted. The dataset is carefully preprocessed, enhancing the quality and discriminative features of the images through contrast limited adaptive histogram equalization (CLAHE). A diverse array of data augmentation techniques, encompassing various geometric transformations, is leveraged to increase the dataset's diversity and facilitate effective feature extraction. A fundamental aspect of the approach is the strategic use of transfer learning with a modified ResNet-50 architecture; this modification, informed by domain expertise, contributed significantly to the model's classification performance. The resulting model is highly promising, demonstrating an accuracy of 86.85%, a recall of 82.11%, and a precision of 89.23%.
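Geometric augmentation of the kind mentioned here typically enumerates orientation variants of each training image. A minimal NumPy sketch (the exact augmentations the study used are not specified; flips and 90-degree rotations are one common, label-preserving choice):

```python
import numpy as np

def geometric_augment(img):
    """Return the eight orientation variants of an image (the four
    90-degree rotations, each with and without a horizontal flip)."""
    variants = []
    for k in range(4):
        rot = np.rot90(img, k)
        variants.append(rot)
        variants.append(np.fliplr(rot))
    return variants

img = np.arange(6, dtype=float).reshape(2, 3)
augmented = geometric_augment(img)   # 8 views of one endoscopy frame
```

Each variant keeps the severity label of the original frame, multiplying the effective training set size without new annotations.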
Development of the Non-Iterative Supervised Learning Predictor Based on the Ito Decomposition and SGTM Neural-Like Structure for Managing Medical Insurance Costs
The paper describes a new non-iterative linear supervised learning predictor. It is based on the use of Ito decomposition and the neural-like structure of the successive geometric transformations model (SGTM). Ito decomposition (Kolmogorov–Gabor polynomial) is used to extend the inputs of the SGTM neural-like structure. This provides high approximation properties for solving various tasks. The search for the coefficients of this polynomial is carried out using the fast, non-iterative training algorithm of the SGTM linear neural-like structure. The developed method provides high speed and increased generalization properties. The simulation of the developed method’s work for solving the medical insurance costs prediction task showed a significant increase in accuracy compared with existing methods (common SGTM neural-like structure, multilayer perceptron, Support Vector Machine, adaptive boosting, linear regression). Given the above, the developed method can be used to process large amounts of data from a variety of industries (medicine, materials science, economics, etc.) to improve the accuracy and speed of their processing.
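The pipeline above amounts to a polynomial feature expansion followed by one non-iterative linear solve. A NumPy sketch of a second-degree Kolmogorov-Gabor (Ito) expansion with an ordinary least-squares solve standing in for the SGTM neural-like structure's own non-iterative training algorithm (which this sketch does not reproduce):

```python
import numpy as np

def ito_expand(X):
    """Second-degree Kolmogorov-Gabor expansion of the inputs:
    [1, x_i, x_i * x_j (i <= j)] -- a fixed-degree simplification."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    for i in range(d):
        for j in range(i, d):
            cols.append(X[:, i] * X[:, j])
    return np.stack(cols, axis=1)

def fit_linear(X, y):
    """Non-iterative training: a single least-squares solve on the
    expanded inputs, no gradient iterations."""
    A = ito_expand(X)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict(w, X):
    return ito_expand(X) @ w

# A quadratic target (noiseless, synthetic) is recovered exactly
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = 3.0 + 2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1]
w = fit_linear(X, y)
pred = predict(w, X)
```

The one-shot solve is what gives this family of predictors its speed advantage over iteratively trained networks.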
An improved block based copy-move forgery detection technique
With the increasing demand for verifying the authenticity of digital images, image forgery detection techniques are being widely studied. Copy-move forgery is among the most common forgeries: a part of an image is copied and pasted onto the same or a different image, concealing image content. Most existing copy-move forgery detection techniques degrade under geometric transformations. In this paper, a technique based on the Discrete Cosine Transform (DCT) and Singular Value Decomposition (SVD) is proposed to detect copy-move image forgery. The DCT transforms the image from the spatial domain to the frequency domain, and SVD reduces the feature vector dimension; their combination makes the proposed scheme robust against compression, geometric transformations, and noise. A Support Vector Machine (SVM) classifier is applied to the feature set to classify images as forged or authentic. Once an image is detected as forged, K-means clustering is applied to the feature vectors to localize the forged region: similar blocks are identified and marked according to a distance threshold. The use of SVD provides stability and invariance under geometric transformations. The proposed scheme is evaluated with and without post-processing operations on the images, at both the pixel level and the image level. It outperforms various state-of-the-art Copy-Move Forgery Detection (CMFD) techniques in terms of accuracy, precision, recall, and F1, and also provides better results against rotation, scaling, noise, and JPEG compression.
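The per-block feature described here can be sketched in NumPy: take the 2D DCT of an image block, then keep the singular values of the coefficient matrix as a compact descriptor that matches between a copied block and its source. This is a generic illustration, not the paper's exact feature construction:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def block_feature(block):
    """Feature vector for one image block: 2D DCT followed by SVD.
    Singular values are low-dimensional and relatively stable under
    compression, noise and mild geometric distortion."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T          # separable 2D DCT-II
    return np.linalg.svd(coeffs, compute_uv=False)

block = np.arange(64, dtype=float).reshape(8, 8)
f1 = block_feature(block)
f2 = block_feature(block.copy())      # an identical block pasted elsewhere
```

Blocks whose feature vectors fall within the distance threshold are the candidate copy-move pairs that the clustering stage then groups and marks.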
Improved Splitting-Integrating Methods for Image Geometric Transformations: Error Analysis and Applications
Geometric image transformations are fundamental to image processing, computer vision and graphics, with critical applications to pattern recognition and facial identification. The splitting-integrating method (SIM) is well suited to the inverse transformation T⁻¹ of digital images and patterns, but it encounters difficulties in nonlinear solutions for the forward transformation T. We propose improved techniques that entirely bypass nonlinear solutions for T, simplify numerical algorithms and reduce computational costs. Another significant advantage is greater flexibility for general and complicated transformations T. In this paper, we apply the improved techniques to the harmonic, Poisson and blending models, which transform the original shapes of images and patterns into arbitrary target shapes. These models are, essentially, Dirichlet boundary value problems of elliptic equations, and we choose the simple finite difference method (FDM) to seek their approximate transformations. We focus on analyzing errors of image greyness: under the improved techniques, we derive the greyness errors of images under T. We obtain the optimal convergence rates O(H²) + O(H/N²) for piecewise bilinear interpolations (μ = 1) and smooth images, where H (≪ 1) denotes the mesh resolution of an optical scanner, and N is the division number of a pixel split into N² sub-pixels. Beyond smooth images, we address the practical challenges posed by discontinuous images, deriving the error bounds O(H^β) + O(H^β/N²), β ∈ (0, 1), for μ = 1; for piecewise continuous images with interior and exterior greyness jumps, we obtain O(H) + O(H/N²). Compared with the error analysis in our previous study, where the image greyness is often assumed to be smooth enough, this error analysis is significant for geometric image transformations. Hence, the improved algorithms, supported by rigorous error analysis of image greyness, may enhance their wide applications in pattern recognition, facial identification and artificial intelligence (AI).
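The core of the splitting-integrating idea is that each target pixel is split into N² sub-pixels, each sub-pixel is mapped back through the inverse transformation, and the sampled source greyness values are averaged. A simplified NumPy sketch using piecewise-constant greyness (nearest-pixel lookup) rather than the bilinear μ = 1 interpolation analyzed in the paper:

```python
import numpy as np

def sim_inverse_transform(src, T_inv, out_shape, N=4):
    """Splitting-integrating sketch: split each target pixel into N*N
    sub-pixels, map each sub-pixel centre back through T_inv, and
    average the (piecewise-constant) source greyness values."""
    H, W = out_shape
    out = np.zeros(out_shape)
    offs = (np.arange(N) + 0.5) / N       # sub-pixel centre offsets
    for i in range(H):
        for j in range(W):
            acc = 0.0
            for di in offs:
                for dj in offs:
                    x, y = T_inv(i + di, j + dj)
                    xi = min(max(int(x), 0), src.shape[0] - 1)
                    yj = min(max(int(y), 0), src.shape[1] - 1)
                    acc += src[xi, yj]
            out[i, j] = acc / (N * N)
    return out

# Sanity check: the identity transform reproduces the image exactly
img = np.arange(16, dtype=float).reshape(4, 4)
restored = sim_inverse_transform(img, lambda x, y: (x, y), img.shape)
```

For a real harmonic or blending model, `T_inv` would be the FDM approximation of the elliptic boundary value problem; increasing N shrinks the O(·/N²) term in the greyness error bounds quoted above.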
Adaptive power-law and cdf based geometric transformation for low contrast image enhancement
Image enhancement manipulates an image to make it more meaningful and effective for a user-specific problem. Most enhancement techniques transform the input intensities into either higher-order or lower-order intensities according to the algorithm's design, but in certain cases the input intensities need to be transformed by a balanced combination of both. Moreover, 2D geometric transformation is mainly used to transform the objects present in an image. Here, a deliberate fusion of the gamma and 2D geometric transformation concepts is used for intensity transformation. The proposed method first divides the histogram into three sub-sections according to a homogeneity value, representing the dark, gray and bright sections of the histogram. Each sub-section is then transformed locally using adaptive gamma and 2D geometric scaling transformations, and the transformed sub-sections are merged again by a 2D translation operation. In parallel, a global gamma transformation is computed for the entire histogram. Finally, the overall transformation is obtained by combining the local and global transformations. A comparison with other state-of-the-art techniques demonstrates the significance of the proposed method, which offers a new and innovative dimension of image enhancement.
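The two building blocks named in the title, a power-law (gamma) intensity transform and a CDF-based mapping, are standard operations. A minimal NumPy sketch of each on a normalized [0, 1] image (generic versions, not the paper's adaptive, histogram-sectioned variants):

```python
import numpy as np

def gamma_transform(img, gamma):
    """Power-law (gamma) transform on a [0, 1] image: gamma < 1
    brightens dark regions, gamma > 1 darkens bright regions."""
    return np.clip(img, 0.0, 1.0) ** gamma

def cdf_equalize(img, bins=256):
    """CDF-based intensity mapping (histogram equalization) on a
    [0, 1] image: each pixel is replaced by its empirical CDF value."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist) / img.size
    idx = np.clip((img * bins).astype(int), 0, bins - 1)
    return cdf[idx]

dark = np.array([0.05, 0.10, 0.20])
bright = gamma_transform(dark, 0.5)   # square root stretches the low end
eq = cdf_equalize(dark)               # spreads intensities over [0, 1]
```

The proposed method applies transforms like these per histogram sub-section with adaptive parameters, then recombines the sections, rather than using one global mapping.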
Research on a Sliding Detection Method for an Elevator Traction Wheel Based on Machine Vision
To solve the problem that elevator traction wheel slippage is difficult to detect quantitatively, a slippage detection method based on machine vision is proposed. Slip between the traction wheel and the wire rope occurs during the round-trip operation of the elevator; the circumferential displacement between the traction wheel and the wire rope is obtained through an image signal processing algorithm and related data analysis. First, the ROI (region of interest) of the collected original image is selected to reduce redundant information. Then, a nonlinear geometric transformation maps the image to a target image with an equal object distance. Finally, the centroid method is used to obtain the slippage between the traction wheel and the wire rope. Field test results show that the absolute error of the developed system is 0.74 mm and the relative error is 2%; the expanded uncertainty of the slip detection result is (33.8 ± 0.69) mm with a confidence probability of p = 0.95 and degrees of freedom v = 8, which meets the accuracy requirements of elevator maintenance.
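The centroid method in the final step reduces to an intensity-weighted mean of pixel coordinates; tracking the centroid of a marked feature across frames gives the displacement from which slip is computed. A minimal NumPy sketch with a synthetic marker (illustrative only, not the paper's processing chain):

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (row, col) of a greyscale image,
    e.g. of a marker on the traction wheel or wire rope."""
    total = img.sum()
    rows = np.arange(img.shape[0])
    cols = np.arange(img.shape[1])
    return ((img.sum(axis=1) * rows).sum() / total,
            (img.sum(axis=0) * cols).sum() / total)

# A bright 2x2 marker whose true centre is at (2.5, 3.5); slip would be
# the centroid shift between successive frames, scaled to millimetres
frame = np.zeros((6, 8))
frame[2:4, 3:5] = 1.0
r, c = centroid(frame)
```

Because the centroid is computed from many pixels, it localizes the marker to sub-pixel precision, which is what makes a sub-millimetre absolute error plausible after calibration.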