2,345 result(s) for "Texture recognition"
Deep Filter Banks for Texture Recognition, Description, and Segmentation
Visual textures have played a key role in image understanding because they convey important semantics of images, and because texture representations that pool local image descriptors in an orderless manner have had a tremendous impact in diverse applications. In this paper we make several contributions to texture understanding. First, instead of focusing on texture instance and material category recognition, we propose a human-interpretable vocabulary of texture attributes to describe common texture patterns, complemented by a new describable texture dataset for benchmarking. Second, we look at the problem of recognizing materials and texture attributes in realistic imaging conditions, including when textures appear in clutter, developing corresponding benchmarks on top of the recently proposed OpenSurfaces dataset. Third, we revisit classic texture representations, including bag-of-visual-words and Fisher vectors, in the context of deep learning and show that these have excellent efficiency and generalization properties if the convolutional layers of a deep model are used as filter banks. In this manner we obtain state-of-the-art performance on numerous datasets well beyond textures, an efficient method to apply deep features to image regions, and benefits in transferring features from one domain to another.
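The orderless pooling the abstract refers to can be illustrated with a minimal bag-of-visual-words sketch: local descriptors are assigned to their nearest codeword and pooled into a normalized histogram. The 2-D codebook and descriptors below are invented purely for illustration; real systems would use learned codebooks over CNN or SIFT descriptors.

```python
# Toy bag-of-visual-words pooling: assign each local descriptor to its
# nearest codeword, then pool counts into an orderless L1-normalized histogram.

def nearest(codebook, d):
    # index of the codeword closest to descriptor d (squared Euclidean distance)
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], d)))

def bovw_histogram(codebook, descriptors):
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[nearest(codebook, d)] += 1
    total = sum(hist)
    return [h / total for h in hist]  # orderless: descriptor positions are discarded

codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]            # hand-made codewords
descriptors = [(0.1, 0.1), (0.9, 1.1), (0.2, 0.0), (0.1, 0.9)]
print(bovw_histogram(codebook, descriptors))  # → [0.5, 0.25, 0.25]
```

Fisher vectors replace the hard count per codeword with soft first- and second-order statistics under a Gaussian mixture, but the orderless pooling structure is the same.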
Tactile Recognition of Shape and Texture on the Same Substrate
Due to the difficulties in designing sensor arrays with a wide detection range, high sensitivity, and the ability to convert tangential force into normal force on the same substrate, the integration of shape and texture recognition in one electronic skin (e-skin) has not been realized so far. Herein, an e-skin tactile-sensing system is presented, based on resistive pressure-sensing units (serving as Meissner corpuscles) unevenly distributed on a bionic hand-shaped polyimide substrate, which can perform shape and texture recognition concurrently. A multilayer microporous structure with different pore sizes is designed and introduced into the sensor, giving each sensing unit ultrahigh sensitivity and a wide detection range. Meanwhile, a customized micro-pyramid array is developed and assembled onto the sensor array, realizing the transformation from tangential force to normal force. With the help of artificial intelligence techniques, the recognition accuracy reaches 100% for 8 different shapes and 99.7% for 10 different textures. The proposed design strategy enables compatible fabrication, simple signal processing, and convenient extension of bionic free-shaped e-skin, paving a promising way for the adoption of e-skin in large-scale intelligent wearable applications.
Fast dynamic texture recognition based on block estimation and axial spatio-temporal motion vector components
Current dynamic texture motion-based features are almost all pixel-based signatures, which makes recognition slow and favors quality over computational speed, an unreasonable trade-off for today's time-sensitive applications. The goal of this work is therefore to balance recognition accuracy and computational speed by exploring block-based motion features, which are better suited to fast characterization than pixel-based ones. In this paper, we propose four fast, innovative block-based motion approaches that characterize raw dynamic texture videos for recognition without any further segmentation. Their originality is to adopt fast block motion estimation algorithms and to introduce novel Axial Spatio-temporal Motion Vector Components, which are analyzed statistically using customized space-time texture features (first-order, second-order, and higher-order). Experiments show a satisfying equilibrium between recognition accuracy and computational speed on multiple dynamic texture datasets (DynTex, DynTex++, and UCLA) compared with pixel-based techniques, with a large reduction in computation time.
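The block-based idea can be sketched with a minimal full-search block matcher on two tiny grayscale frames: for each block of the previous frame, the displacement minimizing the sum of absolute differences (SAD) within a search window is the motion vector, and its vertical/horizontal components are the kind of axial quantities such methods build statistics over. The frames below are synthetic, chosen only so the shift is easy to verify; this is a generic sketch, not the paper's exact feature set.

```python
# Full-search block matching: find the displacement of a block between frames
# by minimizing the sum of absolute differences (SAD) over a small window.

def sad(prev, curr, by, bx, dy, dx, bs):
    # matching cost between the block at (by, bx) in prev and its shifted copy in curr
    return sum(abs(prev[by + i][bx + j] - curr[by + dy + i][bx + dx + j])
               for i in range(bs) for j in range(bs))

def motion_vector(prev, curr, by, bx, bs=2, radius=1):
    h, w = len(curr), len(curr[0])
    best_cost, best = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if 0 <= by + dy and by + dy + bs <= h and 0 <= bx + dx and bx + dx + bs <= w:
                cost = sad(prev, curr, by, bx, dy, dx, bs)
                if best_cost is None or cost < best_cost:
                    best_cost, best = cost, (dy, dx)
    return best  # (vertical, horizontal) axial components

# a bright 2x2 patch moves one pixel down and one pixel right
prev = [[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]]
curr = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 9, 9], [0, 0, 9, 9]]
print(motion_vector(prev, curr, 1, 1))  # → (1, 1)
```

Fast block matching algorithms replace the exhaustive double loop with sparse search patterns, which is precisely where the speed advantage over pixel-based signatures comes from.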
Efficient bat-inspired block matching algorithm with novel motion energy directional histograms for dynamic texture fast recognition
Motion estimation is a crucial step in dynamic texture (DT) motion-based recognition systems, as it directly affects the system's overall accuracy and computational efficiency. Whereas efficient motion estimation methods are pixel-based, prioritizing recognition quality over computational complexity, this trade-off may not be acceptable in time-sensitive applications. Block matching algorithms (BMA) therefore stand as potential alternatives; however, classic BMA often fall into local optima, yielding suboptimal solutions, particularly for videos with complex motion such as DT. In this paper, we propose a novel bat-inspired block matching approach for motion estimation that overcomes the problem of falling into local optima. Our approach is inspired by the powerful search capacity of the bat algorithm, which seeks the best matching block within a search space, aiming at improved accuracy and faster convergence. We then extract motion features from the matched block fields to characterize the different DT videos. Two comprehensive sets of experiments were performed to validate the proposed approaches. First, the bat-inspired block motion estimation was compared against common algorithms on various standard video sequences. Second, the effectiveness of the introduced Dynamic Texture Recognition (DTR) system was demonstrated on well-known DT datasets, with comparative studies against state-of-the-art methods. The results achieve a balance between computational speed and accuracy on multiple datasets.
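A loose sketch of the population-based idea (not the authors' exact algorithm): a small population of candidate displacements performs biased random walks around the best displacement found so far, instead of exhaustively scanning the whole search window. All frames and parameters below are invented for illustration, and the update rule is a deliberately simplified stand-in for the bat algorithm's frequency/loudness machinery.

```python
import random

def sad(prev, curr, by, bx, dy, dx, bs):
    # sum of absolute differences between a block in prev and a shifted block in curr
    return sum(abs(prev[by + i][bx + j] - curr[by + dy + i][bx + dx + j])
               for i in range(bs) for j in range(bs))

def bat_block_search(prev, curr, by, bx, bs=2, radius=2, n_bats=4, iters=10, seed=0):
    rng = random.Random(seed)
    h, w = len(curr), len(curr[0])

    def valid(dy, dx):
        return 0 <= by + dy and by + dy + bs <= h and 0 <= bx + dx and bx + dx + bs <= w

    def clip(v):
        return max(-radius, min(radius, v))

    # population of candidate displacements; (0, 0) is assumed valid for the caller's block
    bats = [(0, 0)] + [(rng.randint(-radius, radius), rng.randint(-radius, radius))
                       for _ in range(n_bats - 1)]
    best, best_cost = (0, 0), sad(prev, curr, by, bx, 0, 0, bs)
    for dy, dx in bats[1:]:
        if valid(dy, dx):
            c = sad(prev, curr, by, bx, dy, dx, bs)
            if c < best_cost:
                best, best_cost = (dy, dx), c
    for _ in range(iters):
        for i in range(len(bats)):
            # each bat random-walks around the best displacement found so far
            ny = clip(best[0] + rng.randint(-1, 1))
            nx = clip(best[1] + rng.randint(-1, 1))
            bats[i] = (ny, nx)
            if valid(ny, nx):
                c = sad(prev, curr, by, bx, ny, nx, bs)
                if c < best_cost:
                    best, best_cost = (ny, nx), c
    return best, best_cost

prev = [[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]]
curr = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 9, 9], [0, 0, 9, 9]]
print(bat_block_search(prev, curr, 1, 1))
```

The best-so-far cost is monotone non-increasing, so the search never does worse than the zero-displacement match; escaping local optima relies on the random perturbations, as in the full bat algorithm.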
Automatic Unsupervised Texture Recognition Framework Using Anisotropic Diffusion-Based Multi-Scale Analysis and Weight-Connected Graph Clustering
A novel unsupervised texture classification technique is proposed in this work. The proposed method automatically clusters the textures of an image collection into similarity classes whose number is not known a priori. A nonlinear diffusion-based multi-scale texture analysis approach is introduced first. It creates an effective scale-space by using a well-posed anisotropic diffusion filtering model that is proposed and approximated numerically here. A feature extraction process using a bank of circularly symmetric 2D filters is applied at each scale, and a rotation-invariant texture feature vector is obtained for the current image by combining the feature vectors computed at all scales. Next, a weighted similarity graph is created whose vertices correspond to the texture feature vectors and whose edge weights are obtained from the distances between these vectors. A novel weighted graph clustering technique is then applied to this similarity graph to determine the texture classes. Numerical simulations and method comparisons illustrating the effectiveness of the described framework are also discussed.
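The scale-space construction can be illustrated with one explicit step of classic Perona-Malik-style anisotropic diffusion on a tiny grayscale grid: the conductance g falls off with the local gradient, so smoothing is strong inside uniform regions and weak across edges. This is a generic textbook scheme, not the paper's particular well-posed model; the grid, K, and lam values are illustrative only.

```python
import math

# One explicit Perona-Malik diffusion step with 4-neighbor fluxes weighted by
# an edge-stopping conductance g(|grad|) = exp(-(|grad|/K)^2).

def diffuse_step(img, K=2.0, lam=0.2):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    g = lambda d: math.exp(-(d / K) ** 2)  # near 1 in flat regions, near 0 at edges
    for y in range(h):
        for x in range(w):
            flux = 0.0
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w:
                    d = img[ny][nx] - img[y][x]
                    flux += g(abs(d)) * d
            out[y][x] = img[y][x] + lam * flux
    return out

# a sharp 0/10 edge: diffusion barely crosses it, so the edge survives
img = [[0.0, 0.0, 10.0, 10.0],
       [0.0, 0.0, 10.0, 10.0]]
print(diffuse_step(img))
```

Iterating this step with increasing total time yields the multi-scale stack on which the per-scale filter-bank features are computed.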
A Deep Learning Approach to Manage and Reduce Plastic Waste in the Oceans
The accumulation of plastic objects in the Earth's environment adversely affects wildlife, wildlife habitats, and humans. A huge amount of unrecycled plastic ends up in landfills or is thrown into unregulated dump sites. In many cases, particularly in developing countries, plastic waste is thrown into rivers, streams, and oceans. In this work, we employ deep learning techniques for image processing and classification to recognize plastic waste. Our work aims to identify plastic textures and plastic objects in images in order to reduce plastic waste in the oceans and facilitate waste management. We use transfer learning in two ways: in the first strategy, a CNN model pre-trained on ImageNet is used as a feature extractor followed by an SVM classifier; the second strategy fine-tunes the pre-trained CNN model. Our approach was trained and tested on two challenging datasets, one for texture recognition and the other for object detection, and achieves very satisfactory results with both deep learning strategies.
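The first transfer-learning strategy (a frozen feature extractor feeding a separate classifier) can be reduced to a stdlib toy: a fixed "feature extractor" (here just a hand-made linear map standing in for a pre-trained CNN) whose outputs train a nearest-centroid classifier. All data, the map, and the class names are invented; a real pipeline would use CNN activations and an SVM as the abstract describes.

```python
# Frozen extractor + simple classifier, the structure of strategy one above.

def extract(x):
    # stand-in for a frozen pre-trained network: never updated during training
    return (x[0] + x[1], x[0] - x[1])

def fit_centroids(samples):
    sums, counts = {}, {}
    for x, label in samples:
        f = extract(x)
        s = sums.setdefault(label, [0.0] * len(f))
        for i, v in enumerate(f):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: tuple(v / counts[lab] for v in s) for lab, s in sums.items()}

def predict(centroids, x):
    f = extract(x)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2 for a, b in zip(centroids[lab], f)))

train = [((1.0, 1.0), "plastic"), ((1.2, 0.8), "plastic"),
         ((-1.0, -1.0), "other"), ((-0.8, -1.2), "other")]
centroids = fit_centroids(train)
print(predict(centroids, (0.9, 1.1)))  # → plastic
```

Fine-tuning, the second strategy, instead updates the extractor's own parameters on the new task; the frozen variant is cheaper and needs far less labeled data.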
Chaotic features for dynamic textures recognition
This paper presents a novel framework for dynamic texture (DT) modeling and recognition that investigates the use of chaotic features. We propose to extract chaotic features from each pixel intensity series in a video; the chaotic features of each pixel intensity series are concatenated into a chaotic feature vector, and a video is thus modeled as a feature vector matrix. Two approaches to DT recognition are then investigated. In the first, a bag-of-words approach represents each video as a histogram of chaotic feature vectors, and recognition is carried out by a 1-nearest-neighbor classifier. In the second, we investigate the earth mover's distance (EMD): the mean shift clustering algorithm is employed to cluster each feature vector matrix, and the EMD is used to compare the similarity between two videos, with the resulting matching scores used for DT recognition. We have tested our approach on four datasets and obtained encouraging results that demonstrate the feasibility and validity of the proposed methods.
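One classic chaotic descriptor of a time series is the largest Lyapunov exponent, which measures sensitive dependence on initial conditions. A minimal illustration on the logistic map follows; a pixel intensity series would replace the synthetic orbit, and the map, r, and x0 are illustrative stand-ins, not the paper's exact feature set.

```python
import math

# Lyapunov exponent of the logistic map x -> r*x*(1-x), estimated as the
# time average of log|f'(x)| along an orbit. For r = 4 the true value is ln 2.

def lyapunov_logistic(r=4.0, x0=0.3, n=10000, burn=100):
    x = x0
    for _ in range(burn):                      # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1 - 2 * x)))  # log |f'(x)| with f'(x) = r(1-2x)
        x = r * x * (1 - x)
    return acc / n

print(lyapunov_logistic())  # close to ln 2 ~ 0.693; positive => chaotic
```

A positive exponent is one of several such invariants (alongside, e.g., correlation dimension) that can be packed into the per-pixel chaotic feature vectors the paper describes.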
Haptic Recognition of Texture Surfaces Using Semi-Supervised Feature Learning Based on Sparse Representation
Haptic cognitive models map the physical stimuli of texture surfaces to subjective haptic cognition, providing robotic systems with intelligent haptic cognition so they can perform dexterous manipulations in a manner similar to that of humans. Nevertheless, the question remains of how to extract features that are stable and reflect biological perceptual characteristics as inputs to these models. To address this issue, a semi-supervised sparse representation method is developed to predict subjective haptic cognitive intensity in different haptic perceptual dimensions of texture surfaces. We conduct standardized interaction and perception experiments on textures from common objects in daily life. Effective data cue sifting, perceptual filtering, and semi-supervised feature extraction steps are carried out within the sparse representation process to ensure that the source data and features are complete and effective. The results indicate that the haptic cognitive model using the proposed method performs well in fitting and predicting perceptual intensity along the perceptual dimensions of "hardness," "roughness," and "slipperiness" for texture surfaces. Compared with previous methods, such as models using multilayer regression and hand-crafted features, the standardized interaction, cue sifting, perceptual filtering, and semi-supervised feature extraction greatly improve accuracy by improving the completeness of the collected data, the effectiveness of the features, and the simulation of some physiological cognitive mechanisms. The improved method can be implemented to boost the performance of haptic cognitive models for texture surfaces, and can also inspire research on intelligent cognition and haptic rendering systems.
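At the core of any sparse representation method is sparse coding, which can be sketched with a toy orthogonal matching pursuit (OMP): greedily pick the dictionary atom most correlated with the residual, then subtract its projection. The dictionary here is hand-made and orthonormal so recovery is exact; real texture dictionaries would be learned, and for non-orthogonal atoms full OMP re-fits all selected coefficients by least squares at each step.

```python
# Toy orthogonal matching pursuit over a unit-norm orthonormal dictionary.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def omp(dictionary, signal, k):
    residual = list(signal)
    support, coeffs = [], {}
    for _ in range(k):
        # atom most correlated with the current residual
        i = max(range(len(dictionary)),
                key=lambda j: abs(dot(dictionary[j], residual)))
        c = dot(dictionary[i], residual)  # projection (atoms have unit norm)
        support.append(i)
        coeffs[i] = coeffs.get(i, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, dictionary[i])]
    return support, coeffs

atoms = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]  # orthonormal
signal = [2.0, 0.0, -0.5]                                    # = 2*atom0 - 0.5*atom2
print(omp(atoms, signal, 2))  # recovers support [0, 2] with coefficients 2.0, -0.5
```

The recovered sparse coefficients are the kind of representation the paper's semi-supervised pipeline feeds into its perceptual-intensity predictor.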
Machine Vision Framework for Real-Time Surface Yarn Alignment Defect Detection in Carbon-Fiber-Reinforced Polymer Preforms
Carbon-fiber-reinforced polymer (CFRP) preforms are vital for high-performance composite structures, yet real-time detection of surface yarn alignment defects is hindered by their complex textures. This study introduces a novel machine vision framework for the precise, real-time identification of such defects in CFRP preforms. We propose obtaining the frequency spectrum by removing the zero-frequency component from the projection curve of carbon fiber fabric images, which aids in identifying the cycle number of the warp and weft yarns. A texture structure recognition method based on the artistic conception drawing (ACD) revert is applied to distinguish the complex and diverse surface texture of the woven carbon fabric prepreg from potential surface defects. Based on linear discriminant analysis for defect-area threshold extraction, a defect boundary tracking rule was developed to localize defects. Validated and compared on over 1500 images captured from actual production lines, the proposed method significantly outperforms other inspection approaches, achieving a 97.02% recognition rate at 0.38 s per image. This research contributes new scientific insight into the correlation between yarn alignment anomalies and machine-vision-based texture analysis in CFRP preforms, potentially advancing our fundamental understanding of defect mechanisms in composite materials and enabling data-driven quality control in advanced manufacturing.
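The cycle-counting step described above can be sketched in a few lines: take a 1-D projection curve of the fabric image, remove the zero-frequency (mean) component, and read the yarn cycle count off the dominant DFT bin. The sinusoidal curve below is synthetic and the function name is illustrative; a real projection curve would come from summing image columns.

```python
import cmath
import math

# Remove the zero-frequency component of a projection curve, then find the
# dominant DFT bin, whose index is the number of periodic (yarn) repeats.

def dominant_cycles(curve):
    n = len(curve)
    mean = sum(curve) / n                  # the zero-frequency component
    centred = [v - mean for v in curve]    # remove it before spectral analysis
    spectrum = [abs(sum(centred[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                        for t in range(n)))
                for k in range(n // 2 + 1)]
    return max(range(1, n // 2 + 1), key=lambda k: spectrum[k])

n, cycles = 64, 8
curve = [5.0 + math.sin(2 * math.pi * cycles * t / n) for t in range(n)]
print(dominant_cycles(curve))  # → 8
```

Without removing the mean, the zero-frequency bin dwarfs every periodic component, which is exactly why the paper strips it first; production code would use an FFT rather than this O(n^2) DFT.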