3,273 result(s) for "texture feature"
Color–Texture Pattern Classification Using Global–Local Feature Extraction, an SVM Classifier, with Bagging Ensemble Post-Processing
Many applications in image analysis require the accurate classification of complex patterns involving both color and texture, e.g., in content-based image retrieval, biometrics, and the inspection of fabrics, wood, steel, ceramics, and fruits, among others. This paper proposes a new method for pattern classification that uses both color and texture information. The method comprises the following steps: division of each image into global and local samples; texture and color feature extraction from the samples using Haralick statistics and the binary quaternion-moment-preserving method; a classification stage using a support vector machine (SVM); and a final post-processing stage employing a bagging ensemble. One of the main contributions of this method is the image partition, which represents each image by both global and local features. This partition captures most of the information present in the image for colored texture classification, yielding improved results. The proposed method was tested on four databases extensively used in color–texture classification: the Brodatz, VisTex, Outex, and KTH-TIPS2b databases, yielding correct classification rates of 97.63%, 97.13%, 90.78%, and 92.90%, respectively. The post-processing stage improved those results to 99.88%, 100%, 98.97%, and 95.75%, respectively. We compared our results to the best previously published results on the same databases, finding significant improvements in all cases.
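The Haralick-statistics step of such pipelines can be sketched in plain NumPy. The 8-level quantization and single (1, 0) pixel offset below are illustrative choices, not the configuration used in the paper, and the global/local sampling scheme that is the paper's main contribution is not shown.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_features(p):
    """Three classic Haralick statistics from a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))
    energy = np.sum(p ** 2)  # angular second moment
    return contrast, homogeneity, energy

# Quantize a toy grayscale image to 8 levels, then extract features.
rng = np.random.default_rng(0)
img = (rng.random((32, 32)) * 8).astype(int)
contrast, homogeneity, energy = haralick_features(glcm(img))
```

In a full pipeline, such per-sample feature vectors would feed the SVM, with the bagging ensemble voting over the resulting predictions in the post-processing stage.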
Exploiting deep textures for image retrieval
Deep features and texture features each have advantages in image representation. However, exploiting deep textures for image retrieval is challenging because it is difficult to make texture features and deep features compatible. To address this problem, we propose a novel image-retrieval method named the deep texture feature histogram (DTFH). The main highlights are: (1) We propose a novel method for identifying effective, partially effective, or invalid feature maps via ranking based on Haralick's statistics, which helps in understanding image content and identifying objects, as these statistics have clear physical significance. (2) We use Gabor filtering to mimic the human orientation-selection mechanism, which allows deep texture features to contain a good representation of orientation, thereby enhancing discriminative power. (3) We combine the advantages of classical texture features and deep features to provide a compact representation. This offers a new yet simple way to exploit deep features via traditional texture features. Comparative experiments demonstrate that deep texture features provide highly competitive performance in image retrieval in terms of mean average precision (mAP), and provide new insights into the exploitation of traditional texture features and deep features.
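The orientation-selective Gabor filtering in highlight (2) can be sketched as a small filter bank. The kernel size, sigma, and wavelength below are illustrative defaults, not the DTFH settings.

```python
import numpy as np

def gabor_kernel(theta, ksize=9, sigma=2.0, lam=4.0):
    """Real part of a Gabor kernel oriented at angle theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated carrier axis
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)

def orientation_responses(img, n_orient=4):
    """Mean absolute filter response per orientation (valid convolution)."""
    responses = []
    for k in range(n_orient):
        g = gabor_kernel(np.pi * k / n_orient)
        kh, kw = g.shape
        resp = np.array([[(img[i:i + kh, j:j + kw] * g).sum()
                          for j in range(img.shape[1] - kw + 1)]
                         for i in range(img.shape[0] - kh + 1)])
        responses.append(np.abs(resp).mean())
    return np.array(responses)

# Vertical stripes with a 4-pixel period: the theta = 0 filter, whose
# carrier oscillates horizontally at the matching wavelength, dominates.
img = np.tile([0.0, 0.0, 1.0, 1.0], (20, 5))
resp = orientation_responses(img)
```

The per-orientation responses illustrate the orientation selectivity that DTFH uses to enrich deep texture features.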
Multi temporal multispectral UAV remote sensing allows for yield assessment across European wheat varieties already before flowering
High throughput field phenotyping techniques employing multispectral cameras allow extracting a variety of variables and features to predict yield and yield-related traits, but little is known about which types of multispectral features are optimal for forecasting yield potential in the early growth phase. In this study, we aim to identify multispectral features that can accurately predict yield and aid in variety classification at different growth stages throughout the season. Furthermore, we hypothesize that texture features (TFs) are more suitable for variety classification than for yield prediction. Throughout 2021 and 2022, a trial involving 19 and 18 European wheat varieties, respectively, was conducted. Multispectral images, encompassing visible, Red-edge, and near-infrared (NIR) bands, were captured at 19 and 22 time points from tillering to harvest using an unmanned aerial vehicle (UAV) in the first and second year of the trial. Subsequently, orthomosaic images were generated, and various features were extracted, including single-band reflectances, vegetation indices (VI), and TFs derived from a gray level co-occurrence matrix (GLCM). The performance of these features in predicting yield and classifying varieties at different growth stages was assessed using random forest models. Measurements during the flowering stage demonstrated superior performance for most features. Specifically, Red reflectance achieved a root mean square error (RMSE) of 52.4 g m⁻² in the first year and 64.4 g m⁻² in the second year. The NDRE VI yielded the most accurate predictions, with an RMSE of 49.1 g m⁻² and 60.6 g m⁻², respectively. Moreover, TFs such as CONTRAST and DISSIMILARITY displayed the best performance in predicting yield, with RMSE values of 55.5 g m⁻² and 66.3 g m⁻² across the two years of the trial. Combining data from different dates enhanced yield prediction and stabilized predictions across dates.
TFs exhibited high accuracy in classifying low- and high-yielding varieties. The CORRELATION feature achieved an accuracy of 88% in the first year, while the HOMOGENEITY feature reached 92% in the second year. This study confirms the hypothesis that TFs are more suitable for variety classification than for yield prediction. The results underscore the potential of TFs derived from multispectral images in early yield prediction and varietal classification, offering insights for high-throughput phenotyping and precision agriculture alike.
Wavelet statistical texture features-based segmentation and classification of brain computed tomography images
A computer software system is designed for segmentation and classification of benign and malignant tumour slices in brain computed tomography images. In this study, the authors present a method to select both dominant run-length and co-occurrence texture features of the wavelet-approximation tumour region of each slice to be segmented by a support vector machine (SVM). Two-dimensional discrete wavelet decomposition is performed on the tumour image to remove noise. The images considered for this study belong to 208 tumour slices. Seventeen features are extracted, and six are selected using Student's t-test. The study constructed SVM and probabilistic neural network (PNN) classifiers with the selected features. The classification accuracy of both classifiers is evaluated using the k-fold cross-validation method. The segmentation results are also compared with ground truth provided by an experienced radiologist, and quantitative analysis between the ground truth and the segmented tumour is presented in terms of segmentation accuracy and segmentation error. The proposed system shows that some newly identified texture features make an important contribution to classifying tumour slices efficiently and accurately. The experimental results show that the proposed SVM classifier achieves high segmentation and classification effectiveness, as measured by sensitivity and specificity.
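The two preprocessing steps named in this abstract, wavelet decomposition and t-test feature ranking, can be sketched as follows. A single-level Haar transform and a Welch-style t-statistic stand in for the paper's two-dimensional discrete wavelet decomposition and Student's t-test; they are simplifications, not the paper's exact procedure.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform: returns the
    approximation subband LL and the detail subbands LH, HL, HH."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0  # horizontal averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0  # horizontal differences
    ll = (a[0::2] + a[1::2]) / 2.0
    lh = (a[0::2] - a[1::2]) / 2.0
    hl = (d[0::2] + d[1::2]) / 2.0
    hh = (d[0::2] - d[1::2]) / 2.0
    return ll, lh, hl, hh

def t_statistic(x, y):
    """Welch t-statistic for ranking a feature between two classes
    (e.g. benign vs. malignant slices)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return (x.mean() - y.mean()) / np.sqrt(
        x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
```

Texture features would be computed on the LL subband, and the features with the largest absolute t-statistics kept for the SVM.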
Feature Extraction Methods: A Review
Feature extraction is the core of diagnosis, classification, clustering, recognition, and detection, and many researchers are interested in choosing features suitable for their applications. In this paper, the most important feature extraction methods are collected and each is explained. The features are divided into four groups: geometric features, statistical features, texture features, and color features. For each method, the methodology, equations, and applications are explained. A comparison among the methods is made using two types of images, face images (163 images: 113 for training and 50 for testing) and plant images (130 images: 100 for training and 30 for testing), to test the geometric and texture features. The results show that the features best suited to one type of image may differ from those suited to another.
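Of the four feature groups the review covers, the statistical group is the easiest to illustrate: a handful of first-order moments computed directly from pixel intensities. The particular four moments below are a common choice, not necessarily the review's exact set, and the sketch assumes a non-constant image (otherwise the standard deviation is zero).

```python
import numpy as np

def statistical_features(img):
    """First-order statistical features of a grayscale image:
    mean, standard deviation, skewness, and excess kurtosis."""
    x = np.asarray(img, dtype=float).ravel()
    mu = x.mean()
    sigma = x.std()           # assumes sigma > 0 (non-constant image)
    z = (x - mu) / sigma
    return {"mean": mu,
            "std": sigma,
            "skewness": np.mean(z ** 3),
            "kurtosis": np.mean(z ** 4) - 3.0}  # excess kurtosis
```

Geometric, texture, and color features would each add their own descriptors (shape moments, GLCM statistics, color histograms) on top of this vector.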
A Study for Texture Feature Extraction of High-Resolution Satellite Images Based on a Direction Measure and Gray Level Co-Occurrence Matrix Fusion Algorithm
To address the problem of image texture feature extraction, a direction measure statistic based on the directionality of image texture is constructed, and a new texture feature extraction method based on a fusion of the direction measure and the gray level co-occurrence matrix (GLCM) is proposed in this paper. The method applies the GLCM to extract the texture feature values of an image and integrates a weight factor introduced by the direction measure to obtain the final texture feature of the image. A set of classification experiments on high-resolution remote sensing images was performed using a support vector machine (SVM) classifier with the direction measure and GLCM fusion algorithm. Both qualitative and quantitative approaches were applied to assess the classification results. The experimental results demonstrate that texture feature extraction based on the fusion algorithm achieved better image recognition, and that classification accuracy based on this method was significantly improved.
Medical image augmentation for lesion detection using a texture-constrained multichannel progressive GAN
Lesion detectors based on deep learning can assist doctors in diagnosing diseases. However, the performance of current detectors is often unsatisfactory due to the scarcity of training samples, so it is beneficial to use image generation to augment a detector's training set. When the imaging texture of the medical image is relatively delicate, however, the images synthesized by existing methods may be too poor in quality to meet the detectors' training requirements. To this end, a medical image augmentation method, namely a texture-constrained multichannel progressive generative adversarial network (TMP-GAN), is proposed in this work. TMP-GAN uses joint training of multiple channels to effectively avoid the typical shortcomings of current generation methods. It also uses an adversarial-learning-based texture discrimination loss to further improve the fidelity of the synthesized images. In addition, TMP-GAN employs a progressive generation mechanism to steadily improve the accuracy of the medical image synthesizer. Experiments on the publicly available CBIS-DDMS dataset and our pancreatic tumor dataset show that the precision/recall/F1-score of the detector trained on the TMP-GAN-augmented dataset improves by 2.59%/2.70%/2.77% and 2.44%/2.06%/2.36%, respectively, compared to the best results of the other data augmentation methods. The FROC curve of the detector is also better than the curves obtained with the comparison augmentation methods. We therefore believe the proposed TMP-GAN is a practical technique for efficiently implementing lesion detection case studies.
  • A novel TMP-GAN is constructed for augmenting medical images with a delicate texture.
  • The constructed dual-channel generator with parameter sharing combines the advantages of existing generation methods.
  • The proposed loss improves texture continuity and reduces the loss of lesion texture in the synthetic image.
  • A progressive generation mechanism is introduced to decompose the challenge of augmentation.
  • The performance of TMP-GAN is verified by comparison with some excellent peers.
Non-destructive monitoring of maize LAI by fusing UAV spectral and textural features
Leaf area index (LAI) is an essential indicator for crop growth monitoring and yield prediction. Real-time, non-destructive, and accurate monitoring of crop LAI is of great significance for intelligent decision-making on crop fertilization and irrigation, as well as for predicting grain productivity and issuing early warnings. This study investigates the feasibility of using spectral and texture features from unmanned aerial vehicle (UAV) multispectral imagery, combined with machine learning modeling methods, to estimate maize LAI. Remote sensing monitoring of maize LAI was carried out on a UAV high-throughput phenotyping platform using different maize varieties as the research target. First, spectral parameters and texture features were extracted from the UAV multispectral images, and the Normalized Difference Texture Index (NDTI), Difference Texture Index (DTI), and Ratio Texture Index (RTI) were constructed by linear calculation of texture features. Then, the correlations between LAI and the spectral parameters, texture features, and texture indices were analyzed, and the image features with strong correlations were screened out. Finally, combined with machine learning methods, LAI estimation models with different types of input variables were constructed, and the effect of combining image features on LAI estimation was evaluated. The results revealed that vegetation indices based on the red (650 nm), red-edge (705 nm), and NIR (842 nm) bands had high correlation coefficients with LAI. The correlation between the linearly transformed texture features and LAI was significantly improved. Moreover, machine learning models combining spectral and texture features performed best. Support Vector Machine (SVM) models of vegetation and texture indices were the best in terms of fit, stability, and estimation accuracy (R² = 0.813, RMSE = 0.297, RPD = 2.084).
The results of this study are conducive to improving the efficiency of maize variety selection and provide a reference for UAV high-throughput phenotyping technology in fine crop management at the field plot scale.
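The three texture indices named in the abstract follow the standard normalized-difference, difference, and ratio forms; the sketch below assumes those forms, with a small `eps` guard (not from the paper) against division by zero.

```python
import numpy as np

def texture_indices(t1, t2, eps=1e-9):
    """Pairwise texture indices from two texture-feature maps t1 and t2
    (e.g. the same GLCM statistic computed on two spectral bands)."""
    t1, t2 = np.asarray(t1, float), np.asarray(t2, float)
    ndti = (t1 - t2) / (t1 + t2 + eps)  # Normalized Difference Texture Index
    dti = t1 - t2                       # Difference Texture Index
    rti = t1 / (t2 + eps)               # Ratio Texture Index
    return ndti, dti, rti
```

Each index would be screened for correlation with measured LAI before entering the SVM estimation model.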
Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors
Conventional finger-vein recognition systems perform recognition based either on finger-vein lines extracted from the input images, or on image enhancement and texture feature extraction from the finger-vein images. In the former case, however, inaccurate detection of the finger-vein lines lowers the recognition accuracy. In the case of texture feature extraction, the developer must experimentally decide on the form of the optimal extraction filter, considering the characteristics of the image database. To address this problem, this research proposes a finger-vein recognition method, based on a convolutional neural network (CNN), that is robust to various database types and environmental changes. In experiments using the two finger-vein databases constructed in this research and the open SDUMLA-HMT finger-vein database, the proposed method showed better performance than the conventional methods.
Attention mechanism and texture contextual information for steel plate defects detection
To achieve fast inference and good generalization, most Convolutional Neural Network (CNN) based semantic segmentation models strive to mine high-level features that contain rich contextual semantic information. In steel plate defect detection scenarios, however, some background texture noise resembles the foreground and is hard to distinguish, which significantly interferes with feature extraction. Texture features themselves often hold the most plentiful contextual information; despite this, semantic segmentation approaches rarely take texture features into account when identifying surface defects on steel plates, so essential details, such as edge texture and other intuitive low-level features, generally cannot be included in the final feature map. To address the low accuracy and slow speed of existing detection methods, this study proposes a steel plate surface defect detection method that uses contextual information and an attention mechanism, together with a multi-layer feature extraction and fusion framework based on low-level statistical textures. Characteristics of low-level defects are extracted by identifying pixel-level spatial and correlation relationships. Furthermore, to incorporate statistical texture effectively in a CNN, a novel quantization technique is developed that converts continuous texture into discrete intensity levels. The network parameters are iterated along the gradient direction, facilitating defect segmentation. Empirical results demonstrate the feasibility of applying the proposed approach to practical steel plate testing, and ablation experiments show that the method effectively enhances surface defect detection for steel plates, achieving industry-leading performance.