2,601 result(s) for "local feature"
An End-to-End Local-Global-Fusion Feature Extraction Network for Remote Sensing Image Scene Classification
Remote sensing image scene classification (RSISC) is an active task in the remote sensing community and has attracted great attention due to its wide applications. Recently, deep convolutional neural network (CNN)-based methods have achieved remarkable performance gains in remote sensing image scene classification. However, feature representations are often not discriminative enough, mainly because of inter-class similarity and intra-class diversity. In this paper, we propose an efficient end-to-end local-global-fusion feature extraction (LGFFE) network for a more discriminative feature representation. Specifically, global and local features are extracted from the channel and spatial dimensions, respectively, based on a high-level feature map from a deep CNN. For the local features, a novel recurrent neural network (RNN)-based attention module is first proposed to capture the spatial layout and context information across different regions. Gated recurrent units (GRUs) are then exploited to generate an importance weight for each region by taking a sequence of features from image patches as input. A reweighted regional feature representation is obtained by focusing on the key regions, and the final feature representation is acquired by fusing the local and global features. The whole process of feature extraction and fusion can be trained end-to-end. Finally, extensive experiments on four public and widely used datasets show that LGFFE outperforms baseline methods and achieves state-of-the-art results.
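The region-reweighting and fusion steps described in this abstract can be sketched, in a much simplified form, as attention-style weighting over per-region feature vectors. All names below are illustrative, and the region scores stand in for the paper's GRU-based attention module, which is not reproduced here:

```python
import math

def softmax(scores):
    # Numerically stable softmax over per-region attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_local_global(region_feats, region_scores, global_feat):
    """Reweight per-region local features by attention weights,
    then fuse the pooled local feature with the global feature
    by simple concatenation (a stand-in for the paper's fusion)."""
    weights = softmax(region_scores)
    dim = len(region_feats[0])
    pooled = [sum(w * f[d] for w, f in zip(weights, region_feats))
              for d in range(dim)]
    return pooled + list(global_feat)
```

With equal region scores the weights are uniform, so the pooled local feature is a plain average of the region features.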
Color–Texture Pattern Classification Using Global–Local Feature Extraction, an SVM Classifier, with Bagging Ensemble Post-Processing
Many applications in image analysis require the accurate classification of complex patterns involving both color and texture, e.g., in content-based image retrieval, biometrics, and the inspection of fabrics, wood, steel, ceramics, and fruits, among others. A new method for pattern classification using both color and texture information is proposed in this paper. The proposed method includes the following steps: division of each image into global and local samples; texture and color feature extraction from the samples using Haralick statistics and the binary quaternion-moment-preserving method; a classification stage using a support vector machine; and a final post-processing stage employing a bagging ensemble. One of the main contributions of this method is the image partition, which represents the image through global and local features. This partition captures most of the information present in the image for color-texture classification, leading to improved results. The proposed method was tested on four databases extensively used in color–texture classification: the Brodatz, VisTex, Outex, and KTH-TIPS2b databases, yielding correct classification rates of 97.63%, 97.13%, 90.78%, and 92.90%, respectively. The post-processing stage improved those results to 99.88%, 100%, 98.97%, and 95.75%, respectively. We compared our results to the best previously published results on the same databases, finding significant improvements in all cases.
Missing Value Imputation Method for Multiclass Matrix Data Based on Closed Itemset
Handling missing values in matrix data is an important step in data analysis. To date, many methods have been proposed that estimate missing values based on data pattern similarity. Most of them perform missing value imputation based on data trends over the entire feature space; however, individual missing values are likely to show similarity to data patterns in a local feature space. In addition, most existing methods focus on single-class data, while multiclass analysis is frequently required in various fields, and missing value imputation for multiclass data must consider the characteristics of each class. In this paper, we propose two methods based on closed itemsets, CIimpute and ICIimpute, to achieve missing value imputation using local feature space for multiclass matrix data. CIimpute estimates missing values using closed itemsets extracted from each class. ICIimpute is an improved variant of CIimpute that introduces an attribute-reduction process. Experimental results demonstrate that attribute reduction considerably reduces computation time and improves imputation accuracy. Furthermore, compared to existing methods, ICIimpute provides superior imputation accuracy but requires more computation time.
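The class-aware aspect of this abstract can be illustrated with a much simpler baseline than CIimpute: per-class column-mean imputation. This sketch only shows why respecting class structure matters; the paper's closed-itemset machinery is far more sophisticated and is not reproduced here:

```python
def impute_by_class_mean(rows, labels):
    """Fill None entries with the mean of the same column computed
    only over rows of the same class -- a simple baseline that, like
    the methods above, uses per-class rather than global statistics.
    Columns with no observed value in a class fall back to 0.0."""
    n_cols = len(rows[0])
    means = {}
    for cls in set(labels):
        cls_rows = [r for r, l in zip(rows, labels) if l == cls]
        means[cls] = []
        for c in range(n_cols):
            vals = [r[c] for r in cls_rows if r[c] is not None]
            means[cls].append(sum(vals) / len(vals) if vals else 0.0)
    return [[means[l][c] if v is None else v
             for c, v in enumerate(r)]
            for r, l in zip(rows, labels)]
```

A global-mean imputer would blend values across classes; here each missing entry is filled only from rows sharing its label.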
Recent advances in local feature detector and descriptor: a literature survey
Computer vision systems deal with identifying and detecting objects of particular classes in digital images and videos. Local feature detection and description play an essential role in many computer vision applications, such as object detection and object classification, and the accuracy of these applications depends on the performance of the local feature detectors and descriptors they use. Over the past decades, new algorithms and techniques have been introduced alongside the development of machine learning and deep learning. Machine learning techniques can take this work to the next level when sufficient data are provided, and deep learning algorithms can handle large amounts of data efficiently. However, this raises the question of how to select the best algorithm and method for a particular application to maximize performance; the selection depends strongly on the type of application and the amount of data to be handled. This encouraged us to write a comprehensive survey of local image feature detectors and descriptors, from established methods to the most recent ones. This paper presents feature detection and description methods in the visible band along with their advantages and disadvantages. We also give an overview of current performance evaluations and benchmark datasets, and describe methods and algorithms for finding features beyond the visible band. Finally, we conclude the survey with future directions. This survey may help researchers and serve as a reference in the field of computer vision.
Gait-Based Person Identification Robust to Changes in Appearance
Identifying a person from gait images is generally sensitive to appearance changes, such as variations in clothing and belongings. One way to deal with this problem is to collect a subject's possible appearance changes in a database; however, it is almost impossible to predict all appearance changes in advance. In this paper, we propose a novel method that robustly identifies people despite changes in appearance, without using a database of predicted appearance changes. In the proposed method, the human body image is first divided into multiple areas, and features are extracted for each area. Next, a matching weight for each area is estimated based on the similarity between the extracted features and those in a database for standard clothing. Finally, the subject is identified by weighted integration of the similarities over all areas. Experiments using the CASIA gait database show that the proposed method achieves the best correct classification rate compared with conventional methods.
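The final identification step described above, weighted integration of per-area similarities followed by picking the best-matching subject, can be sketched as follows. The function and variable names are illustrative, and the weight-estimation step (which the paper derives from similarity to standard-clothing features) is assumed to have already produced the weights:

```python
def weighted_similarity(area_sims, area_weights):
    """Combine per-area similarity scores with matching weights,
    normalizing by the total weight."""
    total = sum(area_weights)
    return sum(s * w for s, w in zip(area_sims, area_weights)) / total

def identify(candidates, area_weights):
    """Return the subject id with the highest weighted similarity.
    `candidates` maps subject id -> list of per-area similarities."""
    return max(candidates,
               key=lambda cid: weighted_similarity(candidates[cid],
                                                   area_weights))
```

Down-weighting an area (e.g., legs occluded by a long coat) lets the remaining areas dominate the match, which is the intuition behind the method's robustness.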
Local-non-local complementary learning network for 3D point cloud analysis
Point cloud analysis is integral to numerous applications, including mapping and autonomous driving. However, the unstructured and disordered nature of point clouds presents significant challenges for feature extraction. While both local and non-local features are essential for effective 3D point cloud analysis, existing methods often fail to seamlessly integrate these complementary features. To address this limitation, we propose the Local-Non-Local Complementary Learning Network (LNLCL-Net), a novel framework that enhances feature extraction and representation. Leveraging partial convolution, LNLCL-Net divides the feature map into distinct local and non-local components. Local features are modeled through relative positional relationships, while non-local features capture absolute positional information. A Complementary Interactive Attention module is introduced to enable adaptive integration of these features, enriching their complementary relationship. Extensive experiments on benchmark datasets, including ModelNet40, ScanObjectNN, and ShapeNet Part, demonstrate the superiority of our approach in both quantitative and qualitative metrics, achieving state-of-the-art performance in classification and segmentation tasks.
Enhanced feature clustering method based on ant colony optimization for feature selection
The popular modified graph clustering ant colony optimization (MGCACO) algorithm performs feature selection (FS) by grouping highly correlated features. However, MGCACO has problems with local search, which limits its search for the optimal feature subset. Hence, an enhanced feature clustering with ant colony optimization (ECACO) algorithm is proposed. The improvement constructs an ACO-based feature clustering method that combines mechanisms such as local and global search to obtain clusters of highly correlated features. The performance of ECACO was evaluated on six benchmark datasets from the University of California, Irvine (UCI) repository and two DNA microarray datasets, and compared against five benchmark metaheuristic algorithms. The classifiers used are random forest, k-nearest neighbors, decision tree, and support vector machine. Experimental results on the UCI datasets show the superior performance of ECACO compared with the other algorithms across all classifiers in terms of classification accuracy. Experiments on the microarray datasets show that, in general, ECACO outperforms the other algorithms in terms of average classification accuracy. ECACO can be utilized for FS in classification tasks on high-dimensional datasets in application domains such as medical diagnosis, biological classification, and health care systems.
A Hybrid Features Extraction on Face for Efficient Face Recognition
Image processing is a vibrant research area, and face recognition in particular is important across many sectors. Accordingly, this paper proposes a hybrid face recognition system that robustly handles facial changes due to aging. Highly discriminative features are extracted using SURF (Speeded-Up Robust Features), HOG (Histogram of Oriented Gradients), and MSER (Maximally Stable Extremal Regions). The proposed method divides the face into five regions. Region 1, the whole face area, provides a complete set of features extracted using SURF and acts as a holistic feature. In Region 2, nasal-bridge features are extracted using HOG. Regions 3 and 4 cover the eyes, and Region 5 covers the area around the nose and mouth; features for these regions are extracted using MSER. The features from the five regions are matched against the database of the target image using a point-matching technique. Experimental results are evaluated on the Yale, FG-NET, and MORPH datasets and show that the proposed face recognition algorithm is superior to traditional methods in terms of recognition rate and time complexity.
Rotational Projection Statistics for 3D Local Surface Description and Object Recognition
Recognizing 3D objects in the presence of noise, varying mesh resolution, occlusion, and clutter is a very challenging task. This paper presents a novel method named Rotational Projection Statistics (RoPS). It has three major modules: local reference frame (LRF) definition, RoPS feature description, and 3D object recognition. We propose a novel technique to define the LRF by calculating the scatter matrix of all points lying on the local surface. RoPS feature descriptors are obtained by rotationally projecting the neighboring points of a feature point onto 2D planes and calculating a set of statistics (including low-order central moments and entropy) of the distribution of the projected points. Using the proposed LRF and RoPS descriptor, we present a hierarchical 3D object recognition algorithm. The performance of the proposed LRF, RoPS descriptor, and object recognition algorithm was rigorously tested on a number of popular and publicly available datasets, where our techniques exhibited superior performance compared to existing ones. We also showed that our method is robust to noise and varying mesh resolution. Our RoPS-based algorithm achieved recognition rates of 100%, 98.9%, 95.4%, and 96.0% on the Bologna, UWA, Queen's, and Ca' Foscari Venezia datasets, respectively.
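The scatter matrix that RoPS uses to define its local reference frame is the covariance-like second-moment matrix of the local point set; its eigenvectors then give the frame axes. A minimal sketch of just the scatter-matrix computation (the eigen-decomposition and sign-disambiguation steps of the paper are omitted):

```python
def scatter_matrix(points):
    """3x3 scatter matrix of a local set of 3D points: the mean
    outer product of each point's offset from the centroid."""
    n = len(points)
    centroid = [sum(p[d] for p in points) / n for d in range(3)]
    S = [[0.0] * 3 for _ in range(3)]
    for p in points:
        d = [p[k] - centroid[k] for k in range(3)]
        for i in range(3):
            for j in range(3):
                S[i][j] += d[i] * d[j] / n
    return S
```

For points spread only along the x-axis, all variance concentrates in `S[0][0]`, so the principal eigenvector (the dominant LRF axis) points along x.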
Striver: an image descriptor for fingerprint liveness detection
Discriminative image features play a key role in fingerprint liveness detection. In this paper, we propose a Fisher vector learning based image representation method for fingerprint liveness detection. Unlike conventional methods, we consider Fisher vector based image feature learning in both the spatial domain and the frequency domain. The contributions of our method are summarized as follows: (1) Images are transformed into the local and global frequency domains using the local and global Fourier transforms, respectively. The frequency domain not only preserves the contextual information of the original spatial domain but is also robust for image representation. (2) In the global frequency domain, high-frequency image information is included in the feature extraction process rather than discarded, as is usually done. (3) To take full advantage of the complementarity of the spatial and frequency domains, the local spatial, local frequency, and global frequency features are fused in the Fisher vector learning process. Extensive experiments on three benchmark databases demonstrate the superior performance of our method compared with peer methods. Specifically, on the LivDet 2011, LivDet 2013, and LivDet 2015 databases, the average classification errors are reduced to 5.16%, 1.40%, and 7.51%, respectively.
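The frequency-domain feature described in contribution (1) rests on the discrete Fourier transform. A naive 1-D DFT magnitude spectrum is sketched below as a stand-in for the paper's 2-D local/global transforms; note that all bins, including the high-frequency ones, are kept, in the spirit of contribution (2):

```python
import cmath

def dft_magnitudes(signal):
    """Naive O(n^2) 1-D DFT magnitude spectrum. Real images would use
    a 2-D FFT, but the per-bin computation is the same idea."""
    n = len(signal)
    mags = []
    for k in range(n):
        acc = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                  for t in range(n))
        mags.append(abs(acc))
    return mags
```

A constant signal concentrates all its energy in the zero-frequency (DC) bin, while edges and fine texture, the cues liveness detection relies on, show up in the higher bins.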