Catalogue Search | MBRL
Explore the vast range of titles available.
41,074 result(s) for "vector data"
Improved CNN Classification Method for Groups of Buildings Damaged by Earthquake, Based on High Resolution Remote Sensing Images
by Ma, Haojie; Ren, Yuhuan; Liu, Yalan
in Accuracy; artificial intelligence; Artificial neural networks
2020
Effective extraction of disaster information of buildings from remote sensing images is of great importance to supporting disaster relief and casualty reduction. In high-resolution remote sensing images, object-oriented methods present problems such as unsatisfactory image segmentation and difficult feature selection, which make it difficult to quickly assess the damage sustained by groups of buildings. In this context, this paper proposed an improved Convolutional Neural Network (CNN) Inception V3 architecture combining remote sensing images and block vector data to evaluate the damage degree of groups of buildings in post-earthquake remote sensing images. By using a CNN, the best features can be selected automatically, solving the problem of difficult feature selection. Moreover, block boundaries can form a meaningful boundary for groups of buildings, which can effectively replace image segmentation and avoid its fragmentary and unsatisfactory results. By adding Separate and Combination layers, the method adapts the Inception V3 network for easier processing of large remote sensing images. The method was tested on the classification of damaged groups of buildings in 0.5 m-resolution aerial imagery after the Yushu earthquake. The test accuracy was 90.07% with a Kappa coefficient of 0.81; compared with a traditional multi-feature machine learning classifier built on handcrafted features, this represents an improvement of 18% in accuracy. The results show that the improved method can effectively extract the damage degree of groups of buildings in each block of post-earthquake remote sensing images.
Journal Article
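The abstract above replaces image segmentation with block vector boundaries to delimit the image region assessed per group of buildings. A minimal sketch of that idea is rasterizing a block polygon into a pixel mask; the helper names are illustrative assumptions, and the paper's actual pipeline feeds such block-masked patches to a modified Inception V3.

```python
# Sketch: use a block polygon (vector data) instead of image segmentation
# to delimit the image region whose buildings are assessed together.

def point_in_polygon(x, y, poly):
    """Ray-casting test: is point (x, y) inside the polygon?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def block_mask(poly, width, height):
    """Binary mask of the pixels whose centres fall inside one city block."""
    return [[point_in_polygon(c + 0.5, r + 0.5, poly)
             for c in range(width)] for r in range(height)]

# A 10x10 patch with a square block covering its lower-left quadrant.
mask = block_mask([(0, 0), (5, 0), (5, 5), (0, 5)], 10, 10)
covered = sum(sum(row) for row in mask)  # pixels belonging to the block
```

The masked pixels would then be cropped and classified per block rather than per segment.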
In-Wheel Motor Fault Diagnosis Using Affinity Propagation Minimum-Distance Discriminant Projection and Weibull-Kernel-Function-Based SVDD
by Sun, Ning; Liu, Bingchen; Ding, Dianyong
in affinity propagation minimum-distance discriminant; Algorithms; Classification
2023
To effectively ensure the operational safety of an electric vehicle with in-wheel motor drive, a novel diagnosis method is proposed to monitor each in-wheel motor for faults; its novelty lies in two aspects. First, affinity propagation (AP) is introduced into the minimum-distance discriminant projection (MDP) algorithm to form a new dimension reduction algorithm, defined as APMDP. APMDP gathers not only the intra-class and inter-class information of high-dimensional data but also information on its spatial structure. Second, multi-class support vector data description (SVDD) is improved using the Weibull kernel function, and its classification judgment rule is modified to the minimum distance from the intra-class cluster center. Finally, in-wheel motors with typical bearing faults were customized to collect vibration signals under four operating conditions to verify the effectiveness of the proposed method. The results show that APMDP performs better than traditional dimension reduction methods, improving divisibility by at least 8.35% over LDA, MDP, and LPP. The multi-class SVDD classifier based on the Weibull kernel function has high classification accuracy and strong robustness; the classification accuracies for in-wheel motor faults in each condition exceed 95%, higher than with the polynomial and Gaussian kernel functions.
Journal Article
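The abstract above modifies the SVDD judgment rule to a minimum kernel distance from intra-class cluster centres. A minimal sketch of that rule follows; the Weibull kernel form `exp(-(d/λ)^κ)` and all parameter values are illustrative assumptions, not the paper's exact definitions.

```python
import math

def weibull_kernel(x, y, lam=1.0, kappa=1.5):
    """Weibull-shaped RBF kernel: k = exp(-(||x - y|| / lam) ** kappa)."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return math.exp(-((d / lam) ** kappa))

def kernel_distance_sq(x, center):
    """Squared feature-space distance: k(x,x) + k(c,c) - 2 k(x,c)."""
    return (weibull_kernel(x, x) + weibull_kernel(center, center)
            - 2 * weibull_kernel(x, center))

def classify(x, centers):
    """Assign x to the class with the nearest intra-class cluster centre."""
    return min(centers, key=lambda label: kernel_distance_sq(x, centers[label]))

# Toy cluster centres for two vibration-signature classes.
centers = {"healthy": (0.0, 0.0), "bearing_fault": (3.0, 3.0)}
label = classify((0.2, -0.1), centers)
```

The trained SVDD spheres would additionally reject samples outside every class boundary; this sketch shows only the minimum-distance assignment step.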
Encoding Geospatial Vector Data for Deep Learning: LULC as a Use Case
by Giannopoulos, Ioannis; Mc Cutchan, Marvin
in Accuracy; Annotations; Artificial neural networks
2022
Geospatial vector data with semantic annotations are a promising but complex data source for spatial prediction tasks such as land use and land cover (LULC) classification. These data describe the geometries and the types (i.e., semantics) of geo-objects, such as a Shop or an Amenity. Unlike raster data, which are commonly used for such prediction tasks, geospatial vector data are irregular and heterogeneous, making it challenging for deep neural networks to learn from them. This work tackles this problem by introducing novel encodings which quantify geospatial vector data, allowing deep neural networks to learn from them and make spatial predictions. These encodings were evaluated on a specific use case, namely LULC classification: LULC was classified using the different encodings as input to an attention-based deep neural network (the Perceiver). Based on the accuracy assessments, the potential of these encodings is compared. Furthermore, the influence of object semantics on classification performance is analyzed by pruning the ontology describing the semantics and repeating the LULC classification. The results suggest that the encoding of the geography and the semantic granularity of geospatial vector data influence classification performance both overall and at the level of individual LULC classes. Nevertheless, the proposed encodings are not restricted to LULC classification and can be applied to other spatial prediction tasks as well. In general, this work highlights that geospatial vector data with semantic annotations are a rich data source unlocking new potential for spatial predictions; however, this potential depends on how much is known about the semantics and on how the geography is presented to the deep neural network.
Journal Article
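The core problem in the abstract above is turning an irregular geo-object (geometry plus semantic type) into a fixed-length vector a network can consume. A minimal sketch of one such encoding follows; the concrete layout (centroid + area + one-hot semantic type) and the toy ontology are illustrative assumptions, not the paper's actual encoding.

```python
SEMANTIC_TYPES = ["Shop", "Amenity", "Building"]  # toy ontology

def polygon_area_centroid(pts):
    """Shoelace-formula area and centroid of a simple polygon."""
    a = cx = cy = 0.0
    for i in range(len(pts)):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % len(pts)]
        cross = x1 * y2 - x2 * y1
        a += cross
        cx += (x1 + x2) * cross
        cy += (y1 + y2) * cross
    a *= 0.5
    return abs(a), (cx / (6 * a), cy / (6 * a))

def encode(pts, semantic_type):
    """Fixed-length encoding: [centroid_x, centroid_y, area, one-hot type]."""
    area, (cx, cy) = polygon_area_centroid(pts)
    one_hot = [1.0 if t == semantic_type else 0.0 for t in SEMANTIC_TYPES]
    return [cx, cy, area] + one_hot

# A 2x2 square footprint annotated as a Shop.
vec = encode([(0, 0), (2, 0), (2, 2), (0, 2)], "Shop")
```

Because every object maps to the same vector length regardless of vertex count, a standard network can consume batches of such encodings directly.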
An Object-Oriented Deep Multi-Sphere Support Vector Data Description Method for Impervious Surfaces Extraction Based on Multi-Sourced Data
2023
The effective extraction of impervious surfaces is critical to monitoring their expansion and ensuring the sustainable development of cities. Open geographic data can provide a large number of training samples for machine learning methods that extract impervious surfaces from remote-sensed images, owing to their low acquisition cost and large coverage. However, training samples generated from open geographic data suffer from severe sample imbalance. Although one-class methods can effectively extract impervious surfaces from imbalanced samples, most current one-class methods ignore the fact that an impervious surface comprises varied geographic objects, such as roads and buildings. Therefore, this paper proposes an object-oriented deep multi-sphere support vector data description (OODMSVDD) method, which takes into account the diversity of impervious surfaces and incorporates a variety of open geographic data, including OpenStreetMap (OSM), Points of Interest (POIs), and GPS trajectory points, to automatically generate massive samples for model learning, thereby improving the extraction of impervious surfaces of varied types. The feasibility of the proposed method is experimentally verified with an overall accuracy of 87.43%, and its superior impervious surface classification performance is shown via comparative experiments. This provides a new, accurate, and more suitable extraction method for complex impervious surfaces.
Journal Article
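The multi-sphere idea in the abstract above gives each impervious-surface type (road, building, ...) its own hypersphere, and a sample counts as impervious if it falls inside any of them. A minimal sketch of that decision rule follows; the centres and radii are toy values, whereas the paper learns them from OSM, POI, and GPS-trajectory samples via a deep model.

```python
import math

def inside_any_sphere(x, spheres):
    """Multi-sphere decision: spheres is a list of (centre, radius) pairs,
    one per surface type; x is accepted if any sphere contains it."""
    for centre, radius in spheres:
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, centre)))
        if d <= radius:
            return True
    return False

spheres = [((0.0, 0.0), 1.0),   # e.g. a "road" feature cluster
           ((5.0, 5.0), 1.5)]   # e.g. a "building" feature cluster
is_impervious = inside_any_sphere((5.2, 4.9), spheres)
```

A single sphere around all impervious samples would be forced to cover the gap between clusters; separate spheres per object type avoid that over-coverage.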
A Moment-Based Shape Similarity Measurement for Areal Entities in Geographical Vector Data
2018
Shape similarity measurement models are often used to solve shape-matching problems in geospatial data matching and are widely used in geospatial data integration, conflation, updating, and quality assessment. Many shape similarity measurements apply only to simple polygons; however, areal entities can be represented by simple polygons, holed polygons, or multipolygons in geospatial data. This paper proposes a new shape similarity measurement model that can be used for all kinds of polygons. In this method, convex hulls of polygons are used to extract boundary features of entities, and local moment invariants are calculated to extract overall shape features. By combining convex hulls with local moment invariants, polygons can be represented by convex hull moment invariant curves. A shape descriptor is then obtained by applying the fast Fourier transform to these curves, and shape similarity between areal entities is measured with this descriptor. Through similarity measurement experiments on different lakes in multiple representations and matching experiments between two urban area datasets, the results show that the method can distinguish areal entities even when they are represented by different kinds of polygons.
Journal Article
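The abstract above builds a Fourier descriptor from a boundary curve and compares shapes by descriptor distance. The sketch below follows the same pattern with a simpler centroid-distance curve standing in for the convex hull moment invariant curve; the curve choice, sample counts, and similarity formula are illustrative assumptions.

```python
import cmath
import math

def distance_curve(pts, samples=32):
    """Centroid-to-vertex distance curve, resampled to a fixed length."""
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    curve = [math.hypot(x - cx, y - cy) for x, y in pts]
    return [curve[(i * len(curve)) // samples] for i in range(samples)]

def fourier_descriptor(curve, keep=8):
    """Magnitudes of low-frequency DFT coefficients, scale-normalised."""
    n = len(curve)
    coeffs = [abs(sum(c * cmath.exp(-2j * math.pi * k * t / n)
                      for t, c in enumerate(curve))) for k in range(keep)]
    return [c / coeffs[0] for c in coeffs] if coeffs[0] else coeffs

def similarity(a, b):
    """1 / (1 + L2 distance) between descriptors, in (0, 1]."""
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + d)

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
bigger = [(0, 0), (8, 0), (8, 8), (0, 8)]  # same shape at double scale
s = similarity(fourier_descriptor(distance_curve(square)),
               fourier_descriptor(distance_curve(bigger)))
```

Normalising by the zeroth coefficient makes the descriptor scale-invariant, so the two squares score as (near-)identical.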
Extracting Skeleton Lines from Building Footprints by Integration of Vector and Raster Data
2022
The extraction of skeleton lines of buildings is a key step in building spatial analysis, widely performed for building matching and updating. Several methods for skeleton line extraction from vector data have been established, including the improved constrained Delaunay triangulation (CDT), as well as raster-based skeleton line extraction methods built on image processing technologies. However, no existing study has attempted to combine these methods to extract the skeleton lines of buildings. This study aimed to develop a building skeleton line extraction method based on vector–raster data integration, with buildings extracted from remote sensing images as the research object. First, vector–raster data mapping relationships were identified. Second, the buildings were triangulated using CDT. The results of the Rosenfeld thinning algorithm on the raster data were then used to remove redundant triangles. Finally, the Shi–Tomasi corner detection algorithm was used to detect corners, and the building skeleton lines were extracted by adjusting the connection method of the type three triangles in the CDT. The experimental results demonstrate that the proposed method can effectively extract the skeleton lines of complex vector buildings; moreover, the extracted skeleton lines contained few burrs and were robust against noise.
Journal Article
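One step in the pipeline above is corner detection. As a minimal stand-in for the raster Shi–Tomasi detector the paper uses, the sketch below finds corners directly on the vector footprint by testing the turn angle at each vertex; the angle threshold and the footprint are illustrative assumptions.

```python
import math

def corners(pts, angle_tol_deg=10.0):
    """Vertices where the boundary turns by more than the tolerance."""
    found = []
    n = len(pts)
    for i in range(n):
        ax, ay = pts[i - 1]
        bx, by = pts[i]
        cx, cy = pts[(i + 1) % n]
        v1 = (bx - ax, by - ay)
        v2 = (cx - bx, cy - by)
        # Signed turn angle between successive edge vectors.
        turn = math.degrees(math.atan2(v1[0] * v2[1] - v1[1] * v2[0],
                                       v1[0] * v2[0] + v1[1] * v2[1]))
        if abs(turn) > angle_tol_deg:
            found.append((bx, by))
    return found

# An L-shaped footprint: 6 true corners plus one collinear midpoint at (2, 0).
footprint = [(0, 0), (2, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
detected = corners(footprint)
```

The collinear midpoint is correctly ignored, leaving only the six genuine corners that would anchor the skeleton connections.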
Measuring the Spatial Relationship Information of Multi-Layered Vector Data
2018
Geospatial data is a carrier of information that represents the geography of the real world, and measuring the information content of geospatial data is a long-standing topic in spatial information science. As the main type of geospatial data, spatial vector data models provide an effective framework for encoding spatial relationships and manipulating spatial data. In particular, measuring the spatial relationship information of vector data is a complicated but meaningful problem, as it helps evaluate the complexity of spatial data and thus guide further analysis. However, existing measures of spatial information usually focus on the 'disjoint' relationship within one layer and cannot cover the various spatial relationships within the multi-layered structure of vector data. In this study, a new method is proposed to measure the spatial relationship information of multi-layered vector data. The proposed method focuses on spatial distance and topological relationships and provides quantitative measurements by extending the basic idea of Shannon's entropy. The influence of each vector feature is modeled by introducing the concept of an energy field, and the energy distribution of one layer is described by an energy map and a weight map. An operational process is also proposed to measure the overall information content. Two experiments were conducted to validate the proposed method. In an experiment with real-life data, the proposed method efficiently quantifies spatial relationship information under a multi-layered structure; in another experiment, with simulated data, the characteristics and advantages of the method are demonstrated through comparison with classical measurements.
Journal Article
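The abstract above models each feature's influence as an energy field and extends Shannon's entropy to the resulting distribution. A minimal sketch of that measurement follows; the Gaussian decay used for the field is an illustrative choice, not the paper's exact field model.

```python
import math

def energy_map(features, grid, sigma=1.0):
    """Total energy at each grid cell, summed over all vector features,
    with a Gaussian distance decay (illustrative field model)."""
    out = []
    for gx, gy in grid:
        e = sum(math.exp(-((gx - fx) ** 2 + (gy - fy) ** 2)
                         / (2 * sigma ** 2))
                for fx, fy in features)
        out.append(e)
    return out

def shannon_entropy(energies):
    """Shannon entropy (bits) of the normalised energy distribution."""
    total = sum(energies)
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log2(p) for p in probs)

grid = [(x, y) for x in range(4) for y in range(4)]
# A wide field spreads energy evenly (high entropy); a narrow field
# concentrates it near the feature (low entropy).
uniform = shannon_entropy(energy_map([(1.5, 1.5)], grid, sigma=100.0))
peaked = shannon_entropy(energy_map([(0.0, 0.0)], grid, sigma=0.3))
```

The entropy thus reflects how evenly a layer's features spread their influence over space, which is the quantity the paper aggregates across layers and relationship types.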
Survey of vector database management systems
2024
There are now over 20 commercial vector database management systems (VDBMSs), all produced within the past five years. But embedding-based retrieval has been studied for over ten years, and similarity search for more than half a century. Driving this shift from algorithms to systems are new data-intensive applications, notably large language models, that demand vast stores of unstructured data coupled with reliable, secure, fast, and scalable query processing capability. A variety of new data management techniques now exist to address these needs; however, there is no comprehensive survey that thoroughly reviews these techniques and systems. We start by identifying five main obstacles to vector data management: the ambiguity of semantic similarity, the large size of vectors, the high cost of similarity comparison, the lack of structural properties usable for indexing, and the difficulty of efficiently answering "hybrid" queries that jointly search both attributes and vectors. Overcoming these obstacles has led to new approaches to query processing, storage and indexing, and query optimization and execution. For query processing, a variety of similarity scores and query types are now well understood. For storage and indexing, techniques include vector compression, namely quantization, and partitioning techniques based on randomization, learned partitioning, and "navigable" partitioning. For query optimization and execution, we describe new operators for hybrid queries, as well as techniques for plan enumeration, plan selection, distributed query processing, data manipulation queries, and hardware-accelerated query execution. These techniques lead to a variety of VDBMSs across a spectrum of design and runtime characteristics, including "native" systems that are specialized for vectors and "extended" systems that incorporate vector capabilities into existing systems. We then discuss benchmarks, and finally outline research challenges and point the direction for future work.
Journal Article
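The "hybrid" queries discussed in the survey above jointly filter on attributes and rank by vector similarity. A minimal brute-force sketch follows; real VDBMSs replace the linear scan with quantisation and partition-based indexes, and the row layout here is an illustrative assumption.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def hybrid_search(query_vec, rows, attr_filter, k=2):
    """rows: list of (id, attributes, vector).
    Pre-filter on attributes, then rank survivors by similarity."""
    candidates = [(rid, vec) for rid, attrs, vec in rows
                  if attr_filter(attrs)]
    ranked = sorted(candidates,
                    key=lambda rv: cosine(query_vec, rv[1]),
                    reverse=True)
    return [rid for rid, _ in ranked[:k]]

rows = [
    ("doc1", {"year": 2023}, (1.0, 0.0, 0.0)),
    ("doc2", {"year": 2020}, (0.9, 0.1, 0.0)),  # similar but filtered out
    ("doc3", {"year": 2023}, (0.0, 1.0, 0.0)),
]
top = hybrid_search((1.0, 0.0, 0.0), rows,
                    lambda a: a["year"] == 2023, k=1)
```

This "pre-filter then rank" plan is only one of the operator orderings the survey covers; post-filtering and single-stage fused operators are the main alternatives.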
Detection of Fabric Defects by Auto-Regressive Spectral Analysis and Support Vector Data Description
2010
For the purpose of realizing fast and effective detection of defects in woven fabric, and in consideration of the inherent characteristics of fabric texture, i.e., periodicity and orientation, a new approach for fabric texture analysis is proposed in this paper, based on the modern spectral analysis of a time series rather than the classical spectral analysis of an image. Traditionally, a power spectrum estimated by a two-dimensional fast Fourier transform (FFT) is employed in the detection of fabric defects, which involves large computational complexity and relatively low accuracy of spectral estimation. To this end, this paper performs a one-dimensional power spectral density (PSD) analysis of the fabric image via a Burg-algorithm-based auto-regressive (AR) spectral estimation model, and accordingly extracts features capable of effectively differentiating normal textures from defective ones. A support vector data description is adopted as the detector to handle defect detection, a typical one-class classification task. Experimental results for the detection of defects in several fabric collections with different texture backgrounds indicate that a low false alarm rate and a low missing rate can be obtained simultaneously with less computational complexity. Comparison of the detection results between the AR model and the FFT method confirms the superiority of the proposed method.
Journal Article
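The abstract above fits an AR model to a 1-D texture signal and reads weave periodicity off the model's PSD peaks. The sketch below does the same with an order-2 AR model; the Yule-Walker equations stand in here for the Burg algorithm the paper actually uses, and the synthetic signal is an illustrative assumption.

```python
import math

def autocorr(x, maxlag):
    """Biased sample autocorrelation at lags 0..maxlag."""
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]
    return [sum(xc[t] * xc[t + k] for t in range(n - k)) / n
            for k in range(maxlag + 1)]

def yule_walker2(x):
    """Closed-form Yule-Walker solution for an AR(2) model."""
    r = autocorr(x, 2)
    det = r[0] * r[0] - r[1] * r[1]
    a1 = (r[1] * r[0] - r[2] * r[1]) / det
    a2 = (r[0] * r[2] - r[1] * r[1]) / det
    return [a1, a2]

def ar_psd(coeffs, freq):
    """AR power spectral density at a normalised frequency (cycles/sample):
    1 / |1 - sum_k a_k e^{-i 2 pi f k}|^2 (up to a noise-variance scale)."""
    h = 1.0 - sum(a * complex(math.cos(2 * math.pi * freq * (k + 1)),
                              -math.sin(2 * math.pi * freq * (k + 1)))
                  for k, a in enumerate(coeffs))
    return 1.0 / abs(h) ** 2

# A periodic "weave" signal with period 4 samples -> PSD peak near f = 0.25.
signal = [math.sin(2 * math.pi * t / 4) for t in range(64)]
coeffs = yule_walker2(signal)
peak = ar_psd(coeffs, 0.25)
off = ar_psd(coeffs, 0.05)
```

Features such as the peak frequency and its sharpness would then feed the SVDD one-class detector, with defective textures falling outside the learned description.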