59,064 results for "Image classification"
Phishing detection using content-based image classification
"Phishing Detection Using Content-Based Image Classification is an invaluable resource for any deep learning and cybersecurity professional or scholar trying to solve cybersecurity tasks using new-age technologies like deep learning and computer vision. With various rule-based phishing detection techniques in play that can be bypassed by phishers, this book provides a step-by-step approach to solving this problem using computer vision and deep learning techniques with significant accuracy. The book offers comprehensive coverage of the most essential topics, including: programmatically reading and manipulating image data; extracting relevant features from images; building statistical models using image features; using state-of-the-art deep learning models for feature extraction; building a robust phishing detection tool even with limited data; dimensionality reduction techniques; class imbalance treatment; feature fusion techniques; and building performance metrics for multi-class classification tasks. Another unique aspect of this book is that it comes with a completely reproducible code base, developed by the author and shared via Python notebooks, for quick launch and running. It can be leveraged to further enhance the provided models using new advancements in the field of computer vision and more advanced algorithms"-- Provided by publisher.
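The first topics in the blurb (reading and manipulating image data, extracting features) can be sketched briefly. The following is a minimal, hypothetical illustration, not code from the book: a per-channel intensity histogram extracted from an image array as a fixed-length feature vector for a classical classifier. The synthetic array stands in for a real screenshot that would normally be read from disk.

```python
import numpy as np

def intensity_histogram_features(image, bins=8):
    """Extract a normalized per-channel intensity histogram as a feature vector.

    Flattened histograms are among the simplest statistical image features
    that can feed a classical classifier.
    """
    feats = []
    for c in range(image.shape[2]):                 # one histogram per channel
        hist, _ = np.histogram(image[:, :, c], bins=bins, range=(0, 256))
        feats.append(hist / image[:, :, c].size)    # normalize by pixel count
    return np.concatenate(feats)

# Synthetic 16x16 RGB array standing in for a web-page screenshot.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(16, 16, 3))
features = intensity_histogram_features(image)
print(features.shape)          # (24,) -> 8 bins x 3 channels
```

Each channel's 8 bins sum to 1.0, so the feature vector is invariant to image size, a common requirement before feeding features to a statistical model.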
Pooling in convolutional neural networks for medical image analysis: a survey and an empirical study
Convolutional neural networks (CNN) are widely used as the state-of-the-art technique in computer vision and medical image analysis. In CNNs, pooling layers are included mainly to downsample the feature maps by aggregating features from local regions. Pooling can help a CNN learn invariant features and reduces computational complexity. Although max and average pooling are the most widely used, various other pooling techniques have been proposed for different purposes, including techniques to reduce overfitting, to capture higher-order information such as correlation between features, and to capture spatial or structural information. As not all of these pooling techniques are well explored for medical image analysis, this paper provides a comprehensive review of the pooling techniques proposed in the computer vision and medical image analysis literature. In addition, an extensive set of experiments is conducted to compare a selected set of pooling techniques on two different medical image classification problems, namely HEp-2 cell and diabetic retinopathy image classification. The experiments suggest that the most appropriate pooling mechanism for a particular classification task is related to the scale of the class-specific features with respect to the image size. As this is the first work focusing on pooling techniques for medical image analysis, we believe this review and comparative study will provide a guideline for the choice of pooling mechanisms in various medical image analysis tasks. In addition, by carefully choosing the pooling operations within the standard ResNet architecture, we show new state-of-the-art results on both the HEp-2 cell and diabetic retinopathy image datasets.
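The two baseline operations the survey builds on can be written in a few lines. Here is a minimal NumPy sketch (not the paper's code) of max and average pooling over non-overlapping 2x2 windows of a feature map:

```python
import numpy as np

def pool2d(feature_map, size=2, mode="max"):
    """Downsample a 2D feature map with non-overlapping size x size windows."""
    h, w = feature_map.shape
    # Group the map into (h//size, size, w//size, size) blocks of local regions.
    blocks = feature_map[: h - h % size, : w - w % size].reshape(
        h // size, size, w // size, size
    )
    if mode == "max":
        return blocks.max(axis=(1, 3))   # keep the strongest local activation
    return blocks.mean(axis=(1, 3))      # average the local activations

fm = np.array([[1., 2., 5., 6.],
               [3., 4., 7., 8.],
               [0., 0., 1., 1.],
               [0., 4., 1., 1.]])
print(pool2d(fm, mode="max"))   # [[4. 8.] [4. 1.]]
print(pool2d(fm, mode="avg"))   # [[2.5 6.5] [1.  1. ]]
```

Max pooling keeps only the strongest response per region while average pooling smooths over it, which is exactly the trade-off (invariance versus information retention) that motivates the alternative pooling schemes surveyed above.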
Tree Species Classification of Drone Hyperspectral and RGB Imagery with Deep Learning Convolutional Neural Networks
Interest in drone solutions for forestry applications is growing. Using drones, datasets can be captured flexibly and at high spatial and temporal resolutions when needed. Fundamental tasks in forestry applications include the detection of individual trees, tree species classification, and biomass estimation. Deep neural networks (DNN) have shown superior results compared with conventional machine learning methods such as the multi-layer perceptron (MLP) when the input data are large. The objective of this research is to investigate 3D convolutional neural networks (3D-CNN) for classifying three major tree species in a boreal forest: pine, spruce, and birch. The proposed 3D-CNN models were employed to classify tree species at a test site in Finland. The classifiers were trained with a dataset of 3039 manually labelled trees, and the accuracies were then assessed on an independent dataset of 803 records. To find the most efficient feature combination, we compared the performance of 3D-CNN models trained with hyperspectral (HS) channels, Red-Green-Blue (RGB) channels, and a canopy height model (CHM), separately and combined. The proposed 3D-CNN model with RGB and HS layers produced the highest classification accuracy. The producer accuracies of the best 3D-CNN classifier on the test dataset were 99.6%, 94.8%, and 97.4% for pine, spruce, and birch, respectively. The best 3D-CNN classifier produced ~5% better classification accuracy than the MLP with all layers. Our results suggest that the proposed method provides excellent classification results with acceptable performance metrics for HS datasets. The pine class was detectable in most layers; spruce was most detectable in the RGB data, while birch was most detectable in the HS layers. Furthermore, the RGB datasets alone provide acceptable results for many low-accuracy applications.
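Producer accuracies like those reported above are derived from a confusion matrix. As a quick illustration with made-up numbers (not the paper's data), producer's accuracy for each class is the diagonal entry divided by that class's reference total:

```python
import numpy as np

def producer_accuracy(confusion):
    """Per-class producer's accuracy: diagonal / reference-column total.

    Convention used here: rows = classified labels, columns = reference
    (ground-truth) labels, so column sums are the reference totals.
    """
    return np.diag(confusion) / confusion.sum(axis=0)

# Illustrative 3-class confusion matrix (pine, spruce, birch); toy counts.
cm = np.array([[95,  3,  1],
               [ 4, 90,  2],
               [ 1,  7, 97]])
pa = producer_accuracy(cm)
print(pa.round(3))   # [0.95 0.9  0.97]
```

Producer's accuracy measures omission error (how much of the reference class was found), which is why it is reported per species rather than as a single overall figure.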
Spectral–Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network
Recent research has shown that using spectral–spatial information can considerably improve the performance of hyperspectral image (HSI) classification. HSI data are typically presented in the format of 3D cubes; thus, 3D spatial filtering naturally offers a simple and effective method for simultaneously extracting the spectral–spatial features within such images. In this paper, a 3D convolutional neural network (3D-CNN) framework is proposed for accurate HSI classification. The proposed method views the HSI cube data as a whole without relying on any preprocessing or post-processing, extracting deep spectral–spatial-combined features effectively. In addition, it requires fewer parameters than other deep learning-based methods; thus, the model is lighter, less likely to over-fit, and easier to train. For comparison and validation, we test the proposed method along with three other deep learning-based HSI classification methods, namely stacked autoencoder (SAE), deep belief network (DBN), and 2D-CNN-based methods, on three real-world HSI datasets captured by different sensors. Experimental results demonstrate that our 3D-CNN-based method outperforms these state-of-the-art methods and sets a new record.
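The core operation can be sketched without a framework. Below is a minimal NumPy illustration (not the paper's implementation) of a single "valid" 3D convolution over a bands x height x width cube, showing how one kernel mixes a spectral and a spatial neighborhood at the same time:

```python
import numpy as np

def conv3d_valid(cube, kernel):
    """Slide one 3D kernel over an HSI cube (bands, height, width).

    Each output voxel aggregates a local spectral-spatial neighborhood,
    which is what lets a 3D-CNN learn joint spectral-spatial features.
    """
    B, H, W = cube.shape
    kb, kh, kw = kernel.shape
    out = np.zeros((B - kb + 1, H - kh + 1, W - kw + 1))
    for b in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[b, i, j] = np.sum(cube[b:b+kb, i:i+kh, j:j+kw] * kernel)
    return out

rng = np.random.default_rng(0)
cube = rng.normal(size=(20, 9, 9))    # 20 bands, 9x9 spatial patch
kernel = rng.normal(size=(7, 3, 3))   # spectral depth 7, spatial extent 3x3
features = conv3d_valid(cube, kernel)
print(features.shape)                 # (14, 7, 7)
```

A 2D-CNN would convolve only over the two spatial axes and treat bands as independent channels; extending the kernel along the band axis is the change that makes the filtering spectral–spatial.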
Urban Land Use and Land Cover Change Analysis Using Random Forest Classification of Landsat Time Series
Efficient implementation of remote sensing image classification can facilitate the extraction of spatiotemporal information for land use and land cover (LULC) classification. Mapping LULC change can pave the way to investigating the impacts of different socioeconomic and environmental factors on the Earth's surface. This study presents an algorithm that uses Landsat time-series data to analyze LULC change. We applied the Random Forest (RF) classifier, a robust classification method, in Google Earth Engine (GEE), using imagery from Landsat 5, 7, and 8 as inputs for the period 1985 to 2019. We also explored the performance of a pan-sharpening algorithm on the Landsat bands, as well as the impact of different image compositions, to produce a high-quality LULC map. We used a statistical pan-sharpening algorithm to increase the spatial resolution of the multispectral Landsat bands (Landsat 7–9) from 30 m to 15 m. In addition, we checked the impact of different image compositions, based on several spectral indices and other auxiliary data such as a digital elevation model (DEM) and land surface temperature (LST), on the final classification accuracy. To verify the algorithm, we compared the classification result of our proposed method with the Copernicus Global Land Cover Layers (CGLCL) map.
The results show that: (1) using pan-sharpened top-of-atmosphere (TOA) Landsat products instead of surface reflectance (SR) alone can produce more accurate classification results; (2) LST and DEM are essential features in classification, and using them increases the final accuracy; (3) the proposed algorithm produced higher accuracy (94.438% overall accuracy (OA), 0.93 Kappa, and 0.93 F1-score) than the CGLCL map (84.4% OA, 0.79 Kappa, and 0.50 F1-score) in 2019; (4) the total agreement between the classification results and the test data exceeds 90% (93.37–97.6%), 0.9 (0.91–0.96), and 0.85 (0.86–0.95) for OA, Kappa, and F1-score, respectively, which is acceptable for both overall and Kappa accuracy. Moreover, we provide a code repository that allows classifying Landsat 4, 5, 7, and 8 imagery within GEE. This method can be quickly and easily applied to other regions of interest for LULC mapping.
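The spectral indices folded into such image compositions are simple per-pixel band arithmetic. As a hedged illustration (not the authors' GEE code; the reflectance values are made up), here is NDVI, one widely used vegetation index, computed over Landsat-like red and near-infrared bands in NumPy:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)   # eps guards against 0/0 pixels

# Toy 2x2 reflectance patches: vegetation reflects strongly in NIR, weakly in red.
nir = np.array([[0.50, 0.45], [0.20, 0.10]])
red = np.array([[0.10, 0.05], [0.20, 0.30]])
index = ndvi(nir, red)
vegetation_mask = index > 0.4   # a common rough vegetation threshold
print(index.round(2))
print(vegetation_mask)          # [[ True  True] [False False]]
```

Stacking such index layers next to the raw bands, DEM, and LST is what "image composition" refers to in the abstract: each pixel's feature vector grows, giving the RF classifier more separable inputs.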
Double-Branch Multi-Attention Mechanism Network for Hyperspectral Image Classification
Recently, hyperspectral image (HSI) classification has been attracting attention from more and more researchers. HSI contains abundant spectral and spatial information; thus, how to fuse these two types of information remains a problem worth studying. In this paper, to extract spectral and spatial features, we propose a Double-Branch Multi-Attention mechanism network (DBMA) for HSI classification. The network has two branches that extract spectral and spatial features respectively, which reduces interference between the two types of features. Furthermore, reflecting the different characteristics of the two branches, a different attention mechanism is applied in each branch, ensuring that more discriminative spectral and spatial features are extracted. The extracted features are then fused for classification. Extensive experimental results on three hyperspectral datasets show that the proposed method performs better than state-of-the-art methods.
An extensive review of hyperspectral image classification and prediction: techniques and challenges
Hyperspectral Image Processing (HSIP) is an essential technique in remote sensing. Currently, extensive research is carried out in hyperspectral image processing, spanning many applications, including land cover classification, anomaly detection, and plant classification. Hyperspectral image processing is a powerful tool that enables us to capture and analyze an object's spectral information with greater accuracy and precision. Hyperspectral images are made up of hundreds of spectral bands, capturing an immense amount of information about the Earth's surface. Accurately classifying and predicting land cover in these images is critical to understanding our planet's ecosystem and the impact of human activities on it. With the advent of deep learning techniques, the process of analyzing hyperspectral images has become more efficient and accurate than ever before. These techniques enable us to categorize land cover and predict Land Use/Land Cover (LULC) with exceptional precision, providing valuable insights into the state of our planet's environment. Image classification is difficult in hyperspectral image processing because of the large number of data samples but limited labels. By selecting the appropriate bands from the image, we can obtain the finest classification results and predicted values. To our knowledge, previous review papers concentrated only on classification methods. Here, we present an extensive review of the various components of hyperspectral image processing: hyperspectral image analysis, image pre-processing, feature extraction and feature selection methods for choosing the number of features (bands), classification methods, and prediction methods. In addition, we elaborate on the datasets used for classification, the evaluation metrics used, and various issues and challenges. This review article will therefore benefit new researchers in the hyperspectral image classification domain.
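Of the band (feature) selection methods such a review covers, the simplest unsupervised baseline is ranking bands by variance. A hedged NumPy sketch (the cube is synthetic, and real selection methods also penalize inter-band redundancy, which this baseline ignores):

```python
import numpy as np

def select_bands_by_variance(cube, k):
    """Keep the k highest-variance bands of an HSI cube (bands, H, W).

    Variance ranking is a crude unsupervised proxy for informativeness:
    a nearly constant band contributes little to class separability.
    """
    variances = cube.reshape(cube.shape[0], -1).var(axis=1)
    top = np.sort(np.argsort(variances)[-k:])   # indices of the top-k bands
    return top, cube[top]

# Synthetic cube whose later bands are scaled up, hence higher variance.
rng = np.random.default_rng(0)
cube = rng.normal(size=(10, 8, 8)) * (2.0 ** np.arange(10))[:, None, None]
bands, reduced = select_bands_by_variance(cube, k=3)
print(bands)            # the three highest-variance band indices
print(reduced.shape)    # (3, 8, 8)
```

Working with the reduced cube eases the "many samples, few labels" problem the abstract highlights, since fewer input dimensions mean fewer parameters to fit from scarce labels.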
Deep Learning Approach for Early Detection of Alzheimer’s Disease
Alzheimer’s disease (AD) is a chronic, irreversible brain disorder with no effective cure to date; however, available medicines can delay its progress. Therefore, the early detection of AD plays a crucial role in preventing and controlling its progression. The main objective is to design an end-to-end framework for early detection of Alzheimer’s disease and medical image classification for the various AD stages. A deep learning approach, specifically convolutional neural networks (CNN), is used in this work. Four stages of the AD spectrum are classified in a multi-class setting. Furthermore, separate binary medical image classifications are implemented between each pair of AD stages. Two methods are used to classify the medical images and detect AD. The first method uses simple CNN architectures that deal with 2D and 3D structural brain scans from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset based on 2D and 3D convolution. The second method applies the transfer learning principle to take advantage of pre-trained models for medical image classification, such as the VGG19 model. Due to the COVID-19 pandemic, it is difficult for people to go to hospitals periodically while avoiding gatherings and infection. As a result, an Alzheimer’s checking web application is proposed using the final qualified architectures. It helps doctors and patients check for AD remotely, determines the patient's AD stage based on the AD spectrum, and advises the patient according to that stage. Nine performance metrics are used in the evaluation and comparison of the two methods. The experimental results show that the CNN architectures of the first method have suitably simple structures that reduce computational complexity, memory requirements, and overfitting while keeping training time manageable. They also achieve very promising accuracies: 93.61% and 95.17% for 2D and 3D multi-class AD stage classification, respectively.
The fine-tuned VGG19 pre-trained model achieved an accuracy of 97% for multi-class AD stage classification.
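The transfer-learning principle used in the second method (freeze a pre-trained feature extractor, train only a new classification head) can be sketched without a deep learning framework. In this toy version, a fixed random projection stands in for VGG19's convolutional base, which is an assumption for illustration only, and just a logistic-regression head is trained:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a frozen pre-trained extractor (the role VGG19's
# convolutional base plays in the paper): fixed weights, never updated.
W_frozen = rng.normal(size=(16, 4))

def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)   # frozen ReLU features

# Toy data whose labels are defined in the frozen feature space, so only
# a new linear head needs to be learned on top of the extractor.
X = rng.normal(size=(200, 16))
F = extract_features(X)
true_w = rng.normal(size=4)
y = (F @ true_w > 0).astype(float)

w = np.zeros(4)                            # the only trainable parameters
for _ in range(500):                       # gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-np.clip(F @ w, -30, 30)))
    w -= 0.1 * F.T @ (p - y) / len(y)

accuracy = (((F @ w) > 0) == (y > 0.5)).mean()
print(round(float(accuracy), 2))
```

The point of the sketch is the parameter count: the extractor's weights are never touched, so only four numbers are learned, which is why transfer learning works even when labelled medical scans are scarce.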
Spectral-Spatial Attention Networks for Hyperspectral Image Classification
Many deep learning models, such as the convolutional neural network (CNN) and the recurrent neural network (RNN), have been successfully applied to extracting deep features for hyperspectral tasks. Hyperspectral image classification allows land covers to be distinguished and characterized by utilizing their abundant information. Motivated by the attention mechanism of the human visual system, in this study we propose a spectral-spatial attention network for hyperspectral image classification. In our method, an RNN with attention can learn inner spectral correlations within a continuous spectrum, while a CNN with attention is designed to focus on saliency features and the spatial relevance between neighboring pixels in the spatial dimension. Experimental results demonstrate that our method can fully utilize spectral and spatial information to obtain competitive performance.
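The core of any such attention mechanism is a softmax weighting that re-scales features before aggregation. A minimal NumPy sketch follows; the attention scores are hand-set here for illustration, whereas in the network they would come from a learned sub-network:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # subtract the max for numerical stability
    return e / e.sum()

def spectral_attention(band_features, scores):
    """Reweight per-band features by attention scores.

    The softmax guarantees positive weights that sum to 1, so attention
    redistributes emphasis across bands rather than changing overall scale.
    """
    weights = softmax(scores)
    return weights, band_features * weights

features = np.array([1.0, 1.0, 1.0, 1.0])
scores = np.array([2.0, 0.0, 0.0, 0.0])   # the model "attends" to band 0
weights, attended = spectral_attention(features, scores)
print(weights.round(3))
print(int(attended.argmax()))   # band 0 dominates after reweighting
```

The same weighting idea applies in the spatial branch, with scores computed per neighboring pixel instead of per band.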
A real-time constellation image classification method of wireless communication signals based on the lightweight network MobileViT
Automatic modulation classification (AMC) is a challenging topic in the development of cognitive radio, which can sense and learn from surrounding electromagnetic environments and help make corresponding decisions. In this paper, we propose to perform real-time AMC by constructing a lightweight neural network, MobileViT, driven by clustered constellation images. First, the clustered constellation images are generated from I/Q sequences to help extract robust and discriminative features. Then the lightweight neural network MobileViT is developed for real-time constellation image classification. Experimental results on the public dataset RadioML 2016.10a with an edge computing platform demonstrate the superiority and efficiency of MobileViT. Furthermore, extensive ablation tests prove the robustness of the proposed method to the learning rate and batch size. To the best of our knowledge, this is the first attempt to deploy a deep learning model to perform real-time classification of the modulation schemes of received signals at the edge.
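The input representation described above, a constellation image built from I/Q samples, amounts to a 2D histogram of the complex samples. The following is a minimal NumPy sketch; the noisy QPSK frame below is synthetic and merely stands in for a RadioML 2016.10a frame:

```python
import numpy as np

def constellation_image(iq, bins=32, rng_lim=1.5):
    """Rasterize complex I/Q samples into a 2D histogram 'constellation image'."""
    edges = np.linspace(-rng_lim, rng_lim, bins + 1)
    img, _, _ = np.histogram2d(iq.real, iq.imag, bins=[edges, edges])
    return img / img.max()   # normalize so the image is a valid CNN input

# Toy QPSK symbols with additive Gaussian noise.
rng = np.random.default_rng(0)
symbols = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=1000) / np.sqrt(2)
iq = symbols + 0.05 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))
img = constellation_image(iq)
print(img.shape)   # (32, 32)
```

Because the 1000 samples cluster around four constellation points, the resulting image is mostly empty with four bright blobs, which is the kind of spatial structure a lightweight image classifier such as MobileViT can exploit.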