4,515 result(s) for "hyperspectral image"
An extensive review of hyperspectral image classification and prediction: techniques and challenges
Hyperspectral Image Processing (HSIP) is an essential technique in remote sensing. Extensive research is currently being carried out in hyperspectral image processing, spanning many applications, including land cover classification, anomaly detection, and plant classification. Hyperspectral image processing is a powerful tool that enables us to capture and analyze an object's spectral information with greater accuracy and precision. Hyperspectral images are made up of hundreds of spectral bands, capturing an immense amount of information about the Earth's surface. Accurately classifying and predicting land cover in these images is critical to understanding our planet's ecosystem and the impact of human activities on it. With the advent of deep learning techniques, analyzing hyperspectral images has become more efficient and accurate than ever before. These techniques enable us to categorize land cover and predict Land Use/Land Cover (LULC) with exceptional precision, providing valuable insights into the state of our planet's environment. Image classification is difficult in hyperspectral image processing because of the large number of data samples but limited labels. By selecting the appropriate bands from the image, we can obtain the best classification results and predicted values. To our knowledge, previous review papers concentrated only on classification methods. Here, we present an extensive review of the various components of hyperspectral image processing: hyperspectral image analysis, image pre-processing, feature extraction and feature selection methods for choosing the number of features (bands), classification methods, and prediction methods. In addition, we elaborate on the datasets used for classification, the evaluation metrics used, and various issues and challenges. Thus, this review article will benefit new researchers in the hyperspectral image classification domain.
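The band-selection-then-classification pipeline discussed above can be illustrated with a minimal sketch. The cube dimensions, the variance-based scoring rule, and the random-forest classifier below are illustrative assumptions, not the procedure of any specific paper in these results.

```python
# Minimal sketch of a band-selection-then-classify pipeline for HSI data.
# Shapes, the scoring rule, and the classifier are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
cube = rng.random((64, 64, 200))          # H x W x bands (synthetic stand-in)
labels = rng.integers(0, 5, (64, 64))     # per-pixel class labels (toy)

X = cube.reshape(-1, cube.shape[-1])      # pixels as rows, bands as features
y = labels.ravel()

# Score each band by its variance and keep the top 30 as a crude selection rule.
band_scores = X.var(axis=0)
selected = np.argsort(band_scores)[-30:]
X_sel = X[:, selected]

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("overall accuracy:", clf.score(X_te, y_te))
```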
ATIS-Driven 3DCNet: A Novel Three-Stream Hyperspectral Fusion Framework with Knowledge from Downstream Classification Performance
Reconstructing high-resolution hyperspectral images (HR-HSIs) by fusing low-resolution hyperspectral images (LR-HSIs) and high-resolution multispectral images (HR-MSIs) is a significant challenge in image processing. Traditional fusion methods focus on visual and statistical metrics, often neglecting the requirements of downstream tasks. To address this gap, we propose a novel three-stream fusion network, 3DCNet, designed to integrate spatial and spectral information from LR-HSIs and HR-MSIs. The framework includes two dedicated branches for extracting spatial and spectral features, alongside a hybrid spatial–spectral branch (HSSI). The spatial block (SpatB) and the spectral block (SpecB) are designed to extract spatial and spectral details. The training process employs the global loss, spatial edge loss, and spectral angle loss for fusion tasks, with an alternating training iteration strategy (ATIS) to enhance downstream classification by iteratively refining the fusion and classification networks. Fusion experiments on seven datasets demonstrate that 3DCNet outperforms existing methods in generating high-quality HR-HSIs. Superior performance in downstream classification tasks on four datasets proves the importance of the ATIS. Ablation studies validate the importance of each module and the ATIS process. The 3DCNet framework not only advances the fusion process by leveraging downstream knowledge but also sets a new benchmark for classification-oriented hyperspectral fusion.
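The alternating training idea can be sketched roughly as follows: a fusion network and a classification network are refined in turns, with the classification loss fed back into the fusion step. All module sizes, loss weights, and tensor shapes below are placeholders and do not reproduce the actual 3DCNet or ATIS configuration.

```python
# Hypothetical sketch of an alternating fusion/classification training loop.
import torch
import torch.nn as nn

fusion = nn.Sequential(nn.Conv2d(31 + 3, 64, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(64, 31, 3, padding=1))
classifier = nn.Sequential(nn.Conv2d(31, 32, 3, padding=1), nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 9))
opt_f = torch.optim.Adam(fusion.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-3)

lr_hsi = torch.rand(4, 31, 8, 8)           # low-resolution hyperspectral input
hr_msi = torch.rand(4, 3, 32, 32)          # high-resolution multispectral input
hr_hsi = torch.rand(4, 31, 32, 32)         # reference high-resolution HSI
cls_gt = torch.randint(0, 9, (4,))         # toy image-level class labels

up = nn.functional.interpolate(lr_hsi, size=(32, 32), mode="bilinear")
inp = torch.cat([up, hr_msi], dim=1)

for it in range(10):
    # Step 1: refine the fusion network with reconstruction + classification loss.
    fused = fusion(inp)
    loss_f = nn.functional.l1_loss(fused, hr_hsi) \
             + 0.1 * nn.functional.cross_entropy(classifier(fused), cls_gt)
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()

    # Step 2: refine the classifier on the (detached) fused product.
    logits = classifier(fusion(inp).detach())
    loss_c = nn.functional.cross_entropy(logits, cls_gt)
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
```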
Tree Species Classification of Drone Hyperspectral and RGB Imagery with Deep Learning Convolutional Neural Networks
Interest in drone solutions for forestry applications is growing. Using drones, datasets can be captured flexibly and at high spatial and temporal resolutions when needed. In forestry applications, fundamental tasks include the detection of individual trees, tree species classification, and biomass estimation. Deep neural networks (DNNs) have shown superior results compared with conventional machine learning methods such as the multi-layer perceptron (MLP) when large amounts of input data are available. The objective of this research is to investigate 3D convolutional neural networks (3D-CNNs) for classifying three major tree species in a boreal forest: pine, spruce, and birch. The proposed 3D-CNN models were employed to classify tree species at a test site in Finland. The classifiers were trained with a dataset of 3039 manually labelled trees, and their accuracies were then assessed on independent datasets of 803 records. To find the most efficient feature combination, we compared the performance of 3D-CNN models trained with hyperspectral (HS) channels, Red-Green-Blue (RGB) channels, and a canopy height model (CHM), separately and combined. The proposed 3D-CNN model with RGB and HS layers produced the highest classification accuracy. The producer accuracies of the best 3D-CNN classifier on the test dataset were 99.6%, 94.8%, and 97.4% for pines, spruces, and birches, respectively. The best 3D-CNN classifier produced ~5% better classification accuracy than the MLP with all layers. Our results suggest that the proposed method provides excellent classification results with acceptable performance metrics for HS datasets. The pine class was detectable in most layers; spruce was most detectable in the RGB data, while birch was most detectable in the HS layers. Furthermore, the RGB datasets provide acceptable results for many applications with less stringent accuracy requirements.
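For orientation, a small hypothetical 3D-CNN for patch-wise tree species classification might look like the sketch below; the patch size, band count, and layer widths are assumptions and do not reproduce the paper's architecture or training setup.

```python
# A small, hypothetical 3D-CNN sketch for patch-wise tree species classification.
import torch
import torch.nn as nn

class Tree3DCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))
        self.head = nn.Linear(16, n_classes)   # pine / spruce / birch

    def forward(self, x):                      # x: (batch, 1, bands, H, W)
        return self.head(self.features(x).flatten(1))

# A stacked input cube: e.g. hyperspectral bands plus RGB and CHM layers.
patches = torch.rand(8, 1, 50, 25, 25)
logits = Tree3DCNN()(patches)
print(logits.shape)                            # (8, 3) class scores per tree
```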
Hyperspectral Image Mixed Noise Removal Using Subspace Representation and Deep CNN Image Prior
The ever-increasing spectral resolution of hyperspectral images (HSIs) is often obtained at the cost of a decrease in the signal-to-noise ratio (SNR) of the measurements. The decreased SNR reduces the reliability of features or information extracted from HSIs, calling for effective denoising techniques. This work aims to estimate clean HSIs from observations corrupted by mixed noise (containing Gaussian noise, impulse noise, and dead lines/stripes) by exploiting two main characteristics of hyperspectral data, namely low-rankness in the spectral domain and high correlation in the spatial domain. We take advantage of the spectral low-rankness of HSIs by representing spectral vectors in an orthogonal subspace, which is learned from the observed images by a new method. The subspace representation coefficients of the HSIs are then estimated by solving an optimization problem that incorporates an image prior extracted from a neural denoising network. The proposed method is evaluated on simulated and real HSIs, with an exhaustive array of experiments and comparisons against state-of-the-art denoisers.
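The subspace part of this idea can be sketched with plain linear algebra: project the spectra onto a low-dimensional orthogonal basis learned by SVD, denoise the coefficient images, and project back. A Gaussian filter stands in for the learned CNN image prior, so this only illustrates the low-rank representation, not the paper's method.

```python
# Sketch of subspace-based HSI denoising: low-rank spectral basis via SVD,
# smoothing of coefficient images (stand-in for a learned prior), reprojection.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
clean = rng.random((100, 100, 191))                   # synthetic HSI (H, W, bands)
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

H, W, B = noisy.shape
Y = noisy.reshape(-1, B)                              # pixels x bands

# Learn an orthogonal spectral subspace from the data (low-rankness assumption).
k = 8
_, _, Vt = np.linalg.svd(Y, full_matrices=False)
E = Vt[:k].T                                          # B x k orthonormal basis

coeffs = (Y @ E).reshape(H, W, k)                     # subspace coefficient images
denoised_coeffs = np.stack(
    [gaussian_filter(coeffs[..., i], sigma=1.0) for i in range(k)], axis=-1)

restored = (denoised_coeffs.reshape(-1, k) @ E.T).reshape(H, W, B)
print("residual RMSE:", np.sqrt(np.mean((restored - clean) ** 2)))
```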
Explainability Feature Bands Adaptive Selection for Hyperspectral Image Classification
Hyperspectral remote sensing images are widely used in resource exploration, urban planning, natural disaster assessment, and feature classification. To address the poor interpretability of feature classification algorithms for hyperspectral images, their high feature dimensionality, and the difficulty of effectively improving classification accuracy, this paper proposes an adaptive feature-band selection method for hyperspectral images. The proposed model focuses on the jointly salient feature regions of the hyperspectral image, visualizes the contribution of each band, reveals more intuitively how feature bands are selected during deep learning, and uses the bands with the highest contributions in classification experiments for verification. Quantitative evaluations on four hyperspectral benchmarks (Pavia University/Centre, Washington DC, GF-5) demonstrate that EFBASN achieves state-of-the-art classification accuracy, with an overall accuracy (OA) of 97.68% on Pavia University, surpassing 12 recent methods including SSCFA (94.48%) and CNCMN (93.12%). Crucially, the attention weights of critical bands (e.g., Band 26 at 672 nm for iron oxide detection) are 3.2 times higher than those of redundant bands, providing physically interpretable selection criteria.
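As a rough illustration of adaptive band weighting, the sketch below learns per-band attention scores jointly with a linear classifier and reads the scores as a band-contribution ranking; the data, network, and selection rule are hypothetical and are not EFBASN.

```python
# Hypothetical sketch: learn per-band attention weights alongside a classifier,
# then rank bands by their learned weights.
import torch
import torch.nn as nn

n_bands, n_classes = 200, 9
band_logits = nn.Parameter(torch.zeros(n_bands))      # learnable band scores
clf = nn.Linear(n_bands, n_classes)
opt = torch.optim.Adam([band_logits, *clf.parameters()], lr=1e-2)

X = torch.rand(512, n_bands)                           # toy pixel spectra
y = torch.randint(0, n_classes, (512,))

for _ in range(200):
    w = torch.softmax(band_logits, dim=0)              # attention over bands
    loss = nn.functional.cross_entropy(clf(X * w * n_bands), y)
    opt.zero_grad(); loss.backward(); opt.step()

top_bands = torch.topk(torch.softmax(band_logits, dim=0), k=20).indices
print("highest-contribution bands:", sorted(top_bands.tolist()))
```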
Stochastic image spectroscopy: a discriminative generative approach to hyperspectral image modelling and classification
This paper introduces a new latent variable probabilistic framework for representing spectral data of high spatial and spectral dimensionality, such as hyperspectral images. We use a generative Bayesian model to represent the image formation process and provide interpretable and efficient inference and learning methods. Surprisingly, our approach can be implemented with simple tools and does not require extensive training data, detailed pixel-by-pixel labeling, or significant computational resources. Numerous experiments with simulated data and real benchmark scenarios show encouraging image classification performance. These results validate the unique ability of our framework to discriminate complex hyperspectral images, irrespective of the presence of highly discriminative spectral signatures.
Hyperspectral Marine Oil Spill Monitoring Using a Dual-Branch Spatial–Spectral Fusion Model
Marine oil spills are a crucial concern in the monitoring of marine environments, and optical remote sensing is a vital means of marine oil spill detection. However, optical remote sensing imagery is susceptible to interference from sunglint and shadows, which diminishes the spectral differences between oil films and seawater and makes it challenging to accurately extract the boundaries of oil–water interfaces. To address these issues, this paper proposes a model based on a graph convolutional architecture and spatial–spectral information fusion for the detection of real oil spill incidents. The model is evaluated experimentally on both spaceborne and airborne hyperspectral oil spill images. The findings demonstrate the superior oil spill detection accuracy of the developed model compared to the Graph Convolutional Network (GCN) and the CNN-Enhanced Graph Convolutional Network (CEGCN) across two hyperspectral datasets collected from the Bohai Sea. Moreover, the model's oil spill detection performance remains optimal even with only 1% of the training samples. Similar conclusions are drawn from the oil spill hyperspectral data collected from the Yellow Sea. These results validate the efficacy and robustness of the proposed model for marine oil spill detection.
Stochastic Neighbor Embedding Feature-Based Hyperspectral Image Classification Using 3D Convolutional Neural Network
The ample information in hyperspectral image (HSI) bands allows the non-destructive detection and recognition of earth objects. However, dimensionality reduction (DR) of HSIs is required before classification, as the classifier may otherwise suffer from the curse of dimensionality; DR therefore plays a significant role in HSI data analysis (e.g., effective processing and seamless interpretation). In this article, t-Distributed Stochastic Neighbor Embedding (t-SNE) for dimension reduction, combined with a blended CNN, is implemented to improve the visualization and characterization of HSIs. In the procedure, we first employed principal component analysis (PCA) to reduce the HSI dimensions and remove non-linear consistency features between the wavelengths, projecting them to a smaller scale. We then applied t-SNE to preserve the local and global pixel relationships and to examine the HSI information visually and experimentally; this yielded two-dimensional data that improved visualization and classification accuracy compared to other standard dimensionality-reduction algorithms. Finally, we employed a deep-learning-based CNN to classify the reduced, improved intra- and inter-band relationship feature vectors of the HSI. An evaluation performance of 95.21% accuracy and 6.2% test loss demonstrated the superiority of the proposed model over other state-of-the-art DR algorithms.
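The PCA-then-t-SNE reduction step can be sketched directly with scikit-learn; in the example below a k-NN classifier stands in for the paper's blended 3D CNN purely to keep the sketch short, and the spectra are synthetic.

```python
# Minimal PCA -> t-SNE sketch over pixel spectra, with a toy classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((2000, 200))                  # 2000 pixel spectra, 200 bands (toy)
y = rng.integers(0, 6, 2000)

X_pca = PCA(n_components=30).fit_transform(X)          # spectral reduction
X_2d = TSNE(n_components=2, random_state=0).fit_transform(X_pca)
print("2-D embedding for visualization:", X_2d.shape)

X_tr, X_te, y_tr, y_te = train_test_split(X_pca, y, test_size=0.3, random_state=0)
acc = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr).score(X_te, y_te)
print("toy classification accuracy on PCA features:", acc)
```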
A Novel Hyperspectral Image Simulation Method Based on Nonnegative Matrix Factorization
Hyperspectral (HS) images can provide abundant and fine spectral information on the land surface. However, their applications may be limited by their narrow bandwidth and small coverage area. In this paper, we propose an HS image simulation method based on nonnegative matrix factorization (NMF), which aims at generating HS images from existing multispectral (MS) data. Our main novelty lies in a spectral transformation matrix and a new simulation scheme. First, we develop a spectral transformation matrix that transforms HS endmembers into MS endmembers. Second, we utilize an iteration scheme to optimize the HS and MS endmembers. The test MS image is then factorized with the MS endmembers to obtain the abundance matrix, and the resulting image is constructed by multiplying the abundance matrix by the HS endmembers. Experiments show that our method provides high spectral quality by incorporating prior spectral endmembers, and that the iteration scheme reduces the simulation error and improves the accuracy of the results. In comparative trials, the spectral angle, RMSE, and correlation coefficient of our method are 5.986, 284.6, and 0.905, respectively. Thus, our method outperforms other simulation methods.
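A much-simplified sketch of the endmember/abundance mechanics described above: factor a reference HS image into endmembers with NMF, map them to MS endmembers through an assumed spectral response matrix, unmix the test MS image, and rebuild an HS image from the abundances. The response matrix and the clipped least-squares unmixing are simplifications, not the paper's optimization.

```python
# Simplified NMF-based HS simulation sketch (illustrative, not the paper's method).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
p, bands_hs, bands_ms = 3, 120, 4
ref_hsi = rng.random((4096, bands_hs))                 # reference HS pixels
R = rng.random((bands_ms, bands_hs)); R /= R.sum(axis=1, keepdims=True)

# Endmembers of the reference HS image via nonnegative matrix factorization.
nmf = NMF(n_components=p, init="nndsvda", max_iter=500, random_state=0)
A_ref = nmf.fit_transform(ref_hsi)                     # abundances (unused here)
E_hs = nmf.components_                                 # p x bands_hs endmembers
E_ms = E_hs @ R.T                                      # p x bands_ms endmembers

# Unmix a test MS image against the MS endmembers (nonnegative least squares
# replaced by a clipped ordinary least-squares solve for brevity).
test_ms = rng.random((4096, bands_ms))
A_test = np.clip(np.linalg.lstsq(E_ms.T, test_ms.T, rcond=None)[0].T, 0, None)

simulated_hsi = A_test @ E_hs                          # 4096 x bands_hs
print(simulated_hsi.shape)
```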
An Approach for the Customized High-Dimensional Segmentation of Remote Sensing Hyperspectral Images
This paper addresses three problems in the field of hyperspectral image segmentation: the fact that how an image should be segmented depends on the user's requirements and the application; the lack and cost of appropriately labeled reference images; and the information loss that arises in many algorithms when high-dimensional images are projected onto lower-dimensional spaces before segmentation begins. To address these issues, the Multi-Gradient based Cellular Automaton (MGCA) structure is proposed to segment multidimensional images without projecting them to lower-dimensional spaces. The MGCA structure is coupled with an evolutionary algorithm (ECAS-II) to produce the transition rule sets required by MGCA segmenters. These rule sets are customized to specific segmentation needs as a function of a set of low-dimensional training images in which the user expresses their segmentation requirements. Constructing high-dimensional image segmenters from low-dimensional training sets alleviates the lack of labeled training images, since such training sets can be generated online from a parametrization of the desired segmentation extracted from a set of examples. The strategy has been tested in experiments on synthetic and real hyperspectral images and compared to state-of-the-art segmentation approaches on benchmark images in the area of remote sensing hyperspectral imaging.
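A toy cellular-automaton-style label propagation over a hyperspectral cube gives the flavour of segmenting with local spectral rules; the hand-written transition rule below is not the evolved rule set produced by ECAS-II, and the seeds and threshold are arbitrary.

```python
# Toy CA-style label propagation on a synthetic hyperspectral cube.
import numpy as np

rng = np.random.default_rng(0)
cube = rng.random((40, 40, 60))                 # H x W x bands (synthetic)
labels = np.zeros((40, 40), dtype=int)          # 0 = unlabeled
labels[5, 5], labels[30, 30] = 1, 2             # two seed cells
threshold = 4.0                                 # arbitrary spectral-distance cutoff

for _ in range(60):                             # CA iterations
    new = labels.copy()
    for i in range(40):
        for j in range(40):
            if labels[i, j]:
                continue
            # Adopt the label of the spectrally closest labeled 4-neighbor.
            best, best_d = 0, threshold
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < 40 and 0 <= nj < 40 and labels[ni, nj]:
                    d = np.linalg.norm(cube[i, j] - cube[ni, nj])
                    if d < best_d:
                        best, best_d = labels[ni, nj], d
            new[i, j] = best
    labels = new

print("segmented cells per label:", np.bincount(labels.ravel()))
```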