Catalogue Search | MBRL
Explore the vast range of titles available.
32,618 result(s) for "Remote-sensing images."
Ropme region from space : an atlas of major habitats, processes and human activity in the Ropme sea area (phase 1)
by
Al-Awadi, Abdulrahman A. (reviewer)
,
Petrov, Peter (preparator)
,
Abdulraheem, Mahmoud (preparator)
in
Pollutants -- Persian Gulf -- Remote-sensing images
,
Pollutants -- Arabian Sea -- Remote-sensing images
2000
A Survey on Deep Learning-Driven Remote Sensing Image Scene Understanding: Scene Classification, Scene Retrieval and Scene-Guided Object Detection
2019
As a fundamental and important task in remote sensing, remote sensing image scene understanding (RSISU) has attracted tremendous research interest in recent years. RSISU includes the following sub-tasks: remote sensing image scene classification, remote sensing image scene retrieval, and scene-driven remote sensing image object detection. Although these sub-tasks have different goals, they share some common characteristics; hence, this paper discusses them as a whole. As in other domains (e.g., speech recognition and natural image recognition), deep learning has become the state-of-the-art technique in RSISU. To facilitate the sustainable progress of RSISU, this paper presents a comprehensive review of deep-learning-based RSISU methods and points out some future research directions and potential applications of RSISU.
Journal Article
Hyperspectral Imagery Classification Using Sparse Representations of Convolutional Neural Network Features
2016
In recent years, deep learning has been widely studied for remote sensing image analysis. In this paper, we propose a method for remotely-sensed image classification using sparse representations of deep learning features. Specifically, we use convolutional neural networks (CNNs) to extract deep features from the high levels of the image data. Deep features provide high-level spatial information created by hierarchical structures. Although the deep features may have high dimensionality, they lie in class-dependent sub-spaces or sub-manifolds. We investigate the characteristics of deep features within a sparse representation classification framework. The experimental results reveal that the proposed method exploits the inherent low-dimensional structure of the deep features to provide better classification results than those obtained by widely-used feature exploration algorithms, such as the extended morphological attribute profiles (EMAPs) and sparse coding (SC).
Journal Article
LAM: Remote Sensing Image Captioning with Label-Attention Mechanism
2019
Significant progress has been made in remote sensing image captioning by encoder-decoder frameworks. The conventional attention mechanism is prevalent in this task but still has some drawbacks: it only uses visual information about the remote sensing images, without using the label information to guide the calculation of attention masks. To this end, a novel attention mechanism, namely the Label-Attention Mechanism (LAM), is proposed in this paper. LAM additionally utilizes the label information of high-resolution remote sensing images to generate natural sentences describing the given images. It is worth noting that, instead of high-level image features, the predicted categories' word embedding vectors are adopted to guide the calculation of attention masks. Representing the content of images as word embedding vectors can filter out redundant image features while preserving pure and useful information for generating complete sentences. The experimental results on UCM-Captions, Sydney-Captions and RSICD demonstrate that LAM can improve the model's performance in describing high-resolution remote sensing images and obtain better S_m scores than other methods. The S_m score is a hybrid scoring method derived from the AI Challenge 2017 scoring method. In addition, the validity of LAM is verified by an experiment using true labels.
Journal Article
HiSTENet: History-Integrated Spatial–Temporal Information Extraction Network for Time Series Remote Sensing Image Change Detection
2025
As remote sensing technology advances, time series remote sensing images (TSIs) offer essential data for change detection. However, most existing methods focus on bi-temporal images and do not explore the temporal information between images, which makes it difficult to effectively utilize the rich spatio-temporal and object information inherent to TSIs. In this work, we propose a History-Integrated Spatial-Temporal Information Extraction Network (HiSTENet), which comprehensively utilizes the spatio-temporal information of TSIs to achieve change detection on continuous image pairs. A Spatial-Temporal Relationship Extraction Module models the spatio-temporal relationship. Simultaneously, a Historical Integration Module is introduced to fuse object characteristics across historical temporal images while leveraging their features. Furthermore, a Feature Alignment Fusion Module mitigates pseudo changes by computing feature offsets and aligning images in the feature space. Experiments on SpaceNet7 and DynamicEarthNet demonstrate that HiSTENet outperforms other representative methods, achieving a better balance between precision and recall.
Journal Article
Mapping Earth from space
by
Snedden, Robert
in
Earth sciences -- Remote sensing -- Juvenile literature.
,
Earth -- Remote-sensing images -- Juvenile literature.
2011
"Due to specially equipped satellites and spacecraft, we know more about Earth now than we ever have. That's because these satellites have been able to accurately map and collect data from almost every inch of Earth's surface. What have scientists learned from these state-of-the-art devices? What do they still want to know? What else can this network of satellites do for us?"--Back cover.
A Multi-Feature Fusion-Based Method for Crater Extraction of Airport Runways in Remote-Sensing Images
by
Chen, Derong
,
Gong, Jiulu
,
Zhao, Yalun
in
airport runway extraction
,
Airports
,
Comparative analysis
2024
Because of complex airport backgrounds and damaged runway areas, existing runway extraction methods do not perform well. Furthermore, accurate crater extraction of airport runways plays a vital role in the military field, but there are few related studies on this topic. To solve these problems, this paper proposes an effective method for the crater extraction of runways, which consists of two stages: airport runway extraction and runway crater extraction. In the first stage, we apply corner detection and screening strategies to runway extraction based on multiple features of the runway, such as high brightness, regional texture similarity, and runway shape, to improve the completeness of runway extraction. In addition, the proposed method can automatically achieve complete extraction of runways with different degrees of damage. In the second stage, the craters of the runway are extracted by calculating the edge gradient amplitude and the standard deviation of the grayscale distribution of the candidate areas within the runway extraction results. On four typical remote-sensing images and four post-damage remote-sensing images, the average integrity of the runway extraction reaches more than 90%. Comparative experiments show that both the extraction effect and the running speed of our method are better than those of state-of-the-art methods. In addition, the final crater extraction results show that the proposed method can effectively extract craters of airport runways, with precision and recall both above 80%. Overall, our research is of great significance to the damage assessment of airport runways based on remote-sensing images in the military field.
Journal Article