Asset Details
Progressive Self-Prompting Segment Anything Model for Salient Object Detection in Optical Remote Sensing Images
by Li, Daqun; Wang, Yuqing; Yu, Yi; Zhang, Xiaoning
in Algorithms / Artificial neural networks / Decoders / Deep learning / Design / domain-specific prompting module / Feature extraction / Image retrieval / Information processing / Localization / Modules / Neural networks / Object recognition / optical remote sensing images / parameter-efficient fine-tuning / progressive self-prompting decoder module / Prompt engineering / Remote sensing / Salience / salient object detection / Segment Anything Model / Segments / Semantics
2025
Journal Article
Overview
With the continuous advancement of deep neural networks, salient object detection (SOD) in natural images has made significant progress. However, SOD in optical remote sensing images (ORSI-SOD) remains a challenging task due to the diversity of objects and the complexity of backgrounds. The primary challenge lies in generating robust features that can effectively integrate both global semantic information for salient object localization and local spatial details for boundary reconstruction. Most existing ORSI-SOD methods rely on pre-trained CNN- or Transformer-based backbones to extract features from ORSIs, followed by multi-level feature aggregation. Given the significant differences between ORSIs and the natural images used in pre-training, the generalization capability of these backbone networks is often limited, resulting in suboptimal performance. Recently, prompt engineering has been employed to enhance the generalization ability of networks in the Segment Anything Model (SAM), an emerging vision foundation model that has achieved remarkable success across various tasks. Despite its success, directly applying the SAM to ORSI-SOD without prompts from manual interaction remains unsatisfactory. In this paper, we propose a novel progressive self-prompting model based on the SAM, termed PSP-SAM, which generates both internal and external prompts to enhance the network and overcome the limitations of SAM in ORSI-SOD. Specifically, domain-specific prompting modules, consisting of both block-shared and block-specific adapters, are integrated into the network to learn domain-specific visual prompts within the backbone, facilitating its adaptation to ORSI-SOD. Furthermore, we introduce a progressive self-prompting decoder module that performs prompt-guided multi-level feature integration and generates stage-wise mask prompts progressively, enabling the prompt-based mask decoders outside the backbone to predict saliency maps in a coarse-to-fine manner. The entire network is trained end-to-end with parameter-efficient fine-tuning. Extensive experiments on three benchmark ORSI-SOD datasets demonstrate that our proposed network achieves state-of-the-art performance.
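The abstract describes freezing the pre-trained backbone and inserting block-shared and block-specific adapters so that only a small number of parameters are trained. The sketch below illustrates that general pattern in PyTorch; the class names, bottleneck size, and wiring are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of adapter-based parameter-efficient fine-tuning, in
# the spirit of PSP-SAM's domain-specific prompting modules. All names and
# dimensions here are assumptions for illustration only.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""

    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class PromptedBlock(nn.Module):
    """A frozen backbone block wrapped with a block-shared and a
    block-specific adapter; only the adapters receive gradients."""

    def __init__(self, block: nn.Module, shared: Adapter, dim: int):
        super().__init__()
        self.block = block
        for p in self.block.parameters():
            p.requires_grad = False  # backbone stays frozen
        self.shared = shared          # one instance reused across blocks
        self.specific = Adapter(dim)  # unique to this block

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.block(x)
        return self.specific(self.shared(x))


dim = 32
shared = Adapter(dim)  # block-shared adapter, reused by every block
blocks = nn.ModuleList(
    PromptedBlock(nn.Linear(dim, dim), shared, dim) for _ in range(2)
)

x = torch.randn(4, dim)
for b in blocks:
    x = b(x)
print(x.shape)  # shape is preserved by the residual adapters
```

Because the backbone weights have `requires_grad=False`, an optimizer built over `blocks.parameters()` updates only the adapter weights, which is what makes the fine-tuning parameter-efficient.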
Publisher
MDPI AG