Asset Details
Building Extraction in Very High Resolution Imagery by Dense-Attention Networks
by Xu, Yongyang; Wang, Biao; Wu, Yanlan; Wu, Penghai; Yao, Xuedong; Yang, Hui
in Algorithms / attention mechanism / building extraction / Buildings / Coders / Deep learning / Disaster management / Emergency preparedness / Feature maps / High resolution / Image resolution / imagery / Machine learning / Networks / Neural networks / Pattern recognition / Photogrammetry / Remote sensing / Response time / Semantics / Urban planning / very high resolution
2018
Journal Article
Overview
Building extraction from very high resolution (VHR) imagery plays an important role in urban planning, disaster management, navigation, updating geographic databases, and several other geospatial applications. Compared with traditional building extraction approaches, deep learning networks have recently shown outstanding performance in this task by using both high-level and low-level feature maps. However, it is difficult to rationally exploit features from different levels with present deep learning networks. To tackle this problem, a novel network based on DenseNets and the attention mechanism was proposed, called the dense-attention network (DAN). The DAN contains an encoder part and a decoder part, which are composed of lightweight DenseNets and a spatial attention fusion module, respectively. The proposed encoder–decoder architecture strengthens feature propagation and effectively uses higher-level feature information to suppress low-level features and noise. Experimental results on the public International Society for Photogrammetry and Remote Sensing (ISPRS) datasets, using only red–green–blue (RGB) images, demonstrated that the proposed DAN achieved higher scores (96.16% overall accuracy (OA), 92.56% F1 score, and 90.56% mean intersection over union (MIOU)), less training and response time, and higher quality than other deep learning methods.
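The spatial attention fusion idea in the abstract — using higher-level feature information to gate and suppress low-level features — can be sketched in a minimal toy form. This is an illustrative NumPy sketch, not the authors' implementation: the function name `spatial_attention_fusion` and the 1×1-convolution-as-weight-vector simplification are assumptions for demonstration only.

```python
import numpy as np


def sigmoid(x):
    """Elementwise logistic function, mapping scores to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))


def spatial_attention_fusion(low_feat, high_feat, w):
    """Toy spatial attention fusion (illustrative, not the paper's exact module).

    low_feat, high_feat: arrays of shape (C, H, W) from shallow/deep layers.
    w: (C,) weight vector standing in for a learned 1x1 convolution.
    """
    # Per-pixel attention map derived from the high-level features:
    # mix channels with w, then squash to [0, 1] with a sigmoid.
    attn = sigmoid(np.einsum('chw,c->hw', high_feat, w))  # shape (H, W)
    # Suppress low-level responses wherever high-level evidence is weak,
    # then fuse with the high-level features themselves.
    fused = low_feat * attn[None, :, :] + high_feat
    return fused


# Usage on random features: output keeps the (C, H, W) shape.
rng = np.random.default_rng(0)
low = rng.normal(size=(8, 4, 4))
high = rng.normal(size=(8, 4, 4))
w = rng.normal(size=8)
out = spatial_attention_fusion(low, high, w)
print(out.shape)
```

In the real network the attention map would be produced by learned convolutions inside the decoder; here a single weight vector stands in so the gating mechanism itself is visible.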