Asset Details
A Dual Multi-Head Contextual Attention Network for Hyperspectral Image Classification
by Liang, Miaomiao; Yu, Xiangchun; He, Qinghua; Meng, Zhe; Jiao, Licheng; Wang, Huai
in Classification / Computation / contextual keys / Domains / dual attention / grouping perception / hyperspectral image classification / hyperspectral imagery / Hyperspectral imaging / image analysis / Image classification / Mathematical models / multi-head self-attention / Neighborhoods / Neural networks / Optimization / Parameters / Queries / Remote sensing / Sampling
2022
Journal Article
Overview
Hyperspectral images (HSIs) contain 3-D cube data, so capturing multi-head self-attention from both the spatial and spectral domains is a preferable means of learning discriminative features, provided the burden on model optimization and computation stays low. In this paper, we design a dual multi-head contextual self-attention (DMuCA) network for HSI classification with the fewest possible parameters and low computation costs. To effectively capture rich contextual dependencies from both domains, we decouple the spatial and spectral contextual attention into two sub-blocks, SaMCA and SeMCA, where depth-wise convolution is employed to contextualize the input keys within each pure domain. Thereafter, multi-head local attention is implemented as group processing, with the keys alternately concatenated with the queries. In particular, in the SeMCA block, we group the spatial pixels by even sampling and create multi-head channel attention on each sampling set, to reduce the number of training parameters and avoid a storage increase. In addition, the static contextual keys are fused with the dynamic attentional features in each block to strengthen the model's capacity for data representation. Finally, the decoupled sub-blocks are weighted and summed together for 3-D attention perception of the HSI. The DMuCA module is then plugged into a ResNet to perform HSI classification. Extensive experiments demonstrate that our proposed DMuCA achieves excellent results over several state-of-the-art attention mechanisms with the same backbone.
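The overview describes two decoupled attention paths whose outputs are mixed. The sketch below is an illustrative PyTorch reading of that description, not the authors' implementation: the block names (SaMCA, SeMCA, DMuCA) come from the abstract, while every concrete choice here (depth-wise kernel size, head count, the even-sampling stride, sigmoid gating, and the learnable mixing weight) is an assumption made only to give a runnable example.

# Illustrative sketch only: assumed input is a (batch, channels, height, width)
# feature map extracted from an HSI patch.
import torch
import torch.nn as nn


class SaMCA(nn.Module):
    """Spatial multi-head contextual attention (assumed structure)."""

    def __init__(self, channels, heads=4, kernel_size=3):
        super().__init__()
        # Depth-wise convolution contextualizes the keys in the spatial domain.
        self.context_key = nn.Conv2d(channels, channels, kernel_size,
                                     padding=kernel_size // 2, groups=channels)
        self.value = nn.Conv2d(channels, channels, 1)
        # Grouped 1x1 convolutions stand in for per-head "group processing"
        # over the alternately interleaved query/key channels.
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 1, groups=heads),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1, groups=heads),
        )

    def forward(self, x):
        k_static = self.context_key(x)                  # static contextual keys
        v = self.value(x)
        # Interleave query and contextual-key channels ("alternately concatenated").
        qk = torch.stack([x, k_static], dim=2).flatten(1, 2)
        dynamic = torch.sigmoid(self.attn(qk)) * v      # dynamic attentional features
        return dynamic + k_static                       # fuse static and dynamic parts


class SeMCA(nn.Module):
    """Spectral multi-head channel attention over evenly sampled pixel sets."""

    def __init__(self, channels, heads=4, stride=2):
        super().__init__()
        self.heads = heads
        self.stride = stride                            # even spatial sampling factor
        self.fc = nn.Sequential(                        # shared per-head channel MLP
            nn.Linear(channels // heads, channels // (2 * heads)),
            nn.ReLU(inplace=True),
            nn.Linear(channels // (2 * heads), channels // heads),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        s = self.stride
        out = torch.empty_like(x)
        # Each evenly sampled pixel set gets its own channel-attention weights,
        # computed head-by-head from that set's mean channel descriptor.
        for di in range(s):
            for dj in range(s):
                xs = x[:, :, di::s, dj::s]              # one sampling set
                desc = xs.mean(dim=(2, 3)).view(b * self.heads, c // self.heads)
                w_ch = torch.sigmoid(self.fc(desc)).view(b, c, 1, 1)
                out[:, :, di::s, dj::s] = xs * w_ch + xs  # dynamic + static fusion
        return out


class DMuCA(nn.Module):
    """Weighted sum of the decoupled spatial and spectral sub-blocks."""

    def __init__(self, channels, heads=4):
        super().__init__()
        self.spatial = SaMCA(channels, heads)
        self.spectral = SeMCA(channels, heads)
        self.alpha = nn.Parameter(torch.tensor(0.5))    # learnable mixing weight

    def forward(self, x):
        return self.alpha * self.spatial(x) + (1.0 - self.alpha) * self.spectral(x)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 5, 5)                    # toy HSI patch features
    print(DMuCA(64)(feats).shape)                       # torch.Size([2, 64, 5, 5])

In this reading, the grouped 1x1 convolutions and the per-head channel MLP approximate the paper's "group processing" of multi-head attention; the actual layer configuration, normalization, and how the module is inserted into the ResNet backbone would follow the paper itself.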
Publisher
MDPI AG