Asset Details
GC-YOLOv3: You Only Look Once with Global Context Block
by Yang, Yang; Deng, Hongmin
Subjects: Accuracy / Algorithms / Context / Feature extraction / Feature maps / Methods / Neural networks / Object recognition / Semantics / Sensors
2020
Journal Article
Overview
In order to make the classification and regression of single-stage detectors more accurate, this paper proposes an object detection algorithm named Global Context You-Only-Look-Once v3 (GC-YOLOv3), based on the You-Only-Look-Once (YOLO) framework. Firstly, a better cascading model with learnable semantic fusion between the feature extraction network and the feature pyramid network is designed to improve detection accuracy using a global context block. Secondly, the information to be retained is screened by combining feature maps of three different scales. Finally, a global self-attention mechanism is used to highlight the useful information in the feature maps while suppressing irrelevant information. Experiments show that GC-YOLOv3 reaches a maximum of 55.5 mean Average Precision (mAP)@0.5 on the Common Objects in Context (COCO) 2017 test-dev set, and that its mAP is 5.1% higher than that of the YOLOv3 algorithm on the Pascal Visual Object Classes (PASCAL VOC) 2007 test set. These experiments indicate that the proposed GC-YOLOv3 model performs strongly on both the PASCAL VOC and COCO datasets.
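The global context block named in the abstract follows a common pattern: a learned spatial attention map pools the feature map into a single per-channel context vector, a small transform reshapes that vector, and the result is broadcast-added back to every spatial position. The sketch below is a minimal NumPy illustration of that pattern, not the authors' implementation; the weight names (`wk`, `w1`, `w2`) are our own, and the layer normalization step typically used in such blocks is omitted for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_context_block(x, wk, w1, w2):
    """Simplified global context block (illustrative only).

    x:  (C, H, W) input feature map
    wk: (C,)      1x1 context-modelling conv producing one attention logit per position
    w1: (M, C)    first 1x1 transform conv (bottleneck)
    w2: (C, M)    second 1x1 transform conv (back to C channels)
    """
    C, H, W = x.shape
    flat = x.reshape(C, H * W)            # (C, HW)
    attn = softmax(wk @ flat)             # (HW,) spatial attention weights
    context = flat @ attn                 # (C,)  attention-pooled global context
    t = np.maximum(w1 @ context, 0.0)     # (M,)  bottleneck transform + ReLU
    delta = w2 @ t                        # (C,)  channel-wise correction
    return x + delta[:, None, None]       # broadcast-add to every spatial position

# Toy usage: output keeps the input feature-map shape.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3, 3))
y = global_context_block(x, rng.standard_normal(4),
                         rng.standard_normal((2, 4)),
                         rng.standard_normal((4, 2)))
print(y.shape)  # (4, 3, 3)
```

Because the context vector is shared across all positions, the block adds global information at a cost that grows only linearly in H*W, which is why such blocks pair well with single-stage detectors like YOLOv3.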
Publisher
MDPI AG