Asset Details
Action Recognition Using Close-Up of Maximum Activation and ETRI-Activity3D LivingLab Dataset
by Kim, Dohyung; Lee, Inwoong; Lee, Sanghoon; Kim, Doyoung
in Accuracy / action recognition / Classification / dataset shift / Datasets / Deep learning / Localization / self-attention map
2021
Journal Article
Overview
Action recognition models have achieved strong performance on various video datasets. Nevertheless, because existing datasets lack rich data on target actions, they are insufficient for the action recognition applications required by industry. To satisfy this requirement, datasets composed of highly available target actions have been created, but because their video data are generated in a specific environment, it is difficult for them to capture the varied characteristics of actual environments. In this paper, we introduce a new ETRI-Activity3D-LivingLab dataset, which provides action sequences in actual environments and helps address the network generalization issue caused by dataset shift. When an action recognition model is trained on the ETRI-Activity3D and KIST SynADL datasets and evaluated on the ETRI-Activity3D-LivingLab dataset, performance can degrade severely because the datasets were captured in different environments, i.e., different domains. To reduce this dataset shift between training and testing datasets, we propose a close-up of maximum activation, which magnifies the most activated part of a video input in detail. In addition, we present various experimental results and analyses that illustrate the dataset shift and demonstrate the effectiveness of the proposed method.
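The core idea of a "close-up of maximum activation" can be illustrated with a minimal sketch: locate the most activated cell of a network's feature map, map that location back to input-frame coordinates, and crop a magnified window around it. The function below is a hypothetical simplification (the paper's exact procedure is not given here); the function name, `crop_size` parameter, and coordinate mapping are assumptions for illustration.

```python
import numpy as np

def close_up_of_max_activation(frame, activation_map, crop_size=112):
    """Crop a square close-up of `frame` centred on the most activated
    location of `activation_map`.

    frame:          (H, W, C) input image.
    activation_map: (h, w) feature map, typically coarser than the frame.
    """
    H, W = frame.shape[:2]
    h, w = activation_map.shape
    # Index of the most activated cell in the feature map.
    iy, ix = np.unravel_index(np.argmax(activation_map), (h, w))
    # Map the cell centre back to input-frame coordinates.
    cy = int((iy + 0.5) * H / h)
    cx = int((ix + 0.5) * W / w)
    half = crop_size // 2
    # Clamp the crop window so it stays fully inside the frame.
    y0 = min(max(cy - half, 0), H - crop_size)
    x0 = min(max(cx - half, 0), W - crop_size)
    return frame[y0:y0 + crop_size, x0:x0 + crop_size]
```

Applied per frame of a video clip, such a crop yields a zoomed-in view of the region the network attends to, which can then be fed back to the recognition model.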
Publisher
MDPI AG