Asset Details
Object Priors for Classifying and Localizing Unseen Actions
by
Mettes, Pascal
, Snoek, Cees G. M.
, Thong, William
in
Image classification
/ Localization
/ Matching
/ Object recognition
/ Semantics
/ Tubes
/ Video
2021
Journal Article
Overview
This work strives for the classification and localization of human actions in videos, without the need for any labeled video training examples. Where existing work relies on transferring global attribute or object information from seen to unseen action videos, we seek to classify and spatio-temporally localize unseen actions in videos from image-based object information only. We propose three spatial object priors, which encode local person and object detectors along with their spatial relations. On top we introduce three semantic object priors, which extend semantic matching through word embeddings with three simple functions that tackle semantic ambiguity, object discrimination, and object naming. A video embedding combines the spatial and semantic object priors. It enables us to introduce a new video retrieval task that retrieves action tubes in video collections based on user-specified objects, spatial relations, and object size. Experimental evaluation on five action datasets shows the importance of spatial and semantic object priors for unseen actions. We find that persons and objects have preferred spatial relations that benefit unseen action localization, while using multiple languages and simple object filtering directly improves semantic matching, leading to state-of-the-art results for both unseen action classification and localization.
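To make the semantic-matching idea in the abstract concrete, the following is a minimal sketch of scoring an unseen action by comparing its name against the names of objects detected in a video through word embeddings. The toy embedding vectors, object names, and detection confidences are placeholder assumptions for illustration only; the paper's actual priors additionally handle semantic ambiguity, object discrimination, object naming, and spatial person-object relations.

# Minimal sketch (assumed setup): zero-shot action scoring via word-embedding
# matching between an action name and objects detected in a video.
# The embeddings and detection scores below are placeholders, not the paper's data.
import numpy as np

# Hypothetical word embeddings (in practice these would come from a pretrained
# word-embedding model such as word2vec or GloVe).
EMBEDDINGS = {
    "basketball": np.array([0.90, 0.10, 0.00]),
    "hoop":       np.array([0.80, 0.20, 0.10]),
    "horse":      np.array([0.00, 0.90, 0.10]),
    "dunking":    np.array([0.85, 0.15, 0.05]),
    "riding":     np.array([0.10, 0.80, 0.20]),
}

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def action_score(action, detected_objects):
    # Score an unseen action by matching its name against detected object names.
    # detected_objects: dict mapping object name -> detection confidence,
    # aggregated over the video's frames.
    scores = [
        conf * cosine(EMBEDDINGS[action], EMBEDDINGS[obj])
        for obj, conf in detected_objects.items()
        if obj in EMBEDDINGS
    ]
    return max(scores) if scores else 0.0

# Example: object detections aggregated over one video.
detections = {"basketball": 0.92, "hoop": 0.74, "horse": 0.03}
for action in ("dunking", "riding"):
    print(action, round(action_score(action, detections), 3))

Running this sketch ranks "dunking" above "riding" for a video dominated by basketball and hoop detections, which illustrates the object-to-action transfer the abstract refers to, without any labeled action videos.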
Publisher
Springer Nature B.V.