Asset Details
An Underwater Human–Robot Interaction Using a Visual–Textual Model for Autonomous Underwater Vehicles
by Qi, Hong; Wei, Fenglin; Wang, Yuehang; Zhang, Yongji; Wang, Kai; Zhao, Minghao; Jiang, Yu
Journal Article, 2022
Subjects: Accuracy / Algorithms / autonomous underwater vehicle / Autonomous underwater vehicles / Classification / Communication / Datasets / gesture recognition / Gestures / Human-computer interaction / Humans / Learning / Object recognition (Computers) / Pattern recognition / Pattern Recognition, Automated - methods / Remote submersibles / Robotics / Robots / Semantics / Supervision / underwater human–robot interaction / Upper Extremity / visual–textual association
Overview
The marine environment presents a unique set of challenges for human–robot interaction. Gestures are a common way for divers to communicate with autonomous underwater vehicles (AUVs). However, underwater gesture recognition is a challenging visual task for AUVs because of light refraction and wavelength-dependent color attenuation. Current gesture recognition methods either classify the whole image directly or locate the hand first and then classify its features; these purely visual approaches largely ignore textual information. This paper proposes a visual–textual model for underwater hand gesture recognition (VT-UHGR). VT-UHGR encodes the diver's image as visual features and the gesture-category text as textual features, and produces joint visual–textual features through multimodal interaction. AUVs then learn and infer through image–text matching. The proposed method outperforms most existing purely visual methods on the CADDY dataset, demonstrating the effectiveness of textual patterns for underwater gesture recognition.
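To make the image–text matching idea concrete, below is a minimal PyTorch sketch of a CLIP-style visual–textual matcher: images and gesture-label texts are embedded into a shared space, and classification reduces to picking the best-matching label text. The encoder architectures, embedding size, and vocabulary are illustrative assumptions for this sketch, not the authors' VT-UHGR implementation.

```python
# Minimal visual-textual matching sketch (assumed CLIP-style setup,
# not the paper's actual VT-UHGR code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualTextualMatcher(nn.Module):
    """Embeds diver images and gesture-label texts into one space;
    classification is image-text similarity, as the abstract describes."""
    def __init__(self, embed_dim: int = 256, vocab_size: int = 1000):
        super().__init__()
        # Hypothetical lightweight image encoder (stand-in for a real backbone).
        self.visual = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Hypothetical text encoder: mean-pooled learned token embeddings.
        self.token_embed = nn.Embedding(vocab_size, embed_dim)

    def forward(self, images: torch.Tensor, label_tokens: torch.Tensor) -> torch.Tensor:
        v = F.normalize(self.visual(images), dim=-1)                      # (B, D)
        t = F.normalize(self.token_embed(label_tokens).mean(dim=1), dim=-1)  # (C, D)
        return v @ t.T  # (B, C) similarity logits over gesture categories

# Usage: score 4 images against 5 gesture-label texts of 3 tokens each.
model = VisualTextualMatcher()
images = torch.randn(4, 3, 64, 64)          # batch of diver images
labels = torch.randint(0, 1000, (5, 3))     # tokenized category texts
logits = model(images, labels)
pred = logits.argmax(dim=1)                  # predicted gesture per image
```

Training such a matcher typically maximizes similarity between each image and its correct label text while minimizing it for the others (e.g., a cross-entropy loss over the similarity logits); the same similarity scores then drive inference.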
Publisher
MDPI AG