Asset Details
A multi-modal learning method for pick-and-place task based on human demonstration
by Fan, Xinggang; Li, Yaonan; Chen, Heping; Yu, Diqing; Li, Han; Jin, Yuao
in Assembly lines / Behavior / Datasets / Decision making / Deep learning / Design / Industrial robots / Language / Learning / Pick and place tasks / Robots / Simulation / Video
2024
Journal Article
Overview
Robot pick-and-place for unknown objects remains a very challenging research topic. This paper proposes a multi-modal learning method for robot one-shot imitation of pick-and-place tasks, aiming to enhance the generality of industrial robots while reducing the amount of data and the training cost that one-shot imitation methods rely on. The method first categorizes human demonstration videos into different tasks, which are grouped into six types so as to represent as many kinds of pick-and-place tasks as possible. It then generates multi-modal prompts and finally predicts the robot's action, completing the symbolic pick-and-place task in industrial production. A carefully curated dataset is created to complement the method; it consists of human demonstration videos and instance images focused on real-world scenes and industrial tasks, which fosters adaptable and efficient learning. Experimental results show favorable success rates and loss values in both simulation environments and real-world experiments, confirming the method's effectiveness and practicality.
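
The abstract describes a three-stage pipeline: classify the human demonstration video into one of six task types, assemble a multi-modal prompt from the demonstration and the instance images, and predict the robot's pick-and-place action. The sketch below illustrates only that structure; every name in it (TaskType, MultiModalPrompt, classify_demo, build_prompt, predict_action, the six category labels, and the pose format) is a placeholder assumption, not the paper's actual models or interface.

# Minimal structural sketch of the pipeline described in the abstract.
# All names and category labels here are hypothetical placeholders.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Sequence, Tuple

class TaskType(Enum):
    """Six symbolic pick-and-place task categories (illustrative labels only)."""
    PICK_PLACE_SINGLE = auto()
    PICK_PLACE_MULTI = auto()
    STACK = auto()
    SORT = auto()
    INSERT = auto()
    REARRANGE = auto()

@dataclass
class MultiModalPrompt:
    """Prompt combining a text instruction, demonstration frames, and instance images."""
    task_type: TaskType
    instruction: str
    demo_frames: Sequence[bytes]      # encoded frames from the human demonstration video
    instance_images: Sequence[bytes]  # images of the object instances to manipulate

@dataclass
class PickPlaceAction:
    """A predicted pick pose and place pose, simplified to (x, y, z, yaw)."""
    pick_pose: Tuple[float, float, float, float]
    place_pose: Tuple[float, float, float, float]

def classify_demo(demo_frames: Sequence[bytes]) -> TaskType:
    """Stage 1: map a demonstration video to one of the six task types.
    Placeholder body; a real system would use a learned video classifier."""
    return TaskType.PICK_PLACE_SINGLE

def build_prompt(task_type: TaskType,
                 demo_frames: Sequence[bytes],
                 instance_images: Sequence[bytes]) -> MultiModalPrompt:
    """Stage 2: assemble the multi-modal prompt for the action-prediction model."""
    instruction = f"Imitate the demonstrated {task_type.name.lower()} task."
    return MultiModalPrompt(task_type, instruction, demo_frames, instance_images)

def predict_action(prompt: MultiModalPrompt) -> PickPlaceAction:
    """Stage 3: predict the robot action from the prompt.
    Placeholder body; stands in for the paper's learned policy."""
    return PickPlaceAction(pick_pose=(0.0, 0.0, 0.0, 0.0),
                           place_pose=(0.0, 0.0, 0.0, 0.0))

def one_shot_imitate(demo_frames: Sequence[bytes],
                     instance_images: Sequence[bytes]) -> PickPlaceAction:
    """End to end: classify the demonstration, build the prompt, predict the action."""
    task_type = classify_demo(demo_frames)
    prompt = build_prompt(task_type, demo_frames, instance_images)
    return predict_action(prompt)

In a real implementation the placeholder bodies of classify_demo and predict_action would be replaced by the learned video classifier and action-prediction model; the point of the structure is that the prompt carries both the inferred task category and the raw visual evidence forward to the policy.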