Asset Details
Describing Upper-Body Motions Based on Labanotation for Learning-from-Observation Robots
by Ikeuchi, Katsushi; Nakamura, Minako; Kudoh, Shunsuke; Ma, Zhaoyuan; Yan, Zengqiang
in Automation / Hardware / Humanoid / Robots
2018
Journal Article
Overview
We have been developing a paradigm, which we call learning-from-observation, in which a robot automatically acquires a program for a series of operations, that is, understands what to do, by observing humans performing the same operations. Simply mimicking exact joint angles or exact end-effector trajectories does not work well because of the kinematic and dynamic differences between a human and a robot, so the proposed method instead employs intermediate symbolic representations, called tasks, that conceptually capture what-to-do from observation. These tasks are subsequently mapped to appropriate robot operations depending on the robot hardware. In the present work, task models for upper-body operations of humanoid robots are presented, designed on the basis of Labanotation. Given a series of human operations, we first analyze the upper-body motions and extract certain fixed poses from key frames. These key poses are translated into tasks represented by Labanotation symbols. A robot then performs the operations corresponding to those task models. Because tasks based on Labanotation are independent of robot hardware, different robots can share the same observation module; only the task-mapping modules specific to each robot's hardware differ. The system was implemented, and it was demonstrated that three different robots can automatically mimic human upper-body operations with a satisfactory level of resemblance.
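The core idea in the abstract, translating an observed limb pose into a hardware-independent Labanotation symbol, can be sketched as a simple quantization step. The snippet below is a minimal illustration, not the paper's implementation: the direction names, angular thresholds, and coordinate convention are assumptions chosen to mirror Labanotation's coarse direction-plus-level vocabulary.

```python
import math

# Hypothetical symbol vocabulary (assumed for illustration):
# 8 horizontal directions plus "place", each at one of 3 levels.
DIRECTIONS = ["forward", "right-forward", "right", "right-backward",
              "backward", "left-backward", "left", "left-forward"]

def to_labanotation(dx, dy, dz):
    """Quantize a unit limb-direction vector (x = right, y = forward,
    z = up) into a coarse (direction, level) Labanotation-style symbol."""
    # Level from the elevation angle: high / middle / low.
    elevation = math.degrees(math.asin(max(-1.0, min(1.0, dz))))
    if elevation > 30:
        level = "high"
    elif elevation < -30:
        level = "low"
    else:
        level = "middle"
    # Direction: "place" if nearly vertical, otherwise the nearest
    # 45-degree horizontal bin (0 degrees = forward, clockwise).
    if math.hypot(dx, dy) < 0.2:
        return ("place", level)
    azimuth = math.degrees(math.atan2(dx, dy)) % 360.0
    index = int((azimuth + 22.5) // 45) % 8
    return (DIRECTIONS[index], level)
```

Because the output symbols carry no joint angles or trajectories, a stream of such symbols extracted at key frames can be handed to any robot's task-mapping module, which is the hardware-independence property the abstract emphasizes.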
Publisher
Springer Nature B.V.
Subject
Automation / Hardware / Humanoid / Robots