Asset Details
Clink! Chop! Thud! -- Learning Object Sounds from Real-World Interactions
by Yang, Mengyu; Chen, Yiming; Agarwal, Siddhant; Pei, Haozheng; Vasudevan, Arun Balajee; Hays, James
in Carpets
2025
Paper
Overview
Can a model distinguish between the sound of a spoon hitting a hardwood floor versus a carpeted one? Everyday object interactions produce sounds unique to the objects involved. We introduce the sounding object detection task to evaluate a model's ability to link these sounds to the objects directly involved. Inspired by human perception, our multimodal object-aware framework learns from in-the-wild egocentric videos. To encourage an object-centric approach, we first develop an automatic pipeline that computes segmentation masks of the objects involved, guiding the model's focus during training towards the most informative regions of the interaction. A slot attention visual encoder is used to further enforce an object prior. We demonstrate state-of-the-art performance on our new task along with existing multimodal action understanding tasks.
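The overview names a slot attention visual encoder as the component that enforces an object prior, i.e. that groups patch features into per-object representations. As a rough illustration of how such an encoder works, below is a minimal PyTorch sketch of standard slot attention (Locatello et al., 2020); all module names, dimensions, slot counts, and iteration counts are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn

class SlotAttention(nn.Module):
    """Minimal slot attention sketch (after Locatello et al., 2020).

    Hyperparameters below are illustrative, not taken from the paper.
    """

    def __init__(self, num_slots=7, dim=64, iters=3, eps=1e-8):
        super().__init__()
        self.num_slots = num_slots
        self.iters = iters
        self.eps = eps
        self.scale = dim ** -0.5

        # Slots are sampled from a learned Gaussian at each forward pass.
        self.slots_mu = nn.Parameter(torch.randn(1, 1, dim))
        self.slots_logsigma = nn.Parameter(torch.zeros(1, 1, dim))

        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)

        self.gru = nn.GRUCell(dim, dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * 2), nn.ReLU(), nn.Linear(dim * 2, dim)
        )

        self.norm_inputs = nn.LayerNorm(dim)
        self.norm_slots = nn.LayerNorm(dim)
        self.norm_mlp = nn.LayerNorm(dim)

    def forward(self, inputs):
        # inputs: (batch, num_patches, dim) visual patch features.
        b, n, d = inputs.shape
        inputs = self.norm_inputs(inputs)
        k, v = self.to_k(inputs), self.to_v(inputs)

        mu = self.slots_mu.expand(b, self.num_slots, -1)
        sigma = self.slots_logsigma.exp().expand(b, self.num_slots, -1)
        slots = mu + sigma * torch.randn_like(mu)

        for _ in range(self.iters):
            slots_prev = slots
            q = self.to_q(self.norm_slots(slots))

            # Softmax over slots makes slots compete for input patches --
            # this competition is what yields an object-centric grouping.
            attn = torch.einsum('bnd,bsd->bns', k, q) * self.scale
            attn = attn.softmax(dim=-1) + self.eps
            attn = attn / attn.sum(dim=1, keepdim=True)  # weighted mean over patches

            updates = torch.einsum('bns,bnd->bsd', attn, v)
            slots = self.gru(
                updates.reshape(-1, d), slots_prev.reshape(-1, d)
            ).reshape(b, -1, d)
            slots = slots + self.mlp(self.norm_mlp(slots))

        return slots  # (batch, num_slots, dim): one vector per candidate object

# Example: group 196 patch features (e.g. a 14x14 ViT grid) into 7 slots.
feats = torch.randn(2, 196, 64)
slots = SlotAttention()(feats)  # -> shape (2, 7, 64)

In a framework like the one described, the resulting per-slot vectors could then be matched against audio embeddings so that the sound of an interaction is attributed to the slot covering the sounding object; that pairing step is an assumption here, as the abstract does not spell out the training objective.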
Publisher
Cornell University Library, arXiv.org
Subject
Carpets