A Simple Recipe for Contrastively Pre-training Video-First Encoders Beyond 16 Frames
by Koppula, Skanda; Patraucean, Viorica; Chiu, Justin; Pathak, Shreya; Zisserman, Andrew; Papalampidi, Pinelopi; Nematzadeh, Aida; Shen, Jiajun; Heyward, Joe; Miech, Antoine
in Coders / Frames (data processing) / Masking / Parameters / Video
2024
Paper
Overview
Understanding long, real-world videos requires modeling of long-range visual dependencies. To this end, we explore video-first architectures, building on the common paradigm of transferring large-scale, image-text models to video via shallow temporal fusion. However, we expose two limitations to the approach: (1) decreased spatial capabilities, likely due to poor video-language alignment in standard video datasets, and (2) higher memory consumption, bottlenecking the number of frames that can be processed. To mitigate the memory bottleneck, we systematically analyze the memory/accuracy trade-off of various efficient methods: factorized attention, parameter-efficient image-to-video adaptation, input masking, and multi-resolution patchification. Surprisingly, simply masking large portions of the video (up to 75%) during contrastive pre-training proves to be one of the most robust ways to scale encoders to videos up to 4.3 minutes at 1 FPS. Our simple approach for training long video-to-text models, which scales to 1B parameters, does not add new architectural complexity and is able to outperform the popular paradigm of using much larger LLMs as an information aggregator over segment-based information on benchmarks with long-range temporal dependencies (YouCook2, EgoSchema).
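The masking recipe described above amounts to randomly dropping a large fraction of video tokens before they enter the encoder, shrinking the attention cost so longer clips fit in memory. The sketch below illustrates the idea only; the function name, token shapes, and 75% ratio as a token-drop are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mask_video_tokens(tokens, mask_ratio=0.75, rng=None):
    """Randomly keep (1 - mask_ratio) of the video tokens.

    tokens: (num_tokens, dim) array of patch embeddings (illustrative).
    Returns the kept tokens and their original indices, in order.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    num_tokens = tokens.shape[0]
    num_keep = max(1, int(round(num_tokens * (1.0 - mask_ratio))))
    keep = rng.permutation(num_tokens)[:num_keep]
    keep.sort()  # preserve the temporal/spatial order of survivors
    return tokens[keep], keep

# Hypothetical example: a 4.3-minute clip at 1 FPS (258 frames),
# 16 patch tokens per frame, 768-dim embeddings.
tokens = np.zeros((258 * 16, 768))
kept, idx = mask_video_tokens(tokens, mask_ratio=0.75)
print(kept.shape)  # (1032, 768)
```

With 75% masking, the encoder sees only a quarter of the tokens, which is what lets sequence length scale to minutes-long videos without architectural changes.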
Publisher
Cornell University Library, arXiv.org
Subject
Coders / Frames (data processing) / Masking / Parameters / Video