Asset Details
LongVideoBench: A Benchmark for Long-context Interleaved Video-Language Understanding
by Wu, Haoning; Li, Dongxu; Li, Junnan; Chen, Bei
in Benchmarks / Context / Information retrieval / Multiple choice / Performance evaluation / Questions / Reasoning
2024
Paper
Overview
Large multimodal models (LMMs) are processing increasingly longer and richer inputs. Despite this progress, few public benchmarks are available to measure such development. To close this gap, we introduce LongVideoBench, a question-answering benchmark that features video-language interleaved inputs up to an hour long. Our benchmark includes 3,763 web-collected videos of varying length, with their subtitles, across diverse themes, designed to comprehensively evaluate LMMs on long-term multimodal understanding. We interpret the primary challenge as accurately retrieving and reasoning over detailed multimodal information from long inputs. Accordingly, we formulate a novel video question-answering task termed referring reasoning: the question contains a referring query that references related video contexts, called the referred context, and the model is then required to reason over relevant video details from that referred context. Following this paradigm, we curate 6,678 human-annotated multiple-choice questions in 17 fine-grained categories, establishing one of the most comprehensive benchmarks for long-form video understanding. Evaluations show that LongVideoBench presents significant challenges even for the most advanced proprietary models (e.g. GPT-4o, Gemini-1.5-Pro, GPT-4-Turbo), while their open-source counterparts show an even larger performance gap. In addition, our results indicate that performance on the benchmark improves only when models are capable of processing more frames, positioning LongVideoBench as a valuable benchmark for evaluating future-generation long-context LMMs.
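The referring-reasoning setup described above reduces, at scoring time, to ordinary multiple-choice accuracy. The sketch below illustrates that scoring loop with made-up records; the field names (`question`, `options`, `answer`) and the example questions are illustrative assumptions, not the benchmark's actual schema.

```python
# Hypothetical sketch of scoring LongVideoBench-style multiple-choice
# questions. Record fields and contents are invented for illustration.

def accuracy(records, predictions):
    """Fraction of questions where the predicted option letter matches."""
    correct = sum(1 for r, p in zip(records, predictions) if r["answer"] == p)
    return correct / len(records)

questions = [
    {
        # The referring query ("In the scene where ...") points the model
        # at a span of the video -- the referred context.
        "question": "In the scene where the presenter holds up a red mug, "
                    "what text is printed on it?",
        "options": ["A. Hello", "B. Coffee", "C. 2024", "D. Video"],
        "answer": "B",
    },
    {
        "question": "After the whiteboard diagram is erased, which topic "
                    "does the speaker discuss next?",
        "options": ["A. Datasets", "B. Pricing", "C. Evaluation", "D. Ethics"],
        "answer": "C",
    },
]

preds = ["B", "A"]  # one correct, one wrong
print(accuracy(questions, preds))  # → 0.5
```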
Publisher
Cornell University Library, arXiv.org