Asset Details
BenchBrowser: Retrieving Evidence for Evaluating Benchmark Validity
by Yauney, Gregory; Swayamdipta, Swabha; Diddee, Harshita; Ippolito, Daphne
in Benchmarks, 2026
Paper
Overview
Do language model benchmarks actually measure what practitioners intend them to? High-level metadata is too coarse to convey the granular reality of benchmarks: a "poetry" benchmark may never test for haikus, while "instruction-following" benchmarks often test for an arbitrary mix of skills. This opacity makes verifying alignment with practitioner goals a laborious process, risking an illusion of competence even when models fail on untested facets of user interests. We introduce BenchBrowser, a retriever that surfaces evaluation items relevant to natural language use cases over 20 benchmark suites. Validated by a human study confirming high retrieval precision, BenchBrowser generates evidence to help practitioners diagnose low content validity (narrow coverage of a capability's facets) and low convergent validity (lack of stable rankings when measuring the same capability). BenchBrowser thus helps quantify a critical gap between practitioner intent and what benchmarks actually test.
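To make the idea of retrieving evaluation items for a use case concrete, here is a minimal sketch assuming a dense-retrieval setup over embedded benchmark items. The item pool, suite names, and encoder choice are all hypothetical; the abstract does not specify BenchBrowser's actual retrieval method.

```python
# Minimal sketch of dense retrieval over benchmark items (illustrative only;
# not BenchBrowser's actual method, which this record does not describe).
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical pool of evaluation items drawn from two benchmark suites.
benchmark_items = [
    ("poetry-bench", "Write a sonnet about the changing seasons."),
    ("poetry-bench", "Continue this free-verse poem in the same style."),
    ("instr-follow", "Summarize the article in exactly three sentences."),
    ("instr-follow", "Translate the paragraph into French, preserving tone."),
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose encoder

# Embed items once; normalized vectors make dot product equal cosine similarity.
item_texts = [text for _, text in benchmark_items]
item_vecs = model.encode(item_texts, normalize_embeddings=True)

def retrieve(use_case: str, k: int = 2):
    """Return the k benchmark items most similar to a natural-language use case."""
    query_vec = model.encode([use_case], normalize_embeddings=True)[0]
    scores = item_vecs @ query_vec
    top = np.argsort(-scores)[:k]
    return [(benchmark_items[i][0], item_texts[i], float(scores[i])) for i in top]

# A practitioner checks whether the "poetry" suite covers haikus at all.
for suite, text, score in retrieve("evaluate whether a model can write haikus"):
    print(f"{score:.3f}  [{suite}]  {text}")
```

If none of the top-ranked items for a use case like "write haikus" actually involve haikus, that is the kind of content-validity gap the abstract describes.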
Publisher
Cornell University Library, arXiv.org