Asset Details
BeyondWeb: Lessons from Scaling Synthetic Data for Trillion-scale Pretraining
by Mentzer, Kaleigh; Gaza, Bogdan; Deng, Alvin; DatologyAI; Burstein, Paul; Teh, Darren; Wang, Zhengping; Fang, Alex; Blakeney, Cody; Joshi, Siddharth; Abbas, Amro; Monti, Ricardo; Pan, Fan; Adiga, Rishabh; Baek, Christina; Yin, Haoli; Leavitt, Matthew; Maini, Pratyush; Wills, Josh; Das, Spandan; Morcos, Ari; Larsen, Brett; Dorna, Vineeth; Schwab, David; Mongstad, Haakon; Bannur, Charvi; Merrick, Luke; Doshi, Parth; Urbanek, Jack; Carranza, Aldo
in Datasets / Large language models / Synthetic data
2025
Overview
Recent advances in large language model (LLM) pretraining have shown that simply scaling data quantity eventually leads to diminishing returns, hitting a data wall. In response, the use of synthetic data for pretraining has emerged as a promising paradigm for pushing the frontier of performance. Despite this, the factors affecting synthetic data quality remain poorly understood. In this work, we introduce BeyondWeb, a synthetic data generation framework that produces high-quality synthetic data for pretraining. BeyondWeb significantly extends the capabilities of traditional web-scale datasets, outperforming state-of-the-art synthetic pretraining datasets such as Cosmopedia and Nemotron-CC's high-quality synthetic subset (Nemotron-Synth) by up to 5.1 percentage points (pp) and 2.6pp, respectively, when averaged across a suite of 14 benchmark evaluations. It delivers up to 7.7x faster training than open web data and 2.7x faster than Nemotron-Synth. Remarkably, a 3B model trained for 180B tokens on BeyondWeb outperforms an 8B model trained for the same token budget on Cosmopedia. We also present several insights from BeyondWeb on synthetic data for pretraining: what drives its benefits, which data to rephrase and how, and the impact of model size and family on data quality. Overall, our work shows that there's no silver bullet for generating high-quality synthetic pretraining data. The best outcomes require jointly optimizing many factors, a challenging task that requires rigorous science and practical expertise. Naive approaches can yield modest improvements, potentially at great cost, while well-executed methods can yield transformative improvements, as exemplified by BeyondWeb.
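The overview describes generating synthetic pretraining data by rephrasing existing web documents, and notes that which data to rephrase, and how, drives quality. As a rough, hypothetical sketch of that general idea (not the authors' actual BeyondWeb pipeline, which is not spelled out here), the Python below selects candidate documents with a toy quality heuristic and passes them through a pluggable rewrite function; `quality_score`, `REPHRASE_PROMPT`, and `rephrase_fn` are all illustrative assumptions.

```python
from typing import Callable, Iterable, Iterator

def quality_score(doc: str) -> float:
    """Toy quality heuristic: reward non-repetitive, reasonably long text."""
    lines = [ln.strip() for ln in doc.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    unique_ratio = len(set(lines)) / len(lines)  # penalize repeated lines
    length_bonus = min(len(doc) / 2000, 1.0)     # penalize very short docs
    return unique_ratio * length_bonus

# Hypothetical rewrite instruction; a real pipeline would tune style targets.
REPHRASE_PROMPT = (
    "Rewrite the following web text as a clear, self-contained explanation, "
    "preserving every fact:\n\n{doc}"
)

def rephrase_corpus(
    docs: Iterable[str],
    rephrase_fn: Callable[[str], str],  # wraps whatever generator LLM is used
    min_quality: float = 0.5,
) -> Iterator[str]:
    """Yield synthetic rewrites of the higher-quality source documents."""
    for doc in docs:
        if quality_score(doc) < min_quality:
            continue  # skip low-quality sources rather than rephrasing them
        rewrite = rephrase_fn(REPHRASE_PROMPT.format(doc=doc))
        # Length sanity check: drop degenerate or runaway generations.
        if 0.3 * len(doc) <= len(rewrite) <= 3.0 * len(doc):
            yield rewrite

if __name__ == "__main__":
    # Trivial stand-in "model" (upper-casing) just to exercise the pipeline.
    corpus = ["a varied web document about training data " * 60,
              "spam\nspam\nspam"]
    for synth in rephrase_corpus(corpus, rephrase_fn=str.upper):
        print(synth[:60], "...")
```

In practice the rewrite function would wrap a generator LLM; the overview's findings suggest that the choice of source documents, the rephrasing style, and the generator model's size and family all materially affect the resulting data quality.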
Publisher
Cornell University Library, arXiv.org