Asset Details
Automatic Expert Discovery in LLM Upcycling via Sparse Interpolated Mixture-of-Experts
by Wei, Ying; Chen, Shengzhuang; Schwarz, Jonathan Richard
in Large language models / Mixtures / Tuning
2025
Paper
Overview
We present Sparse Interpolated Mixture-of-Experts (SIMoE) instruction-tuning, an end-to-end algorithm designed to fine-tune a dense pre-trained Large Language Model (LLM) into an MoE-style model with capabilities in multiple specialized domains. During instruction-tuning, SIMoE automatically identifies multiple specialized experts under a specified sparsity constraint, each expert being a structurally sparse subset of the seed LLM's parameters that corresponds to domain-specific knowledge within the data. SIMoE simultaneously learns an input-dependent expert-merging strategy via a router network, leveraging rich cross-expert knowledge for downstream generalization that surpasses existing baselines. Empirically, SIMoE consistently achieves state-of-the-art performance on common instruction-tuning benchmarks while maintaining an optimal performance-compute trade-off compared with all baselines.
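The abstract gives only a high-level description. As a rough illustration of the core idea (structurally sparse per-expert parameter deltas on a frozen seed model, merged per input by a router), the following minimal PyTorch sketch may help; the layer name, the fixed random row mask standing in for the learned sparsity constraint, and the softmax router are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseInterpolatedLinear(nn.Module):
    """Illustrative layer (hypothetical, not the paper's code): a frozen dense
    seed weight plus several expert deltas, each masked to a structurally
    sparse pattern, merged with input-dependent router coefficients."""

    def __init__(self, in_dim, out_dim, num_experts=4, sparsity=0.9):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)   # stands in for a seed-LLM weight, kept frozen
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)

        # One parameter delta per expert. A fixed random row mask is used here
        # as a stand-in for the sparsity constraint learned during tuning.
        self.deltas = nn.Parameter(torch.zeros(num_experts, out_dim, in_dim))
        keep = (torch.rand(num_experts, out_dim, 1) > sparsity).float()
        self.register_buffer("mask", keep)        # whole output rows switched on/off per expert

        # Router producing per-example merging coefficients over experts.
        self.router = nn.Linear(in_dim, num_experts)

    def forward(self, x):
        # x: (batch, in_dim)
        alpha = F.softmax(self.router(x), dim=-1)              # (batch, num_experts)
        sparse_deltas = self.deltas * self.mask                # enforce the sparsity pattern
        # Interpolate expert deltas per example, then add to the frozen base weight.
        merged = torch.einsum("be,eoi->boi", alpha, sparse_deltas)  # (batch, out, in)
        w = self.base.weight.unsqueeze(0) + merged
        return torch.einsum("boi,bi->bo", w, x) + self.base.bias


if __name__ == "__main__":
    layer = SparseInterpolatedLinear(in_dim=16, out_dim=8, num_experts=4)
    out = layer(torch.randn(2, 16))
    print(out.shape)  # torch.Size([2, 8])
```

In the method described above, the experts and their sparsity pattern are identified automatically during instruction-tuning rather than fixed at construction as in this sketch.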
Publisher
Cornell University Library, arXiv.org
Subject
Large language models / Mixtures / Tuning