Catalogue Search | MBRL
Explore the vast range of titles available.
32 result(s) for "McQuade, Mark"
Spectrographic analysis–there is more to see than singer's formant: second formant tuning in males above the secondo passaggio
2014
McQuade discusses how advances in technology have contributed to the demystification of vocal registration. Specifically, he examines how spectrographic analysis and voice synthesizers have revealed the importance of formant tuning in achieving vocal aesthetic goals.
Journal Article
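The kind of spectrographic analysis McQuade describes can be reproduced with standard signal-processing tools. Below is a minimal sketch using SciPy; the input file "vowel.wav" is a hypothetical mono recording of a sung vowel, not material from the article.

```python
# A minimal sketch of computing a spectrogram in which formant bands
# (including the second formant the article discusses) can be read off.
# "vowel.wav" is a hypothetical mono recording.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("vowel.wav")
freqs, times, power = spectrogram(
    samples.astype(np.float64), fs=rate, nperseg=1024, noverlap=768
)
# Formants appear as horizontal bands of high energy; as a crude trace,
# print the strongest frequency bin in each analysis frame.
peak_freqs = freqs[np.argmax(power, axis=0)]
print(peak_freqs[:10])
```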
Teacher Perceptions of Standards-Based Mindset and Student Outcomes
2019
The purpose of this study was to examine teacher perceptions of how their instruction improved after adopting a standards-based mindset (SBM), and how each of the practices of SBM affected student outcomes. The study collected qualitative interview data from participants in three Wisconsin high schools that had fully implemented standards-based reform (SBR) and standards-based grading (SBG). The researcher drew five conclusions. First, redefining accountability is critical if teachers are to embrace giving full credit for what students know. Second, homework treated as practice needs to be reported to parents. Third, teachers who embraced SBR gave better feedback. Fourth, grades were more accurate when teachers used SBM practices and behavior was reported. Fifth, students considered at risk under statute and students eligible for special education may benefit from SBM practices. The researcher recommends that districts consider engaging stakeholders to determine which behaviors are most important to their community and develop a system and policies to report on those behaviors alongside academic progress. Suggestions are made for future research, including examining site report cards from before and after SBR. Although this study was limited by sample size and observer bias, its findings and conclusions should nonetheless be useful to practitioners considering standards-based grading.
Dissertation
Cinderella Meets Cendrillon: Music Theater and Opera Living Under the Same Roof
by Sisco, David; Henderson, Allen; McQuade, Jennifer
in Alexander technique, Audiences, Awards & honors
2018
Since its inception, the company has annually mounted one opera that played to audiences averaging 200-250 patrons for each of the two performances. Enlightening as this data is, it is also essential to learn how this perceived increased demand for stylistic versatility impacts creative and performing artists in the industry, especially professional singers. [...] the authors reached out to colleagues who worked in the U.S. and abroad as professional singers, conductors, and directors in order to gain first-hand perspective. [...] "the artistic staff and performers are usually from the U.K. or the U.S." Active performer and president of the European Voice Teachers Association, Susan Yarnall Monks said, "Certainly in the U.K., opera companies will put on musicals by Sondheim, Weill, and Rodgers & Hammerstein." More than 50% of respondents said they had performed at least one music theater role during their postsecondary studies. Since graduation, almost 40% of respondents have auditioned for a pop/rock-influenced musical, including shows like Evita, Footloose, Frozen, Hairspray, If/Then, Kinky Boots, Legally Blonde, Rent, Rock of Ages, Spring Awakening, and Wicked.
Journal Article
Arcee's MergeKit: A Toolkit for Merging Large Language Models
by Karpukhin, Vlad; Meyers, Luke; McQuade, Mark
in Large language models, Machine learning, Open source software
2025
The rapid expansion of the open-source language model landscape presents an opportunity to merge the competencies of these model checkpoints by combining their parameters. Advances in transfer learning, the process of fine-tuning pretrained models for specific tasks, have resulted in vast numbers of task-specific models that are typically specialized in individual tasks and unable to utilize each other's strengths. Model merging facilitates the creation of multitask models without the need for additional training, offering a promising avenue for enhancing model performance and versatility. By preserving the intrinsic capabilities of the original models, model merging addresses complex challenges in AI, including the difficulties of catastrophic forgetting and multitask learning. To support this expanding area of research, we introduce MergeKit, a comprehensive, open-source library designed to facilitate the application of model merging strategies. MergeKit offers an extensible framework to efficiently merge models on any hardware, providing utility to researchers and practitioners. To date, thousands of models have been merged by the open-source community, leading to the creation of some of the world's most powerful open-source model checkpoints, as assessed by the Open LLM Leaderboard. The library is accessible at https://github.com/arcee-ai/MergeKit.
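As a concrete illustration of the simplest strategy in the family this abstract describes, the sketch below averages parameters across checkpoints with identical architectures ("model soup" style). This is not MergeKit's API; the toy linear models are stand-ins for real fine-tuned checkpoints.

```python
# A minimal sketch of uniform parameter averaging across checkpoints
# with identical architectures (the simplest merging strategy; MergeKit
# implements this and many more). Toy models stand in for real
# fine-tuned checkpoints.
import torch
import torch.nn as nn

def average_state_dicts(state_dicts):
    """Element-wise mean of matching parameter tensors."""
    return {
        key: torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

checkpoints = [nn.Linear(16, 16).state_dict() for _ in range(3)]
merged = nn.Linear(16, 16)
merged.load_state_dict(average_state_dicts(checkpoints))
```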
Merging in a Bottle: Differentiable Adaptive Merging (DAM) and the Path from Averaging to Automation
2024
By merging models, AI systems can combine the distinct strengths of separate language models, achieving a balance between multiple capabilities without requiring substantial retraining. However, the integration process can be intricate due to differences in training methods and fine-tuning, typically necessitating specialized knowledge and repeated refinement. This paper explores model merging techniques across a spectrum of complexity, examining where automated methods like evolutionary strategies stand compared to hyperparameter-driven approaches such as DARE and TIES-Merging, and simpler methods like Model Soups. In addition, we introduce Differentiable Adaptive Merging (DAM), an efficient, adaptive merging approach that serves as an alternative to evolutionary merging and optimizes model integration through scaling coefficients, minimizing computational demands. Our findings reveal that even simple averaging methods, like Model Soups, perform competitively when model similarity is high, underscoring each technique's unique strengths and limitations. We have open-sourced DAM, including the implementation code and experiment pipeline, on GitHub: https://github.com/arcee-ai/DAM.
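The sketch below illustrates the general idea of merging with trainable scaling coefficients. It is not the open-sourced DAM implementation; the single scalar coefficient per source model and the toy regression loss are assumptions made to keep the example short.

```python
# An illustrative sketch of merging with trainable scaling coefficients
# (in the spirit of DAM as the abstract describes it): the merged
# weights are sum_i c_i * W_i, and the c_i are fit by gradient descent.
import torch
import torch.nn as nn
import torch.nn.functional as F

def merge_state_dicts(state_dicts, coeffs):
    """Weighted sum of matching parameter tensors: W = sum_i c_i * W_i."""
    return {
        name: sum(c * sd[name] for c, sd in zip(coeffs, state_dicts))
        for name in state_dicts[0]
    }

torch.manual_seed(0)
# Two toy "fine-tuned" models with identical architecture.
sources = [nn.Linear(4, 2) for _ in range(2)]
state_dicts = [
    {k: v.detach() for k, v in m.state_dict().items()} for m in sources
]

# One trainable coefficient per source model (the real method uses
# finer-grained coefficients; scalars keep the sketch short).
coeffs = nn.Parameter(torch.full((2,), 0.5))
opt = torch.optim.Adam([coeffs], lr=1e-2)

x = torch.randn(8, 4)       # toy inputs
target = torch.randn(8, 2)  # toy targets

for _ in range(100):
    merged = merge_state_dicts(state_dicts, coeffs)
    y = F.linear(x, merged["weight"], merged["bias"])
    loss = F.mse_loss(y, target)  # toy objective standing in for real data
    opt.zero_grad()
    loss.backward()
    opt.step()
```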
Roboflow 100: A Rich, Multi-Domain Object Detection Benchmark
by Zuppichini, Francesco Saverio; Guerrie, Paul; Ciaglia, Floriana
in Annotations, Applications programs, Benchmarks
2022
The evaluation of object detection models is usually performed by optimizing a single metric, e.g. mAP, on a fixed set of datasets, e.g. Microsoft COCO and Pascal VOC. Due to image retrieval and annotation costs, these datasets consist largely of images found on the web and do not represent many real-life domains that are modelled in practice, e.g. satellite, microscopic, and gaming imagery, making it difficult to assess the degree of generalization learned by the model. We introduce Roboflow-100 (RF100), consisting of 100 datasets, 7 imagery domains, 224,714 images, and 805 class labels, with over 11,170 labelling hours. We derived RF100 from over 90,000 public datasets and 60 million public images that are actively being assembled and labelled by computer vision practitioners in the open on the web application Roboflow Universe. By releasing RF100, we aim to provide a semantically diverse, multi-domain benchmark of datasets to help researchers test their model's generalizability with real-life data. RF100 download and benchmark replication are available on GitHub.
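The single-metric evaluation style this abstract mentions can be illustrated with a COCO-style mAP computation; a minimal sketch using torchmetrics (a recent version is assumed, and the boxes below are invented for the example):

```python
# A minimal sketch of mAP-based evaluation for object detection.
import torch
from torchmetrics.detection import MeanAveragePrecision

metric = MeanAveragePrecision(box_format="xyxy")

# One image: predicted boxes with scores/labels vs. ground truth.
preds = [{
    "boxes": torch.tensor([[10.0, 10.0, 50.0, 50.0]]),
    "scores": torch.tensor([0.9]),
    "labels": torch.tensor([0]),
}]
target = [{
    "boxes": torch.tensor([[12.0, 11.0, 48.0, 52.0]]),
    "labels": torch.tensor([0]),
}]

metric.update(preds, target)
print(metric.compute()["map"])  # COCO-style mean average precision
```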
Domain Adaptation of Llama3-70B-Instruct through Continual Pre-Training and Model Merging: A Comprehensive Evaluation
2024
We conducted extensive experiments on domain adaptation of the Meta-Llama-3-70B-Instruct model on SEC data, exploring its performance on both general and domain-specific benchmarks. Our focus included continual pre-training (CPT) and model merging, aiming to enhance the model's domain-specific capabilities while mitigating catastrophic forgetting. Through this study, we evaluated the impact of integrating financial regulatory data into a robust language model and examined the effectiveness of our model merging techniques in preserving and improving the model's instruction-following abilities. The model is accessible on Hugging Face at https://huggingface.co/arcee-ai/Llama-3-SEC-Base. This is an intermediate checkpoint of our final model, which has seen 20B tokens so far; the full model is still in training. This is a preprint technical report with thorough evaluations to understand the entire process.
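One simple way to combine a CPT checkpoint with the original instruct model is linear interpolation of their parameters, sketched below. The report's actual merging technique is not specified in this abstract, so the interpolation weight and the toy models are assumptions.

```python
# An illustrative sketch of parameter interpolation between an instruct
# model and a continually pre-trained (CPT) checkpoint, one simple way
# to trade domain knowledge against instruction-following ability.
import torch
import torch.nn as nn

def lerp_state_dicts(sd_a, sd_b, alpha=0.5):
    """Return alpha * A + (1 - alpha) * B for matching parameter tensors."""
    return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}

# Toy stand-ins for the instruct model and the CPT checkpoint.
instruct, cpt = nn.Linear(8, 8), nn.Linear(8, 8)
merged = nn.Linear(8, 8)
merged.load_state_dict(
    lerp_state_dicts(instruct.state_dict(), cpt.state_dict(), alpha=0.5)
)
```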
Arcee Trinity Large Technical Report
2026
We present the technical report for Arcee Trinity Large, a sparse Mixture-of-Experts model with 400B total parameters and 13B activated per token. We also report on Trinity Nano and Trinity Mini: Trinity Nano has 6B total parameters with 1B activated per token, and Trinity Mini has 26B total parameters with 3B activated per token. The models' modern architecture includes interleaved local and global attention, gated attention, depth-scaled sandwich norm, and sigmoid routing for the Mixture-of-Experts layers. For Trinity Large, we also introduce a new MoE load-balancing strategy, Soft-clamped Momentum Expert Bias Updates (SMEBU). We train the models with the Muon optimizer; all three models completed training with zero loss spikes. Trinity Nano and Trinity Mini were pre-trained on 10 trillion tokens, and Trinity Large was pre-trained on 17 trillion tokens. The model checkpoints are available at https://huggingface.co/arcee-ai.
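Sigmoid routing, one architectural ingredient the abstract names, can be sketched generically as below. SMEBU itself is the report's new method and is not reproduced here; the additive expert bias is a generic load-balancing device, and all shapes and values are illustrative assumptions.

```python
# A generic sketch of sigmoid top-k MoE routing: gate scores come from
# a sigmoid rather than a softmax, and a per-expert bias can steer
# selection for load balancing without changing the gate weights.
import torch

def sigmoid_topk_route(hidden, router_weight, expert_bias, k=2):
    """Pick top-k experts per token from sigmoid gate scores."""
    logits = hidden @ router_weight           # [tokens, n_experts]
    scores = torch.sigmoid(logits)
    # The bias affects which experts are selected, not the gate values.
    topk = torch.topk(scores + expert_bias, k, dim=-1).indices
    gates = torch.gather(scores, -1, topk)
    gates = gates / gates.sum(dim=-1, keepdim=True)  # normalize weights
    return topk, gates

tokens, d_model, n_experts = 4, 16, 8
hidden = torch.randn(tokens, d_model)
router_weight = torch.randn(d_model, n_experts)
expert_bias = torch.zeros(n_experts)  # updated by the balancing rule
experts, gates = sigmoid_topk_route(hidden, router_weight, expert_bias)
```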
Richard Miller: A life of contributions to vocal pedagogy
2006
Richard Miller is one of the most prominent and influential figures in the field of vocal pedagogy from the second half of the twentieth century and the early twenty-first. The purpose of this project is to give an accurate history of his life and work; until now, no such research has been undertaken. The study takes an in-depth look at the events in Miller's personal and professional life that shaped his approach to and understanding of the teaching of singing. From his early childhood achievements as a boy soprano, to his internationally successful career as a professional lyric tenor, to a teaching career that spanned nearly fifty years in more than thirteen countries and thirty-eight states, Miller dedicated his life to making music and advancing the field of vocal pedagogy. The data for this project was collected through a series of written and personal interviews with Miller, his wife, and many of his former students. Miller's eight books and over one hundred articles were also consulted. His books are given special consideration and are discussed at length, as are his eight vocal pedagogy videos. Miller's teaching style and pedagogical philosophies are examined from both his and his students' points of view. Detailed accounts of his personal life, including his service in the U.S. Army during World War II, his education and Fulbright studies in Italy, his professional singing career, his teaching and master classes, and his interest in voice research, allow the reader to gain a more complete picture of Miller as a person. Also included are numerous performance reviews and several personal greetings, addressed to the maestro by his former students, that show the respect, admiration, love, and gratitude they feel for him. Miller's success as a teacher of singing is judged not by his books and articles but by the successes of his students, countless of whom are performing and teaching around the world. This project will serve as a guide to Richard Miller's life and work in the field of vocal pedagogy for vocalists, voice pedagogues, voice scientists, and any other interested parties.
Dissertation