Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
4 result(s) for "Ginsburg, Max"
Mississippi bridge
by Taylor, Mildred D; Ginsburg, Max
in Race relations -- Juvenile fiction; African Americans -- Juvenile fiction; Discrimination -- Juvenile fiction
2000
During a heavy rainstorm in 1930s rural Mississippi, a ten-year-old white boy sees a bus driver order all the black passengers off a crowded bus to make room for late-arriving white passengers and then set off across the raging Rosa Lee River.
A Rapid Host Response Blood Test for Bacterial/Viral Infection Discrimination Using a Portable Molecular Diagnostic Platform
by Ko, Emily R; Tillekeratne, L Gayani; Henao, Ricardo
in Bacterial infections; Coronaviruses; Infections
2025
Abstract
Background
Difficulty discriminating bacterial versus viral etiologies of infection drives unwarranted antibacterial prescriptions and, therefore, antibacterial resistance.
Methods
Utilizing a rapid portable test that measures peripheral blood host gene expression to discriminate bacterial and viral etiologies of infection (the HR-B/V assay on Biomeme's polymerase chain reaction–based Franklin platform), we tested 3 cohorts of subjects with suspected infection: the HR-B/V training cohort, the HR-B/V technical correlation cohort, and a coronavirus disease 2019 cohort.
Results
The Biomeme HR-B/V test showed very good performance at discriminating bacterial and viral infections, with a bacterial model accuracy of 84.5% (95% confidence interval [CI], 80.8%–87.5%), positive percent agreement (PPA) of 88.5% (95% CI, 81.3%–93.2%), negative percent agreement (NPA) of 83.1% (95% CI, 78.7%–86.7%), positive predictive value of 64.1% (95% CI, 56.3%–71.2%), and negative predictive value of 95.5% (95% CI, 92.4%–97.3%). The test showed excellent agreement with a previously developed BioFire HR-B/V test, with 100% (95% CI, 85.7%–100.0%) PPA and 94.9% (95% CI, 86.1%–98.3%) NPA for bacterial infection, and 100% (95% CI, 93.9%–100.0%) PPA and 100% (95% CI, 85.7%–100.0%) NPA for viral infection. Among subjects with acute severe acute respiratory syndrome coronavirus 2 infection of ≤7 days, accuracy was 93.3% (95% CI, 78.7%–98.2%) for 30 outpatients and 75.9% (95% CI, 57.9%–87.8%) for 29 inpatients.
Conclusions
The Biomeme HR-B/V test is a rapid, portable test with high performance at identifying patients unlikely to have bacterial infection, offering a promising antibiotic stewardship strategy that could be deployed as a portable or laboratory-based test.
This study shows the performance of the Biomeme HR-B/V rapid test, which measures peripheral blood host gene expression to discriminate bacterial versus viral etiologies of infection, with the goal of improving antibacterial use.
Journal Article
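The agreement statistics quoted in the Results above (accuracy, PPA, NPA, PPV, NPV) all follow from a single 2x2 comparison of the test call against the reference adjudication. The short Python sketch below shows those standard definitions; the counts in the example are placeholders for illustration, not data from the study.

```python
# Minimal sketch of how accuracy, percent agreement, and predictive values
# are computed from a 2x2 comparison against a reference standard.
# The counts below are illustrative placeholders, not data from the study.

def binary_test_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return agreement and predictive-value metrics for a binary test."""
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "ppa": tp / (tp + fn),  # positive percent agreement (sensitivity analogue)
        "npa": tn / (tn + fp),  # negative percent agreement (specificity analogue)
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
    }

# Example with made-up counts (reference call: bacterial vs. not bacterial)
print(binary_test_metrics(tp=85, fp=48, fn=11, tn=236))
```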
The Challenge of Teaching Reasoning to LLMs Without RL or Distillation
by Armstrong, George; Dutta, Ashmit; Rathi, Vedant
in Prompt engineering; Reasoning; Task complexity
2025
Reasoning-capable language models achieve state-of-the-art performance in diverse complex tasks by generating long, explicit Chain-of-Thought (CoT) traces. While recent work shows that base models can acquire such reasoning traces via reinforcement learning or distillation from stronger models like DeepSeek-R1, prior work demonstrates that even short CoT prompting without fine-tuning can improve reasoning. We ask whether long CoT can be induced in a base model using only prompting or minimal tuning. Using just 20 long CoT examples from the reasoning model QwQ-32B-Preview, we lightly fine-tune the base model Qwen2.5-32B. The resulting model outperforms the much larger Qwen2.5-Math-72B-Instruct, showing that a handful of high-quality examples can unlock strong reasoning capabilities. We further explore using CoT data from non-reasoning models and human annotators, enhanced with prompt engineering, multi-pass editing, and structural guidance. However, neither matches the performance of reasoning model traces, suggesting that certain latent qualities of expert CoT are difficult to replicate. We analyze key properties of reasoning data, such as problem difficulty, diversity, and answer length, that influence reasoning distillation. While challenges remain, we are optimistic that carefully curated human-written CoT, even in small quantities, can activate reasoning behaviors in base models. We release our human-authored dataset across refinement stages and invite further investigation into what makes small-scale reasoning supervision so effective.
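The abstract describes lightly fine-tuning a base model on roughly 20 long CoT examples. The sketch below illustrates what such a small-scale supervised fine-tuning run could look like using the Hugging Face transformers and datasets libraries; the model checkpoint, data file, and hyperparameters are illustrative assumptions, not the authors' exact setup (the paper uses Qwen2.5-32B with traces from QwQ-32B-Preview).

```python
# Minimal sketch of light supervised fine-tuning on a handful of long
# chain-of-thought examples. Model name, data path, and hyperparameters
# are illustrative assumptions, not the authors' configuration.
import json
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL_NAME = "Qwen/Qwen2.5-0.5B"       # small stand-in for the 32B base model
DATA_PATH = "long_cot_examples.jsonl"  # hypothetical file: ~20 {"prompt", "cot"} records

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Concatenate each prompt with its long CoT trace into one training string.
with open(DATA_PATH) as f:
    records = [json.loads(line) for line in f]
texts = [r["prompt"] + "\n" + r["cot"] for r in records]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=4096)

dataset = Dataset.from_dict({"text": texts}).map(
    tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="qwen-light-cot-sft",
        num_train_epochs=3,
        per_device_train_batch_size=1,
        learning_rate=1e-5,
        logging_steps=1,
    ),
    train_dataset=dataset,
    # mlm=False yields standard causal-LM labels (next-token prediction).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```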