Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
6,555 result(s) for "Jargon"
Sloganization in language education discourse : conceptual thinking in the age of academic marketization
\"This volume focuses on sloganization as an emergent phenomenon in language education discourse. Motivated by an increasing uneasiness with a number of concepts in current research that have become sloganized, this volume scrutinizes the discourse of language education, identifies popular slogans and reconstructs the sloganization processes\"-- Provided by publisher.
Automatic jargon identifier for scientists engaging with the public and science communication educators
by Yosef, Roy; Segev, Elad; Rakedzon, Tzipora
in Analysis; Applied mathematics; Biology and Life Sciences
2017
Scientists are required to communicate science and research not only to other experts in the field, but also to scientists and experts from other fields, as well as to the public and policymakers. One fundamental suggestion when communicating with non-experts is to avoid professional jargon. However, because they are trained to speak with highly specialized language, avoiding jargon is difficult for scientists, and there is no standard to guide scientists in adjusting their messages. In this research project, we present the development and validation of the data produced by an up-to-date, scientist-friendly program for identifying jargon in popular written texts, based on a corpus of over 90 million words published on the BBC site during the years 2012-2015. The validation of results by the jargon identifier, the De-jargonizer, involved three mini studies: (1) comparison and correlation with existing frequency word lists in the literature; (2) a comparison with previous research on spoken language jargon use in TED transcripts of non-science lectures, TED transcripts of science lectures and transcripts of academic science lectures; and (3) a test of 5,000 pairs of published research abstracts and lay reader summaries describing the same article from the journals PLOS Computational Biology and PLOS Genetics. Validation procedures showed that the data classification of the De-jargonizer significantly correlates with existing frequency word lists, replicates similar jargon differences in previous studies on scientific versus general lectures, and identifies significant differences in jargon use between abstracts and lay summaries. As expected, more jargon was found in the academic abstracts than in lay summaries; however, the percentage of jargon in the lay summaries exceeded the amount recommended for the public to understand the text. Thus, the De-jargonizer can help scientists identify problematic jargon when communicating science to non-experts, and be implemented by science communication instructors when evaluating the effectiveness and jargon use of participants in science communication workshops and programs.
Journal Article
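A note on the approach: the De-jargonizer described in the record above classifies words by how frequently they appear in a large general-readership corpus, flagging rare words as likely jargon. The sketch below is a rough illustration only, assuming a hypothetical local corpus file and made-up frequency thresholds; it is not the published tool's code or interface.

# Rough sketch of a frequency-based jargon flagger, in the spirit of the
# De-jargonizer record above. The corpus file name, thresholds, and function
# names are illustrative assumptions, not the published tool.
import re
from collections import Counter

def load_frequencies(corpus_path):
    # Count lower-cased word occurrences in a plain-text corpus file.
    text = open(corpus_path, encoding="utf-8").read().lower()
    return Counter(re.findall(r"[a-z']+", text))

def classify_terms(text, freqs):
    # Label each distinct word by its relative corpus frequency:
    # frequent words pass, rare or unseen words are flagged as likely jargon.
    total = sum(freqs.values())
    labels = {}
    for word in set(re.findall(r"[a-z']+", text.lower())):
        rate = freqs.get(word, 0) / total
        if rate >= 1e-4:
            labels[word] = "common"
        elif rate >= 5e-6:
            labels[word] = "mid-frequency"
        else:
            labels[word] = "jargon"
    return labels

# Hypothetical usage, assuming a local general-news corpus file exists:
# freqs = load_frequencies("general_news_corpus.txt")
# print(classify_terms("We observed synaptic plasticity in vivo", freqs))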
The PRISMA 2020 statement: An updated guideline for reporting systematic reviews
by Li, Tianjing; Oregon Health and Science University [Portland] (OHSU); McDonald, Steve
in Careers; Editorials; Endorsements
2021
The funders had no role in considering the study design or in the collection, analysis, interpretation of data, writing of the report, or decision to submit the article for publication. Competing interests: I have read the journal’s policy and the authors of this manuscript have the following competing interests: EL is head of research for the BMJ; MJP is an editorial board member for PLOS Medicine; ACT is an associate editor and MJP, TL, EMW, and DM are editorial board members for the Journal of Clinical Epidemiology; DM and LAS were editors in chief, LS, JMT, and ACT are associate editors, and JG is an editorial board member for Systematic Reviews. [...] technological advances have enabled the use of natural language processing and machine learning to identify relevant evidence,[22–24] methods have been proposed to synthesise and present findings when meta-analysis is not possible or appropriate,[25–27] and new methods have been developed to assess the risk of bias in results of included studies.

Summary points:
* To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did, and what they found
* The PRISMA 2020 statement provides updated reporting guidance for systematic reviews that reflects advances in methods to identify, select, appraise, and synthesise studies
* The PRISMA 2020 statement consists of a 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and revised flow diagrams for original and updated reviews
* We anticipate that the PRISMA 2020 statement will benefit authors, editors, and peer reviewers of systematic reviews, and different users of reviews, including guideline developers, policy makers, healthcare providers, patients, and other stakeholders

Development of PRISMA 2020: A complete description of the methods used to develop PRISMA 2020 is available elsewhere.
Journal Article
Virtual words : language on the edge of science and technology
In 45 short essays, Keats examines how words get coined, what relationship they have to their subject matter, and why some, like blog, succeed while others, like flog, fail. Divided into broad categories--such as euphemism, polemic, jargon, and slang, in addition to scientific and technological neologisms--chapters each consider one exemplary word, its definition, origin, context, and significance.
Evaluating Expert-Layperson Agreement in Identifying Jargon Terms in Electronic Health Record Notes: Observational Study
2024
Studies have shown that patients have difficulty understanding medical jargon in electronic health record (EHR) notes, particularly patients with low health literacy. In creating the NoteAid dictionary of medical jargon for patients, a panel of medical experts selected terms they perceived as needing definitions for patients.
This study aims to determine whether experts and laypeople agree on what constitutes medical jargon.
Using an observational study design, we compared the ability of medical experts and laypeople to identify medical jargon in EHR notes. The laypeople were recruited from Amazon Mechanical Turk. Participants were shown 20 sentences from EHR notes, which contained 325 potential jargon terms as identified by the medical experts. We collected demographic information about the laypeople's age, sex, race or ethnicity, education, native language, and health literacy. Health literacy was measured with the Single Item Literacy Screener. Our evaluation metrics were the proportion of terms rated as jargon, sensitivity, specificity, Fleiss κ for agreement among medical experts and among laypeople, and the Kendall rank correlation statistic between the medical experts and laypeople. We performed subgroup analyses by layperson characteristics. We fit a beta regression model with a logit link to examine the association between layperson characteristics and whether a term was classified as jargon.
The average proportion of terms identified as jargon by the medical experts was 59% (1150/1950, 95% CI 56.1%-61.8%), and the average proportion of terms identified as jargon by the laypeople overall was 25.6% (22,480/87,750, 95% CI 25%-26.2%). There was good agreement among medical experts (Fleiss κ=0.781, 95% CI 0.753-0.809) and fair agreement among laypeople (Fleiss κ=0.590, 95% CI 0.589-0.591). The beta regression model had a pseudo-R² of 0.071, indicating that demographic characteristics explained very little of the variability in the proportion of terms identified as jargon by laypeople. Using laypeople's identification of jargon as the gold standard, the medical experts had high sensitivity (91.7%, 95% CI 90.1%-93.3%) and specificity (88.2%, 95% CI 86%-90.5%) in identifying jargon terms.
To ensure coverage of possible jargon terms, the medical experts were deliberately inclusive when selecting terms. The fair agreement among laypeople shows that this breadth is needed, as laypeople vary in what they consider jargon. We showed that medical experts could accurately identify jargon terms for annotation that would be useful for laypeople.
Journal Article
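The sensitivity and specificity reported in the record above treat layperson jargon labels as the gold standard against which expert labels are scored. The following minimal sketch shows that comparison on made-up terms; the term lists and numbers are illustrative assumptions, not data from the study.

# Minimal sketch of the expert-versus-layperson comparison described above,
# treating layperson labels as the gold standard. Terms are made up.
def sensitivity_specificity(expert_labels, layperson_labels):
    # Both arguments map term -> True if labelled jargon, False otherwise.
    tp = sum(expert_labels[t] and layperson_labels[t] for t in layperson_labels)
    fn = sum(not expert_labels[t] and layperson_labels[t] for t in layperson_labels)
    tn = sum(not expert_labels[t] and not layperson_labels[t] for t in layperson_labels)
    fp = sum(expert_labels[t] and not layperson_labels[t] for t in layperson_labels)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity

experts = {"tachycardia": True, "stenosis": True, "blood": False, "pain": False}
laypeople = {"tachycardia": True, "stenosis": True, "blood": False, "pain": True}
print(sensitivity_specificity(experts, laypeople))  # approx. (0.67, 1.0): experts missed "pain"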
FORCE FIELDS: Afropessimism and the Figural Negro Ruminations on American Fiction
2024
Would it matter if I published this essay under a pseudonym, which hid all the markers of my identity, including my race? Does that identity always matter to what I say, drawing on the authority of the experience I bring to the task? Or might it matter more to how it is read by you, the reader, who carries a cargo of expectations about what that experience really is or what, in the current jargon, my "positionality" should be? Would a pseudonym successfully short-circuit posing these questions, or would it merely imitate the stratagem chosen by the flawed hero of the movie about which I'm writing, who hides behind a nom de plume in order to sell a pseudo-autobiographical potboiler about allegedly authentic Black life? Is it, however, possible to reveal an "authentic" self or do we always present an artificial persona, perhaps many, to the world?
Journal Article